[
  {
    "path": ".cursor/rules/overall-guidelines.mdc",
    "content": "---\nalwaysApply: true\n---\n\n# Cursor Rules for Go Snowflake Driver\n\n## General Development Standards\n\n### Code Quality\n- Follow Go formatting standards (use `gofmt`)\n- Use meaningful variable and function names\n- Include error handling for all operations that can fail\n- Write comprehensive documentation for public APIs\n\n### Project Structure\n- Place test files in the same package as the code being tested\n- Use `test_data/` directory for test fixtures and sample data\n- Group related functionality in logical packages\n\n### Testing\n- Test files should be named `*_test.go`\n- **For test-specific rules, see `testing.mdc`**\n- Write both positive and negative test cases\n- Use table-driven tests for testing multiple scenarios\n\n### Code Review Guidelines\n- Ensure code follows Go best practices\n- Verify comprehensive test coverage\n- Check that error messages are descriptive and helpful for debugging\n- Validate that public APIs are properly documented\n"
  },
  {
    "path": ".cursor/rules/testing.mdc",
    "content": "---\nalwaysApply: true\n---\n\n# Cursor Rules for Go Test Files\n\nThis file automatically applies when working on `*_test.go` files.\n\n## Testing Standards\n\n### Assertion Helper Usage\n- **ALWAYS** Attempt to use assertion helpers from `assert_test.go` instead of direct `t.Fatal`, `t.Fatalf`, `t.Error`, or `t.Errorf` calls. Where it makes sense, add new assertion helpers.\n- **NEVER** write manual if-then-fatal patterns in test functions when a suitable assertion helper exists.\n\n#### Common Assertion Patterns:\n\n**Error Checking:**\n```go\n// ❌ WRONG\nif err != nil {\n    t.Fatalf(\"Unexpected error: %v\", err)\n}\n\n// ✅ CORRECT  \nassertNilF(t, err, \"Unexpected error\")\n```\n\n**Nil Checking:**\n```go\n// ❌ WRONG\nif obj == nil {\n    t.Fatal(\"Expected non-nil object\")\n}\n\n// ✅ CORRECT\nassertNotNilF(t, obj, \"Expected non-nil object\")\n```\n\n**Equality Checking:**\n```go\n// ❌ WRONG\nif actual != expected {\n    t.Fatalf(\"Expected %v, got %v\", expected, actual)\n}\n\n// ✅ CORRECT\nassertEqualF(t, actual, expected, \"Values should match\")\n```\n\n**Error Message Validation:**\n```go\n// ❌ WRONG\nif err.Error() != expectedMsg {\n    t.Fatalf(\"Expected error: %s, got: %s\", expectedMsg, err.Error())\n}\n\n// ✅ CORRECT\nassertEqualF(t, err.Error(), expectedMsg, \"Error message should match\")\n```\n\n**Boolean Assertions:**\n```go\n// ❌ WRONG\nif !condition {\n    t.Fatal(\"Condition should be true\")\n}\n\n// ✅ CORRECT\nassertTrueF(t, condition, \"Condition should be true\")\n```\n\n#### Helper Function Reference:\nAlways examine `assertion_helpers.go` for the latest set of helpers. 
Consider these existing examples below.\n- `assertNilF/E(t, value, description)` - Assert value is nil\n- `assertNotNilF/E(t, value, description)` - Assert value is not nil\n- `assertEqualF/E(t, actual, expected, description)` - Assert equality\n- `assertNotEqualF/E(t, actual, expected, description)` - Assert inequality\n- `assertTrueF/E(t, value, description)` - Assert boolean is true\n- `assertFalseF/E(t, value, description)` - Assert boolean is false\n- `assertStringContainsF/E(t, str, substring, description)` - Assert string contains substring\n- `assertErrIsF/E(t, actual, expected, description)` - Assert error matches expected error\n\n#### When to Use F vs E:\n- Use `F` suffix (Fatal) for critical failures that should stop the test immediately, as well as for preconditions\n- Use `E` suffix (Error) for non-critical failures that allow the test to continue\n\n## Code Review Guidelines:\n- Flag any direct use of `t.Fatal*` or `t.Error*` in new code\n- Ensure all test functions use appropriate assertion helpers\n- Verify that error messages are descriptive and helpful for debugging\n- Check that tests are comprehensive and cover edge cases\n"
  },
  {
    "path": ".github/CODEOWNERS",
    "content": "* @snowflakedb/Client\n\n/transport.go @snowflakedb/pki-oversight @snowflakedb/Client\n/crl.go @snowflakedb/pki-oversight @snowflakedb/Client\n/ocsp.go @snowflakedb/pki-oversight @snowflakedb/Client\n\n# GitHub Advanced Security Secret Scanning config\n/.github/secret_scanning.yml @snowflakedb/prodsec-security-manager-write"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/BUG_REPORT.md",
    "content": "---\nname: Bug Report 🐞\nabout: Something isn't working as expected? Here is the right place to report.\nlabels: bug\n---\n\n\n:exclamation: If you need **urgent assistance** then [file a case with Snowflake Support](https://community.snowflake.com/s/article/How-To-Submit-a-Support-Case-in-Snowflake-Lodge).\nOtherwise continue here.\n\n\nPlease answer these questions before submitting your issue. \nIn order to accurately debug the issue this information is required. Thanks!\n\n1. What version of GO driver are you using?\n\n   \n2. What operating system and processor architecture are you using?\n\n   \n3. What version of GO are you using?\nrun `go version` in your console\n\n4.Server version:* E.g. 1.90.1\nYou may get the server version by running a query:\n```\nSELECT CURRENT_VERSION();\n```\n5. What did you do?\n\n   If possible, provide a recipe for reproducing the error.\n   A complete runnable program is good.\n\n6. What did you expect to see?\n\n   What should have happened and what happened instead?\n\n7. Can you set logging to DEBUG and collect the logs?\n\n   https://community.snowflake.com/s/article/How-to-generate-log-file-on-Snowflake-connectors\n   \n   Before sharing any information, please be sure to review the log and remove any sensitive\n   information.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/FEATURE_REQUEST.md",
    "content": "---\nname: Feature Request 💡\nabout: Suggest a new idea for the project.\nlabels: feature\n---\n\n<!--\nIf you need urgent assistance then file the feature request using the support process:\nhttps://community.snowflake.com/s/article/How-To-Submit-a-Support-Case-in-Snowflake-Lodge\notherwise continue here.\n-->\n## What is the current behavior?\n\n## What is the desired behavior?\n\n## How would this improve `gosnowflake`?\n\n## References, Other Background\n\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE.md",
    "content": "### Issue description\nTell us what should happen and what happens instead\n\n### Example code\n```go\nIf possible, please enter some example code here to reproduce the issue.\n```\n\n### Error log\n```\nIf you have an error log, please paste it here.\n```\nAdd ``glog` option to your application to collect log files.\n\n### Configuration\n*Driver version (or git SHA):*\n\n*Go version:* run `go version` in your console\n\n*Server version:* E.g. 1.90.1\nYou may get the server version by running a query:\n```\nSELECT CURRENT_VERSION();\n```\n\n*Client OS:* E.g. Debian 8.1 (Jessie), Windows 10\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE.md",
    "content": "### Description\n\nSNOW-XXX Please explain the changes you made here.\n\n### Checklist\n- [ ] Added proper logging (if possible)\n- [ ] Created tests which fail without the change (if possible)\n- [ ] Extended the README / documentation, if necessary\n"
  },
  {
    "path": ".github/repo_meta.yaml",
    "content": "point_of_contact: @snowflakedb/client\nproduction: true\ncode_owners_file_present: false\njira_area: Developer Platform\n"
  },
  {
    "path": ".github/secret_scanning.yml",
    "content": "paths-ignore:\n  - \"**/test_data/**\"\n"
  },
  {
    "path": ".github/workflows/build-test.yml",
    "content": "name: Build and Test\n\npermissions:\n  contents: read\n\non:\n  push:\n    branches:\n      - master\n    tags:\n      - v*\n  pull_request:\n  schedule:\n    - cron: '7 3 * * *'\n  workflow_dispatch:\n    inputs:\n      goTestParams:\n        default:\n        description: 'Parameters passed to go test'\n      sequentialTests:\n        type: boolean\n        default: false\n        description: 'Run tests sequentially (no buffering, slower)'\n\nconcurrency:\n  # older builds for the same pull request numer or branch should be cancelled\n  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}\n  cancel-in-progress: true\n\njobs:\n    lint:\n        runs-on: ubuntu-latest\n        name: Check linter\n        steps:\n          - uses: actions/checkout@v4\n          - name: Setup go\n            uses: actions/setup-go@v5\n            with:\n              go-version: '1.26'\n          - name: golangci-lint\n            uses: golangci/golangci-lint-action@v7\n            with:\n              version: v2.11\n          - name: Format, Lint\n            shell: bash\n            run: ./ci/build.sh\n          - name: Run go fix across all platforms and tags\n            shell: bash\n            run: ./ci/gofix.sh\n    build-test-linux:\n        runs-on: ubuntu-latest\n        strategy:\n            fail-fast: false\n            matrix:\n                cloud: [ 'AWS', 'AZURE', 'GCP' ]\n                go: [ '1.24', '1.25', '1.26' ]\n        name: ${{ matrix.cloud }} Go ${{ matrix.go }} on Ubuntu\n        steps:\n            - uses: actions/checkout@v4\n            - uses: actions/setup-java@v4 # for wiremock\n              with:\n                java-version: 17\n                distribution: 'temurin'\n            - name: Setup go\n              uses: actions/setup-go@v5\n              with:\n                  go-version: ${{ matrix.go }}\n            - name: Test\n              shell: bash\n              env:\n                
PARAMETERS_SECRET: ${{ secrets.PARAMETERS_SECRET }}\n                GOLANG_PRIVATE_KEY_SECRET: ${{ secrets.GOLANG_PRIVATE_KEY_SECRET }}\n                CLOUD_PROVIDER: ${{ matrix.cloud }}\n                GORACE: history_size=7\n                GO_TEST_PARAMS: ${{ inputs.goTestParams }}\n                SEQUENTIAL_TESTS: ${{ inputs.sequentialTests }}\n                WIREMOCK_PORT: 14335\n                WIREMOCK_HTTPS_PORT: 13567\n              run: ./ci/test.sh\n            - name: Upload test results to Codecov\n              if: ${{!cancelled()}}\n              uses: codecov/test-results-action@v1\n              with:\n                token: ${{ secrets.CODE_COV_UPLOAD_TOKEN }}\n            - name: Upload coverage to Codecov\n              uses: codecov/codecov-action@v5\n              with:\n                token: ${{ secrets.CODE_COV_UPLOAD_TOKEN }}\n    build-test-linux-no-home:\n      runs-on: ubuntu-latest\n      name: Ubuntu - no HOME\n      steps:\n        - uses: actions/checkout@v4\n        - uses: actions/setup-java@v4 # for wiremock\n          with:\n            java-version: 17\n            distribution: 'temurin'\n        - name: Setup go\n          uses: actions/setup-go@v5\n          with:\n            go-version: '1.25'\n        - name: Test\n          shell: bash\n          env:\n            PARAMETERS_SECRET: ${{ secrets.PARAMETERS_SECRET }}\n            GOLANG_PRIVATE_KEY_SECRET: ${{ secrets.GOLANG_PRIVATE_KEY_SECRET }}\n            CLOUD_PROVIDER: AWS\n            GORACE: history_size=7\n            GO_TEST_PARAMS: ${{ inputs.goTestParams }}\n            SEQUENTIAL_TESTS: ${{ inputs.sequentialTests }}\n            WIREMOCK_PORT: 14335\n            WIREMOCK_HTTPS_PORT: 13567\n            HOME_EMPTY: \"yes\"\n          run: ./ci/test.sh\n    build-test-mac:\n        runs-on: macos-latest\n        strategy:\n            fail-fast: false\n            matrix:\n                cloud: [ 'AWS', 'AZURE', 'GCP' ]\n                go: [ '1.24', 
'1.25', '1.26' ]\n        name: ${{ matrix.cloud }} Go ${{ matrix.go }} on Mac\n        steps:\n            - uses: actions/checkout@v4\n            - uses: actions/setup-java@v4 # for wiremock\n              with:\n                java-version: 17\n                distribution: 'temurin'\n            - name: Setup go\n              uses: actions/setup-go@v5\n              with:\n                  go-version: ${{ matrix.go }}\n            - name: Test\n              shell: bash\n              env:\n                PARAMETERS_SECRET: ${{ secrets.PARAMETERS_SECRET }}\n                GOLANG_PRIVATE_KEY_SECRET: ${{ secrets.GOLANG_PRIVATE_KEY_SECRET }}\n                CLOUD_PROVIDER: ${{ matrix.cloud }}\n                GO_TEST_PARAMS: ${{ inputs.goTestParams }}\n                WIREMOCK_PORT: 14335\n                WIREMOCK_HTTPS_PORT: 13567\n              run: ./ci/test.sh\n            - name: Upload test results to Codecov\n              if: ${{!cancelled()}}\n              uses: codecov/test-results-action@v1\n              with:\n                token: ${{ secrets.CODE_COV_UPLOAD_TOKEN }}\n            - name: Upload coverage to Codecov\n              uses: codecov/codecov-action@v5\n              with:\n                token: ${{ secrets.CODE_COV_UPLOAD_TOKEN }}\n    build-test-mac-no-home:\n      runs-on: macos-latest\n      name: Mac - no HOME\n      steps:\n        - uses: actions/checkout@v4\n        - uses: actions/setup-java@v4 # for wiremock\n          with:\n            java-version: 17\n            distribution: 'temurin'\n        - name: Setup go\n          uses: actions/setup-go@v5\n          with:\n            go-version: '1.25'\n        - name: Test\n          shell: bash\n          env:\n            PARAMETERS_SECRET: ${{ secrets.PARAMETERS_SECRET }}\n            GOLANG_PRIVATE_KEY_SECRET: ${{ secrets.GOLANG_PRIVATE_KEY_SECRET }}\n            CLOUD_PROVIDER: AWS\n            GO_TEST_PARAMS: ${{ inputs.goTestParams }}\n            WIREMOCK_PORT: 
14335\n            WIREMOCK_HTTPS_PORT: 13567\n            HOME_EMPTY: \"yes\"\n          run: ./ci/test.sh\n    build-test-windows:\n        runs-on: windows-latest\n        strategy:\n            fail-fast: false\n            matrix:\n                cloud: [ 'AWS', 'AZURE', 'GCP' ]\n                go: [ '1.24', '1.25', '1.26' ]\n        name: ${{ matrix.cloud }} Go ${{ matrix.go }} on Windows\n        steps:\n            - uses: actions/checkout@v4\n            - uses: actions/setup-java@v4 # for wiremock\n              with:\n                java-version: 17\n                distribution: 'temurin'\n            - name: Setup go\n              uses: actions/setup-go@v5\n              with:\n                  go-version: ${{ matrix.go }}\n            - uses: actions/setup-python@v5\n              with:\n                python-version: '3.x'\n                architecture: 'x64'\n            - name: Test\n              shell: cmd\n              env:\n                PARAMETERS_SECRET: ${{ secrets.PARAMETERS_SECRET }}\n                GOLANG_PRIVATE_KEY_SECRET: ${{ secrets.GOLANG_PRIVATE_KEY_SECRET }}\n                CLOUD_PROVIDER: ${{ matrix.cloud }}\n                GO_TEST_PARAMS: ${{ inputs.goTestParams }}\n                SEQUENTIAL_TESTS: ${{ inputs.sequentialTests }}\n                WIREMOCK_PORT: 14335\n                WIREMOCK_HTTPS_PORT: 13567\n              run: ci\\\\test.bat\n            - name: Upload test results to Codecov\n              if: ${{!cancelled()}}\n              uses: codecov/test-results-action@v1\n              with:\n                token: ${{ secrets.CODE_COV_UPLOAD_TOKEN }}\n            - name: Upload coverage to Codecov\n              uses: codecov/codecov-action@v5\n              with:\n                token: ${{ secrets.CODE_COV_UPLOAD_TOKEN }}\n    fipsOnly:\n      runs-on: ubuntu-latest\n      strategy:\n        fail-fast: false\n      name: FIPS only mode\n      steps:\n        - uses: actions/checkout@v4\n        - uses: 
actions/setup-java@v4 # for wiremock\n          with:\n            java-version: 17\n            distribution: 'temurin'\n        - name: Setup go\n          uses: actions/setup-go@v5\n          with:\n            go-version: '1.25'\n        - name: Test\n          shell: bash\n          env:\n            PARAMETERS_SECRET: ${{ secrets.PARAMETERS_SECRET }}\n            GOLANG_PRIVATE_KEY_SECRET: ${{ secrets.GOLANG_PRIVATE_KEY_SECRET }}\n            CLOUD_PROVIDER: AWS\n            GORACE: history_size=7\n            GO_TEST_PARAMS: ${{ inputs.goTestParams }}\n            TEST_GODEBUG: fips140=only\n            SEQUENTIAL_TESTS: ${{ inputs.sequentialTests }}\n            WIREMOCK_PORT: 14335\n            WIREMOCK_HTTPS_PORT: 13567\n          run: ./ci/test.sh\n        - name: Upload test results to Codecov\n          if: ${{!cancelled()}}\n          uses: codecov/test-results-action@v1\n          with:\n            token: ${{ secrets.CODE_COV_UPLOAD_TOKEN }}\n        - name: Upload coverage to Codecov\n          uses: codecov/codecov-action@v5\n          with:\n            token: ${{ secrets.CODE_COV_UPLOAD_TOKEN }}\n    build-test-linux-minicore-disabled:\n      runs-on: ubuntu-latest\n      name: Ubuntu - minicore disabled\n      steps:\n        - uses: actions/checkout@v4\n        - uses: actions/setup-java@v4 # for wiremock\n          with:\n            java-version: 17\n            distribution: 'temurin'\n        - name: Setup go\n          uses: actions/setup-go@v5\n          with:\n            go-version: '1.25'\n        - name: Test\n          shell: bash\n          env:\n            PARAMETERS_SECRET: ${{ secrets.PARAMETERS_SECRET }}\n            GOLANG_PRIVATE_KEY_SECRET: ${{ secrets.GOLANG_PRIVATE_KEY_SECRET }}\n            CLOUD_PROVIDER: AWS\n            GORACE: history_size=7\n            GO_TEST_PARAMS: ${{ inputs.goTestParams }} -tags=minicore_disabled\n            WIREMOCK_PORT: 14335\n            WIREMOCK_HTTPS_PORT: 
13567\n          run: ./ci/test.sh\n    build-test-mac-minicore-disabled:\n      runs-on: macos-latest\n      name: Mac - minicore disabled\n      steps:\n        - uses: actions/checkout@v4\n        - uses: actions/setup-java@v4 # for wiremock\n          with:\n            java-version: 17\n            distribution: 'temurin'\n        - name: Setup go\n          uses: actions/setup-go@v5\n          with:\n            go-version: '1.25'\n        - name: Test\n          shell: bash\n          env:\n            PARAMETERS_SECRET: ${{ secrets.PARAMETERS_SECRET }}\n            GOLANG_PRIVATE_KEY_SECRET: ${{ secrets.GOLANG_PRIVATE_KEY_SECRET }}\n            CLOUD_PROVIDER: AWS\n            GO_TEST_PARAMS: ${{ inputs.goTestParams }} -tags=minicore_disabled\n            WIREMOCK_PORT: 14335\n            WIREMOCK_HTTPS_PORT: 13567\n          run: ./ci/test.sh\n    build-test-windows-minicore-disabled:\n      runs-on: windows-latest\n      name: Windows - minicore disabled\n      steps:\n        - uses: actions/checkout@v4\n        - uses: actions/setup-java@v4 # for wiremock\n          with:\n            java-version: 17\n            distribution: 'temurin'\n        - name: Setup go\n          uses: actions/setup-go@v5\n          with:\n            go-version: '1.25'\n        - uses: actions/setup-python@v5\n          with:\n            python-version: '3.x'\n            architecture: 'x64'\n        - name: Test\n          shell: cmd\n          env:\n            PARAMETERS_SECRET: ${{ secrets.PARAMETERS_SECRET }}\n            GOLANG_PRIVATE_KEY_SECRET: ${{ secrets.GOLANG_PRIVATE_KEY_SECRET }}\n            CLOUD_PROVIDER: AWS\n            GO_TEST_PARAMS: ${{ inputs.goTestParams }} -tags=minicore_disabled\n            WIREMOCK_PORT: 14335\n            WIREMOCK_HTTPS_PORT: 13567\n          run: ci\\\\test.bat\n    ecc:\n      runs-on: ubuntu-latest\n      strategy:\n        fail-fast: false\n      name: Elliptic curves check\n      steps:\n        - uses: 
actions/checkout@v4\n        - uses: actions/setup-java@v4 # for wiremock\n          with:\n            java-version: 17\n            distribution: 'temurin'\n        - name: Setup go\n          uses: actions/setup-go@v5\n          with:\n            go-version: '1.25'\n        - name: Test\n          shell: bash\n          env:\n            PARAMETERS_SECRET: ${{ secrets.PARAMETERS_SECRET }}\n            GOLANG_PRIVATE_KEY_SECRET: ${{ secrets.GOLANG_PRIVATE_KEY_SECRET }}\n            CLOUD_PROVIDER: AWS\n            GORACE: history_size=7\n            GO_TEST_PARAMS: ${{ inputs.goTestParams }} -run TestQueryViaHttps\n            WIREMOCK_PORT: 14335\n            WIREMOCK_HTTPS_PORT: 13567\n            WIREMOCK_ENABLE_ECDSA: true\n          run: ./ci/test.sh\n    build-test-rockylinux9:\n        runs-on: ubuntu-latest\n        strategy:\n            fail-fast: false\n            matrix:\n              cloud_go:\n              - cloud: 'AWS'\n                go: '1.24.2'\n              - cloud: 'AZURE'\n                go: '1.25.0'\n              - cloud: 'GCP'\n                go: '1.26.0'\n        name: ${{ matrix.cloud_go.cloud }} Go ${{ matrix.cloud_go.go }} on Rocky Linux 9\n        steps:\n            - uses: actions/checkout@v4\n            - name: Test\n              shell: bash\n              env:\n                PARAMETERS_SECRET: ${{ secrets.PARAMETERS_SECRET }}\n                GOLANG_PRIVATE_KEY_SECRET: ${{ secrets.GOLANG_PRIVATE_KEY_SECRET }}\n                CLOUD_PROVIDER: ${{ matrix.cloud_go.cloud }}\n                GORACE: history_size=7\n                GO_TEST_PARAMS: ${{ inputs.goTestParams }}\n                SEQUENTIAL_TESTS: ${{ inputs.sequentialTests }}\n                WIREMOCK_PORT: 14335\n                WIREMOCK_HTTPS_PORT: 13567\n              run: ./ci/test_rockylinux9_docker.sh ${{ matrix.cloud_go.go }}\n    build-test-ubuntu-arm:\n      runs-on: ubuntu-24.04-arm\n      strategy:\n        fail-fast: false\n        matrix:\n          
cloud_go:\n            - cloud: 'AWS'\n              go: '1.24'\n            - cloud: 'AZURE'\n              go: '1.25'\n            - cloud: 'GCP'\n              go: '1.26'\n      name: ${{ matrix.cloud_go.cloud }} Go ${{ matrix.cloud_go.go }} on Ubuntu ARM\n      steps:\n        - uses: actions/checkout@v4\n        - uses: actions/setup-java@v4 # for wiremock\n          with:\n            java-version: 17\n            distribution: 'temurin'\n        - name: Setup go\n          uses: actions/setup-go@v5\n          with:\n            go-version: ${{ matrix.cloud_go.go }}\n        - name: Test\n          shell: bash\n          env:\n            PARAMETERS_SECRET: ${{ secrets.PARAMETERS_SECRET }}\n            GOLANG_PRIVATE_KEY_SECRET: ${{ secrets.GOLANG_PRIVATE_KEY_SECRET }}\n            CLOUD_PROVIDER: ${{ matrix.cloud_go.cloud }}\n            GORACE: history_size=7\n            GO_TEST_PARAMS: ${{ inputs.goTestParams }}\n            WIREMOCK_PORT: 14335\n            WIREMOCK_HTTPS_PORT: 13567\n          run: ./ci/test.sh\n        - name: Upload test results to Codecov\n          if: ${{!cancelled()}}\n          uses: codecov/test-results-action@v1\n          with:\n            token: ${{ secrets.CODE_COV_UPLOAD_TOKEN }}\n        - name: Upload coverage to Codecov\n          uses: codecov/codecov-action@v5\n          with:\n            token: ${{ secrets.CODE_COV_UPLOAD_TOKEN }}\n    build-test-windows-arm:\n      runs-on: windows-11-arm\n      strategy:\n        fail-fast: false\n        matrix:\n          cloud_go:\n            - cloud: 'AWS'\n              go: '1.24'\n            - cloud: 'AZURE'\n              go: '1.25'\n            - cloud: 'GCP'\n              go: '1.26'\n      name: ${{ matrix.cloud_go.cloud }} Go ${{ matrix.cloud_go.go }} on Windows ARM\n      steps:\n        - uses: actions/checkout@v4\n        - uses: actions/setup-java@v4 # for wiremock\n          with:\n            java-version: 21\n            distribution: 'temurin'\n        - 
name: Setup go\n          uses: actions/setup-go@v5\n          with:\n            go-version: ${{ matrix.cloud_go.go }}\n        - uses: actions/setup-python@v5\n          with:\n            python-version: '3.x'\n            architecture: 'x64'\n        - name: Test\n          shell: cmd\n          env:\n            PARAMETERS_SECRET: ${{ secrets.PARAMETERS_SECRET }}\n            GOLANG_PRIVATE_KEY_SECRET: ${{ secrets.GOLANG_PRIVATE_KEY_SECRET }}\n            CLOUD_PROVIDER: ${{ matrix.cloud_go.cloud }}\n            GO_TEST_PARAMS: ${{ inputs.goTestParams }}\n            WIREMOCK_PORT: 14335\n            WIREMOCK_HTTPS_PORT: 13567\n          run: ci\\\\test.bat\n        - name: Upload test results to Codecov\n          if: ${{!cancelled()}}\n          uses: codecov/test-results-action@v1\n          with:\n            token: ${{ secrets.CODE_COV_UPLOAD_TOKEN }}\n        - name: Upload coverage to Codecov\n          uses: codecov/codecov-action@v5\n          with:\n            token: ${{ secrets.CODE_COV_UPLOAD_TOKEN }}\n"
  },
  {
    "path": ".github/workflows/changelog.yml",
    "content": "name: Changelog Check\n\non:\n  pull_request:\n    types: [opened, synchronize, labeled, unlabeled]\n\njobs:\n  check_change_log:\n    runs-on: ubuntu-latest\n    if: ${{!contains(github.event.pull_request.labels.*.name, 'NO-CHANGELOG-UPDATES')}}\n    steps:\n      - name: Checkout\n        uses: actions/checkout@v3\n        with:\n          fetch-depth: 0\n\n      - name: Ensure CHANGELOG.md is updated\n        run: git diff --name-only --diff-filter=ACMRT ${{ github.event.pull_request.base.sha }} ${{ github.sha }} | grep -wq \"CHANGELOG.md\"\n"
  },
  {
    "path": ".github/workflows/cla_bot.yml",
    "content": "name: \"CLA Assistant\"\non:\n  issue_comment:\n    types: [created]\n  pull_request_target:\n    types: [opened,closed,synchronize]\n\njobs:\n  CLAAssistant:\n    runs-on: ubuntu-latest\n    permissions:\n      actions: write\n      contents: write\n      pull-requests: write\n      statuses: write\n    steps:\n      - name: \"CLA Assistant\"\n        if: (github.event.comment.body == 'recheck' || github.event.comment.body == 'I have read the CLA Document and I hereby sign the CLA') || github.event_name == 'pull_request_target'\n        uses: contributor-assistant/github-action/@master\n        env:\n          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n          PERSONAL_ACCESS_TOKEN : ${{ secrets.CLA_BOT_TOKEN }}\n        with:\n          path-to-signatures: 'signatures/version1.json'\n          path-to-document: 'https://github.com/snowflakedb/CLA/blob/main/README.md'\n          branch: 'main'\n          allowlist: 'dependabot[bot],github-actions,Jenkins User,_jenkins,sfc-gh-snyk-sca-sa,snyk-bot'\n          remote-organization-name: 'snowflake-eng'\n          remote-repository-name: 'cla-db'\n"
  },
  {
    "path": ".github/workflows/jira_close.yml",
    "content": "name: Jira closure\n\non:\n  issues:\n    types: [closed, deleted]\n\njobs:\n  close-issue:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Extract issue from title\n        id: extract\n        env:\n          TITLE: \"${{ github.event.issue.title }}\"\n        run: |\n          jira=$(echo -n $TITLE | awk '{print $1}' | sed -e 's/://')\n          echo ::set-output name=jira::$jira\n\n      - name: Close Jira Issue\n        if: startsWith(steps.extract.outputs.jira, 'SNOW-')\n        env:\n          ISSUE_KEY: ${{ steps.extract.outputs.jira }}\n          JIRA_BASE_URL: ${{ secrets.JIRA_BASE_URL }}\n          JIRA_USER_EMAIL: ${{ secrets.JIRA_USER_EMAIL }}\n          JIRA_API_TOKEN: ${{ secrets.JIRA_API_TOKEN }}\n        run: |\n          JIRA_API_URL=\"${JIRA_BASE_URL}/rest/api/2/issue/${ISSUE_KEY}/transitions\"\n          curl -X POST \\\n            --url \"$JIRA_API_URL\" \\\n            --user \"${JIRA_USER_EMAIL}:${JIRA_API_TOKEN}\" \\\n            --header \"Content-Type: application/json\" \\\n            --data \"{\n              \\\"update\\\": {\n                \\\"comment\\\": [\n                  { \\\"add\\\": { \\\"body\\\": \\\"Closed on GitHub\\\" } }\n                ]\n              },\n              \\\"fields\\\": {\n                \\\"customfield_12860\\\": { \\\"id\\\": \\\"11506\\\" },\n                \\\"customfield_10800\\\": { \\\"id\\\": \\\"-1\\\" },\n                \\\"customfield_12500\\\": { \\\"id\\\": \\\"11302\\\" },\n                \\\"customfield_12400\\\": { \\\"id\\\": \\\"-1\\\" },\n                \\\"resolution\\\": { \\\"name\\\": \\\"Done\\\" }\n              },\n              \\\"transition\\\": { \\\"id\\\": \\\"71\\\" }\n            }\"\n"
  },
  {
    "path": ".github/workflows/jira_comment.yml",
    "content": "name: Jira comment\n\non:\n  issue_comment:\n    types: [created]\n\njobs:\n  comment-issue:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Jira login\n        uses: atlassian/gajira-login@master\n        env:\n          JIRA_API_TOKEN: ${{ secrets.JIRA_API_TOKEN }}\n          JIRA_BASE_URL: ${{ secrets.JIRA_BASE_URL }}\n          JIRA_USER_EMAIL: ${{ secrets.JIRA_USER_EMAIL }}\n      - name: Extract issue from title\n        id: extract\n        env:\n          TITLE: \"${{ github.event.issue.title }}\"\n        run: |\n          jira=$(echo -n $TITLE | awk '{print $1}' | sed -e 's/://')\n          echo ::set-output name=jira::$jira\n      - name: Comment on issue\n        uses: atlassian/gajira-comment@master\n        if: startsWith(steps.extract.outputs.jira, 'SNOW-') && github.event.comment.user.login != 'codecov[bot]'\n        with:\n          issue: \"${{ steps.extract.outputs.jira }}\"\n          comment: \"${{ github.event.comment.user.login }} commented:\\n\\n${{ github.event.comment.body }}\\n\\n${{ github.event.comment.html_url }}\"\n"
  },
  {
    "path": ".github/workflows/jira_issue.yml",
    "content": "name: Jira creation\n\non:\n  issues:\n    types: [opened]\n  issue_comment:\n    types: [created]\n\njobs:\n  create-issue:\n    runs-on: ubuntu-latest\n    permissions:\n      issues: write\n    if: ((github.event_name == 'issue_comment' && github.event.comment.body == 'recreate jira' && github.event.comment.user.login == 'sfc-gh-mkeller') || (github.event_name == 'issues' && github.event.pull_request.user.login != 'whitesource-for-github-com[bot]'))\n    steps:\n      - name: Create JIRA Ticket\n        id: create\n        env:\n          JIRA_BASE_URL: ${{ secrets.JIRA_BASE_URL }}\n          JIRA_USER_EMAIL: ${{ secrets.JIRA_USER_EMAIL }}\n          JIRA_API_TOKEN: ${{ secrets.JIRA_API_TOKEN }}\n          ISSUE_TITLE: ${{ github.event.issue.title }}\n          ISSUE_BODY: ${{ github.event.issue.body }}\n          ISSUE_URL: ${{ github.event.issue.html_url }}\n        run: |\n          # debug\n          #set -x\n          TMP_BODY=$(mktemp)\n          trap \"rm -f $TMP_BODY\" EXIT\n\n          # Escape special characters in title and body\n          TITLE=$(echo \"${ISSUE_TITLE//`/\\\\`}\" | sed 's/\"/\\\\\"/g' | sed \"s/'/\\\\\\'/g\")\n          echo \"${ISSUE_BODY//`/\\\\`}\" | sed 's/\"/\\\\\"/g' | sed \"s/'/\\\\\\'/g\" > $TMP_BODY\n          echo -e \"\\n\\n_Created from GitHub Action_ for $ISSUE_URL\" >> $TMP_BODY\n          BODY=$(cat \"$TMP_BODY\")\n\n          PAYLOAD=$(jq -n \\\n          --arg issuetitle \"$TITLE\" \\\n          --arg issuebody \"$BODY\" \\\n          '{\n            fields: {\n              project: { key: \"SNOW\" },\n              issuetype: { name: \"Bug\" },\n              summary: $issuetitle,\n              description: $issuebody,\n              customfield_11401: { id: \"14723\" },\n              assignee: { id: \"712020:e527ae71-55cc-4e02-9217-1ca4ca8028a2\" },\n              components: [{ id: \"19286\" }],\n              labels: [\"oss\"],\n              priority: { id: \"10001\" }\n            }\n          
}')\n\n          # Create JIRA issue using REST API\n          RESPONSE=$(curl -s -X POST \\\n            -H \"Content-Type: application/json\" \\\n            -H \"Accept: application/json\" \\\n            -u \"$JIRA_USER_EMAIL:$JIRA_API_TOKEN\" \\\n            \"$JIRA_BASE_URL/rest/api/2/issue\" \\\n            -d \"$PAYLOAD\")\n\n          # Extract JIRA issue key from response\n          JIRA_KEY=$(echo \"$RESPONSE\" | jq -r '.key')\n\n          if [ \"$JIRA_KEY\" = \"null\" ] || [ -z \"$JIRA_KEY\" ]; then\n            echo \"Failed to create JIRA issue\"\n            echo \"Response: $RESPONSE\"\n            echo \"Request payload: $PAYLOAD\"\n            exit 1\n          fi\n\n          echo \"Created JIRA issue: $JIRA_KEY\"\n          echo \"jira_key=$JIRA_KEY\" >> $GITHUB_OUTPUT\n\n      - name: Update GitHub Issue\n        env:\n          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n          REPOSITORY: ${{ github.repository }}\n          ISSUE_NUMBER: ${{ github.event.issue.number }}\n          JIRA_KEY: ${{ steps.create.outputs.jira_key }}\n          ISSUE_TITLE: ${{ github.event.issue.title }}\n        run: |\n          TITLE=$(echo \"${ISSUE_TITLE//`/\\\\`}\" | sed 's/\"/\\\\\"/g' | sed \"s/'/\\\\\\'/g\")\n          PAYLOAD=$(jq -n \\\n          --arg issuetitle \"$TITLE\" \\\n          --arg jirakey \"$JIRA_KEY\" \\\n          '{\n            title: ($jirakey + \": \" + $issuetitle)\n          }')\n\n          # Update Github issue title with jira id\n          curl -s \\\n            -X PATCH \\\n            -H \"Authorization: Bearer $GITHUB_TOKEN\" \\\n            -H \"Accept: application/vnd.github+json\" \\\n            -H \"X-GitHub-Api-Version: 2022-11-28\" \\\n            \"https://api.github.com/repos/$REPOSITORY/issues/$ISSUE_NUMBER\" \\\n            -d \"$PAYLOAD\"\n\n          if [ \"$?\" != 0 ]; then\n            echo \"Failed to update GH issue. Payload was:\"\n            echo \"$PAYLOAD\"\n            exit 1\n          fi\n"
  },
  {
    "path": ".github/workflows/semgrep.yml",
    "content": "name: Run semgrep checks\n\non:\n  pull_request:\n      branches: [main, master]\n\npermissions:\n  contents: read\n\njobs:\n  run-semgrep-reusable-workflow:\n    uses: snowflakedb/reusable-workflows/.github/workflows/semgrep-v2.yml@main\n    secrets:\n      token: ${{ secrets.SEMGREP_APP_TOKEN }}\n"
  },
  {
    "path": ".gitignore",
    "content": "*.DS_Store\n.idea/\n.vscode/\nparameters*.json\nparameters*.bat\n*.p8\ncoverage.txt\nfuzz-*/\n/select1\n/selectmany\n/verifycert\nwss-golang-agent.config\nwss-unified-agent.jar\nwhitesource/\n*.swp\ncp.out\n__debug_bin*\ntest-output.txt\ntest-report.junit.xml\n\n# exclude vendor\nvendor\n\n# SSH private key for WIF tests\nci/wif/parameters/rsa_wif_aws_azure\nci/wif/parameters/rsa_wif_gcp\n"
  },
  {
    "path": ".golangci.yml",
    "content": "version: \"2\"\n\nrun:\n  tests: true\n\nlinters:\n  exclusions:\n    rules:\n      - path: \"_test.go\"\n        linters:\n          - errcheck\n      - path: \"cmd/\"\n        linters:\n          - errcheck\n      - path: \"_test.go\"\n        linters:\n          - staticcheck\n        text: \"implement StmtQueryContext\"\n      - path: \"_test.go\"\n        linters:\n          - staticcheck\n        text: \"implement StmtExecContext\"\n      - linters:\n          - staticcheck\n        text: \"QF1001\"\n      - linters:\n          - staticcheck\n        text: \"SA1019: .+\\\\.(LoginTimeout|RequestTimeout|JWTExpireTimeout|ClientTimeout|JWTClientTimeout|ExternalBrowserTimeout|CloudStorageTimeout|Tracing) is deprecated\""
  },
  {
    "path": ".pre-commit-config.yaml",
    "content": "repos:\n- repo: git@github.com:snowflakedb/casec_precommit.git # SSH\n# - repo: https://github.com/snowflakedb/casec_precommit.git # HTTPS\n  rev: v1.5\n  hooks:\n  - id: snapps-secret-scanner\n"
  },
  {
    "path": ".windsurf/rules/go.md",
    "content": "---\ntrigger: glob\ndescription: \nglobs: **/*.go\n---\n\n# Go files rules\n\n## General\n\n1. Unless it's necessary or told otherwise, try reusing existing files, both for implementation and tests.\n2. If possible, try running relevant tests.\n\n## Tests\n\n1. Create a test file with the name same as prod code file by default.\n2. For assertions use our test helpers defined in assert_test.go.\n\n## Logging\n\n1. Add reasonable logging - don't repeat logs, but add them when it's meaningful.\n2. Always consider log levels."
  },
  {
    "path": "CHANGELOG.md",
    "content": "# Changelog\n\n## Upcoming release\n\nBug fixes:\n\n- Fixed empty `Account` when connecting with programmatic `Config` and `database/sql.Connector` by deriving `Account` from the first DNS label of `Host` in `FillMissingConfigParameters` when `Host` matches the Snowflake hostname pattern (snowflakedb/gosnowflake#1772).\n\n## 2.0.1\n\nBug fixes:\n\n- Fixed default `CrlDownloadMaxSize` to be 20MB instead of 200MB, as the previous value was set too high and could cause out-of-memory issues (snowflakedb/gosnowflake#1735).\n- Replaced global `paramsMutex` with per-connection `syncParams` to encapsulate parameter synchronization and avoid cross-connection contention (snowflakedb/gosnoflake#1747).\n- `Config.Params` map is not modified anymore, to avoid changing parameter values across connections of the same connection pool (snowflakedb/gosnowflake#1747).\n- Set `BlobContentMD5` on Azure uploads so that multi-part uploads have the blob content-MD5 property populated (snowflakedb/gosnowflake#1757).\n- Fixed 403 errors from Google/GCP/GCS PUT queries on versioned stages (snowflakedb/gosnowflake#1760).\n- Fixed not updating query context cache for failed queries (snowflakedb/gosnowflake#1763).\n\nInternal changes:\n\n- Moved configuration to a dedicated internal package (snowflakedb/gosnowflake#1720).\n- Modernized Go syntax idioms throughout the codebase.\n- Added libc family, version and dynamic linking marker to client environment telemetry (snowflakedb/gosnowflake#1750).\n- Bumped a few libraries to fix vulnerabilities (snowflakedb/gosnowflake#1751, snowflakedb/gosnowflake#1756).\n- Depointerised query context cache in `snowflakeConn` (snowflakedb/gosnowflake#1763).\n\n## 2.0.0\n\nBreaking changes:\n\n- Removed `RaisePutGetError` from `SnowflakeFileTransferOptions` - current behaviour is aligned to always raise errors for PUT/GET operations (snowflakedb/gosnowflake#1690).\n- Removed `GetFileToStream` from `SnowflakeFileTransferOptions` - using 
`WithFileGetStream` automatically enables file streaming for GETs (snowflakedb/gosnowflake#1690).\n- Renamed `WithFileStream` to `WithFilePutStream` for consistency (snowflakedb/gosnowflake#1690).\n- `Array` function now returns error for unsupported types (snowflakedb/gosnowflake#1693).\n- `WithMultiStatement` does not return error anymore (snowflakedb/gosnowflake#1693).\n- `WithOriginalTimestamp` is removed, use `WithArrowBatchesTimestampOption(UseOriginalTimestamp)` instead (snowflakedb/gosnowflake#1693).\n- `WithMapValuesNullable` and `WithArrayValuesNullable` combined into one option `WithEmbeddedValuesNullable` (snowflakedb/gosnowflake#1693).\n- Hid streaming chunk downloader. It will be removed completely in the future (snowflakedb/gosnowflake#1696).\n- Maximum number of chunk download goroutines is now configured with `CLIENT_PREFETCH_THREADS` session parameter (snowflakedb/gosnowflake#1696)\n  and defaults to 4.\n- Fixed typo in `GOSNOWFLAKE_SKIP_REGISTRATION` env variable (snowflakedb/gosnowflake#1696).\n- Removed `ClientIP` field from `Config` struct. This field was never used and is not needed for any functionality (snowflakedb/gosnowflake#1692).\n- Unexported MfaToken and IdToken (snowflakedb/gosnowflake#1692).\n- Removed `InsecureMode` field from `Config` struct. Use `DisableOCSPChecks` instead (snowflakedb/gosnowflake#1692).\n- Renamed `KeepSessionAlive` field in `Config` struct to `ServerSessionKeepAlive` to align with the remaining drivers (snowflakedb/gosnowflake#1692).\n- Removed `DisableTelemetry` field from `Config` struct. Use `CLIENT_TELEMETRY_ENABLED` session parameter instead (snowflakedb/gosnowflake#1692).\n- Removed stream chunk downloader. Use the regular, default downloader instead (snowflakedb/gosnowflake#1702).\n- Removed `SnowflakeTransport`. 
Use `Config.Transporter` or simply register your own TLS config with `RegisterTLSConfig` if you just need a custom root certificates set (snowflakedb/gosnowflake#1703).\n- Arrow batches changes (snowflakedb/gosnowflake#1706):\n  - Arrow batches have been extracted to a separate package. It should significantly drop the compiled binary size for those who don't need arrow batches (~34MB -> ~18MB).\n  - Removed `GetArrowBatches` from `SnowflakeRows` and `SnowflakeResult`. Use `arrowbatches.GetArrowBatches(rows.(SnowflakeRows))` instead.\n  - Migrated functions:\n    - `sf.WithArrowBatchesTimestampOption` -> `arrowbatches.WithTimestampOption`\n    - `sf.WithArrowBatchesUtf8Validation` -> `arrowbatches.WithUtf8Validation`\n    - `sf.ArrowSnowflakeTimestampToTime` -> `arrowbatches.ArrowSnowflakeTimestampToTime`\n- Logging changes (snowflakedb/gosnowflake#1710):\n  - Removed Logrus logger and migrated to slog.\n  - Simplified `SFLogger` interface.\n  - Added `SFSlogLogger` interface for setting custom slog handler.\n\nNew features:\n\n- Added support for Go 1.26, dropped support for Go 1.23 (snowflakedb/gosnowflake#1707).\n- Added support for FIPS-only mode (snowflakedb/gosnowflake#1496).\n\nBug fixes:\n\n- Added panic recovery block for stage file upload and download operations (snowflakedb/gosnowflake#1687).\n- Fixed WIF metadata request from Azure container, manifested with HTTP 400 error (snowflakedb/gosnowflake#1701).\n- Fixed SAML authentication port validation bypass in `isPrefixEqual` where the second URL's port was never checked (snowflakedb/gosnowflake#1712).\n- Fixed a race condition in OCSP cache clearer (snowflakedb/gosnowflake#1704).\n- The query `context.Context` is now propagated to cloud storage operations for PUT and GET queries, allowing for better cancellation 
handling (snowflakedb/gosnowflake#1690).\n- Fixed `tokenFilePath` DSN parameter triggering false validation error claiming both `token` and `tokenFilePath` were specified when only `tokenFilePath` was provided in the DSN string (snowflakedb/gosnowflake#1715).\n- Fixed minicore crash (SIGFPE) on fully statically linked Linux binaries by detecting static linking via ELF PT_INTERP inspection and skipping `dlopen` gracefully (snowflakedb/gosnowflake#1721).\n\nInternal changes:\n\n- Moved configuration to a dedicated internal package (snowflakedb/gosnowflake#1720).\n\n## 1.19.0\n\nNew features:\n\n- Added ability to disable minicore loading at compile time (snowflakedb/gosnowflake#1679).\n- Exposed `tokenFilePath` in `Config` (snowflakedb/gosnowflake#1666).\n- `tokenFilePath` is now read for every new connection (snowflakedb/gosnowflake#1666).\n- Added support for identity impersonation when using workload identity federation (snowflakedb/gosnowflake#1652, snowflakedb/gosnowflake#1660).\n\nBug fixes:\n\n- Fixed getting file from an unencrypted stage (snowflakedb/gosnowflake#1672).\n- Fixed minicore file name gathering in client environment (snowflakedb/gosnowflake#1661).\n- Fixed file descriptor leaks in cloud storage calls (snowflakedb/gosnowflake#1682)\n- Fixed path escaping for GCS urls (snowflakedb/gosnowflake#1678).\n\nInternal changes:\n\n- Improved Linux telemetry gathering (snowflakedb/gosnowflake#1677).\n- Improved some logs returned from cloud storage clients (snowflakedb/gosnowflake#1665).\n\n## 1.18.1\n\nBug fixes:\n\n- Handle HTTP307 & 308 in drivers to achieve better resiliency to backend errors (snowflakedb/gosnowflake#1616).\n- Create temp directory only if needed during file transfer (snowflakedb/gosnowflake#1647)\n- Fix unnecessary user expansion for file paths (snowflakedb/gosnowflake#1646).\n\nInternal changes:\n- Remove spammy \"telemetry disabled\" log messages (snowflakedb/gosnowflake#1638).\n- Introduced shared library ([source 
code](https://github.com/snowflakedb/universal-driver/tree/main/sf_mini_core)) for extended telemetry to identify and prepare testing platform for native rust extensions (snowflakedb/gosnowflake#1629)\n\n## 1.18.0\n\nNew features:\n\n- Added validation of CRL `NextUpdate` for freshly downloaded CRLs (snowflakedb/gosnowflake#1617)\n- Exposed function to send arbitrary telemetry data (snowflakedb/gosnowflake#1627)\n- Added logging of query text and parameters (snowflakedb/gosnowflake#1625)\n\nBug fixes:\n\n- Fixed a data race error in tests caused by platform_detection init() function (snowflakedb/gosnowflake#1618)\n- Make secrets detector initialization thread safe and more maintainable (snowflakedb/gosnowflake#1621)\n\nInternal changes:\n\n- Added ISA to login request telemetry (snowflakedb/gosnowflake#1620)\n\n## 1.17.1\n\n- Fix unsafe reflection of nil pointer on DECFLOAT func in bind uploader (snowflakedb/gosnowflake#1604).\n- Added temporary download files cleanup (snowflakedb/gosnowflake#1577)\n- Marked fields as deprecated (snowflakedb/gosnowflake#1556)\n- Exposed `QueryStatus` from `SnowflakeResult` and `SnowflakeRows` in `GetStatus()` function (snowflakedb/gosnowflake#1556)\n- Split timeout settings into separate groups based on target service types (snowflakedb/gosnowflake#1531)\n- Added small clarification in oauth.go example on token escaping (snowflakedb/gosnowflake#1574)\n- Ensured proper permissions for CRL cache directory (snowflakedb/gosnowflake#1588)\n- Added `CrlDownloadMaxSize` to limit the size of CRL downloads (snowflakedb/gosnowflake#1588)\n- Added platform telemetry to login requests. 
Can be disabled with `SNOWFLAKE_DISABLE_PLATFORM_DETECTION` environment variable (snowflakedb/gosnowflake#1601)\n- Bypassed proxy settings for WIF metadata requests (snowflakedb/gosnowflake#1593)\n- Fixed a bug where GCP PUT/GET operations would fail when the connection context was cancelled (snowflakedb/gosnowflake#1584)\n- Fixed nil pointer dereference while calling long-running queries (snowflakedb/gosnowflake#1592) (snowflakedb/gosnowflake#1596)\n- Moved keyring-based secure storage manager into separate file to avoid the need to initialize keyring on Linux (snowflakedb/gosnowflake#1595)\n- Enabled official support for RHEL9 by testing and enabling CI/CD checks for Rocky Linux (snowflakedb/gosnowflake#1597)\n- Improved logging (snowflakedb/gosnowflake#1570)\n\n## 1.17.0\n\n- Added ability to configure OCSP per connection (snowflakedb/gosnowflake#1528)\n- Added `DECFLOAT` support, see details in `doc.go` (snowflakedb/gosnowflake#1504, snowflakedb/gosnowflake#1506)\n- Added support for Go 1.25, dropped support for Go 1.22 (snowflakedb/gosnowflake#1544)\n- Added proxy options to connection parameters (snowflakedb/gosnowflake#1511)\n- Added `client_session_keep_alive_heartbeat_frequency` connection param (snowflakedb/gosnowflake#1576)\n- Added support for multi-part downloads for S3, Azure and GCP (snowflakedb/gosnowflake#1549)\n- Added `singleAuthenticationPrompt` to control whether only one authentication should be performed at the same time for authentications that need human interaction (like MFA or OAuth authorization code). Default is true. 
(snowflakedb/gosnowflake#1561)\n- Fixed missing `DisableTelemetry` option in connection parameters (snowflakedb/gosnowflake#1520)\n- Fixed multistatements in large result sets (snowflakedb/gosnowflake#1539, snowflakedb/gosnowflake#1543, snowflakedb/gosnowflake#1547)\n- Fixed unnecessary retries when context is cancelled (snowflakedb/gosnowflake#1540)\n- Fixed regression in TOML connection file (snowflakedb/gosnowflake#1530)\n\n## Prior Releases\n\nRelease notes available at https://docs.snowflake.com/en/release-notes/clients-drivers/golang\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing Guidelines\n\n## Reporting Issues\n\nBefore creating a new Issue, please check first if a similar Issue [already exists](https://github.com/snowflakedb/gosnowflake/issues?state=open) or was [recently closed](https://github.com/snowflakedb/gosnowflake/issues?direction=desc&page=1&sort=updated&state=closed).\n\n## Contributing Code\n\nBy contributing to this project, you share your code under the Apache License 2, as specified in the LICENSE file.\n\n### Code Review\n\nEveryone is invited to review and comment on pull requests.\nIf it looks fine to you, comment with \"LGTM\" (Looks good to me).\n\nIf changes are required, notice the reviewers with \"PTAL\" (Please take another look) after committing the fixes.\n\nBefore merging the Pull Request, at least one Snowflake team member must have commented with \"LGTM\".\n"
  },
  {
    "path": "Jenkinsfile",
    "content": "@Library('pipeline-utils')\nimport com.snowflake.DevEnvUtils\nimport groovy.json.JsonOutput\n\n\ntimestamps {\n  node('high-memory-node') {\n    stage('checkout') {\n      scmInfo = checkout scm\n      println(\"${scmInfo}\")\n      env.GIT_BRANCH = scmInfo.GIT_BRANCH\n      env.GIT_COMMIT = scmInfo.GIT_COMMIT\n    }\n    params = [\n      string(name: 'svn_revision', value: 'temptest-deployed'),\n      string(name: 'branch', value: 'main'),\n      string(name: 'client_git_commit', value: scmInfo.GIT_COMMIT),\n      string(name: 'client_git_branch', value: scmInfo.GIT_BRANCH),\n      string(name: 'TARGET_DOCKER_TEST_IMAGE', value: 'go-chainguard-go1_24'),\n      string(name: 'parent_job', value: env.JOB_NAME),\n      string(name: 'parent_build_number', value: env.BUILD_NUMBER)\n    ]\n    \n    stage('Authenticate Artifactory') {\n      script {\n        new DevEnvUtils().withSfCli {\n          sh \"sf artifact oci auth\"\n        }\n      }\n    }\n\n    parallel(\n      'Test': {\n        stage('Test') {\n          build job: 'RT-LanguageGo-PC', parameters: params\n        }\n      },\n      'Test Authentication': {\n        stage('Test Authentication') {\n          withCredentials([\n            string(credentialsId: 'sfctest0-parameters-secret', variable: 'PARAMETERS_SECRET')\n          ]) {\n            sh '''\\\n            |#!/bin/bash -e\n            |$WORKSPACE/ci/test_authentication.sh\n            '''.stripMargin()\n          }\n        }\n      },\n      'Test WIF Auth': {\n        stage('Test WIF Auth') {\n          withCredentials([\n            string(credentialsId: 'sfctest0-parameters-secret', variable: 'PARAMETERS_SECRET'),\n          ]) {\n            sh '''\\\n            |#!/bin/bash -e\n            |$WORKSPACE/ci/test_wif.sh\n            '''.stripMargin()\n          }\n        }\n      },\n      'Test Revocation Validation': {\n        stage('Test Revocation Validation') {\n          withCredentials([\n            
usernamePassword(credentialsId: 'jenkins-snowflakedb-github-app',\n              usernameVariable: 'GITHUB_USER',\n              passwordVariable: 'GITHUB_TOKEN')\n          ]) {\n            try {\n              sh '''\\\n              |#!/bin/bash -e\n              |chmod +x $WORKSPACE/ci/test_revocation.sh\n              |$WORKSPACE/ci/test_revocation.sh\n              '''.stripMargin()\n            } finally {\n              archiveArtifacts artifacts: 'revocation-results.json,revocation-report.html', allowEmptyArchive: true\n              publishHTML(target: [\n                allowMissing: true,\n                alwaysLinkToLastBuild: true,\n                keepAll: true,\n                reportDir: '.',\n                reportFiles: 'revocation-report.html',\n                reportName: 'Revocation Validation Report'\n              ])\n            }\n          }\n        }\n      }\n    )\n  }\n}\n\n\npipeline {\n  agent { label 'high-memory-node' }\n  options { timestamps() }\n  environment {\n    COMMIT_SHA_LONG = sh(returnStdout: true, script: \"echo \\$(git rev-parse \" + \"HEAD)\").trim()\n\n    // environment variables for semgrep_agent (for findings / analytics page)\n    // remove .git at the end\n    // remove SCM URL + .git at the end\n\n    BASELINE_BRANCH = \"${env.CHANGE_TARGET}\"\n  }\n  stages {\n    stage('Checkout') {\n      steps {\n        checkout scm\n      }\n    }\n  }\n}\n\ndef wgetUpdateGithub(String state, String folder, String targetUrl, String seconds) {\n    def ghURL = \"https://api.github.com/repos/snowflakedb/gosnowflake/statuses/$COMMIT_SHA_LONG\"\n    def data = JsonOutput.toJson([state: \"${state}\", context: \"jenkins/${folder}\",target_url: \"${targetUrl}\"])\n    sh \"wget ${ghURL} --spider -q --header='Authorization: token $GIT_PASSWORD' --post-data='${data}'\"\n}\n"
  },
  {
    "path": "LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"{}\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright (c) 2017-2022 Snowflake Computing Inc. All rights reserved.\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "Makefile",
    "content": "NAME:=gosnowflake\nVERSION:=$(shell git describe --tags --abbrev=0)\nREVISION:=$(shell git rev-parse --short HEAD)\nCOVFLAGS:=\n\n## Run fmt, lint and test\nall: fmt lint cov\n\ninclude gosnowflake.mak\n\n## Run tests\ntest_setup: test_teardown\n\tpython3 ci/scripts/hang_webserver.py 12345 &\n\ntest_teardown:\n\tpkill -9 hang_webserver || true\n\ntest: deps test_setup\n\t./ci/scripts/execute_tests.sh\n\n## Run Coverage tests\ncov:\n\tmake test COVFLAGS=\"-coverprofile=coverage.txt -covermode=atomic\"\n\n\n\n## Lint\nlint: clint\n\n## Format source codes\nfmt: cfmt\n\t@for c in $$(ls cmd); do \\\n\t\t(cd cmd/$$c; make fmt); \\\n\tdone\n\n## Install sample programs\ninstall:\n\tfor c in $$(ls cmd); do \\\n\t\t(cd cmd/$$c;  GOBIN=$$GOPATH/bin go install $$c.go); \\\n\tdone\n\n## Build fuzz tests\nfuzz-build:\n\tfor c in $$(ls | grep -E \"fuzz-*\"); do \\\n\t\t(cd $$c; make fuzz-build); \\\n\tdone\n\n## Run fuzz-dsn\nfuzz-dsn:\n\t(cd fuzz-dsn; go-fuzz -bin=./dsn-fuzz.zip -workdir=.)\n\n.PHONY: setup deps update test lint help fuzz-dsn\n"
  },
  {
    "path": "README.md",
    "content": "## Migrating to v2\n\n**Version 2.0.0 of the Go Snowflake Driver was released on March 3rd, 2026.** This major version includes breaking changes that require code updates when migrating from v1.x.\n\n### Key Changes and Migration Steps\n\n#### 1. Update Import Paths\n\nUpdate your `go.mod` to use v2:\n\n```sh\ngo get -u github.com/snowflakedb/gosnowflake/v2\n```\n\nUpdate imports in your code:\n\n```go\n// Old (v1)\nimport \"github.com/snowflakedb/gosnowflake\"\n\n// New (v2)\nimport \"github.com/snowflakedb/gosnowflake/v2\"\n```\n\n#### 2. Arrow Batches Moved to Separate Package\n\nThe public Arrow batches API now lives in `github.com/snowflakedb/gosnowflake/v2/arrowbatches`.\nImporting that sub-package pulls in the additional Arrow compute dependency only for applications\nthat use Arrow batches directly.\n\n**Migration:**\n\n```go\nimport (\n    \"context\"\n    \"database/sql/driver\"\n\n    sf \"github.com/snowflakedb/gosnowflake/v2\"\n    \"github.com/snowflakedb/gosnowflake/v2/arrowbatches\"\n)\n\nctx := arrowbatches.WithArrowBatches(context.Background())\n\nvar rows driver.Rows\nerr := conn.Raw(func(x any) error {\n    rows, err = x.(driver.QueryerContext).QueryContext(ctx, query, nil)\n    return err\n})\nif err != nil {\n    // handle error\n}\n\nbatches, err := arrowbatches.GetArrowBatches(rows.(sf.SnowflakeRows))\nif err != nil {\n    // handle error\n}\n```\n\n**Optional helper mapping:**\n- `sf.WithArrowBatchesTimestampOption` → `arrowbatches.WithTimestampOption`\n- `sf.WithArrowBatchesUtf8Validation` → `arrowbatches.WithUtf8Validation`\n- `sf.ArrowSnowflakeTimestampToTime` → `arrowbatches.ArrowSnowflakeTimestampToTime`\n- `sf.WithOriginalTimestamp` → `arrowbatches.WithTimestampOption(ctx, arrowbatches.UseOriginalTimestamp)`\n\n#### 3. 
Configuration Struct Changes\n\n**Renamed fields:**\n```go\n// Old (v1)\nconfig := &gosnowflake.Config{\n    KeepSessionAlive: true,\n    InsecureMode: true,\n    DisableTelemetry: true,\n}\n\n// New (v2)\nconfig := &gosnowflake.Config{\n    ServerSessionKeepAlive: true,  // Renamed for consistency with other drivers\n    DisableOCSPChecks: true,        // Replaces InsecureMode\n    // DisableTelemetry removed - use CLIENT_TELEMETRY_ENABLED session parameter\n}\n```\n\n**Removed fields:**\n- `ClientIP` - No longer used\n- `MfaToken` and `IdToken` - Now unexported\n- `DisableTelemetry` - Use `CLIENT_TELEMETRY_ENABLED` session parameter instead\n\n#### 4. Logger Changes\n\nThe built-in logger is now based on Go's standard `log/slog`:\n\n```go\nlogger := gosnowflake.GetLogger()\n_ = logger.SetLogLevel(\"debug\")\n```\n\nFor custom logging, continue implementing `SFLogger`.\nIf you want to customize the built-in slog handler, type-assert `GetLogger()` to `SFSlogLogger`\nand call `SetHandler`.\n\n#### 5. File Transfer Changes\n\n**Configuration options:**\n\n```go\n// Old (v1)\noptions := &gosnowflake.SnowflakeFileTransferOptions{\n    RaisePutGetError: true,\n    GetFileToStream: true,\n}\nctx = gosnowflake.WithFileStream(ctx, stream)\n\n// New (v2)\n// RaisePutGetError removed - errors always raised\n// GetFileToStream removed - use WithFileGetStream instead\nctx = gosnowflake.WithFilePutStream(ctx, stream)  // Renamed from WithFileStream\nctx = gosnowflake.WithFileGetStream(ctx, stream)  // For GET operations\n```\n\n#### 6. Context and Function Changes\n\n```go\n// Old (v1)\nctx, err := gosnowflake.WithMultiStatement(ctx, 0)\nif err != nil {\n    // handle error\n}\n\n// New (v2)\nctx = gosnowflake.WithMultiStatement(ctx, 0)  // No error returned\n```\n\n```go\n// Old (v1)\nvalues := gosnowflake.Array(data)\n\n// New (v2)\nvalues, err := gosnowflake.Array(data)  // Now returns error for unsupported types\nif err != nil {\n    // handle error\n}\n```\n\n#### 7. 
Nullable Options Combined\n\n```go\n// Old (v1)\nctx = gosnowflake.WithMapValuesNullable(ctx)\nctx = gosnowflake.WithArrayValuesNullable(ctx)\n\n// New (v2)\nctx = gosnowflake.WithEmbeddedValuesNullable(ctx)  // Handles both maps and arrays\n```\n\n#### 8. Session Parameter Changes\n\n**Chunk download workers:**\n\n```go\n// Old (v1)\ngosnowflake.MaxChunkDownloadWorkers = 10  // Global variable\n\n// New (v2)\n// Configure via CLIENT_PREFETCH_THREADS session parameter.\n// NOTE: The default is 4.\ndb.Exec(\"ALTER SESSION SET CLIENT_PREFETCH_THREADS = 10\")\n```\n\n#### 9. Transport Configuration\n\n```go\nimport \"crypto/tls\"\n\n// Old (v1)\ngosnowflake.SnowflakeTransport = yourTransport\n\n// New (v2)\nconfig := &gosnowflake.Config{\n    Transporter: yourCustomTransport,\n}\n\n// Or, if you only need custom TLS settings/certificates:\ntlsConfig := &tls.Config{\n    // ...\n}\n_ = gosnowflake.RegisterTLSConfig(\"custom\", tlsConfig)\nconfig.TLSConfigName = \"custom\"\n```\n\n#### 10. Environment Variable Fix\n\nIf you use the skip registration environment variable:\n\n```sh\n# Old (v1)\nGOSNOWFLAKE_SKIP_REGISTERATION=true  # Note the typo\n\n# New (v2)\nGOSNOWFLAKE_SKIP_REGISTRATION=true  # Typo fixed\n```\n\n### Additional Resources\n\n- Full list of changes: See [CHANGELOG.md](./CHANGELOG.md)\n- Questions or issues: [GitHub Issues](https://github.com/snowflakedb/gosnowflake/issues)\n\n\n## Support\n\nFor official support and urgent, production-impacting issues, please [contact Snowflake Support](https://community.snowflake.com/s/article/How-To-Submit-a-Support-Case-in-Snowflake-Lodge).\n\n# Go Snowflake Driver\n\n<a href=\"https://codecov.io/github/snowflakedb/gosnowflake?branch=master\">\n    <img alt=\"Coverage\" src=\"https://codecov.io/github/snowflakedb/gosnowflake/coverage.svg?branch=master\">\n</a>\n<a href=\"https://github.com/snowflakedb/gosnowflake/actions?query=workflow%3A%22Build+and+Test%22\">\n    <img 
src=\"https://github.com/snowflakedb/gosnowflake/workflows/Build%20and%20Test/badge.svg?branch=master\">\n</a>\n<a href=\"http://www.apache.org/licenses/LICENSE-2.0.txt\">\n    <img src=\"http://img.shields.io/:license-Apache%202-brightgreen.svg\">\n</a>\n<a href=\"https://goreportcard.com/report/github.com/snowflakedb/gosnowflake\">\n    <img src=\"https://goreportcard.com/badge/github.com/snowflakedb/gosnowflake\">\n</a>\n\nThis topic provides instructions for installing, running, and modifying the Go Snowflake Driver. The driver supports Go's [database/sql](https://golang.org/pkg/database/sql/) package.\n\n# Prerequisites\n\nThe following software packages are required to use the Go Snowflake Driver.\n\n## Go\n\nThe latest driver requires the [Go language](https://golang.org/) 1.24 or higher. The supported operating systems are 64-bit Linux, macOS, and Windows, but you may run the driver on other platforms if the Go language works correctly on those platforms.\n\n# Installation\n\nIf you don't have a project initialized, set it up.\n\n```sh\ngo mod init example.com/snowflake\n```\n\nGet the gosnowflake source code, if it is not already installed.\n\n```sh\ngo get -u github.com/snowflakedb/gosnowflake/v2\n```\n\n# Docs\n\nFor detailed documentation and basic usage examples, please see the documentation at\n[godoc.org](https://godoc.org/github.com/snowflakedb/gosnowflake/v2).\n\n## Notes\n\nThis driver currently does not support GCP regional endpoints. Please ensure that any workloads using this driver do not require support for regional endpoints on GCP. If you have questions about this, please contact Snowflake Support.\n\nThe driver uses a Rust library called sf_mini_core; you can find its source code [here](https://github.com/snowflakedb/universal-driver/tree/main/sf_mini_core).\n\n# Sample Programs\n\nSnowflake provides a set of sample programs to test with. Set the environment variable ``$GOPATH`` to the top directory of your workspace, e.g., ``~/go``, and make certain to\ninclude ``$GOPATH/bin`` in the environment variable ``$PATH``. Run the ``make`` command to build all sample programs.\n\n```sh\nmake install\n```\n\nIn the following example, the program ``select1.go`` is built and installed in ``$GOPATH/bin`` and can be run from the command line:\n\n```sh\nSNOWFLAKE_TEST_ACCOUNT=<your_account> \\\nSNOWFLAKE_TEST_USER=<your_user> \\\nSNOWFLAKE_TEST_PASSWORD=<your_password> \\\nselect1\nCongrats! You have successfully run SELECT 1 with Snowflake DB!\n```\n\n# Development\n\nThe developer notes are hosted with the source code on [GitHub](https://github.com/snowflakedb/gosnowflake/v2).\n\n## Testing Code\n\nSet the Snowflake connection info in ``parameters.json``:\n\n```json\n{\n    \"testconnection\": {\n        \"SNOWFLAKE_TEST_USER\":      \"<your_user>\",\n        \"SNOWFLAKE_TEST_PASSWORD\":  \"<your_password>\",\n        \"SNOWFLAKE_TEST_ACCOUNT\":   \"<your_account>\",\n        \"SNOWFLAKE_TEST_WAREHOUSE\": \"<your_warehouse>\",\n        \"SNOWFLAKE_TEST_DATABASE\":  \"<your_database>\",\n        \"SNOWFLAKE_TEST_SCHEMA\":    \"<your_schema>\",\n        \"SNOWFLAKE_TEST_ROLE\":      \"<your_role>\",\n        \"SNOWFLAKE_TEST_DEBUG\":     \"false\"\n    }\n}\n```\n\nInstall [jq](https://stedolan.github.io/jq) so that the parameters can be parsed correctly, and run ``make test`` in your Go development environment:\n\n```sh\nmake test\n```\n\n### Setting debug mode during tests\nThis is for debugging large SQL statements (greater than 300 characters). If you want to enable debug mode, set `SNOWFLAKE_TEST_DEBUG` to `true` in `parameters.json`, or export it in your shell instance.\n\n## Customizing Logging Tags\n\nIf you would like to ensure that certain tags are always present in the logs, `RegisterClientLogContextHook` can be used in your init function. See example below.\n```go\nimport (\n\t\"context\"\n\n\t\"github.com/snowflakedb/gosnowflake/v2\"\n)\n\nfunc init() {\n\t// Each time the logger is used, the logs will contain a REQUEST_ID field whose value is\n\t// extracted from the context by this hook.\n\tgosnowflake.RegisterClientLogContextHook(\"REQUEST_ID\", func(ctx context.Context) interface{} {\n\t\treturn requestIdFromContext(ctx)\n\t})\n}\n```\n\n## Setting Log Level\nIf you want to change the log level, `SetLogLevel` can be used in your init function like this:\n```go\nimport \"github.com/snowflakedb/gosnowflake/v2\"\n\nfunc init() {\n\t// The following line changes the log level to debug\n\t_ = gosnowflake.GetLogger().SetLogLevel(\"debug\")\n}\n```\nThe following is a list of options you can pass in to set the level from least to most verbose:\n- `\"OFF\"`\n- `\"fatal\"`\n- `\"error\"`\n- `\"warn\"`\n- `\"info\"`\n- `\"debug\"`\n- `\"trace\"`\n\n## Capturing Code Coverage\n\nConfigure your testing environment as described above and run ``make cov``. The coverage percentage will be printed on the console when the testing completes.\n\n```sh\nmake cov\n```\n\nFor more detailed analysis, results are printed to ``coverage.txt`` in the project directory.\n\nTo read the coverage report, run:\n\n```sh\ngo tool cover -html=coverage.txt\n```\n\n## Submitting Pull Requests\n\nYou may use your preferred editor to edit the driver code. Make certain to run ``make fmt lint`` before submitting any pull request to Snowflake. This command formats your source code according to the standard Go style and detects any coding style issues.\n"
  },
  {
    "path": "SECURITY.md",
    "content": "# Security Policy\n\nPlease refer to the Snowflake [HackerOne program](https://hackerone.com/snowflake?type=team) for our security policies and for reporting any security vulnerabilities.\n\nFor other security-related questions and concerns, please contact the Snowflake security team at security@snowflake.com.\n"
  },
  {
    "path": "aaa_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"testing\"\n)\n\nfunc TestShowServerVersion(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trows := dbt.mustQuery(\"SELECT CURRENT_VERSION()\")\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\n\t\tvar version string\n\t\tassertTrueF(t, rows.Next(), \"expected a row from SELECT CURRENT_VERSION()\")\n\t\tassertNilF(t, rows.Scan(&version))\n\t\tprintln(version)\n\t})\n}\n"
  },
  {
    "path": "arrow_chunk.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"encoding/base64\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/query\"\n\t\"time\"\n\n\t\"github.com/apache/arrow-go/v18/arrow\"\n\t\"github.com/apache/arrow-go/v18/arrow/ipc\"\n\t\"github.com/apache/arrow-go/v18/arrow/memory\"\n)\n\ntype arrowResultChunk struct {\n\treader    *ipc.Reader\n\trowCount  int\n\tloc       *time.Location\n\tallocator memory.Allocator\n}\n\nfunc (arc *arrowResultChunk) decodeArrowChunk(ctx context.Context, rowType []query.ExecResponseRowType, highPrec bool, params *syncParams) ([]chunkRowType, error) {\n\tdefer arc.reader.Release()\n\tlogger.Debug(\"Arrow Decoder\")\n\tvar chunkRows []chunkRowType\n\n\tfor arc.reader.Next() {\n\t\trecord := arc.reader.Record()\n\n\t\tstart := len(chunkRows)\n\t\tnumRows := int(record.NumRows())\n\t\tlogger.Debugf(\"rows in current record: %v\", numRows)\n\t\tcolumns := record.Columns()\n\t\tchunkRows = append(chunkRows, make([]chunkRowType, numRows)...)\n\t\tfor i := start; i < start+numRows; i++ {\n\t\t\tchunkRows[i].ArrowRow = make([]snowflakeValue, len(columns))\n\t\t}\n\n\t\tfor colIdx, col := range columns {\n\t\t\tvalues := make([]snowflakeValue, numRows)\n\t\t\tif err := arrowToValues(ctx, values, rowType[colIdx], col, arc.loc, highPrec, params); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\n\t\t\tfor i := range values {\n\t\t\t\tchunkRows[start+i].ArrowRow[colIdx] = values[i]\n\t\t\t}\n\t\t}\n\t\tarc.rowCount += numRows\n\t}\n\tlogger.Debugf(\"The number of chunk rows: %v\", len(chunkRows))\n\n\treturn chunkRows, arc.reader.Err()\n}\n\n// decodeArrowBatchRaw reads raw (untransformed) arrow records from the IPC reader.\n// The records are not transformed with arrow-compute; the arrowbatches sub-package\n// handles transformation when the user calls ArrowBatch.Fetch().\nfunc (arc *arrowResultChunk) decodeArrowBatchRaw() (*[]arrow.Record, error) {\n\tvar records []arrow.Record\n\tdefer 
arc.reader.Release()\n\n\tfor arc.reader.Next() {\n\t\trecord := arc.reader.Record()\n\t\trecord.Retain()\n\t\trecords = append(records, record)\n\t}\n\n\treturn &records, arc.reader.Err()\n}\n\n// Build arrow chunk based on RowSet of base64\nfunc buildFirstArrowChunk(rowsetBase64 string, loc *time.Location, alloc memory.Allocator) (arrowResultChunk, error) {\n\trowSetBytes, err := base64.StdEncoding.DecodeString(rowsetBase64)\n\tif err != nil {\n\t\treturn arrowResultChunk{}, err\n\t}\n\trr, err := ipc.NewReader(bytes.NewReader(rowSetBytes), ipc.WithAllocator(alloc))\n\tif err != nil {\n\t\treturn arrowResultChunk{}, err\n\t}\n\n\treturn arrowResultChunk{rr, 0, loc, alloc}, nil\n}\n"
  },
  {
    "path": "arrow_stream.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"compress/gzip\"\n\t\"context\"\n\t\"encoding/base64\"\n\t\"fmt\"\n\t\"io\"\n\t\"maps\"\n\t\"net/http\"\n\t\"strconv\"\n\t\"time\"\n\n\t\"github.com/apache/arrow-go/v18/arrow/ipc\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/query\"\n)\n\n// ArrowStreamLoader is a convenience interface for downloading\n// Snowflake results via multiple Arrow Record Batch streams.\n//\n// Some queries from Snowflake do not return Arrow data regardless\n// of the settings, such as \"SHOW WAREHOUSES\". In these cases,\n// you'll find TotalRows() > 0 but GetBatches returns no batches\n// and no errors. In this case, the data is accessible via JSONData\n// with the actual types matching up to the metadata in RowTypes.\ntype ArrowStreamLoader interface {\n\tGetBatches() ([]ArrowStreamBatch, error)\n\tNextResultSet(ctx context.Context) error\n\tTotalRows() int64\n\tRowTypes() []query.ExecResponseRowType\n\tLocation() *time.Location\n\tJSONData() [][]*string\n}\n\n// ArrowStreamBatch is a type describing a potentially yet-to-be-downloaded\n// Arrow IPC stream. Call GetStream to download and retrieve an io.Reader\n// that can be used with ipc.NewReader to get record batch results.\ntype ArrowStreamBatch struct {\n\tidx     int\n\tnumrows int64\n\tscd     *snowflakeArrowStreamChunkDownloader\n\tLoc     *time.Location\n\trr      io.ReadCloser\n}\n\n// NumRows returns the total number of rows that the metadata stated should\n// be in this stream of record batches.\nfunc (asb *ArrowStreamBatch) NumRows() int64 { return asb.numrows }\n\n// GetStream returns a stream of bytes consisting of an Arrow IPC Record\n// batch stream. 
Close should be called on the returned stream when done\n// to ensure no leaked memory.\nfunc (asb *ArrowStreamBatch) GetStream(ctx context.Context) (io.ReadCloser, error) {\n\tif asb.rr == nil {\n\t\tif err := asb.downloadChunkStreamHelper(ctx); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\treturn asb.rr, nil\n}\n\n// streamWrapReader wraps an io.Reader so that Close closes the underlying body.\ntype streamWrapReader struct {\n\tio.Reader\n\twrapped io.ReadCloser\n}\n\nfunc (w *streamWrapReader) Close() error {\n\tif cl, ok := w.Reader.(io.ReadCloser); ok {\n\t\tif err := cl.Close(); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn w.wrapped.Close()\n}\n\nfunc (asb *ArrowStreamBatch) downloadChunkStreamHelper(ctx context.Context) error {\n\theaders := make(map[string]string)\n\tif len(asb.scd.ChunkHeader) > 0 {\n\t\tmaps.Copy(headers, asb.scd.ChunkHeader)\n\t} else {\n\t\theaders[headerSseCAlgorithm] = headerSseCAes\n\t\theaders[headerSseCKey] = asb.scd.Qrmk\n\t}\n\n\tresp, err := asb.scd.FuncGet(ctx, asb.scd.sc, asb.scd.ChunkMetas[asb.idx].URL, headers, asb.scd.sc.rest.RequestTimeout)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif resp.StatusCode != http.StatusOK {\n\t\tdefer func() {\n\t\t\t_ = resp.Body.Close()\n\t\t}()\n\t\tb, err := io.ReadAll(resp.Body)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_ = b\n\t\treturn &SnowflakeError{\n\t\t\tNumber:      ErrFailedToGetChunk,\n\t\t\tSQLState:    SQLStateConnectionFailure,\n\t\t\tMessage:     fmt.Sprintf(\"failed to get chunk. 
idx: %v\", asb.idx),\n\t\t\tMessageArgs: []any{asb.idx},\n\t\t}\n\t}\n\n\tdefer func() {\n\t\tif asb.rr == nil {\n\t\t\t_ = resp.Body.Close()\n\t\t}\n\t}()\n\n\tbufStream := bufio.NewReader(resp.Body)\n\tgzipMagic, err := bufStream.Peek(2)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif gzipMagic[0] == 0x1f && gzipMagic[1] == 0x8b {\n\t\tbufStream0, err := gzip.NewReader(bufStream)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tasb.rr = &streamWrapReader{Reader: bufStream0, wrapped: resp.Body}\n\t} else {\n\t\tasb.rr = &streamWrapReader{Reader: bufStream, wrapped: resp.Body}\n\t}\n\treturn nil\n}\n\ntype snowflakeArrowStreamChunkDownloader struct {\n\tsc          *snowflakeConn\n\tChunkMetas  []query.ExecResponseChunk\n\tTotal       int64\n\tQrmk        string\n\tChunkHeader map[string]string\n\tFuncGet     func(context.Context, *snowflakeConn, string, map[string]string, time.Duration) (*http.Response, error)\n\tRowSet      rowSetType\n\tresultIDs   []string\n}\n\nfunc (scd *snowflakeArrowStreamChunkDownloader) Location() *time.Location {\n\tif scd.sc != nil {\n\t\treturn getCurrentLocation(&scd.sc.syncParams)\n\t}\n\treturn nil\n}\n\nfunc (scd *snowflakeArrowStreamChunkDownloader) TotalRows() int64 { return scd.Total }\n\nfunc (scd *snowflakeArrowStreamChunkDownloader) RowTypes() []query.ExecResponseRowType {\n\treturn scd.RowSet.RowType\n}\n\nfunc (scd *snowflakeArrowStreamChunkDownloader) JSONData() [][]*string {\n\treturn scd.RowSet.JSON\n}\n\nfunc (scd *snowflakeArrowStreamChunkDownloader) maybeFirstBatch() ([]byte, error) {\n\tif scd.RowSet.RowSetBase64 == \"\" {\n\t\treturn nil, nil\n\t}\n\n\trowSetBytes, err := base64.StdEncoding.DecodeString(scd.RowSet.RowSetBase64)\n\tif err != nil {\n\t\tlogger.Warnf(\"skipping first batch as it is not a valid base64 response. %v\", err)\n\t\treturn nil, err\n\t}\n\n\trr, err := ipc.NewReader(bytes.NewReader(rowSetBytes))\n\tif err != nil {\n\t\tlogger.Warnf(\"skipping first batch as it is not a valid IPC stream. 
%v\", err)\n\t\treturn nil, err\n\t}\n\trr.Release()\n\n\treturn rowSetBytes, nil\n}\n\nfunc (scd *snowflakeArrowStreamChunkDownloader) GetBatches() (out []ArrowStreamBatch, err error) {\n\tchunkMetaLen := len(scd.ChunkMetas)\n\tloc := scd.Location()\n\n\tout = make([]ArrowStreamBatch, chunkMetaLen, chunkMetaLen+1)\n\ttoFill := out\n\trowSetBytes, err := scd.maybeFirstBatch()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif len(rowSetBytes) > 0 {\n\t\tout = out[:chunkMetaLen+1]\n\t\tout[0] = ArrowStreamBatch{\n\t\t\tscd: scd,\n\t\t\tLoc: loc,\n\t\t\trr:  io.NopCloser(bytes.NewReader(rowSetBytes)),\n\t\t}\n\t\ttoFill = out[1:]\n\t}\n\n\tvar totalCounted int64\n\tfor i := range toFill {\n\t\ttoFill[i] = ArrowStreamBatch{\n\t\t\tidx:     i,\n\t\t\tnumrows: int64(scd.ChunkMetas[i].RowCount),\n\t\t\tLoc:     loc,\n\t\t\tscd:     scd,\n\t\t}\n\t\ttotalCounted += int64(scd.ChunkMetas[i].RowCount)\n\t}\n\n\tif len(rowSetBytes) > 0 {\n\t\tout[0].numrows = scd.Total - totalCounted\n\t}\n\treturn\n}\n\nfunc (scd *snowflakeArrowStreamChunkDownloader) NextResultSet(ctx context.Context) error {\n\tif !scd.hasNextResultSet() {\n\t\treturn io.EOF\n\t}\n\tresultID := scd.resultIDs[0]\n\tscd.resultIDs = scd.resultIDs[1:]\n\tresultPath := fmt.Sprintf(urlQueriesResultFmt, resultID)\n\tresp, err := scd.sc.getQueryResultResp(ctx, resultPath)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif !resp.Success {\n\t\tcode, err := strconv.Atoi(resp.Code)\n\t\tif err != nil {\n\t\t\tlogger.WithContext(ctx).Errorf(\"error while parsing code: %v\", err)\n\t\t}\n\t\treturn exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber:   code,\n\t\t\tSQLState: resp.Data.SQLState,\n\t\t\tMessage:  resp.Message,\n\t\t\tQueryID:  resp.Data.QueryID,\n\t\t}, scd.sc)\n\t}\n\tscd.ChunkMetas = resp.Data.Chunks\n\tscd.Total = resp.Data.Total\n\tscd.Qrmk = resp.Data.Qrmk\n\tscd.ChunkHeader = resp.Data.ChunkHeaders\n\tscd.RowSet = rowSetType{\n\t\tRowType:      resp.Data.RowType,\n\t\tJSON:         
resp.Data.RowSet,\n\t\tRowSetBase64: resp.Data.RowSetBase64,\n\t}\n\treturn nil\n}\n\nfunc (scd *snowflakeArrowStreamChunkDownloader) hasNextResultSet() bool {\n\treturn len(scd.resultIDs) > 0\n}\n"
  },
  {
    "path": "arrow_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"math/big\"\n\t\"reflect\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/apache/arrow-go/v18/arrow/memory\"\n\tia \"github.com/snowflakedb/gosnowflake/v2/internal/arrow\"\n\n\t\"database/sql/driver\"\n)\n\nfunc TestArrowBatchDataProvider(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tctx := ia.EnableArrowBatches(context.Background())\n\t\tquery := \"select '0.1':: DECIMAL(38, 19) as c\"\n\n\t\tvar rows driver.Rows\n\t\tvar err error\n\n\t\terr = dbt.conn.Raw(func(x any) error {\n\t\t\tqueryer, implementsQueryContext := x.(driver.QueryerContext)\n\t\t\tassertTrueF(t, implementsQueryContext, \"snowflake connection driver does not implement queryerContext\")\n\n\t\t\trows, err = queryer.QueryContext(ctx, query, nil)\n\t\t\treturn err\n\t\t})\n\n\t\tassertNilF(t, err, \"error running select query\")\n\n\t\tsfRows, isSfRows := rows.(SnowflakeRows)\n\t\tassertTrueF(t, isSfRows, \"rows should be snowflakeRows\")\n\n\t\tprovider, isProvider := sfRows.(ia.BatchDataProvider)\n\t\tassertTrueF(t, isProvider, \"rows should implement BatchDataProvider\")\n\n\t\tinfo, err := provider.GetArrowBatches()\n\t\tassertNilF(t, err, \"error getting arrow batch data\")\n\t\tassertNotEqualF(t, len(info.Batches), 0, \"should have at least one batch\")\n\n\t\t// Verify raw records are available for the first batch\n\t\tbatch := info.Batches[0]\n\t\tassertNotNilF(t, batch.Records, \"first batch should have pre-decoded records\")\n\n\t\trecords := *batch.Records\n\t\tassertNotEqualF(t, len(records), 0, \"should have at least one record\")\n\n\t\t// Verify column 0 has data (raw decimal value)\n\t\tstrVal := records[0].Column(0).ValueStr(0)\n\t\tassertTrueF(t, len(strVal) > 0, fmt.Sprintf(\"column should have a value, got: %s\", strVal))\n\t})\n}\n\nfunc TestArrowBigInt(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\ttestcases := []struct {\n\t\t\tnum  string\n\t\t\tprec 
int\n\t\t\tsc   int\n\t\t}{\n\t\t\t{\"10000000000000000000000000000000000000\", 38, 0},\n\t\t\t{\"-10000000000000000000000000000000000000\", 38, 0},\n\t\t\t{\"12345678901234567890123456789012345678\", 38, 0}, // pragma: allowlist secret\n\t\t\t{\"-12345678901234567890123456789012345678\", 38, 0},\n\t\t\t{\"99999999999999999999999999999999999999\", 38, 0},\n\t\t\t{\"-99999999999999999999999999999999999999\", 38, 0},\n\t\t}\n\n\t\tfor _, tc := range testcases {\n\t\t\trows := dbt.mustQueryContext(WithHigherPrecision(context.Background()),\n\t\t\t\tfmt.Sprintf(selectNumberSQL, tc.num, tc.prec, tc.sc))\n\t\t\tif !rows.Next() {\n\t\t\t\tdbt.Error(\"failed to query\")\n\t\t\t}\n\t\t\tdefer rows.Close()\n\t\t\tvar v *big.Int\n\t\t\tif err := rows.Scan(&v); err != nil {\n\t\t\t\tdbt.Errorf(\"failed to scan. %#v\", err)\n\t\t\t}\n\n\t\t\tb, ok := new(big.Int).SetString(tc.num, 10)\n\t\t\tif !ok {\n\t\t\t\tdbt.Errorf(\"failed to convert %v to big.Int.\", tc.num)\n\t\t\t}\n\t\t\tif v.Cmp(b) != 0 {\n\t\t\t\tdbt.Errorf(\"big.Int value mismatch: expected %v, got %v\", b, v)\n\t\t\t}\n\t\t}\n\t})\n}\n\nfunc TestArrowBigFloat(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\ttestcases := []struct {\n\t\t\tnum  string\n\t\t\tprec int\n\t\t\tsc   int\n\t\t}{\n\t\t\t{\"1.23\", 30, 2},\n\t\t\t{\"1.0000000000000000000000000000000000000\", 38, 37},\n\t\t\t{\"-1.0000000000000000000000000000000000000\", 38, 37},\n\t\t\t{\"1.2345678901234567890123456789012345678\", 38, 37},\n\t\t\t{\"-1.2345678901234567890123456789012345678\", 38, 37},\n\t\t\t{\"9.9999999999999999999999999999999999999\", 38, 37},\n\t\t\t{\"-9.9999999999999999999999999999999999999\", 38, 37},\n\t\t}\n\n\t\tfor _, tc := range testcases {\n\t\t\trows := dbt.mustQueryContext(WithHigherPrecision(context.Background()),\n\t\t\t\tfmt.Sprintf(selectNumberSQL, tc.num, tc.prec, tc.sc))\n\t\t\tif !rows.Next() {\n\t\t\t\tdbt.Error(\"failed to query\")\n\t\t\t}\n\t\t\tdefer rows.Close()\n\t\t\tvar v *big.Float\n\t\t\tif err := 
rows.Scan(&v); err != nil {\n\t\t\t\tdbt.Errorf(\"failed to scan. %#v\", err)\n\t\t\t}\n\n\t\t\tprec := v.Prec()\n\t\t\tb, ok := new(big.Float).SetPrec(prec).SetString(tc.num)\n\t\t\tif !ok {\n\t\t\t\tdbt.Errorf(\"failed to convert %v to big.Float.\", tc.num)\n\t\t\t}\n\t\t\tif v.Cmp(b) != 0 {\n\t\t\t\tdbt.Errorf(\"big.Float value mismatch: expected %v, got %v\", b, v)\n\t\t\t}\n\t\t}\n\t})\n}\n\nfunc TestArrowIntPrecision(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(forceJSON)\n\n\t\tintTestcases := []struct {\n\t\t\tnum  string\n\t\t\tprec int\n\t\t\tsc   int\n\t\t}{\n\t\t\t{\"10000000000000000000000000000000000000\", 38, 0},\n\t\t\t{\"-10000000000000000000000000000000000000\", 38, 0},\n\t\t\t{\"12345678901234567890123456789012345678\", 38, 0}, // pragma: allowlist secret\n\t\t\t{\"-12345678901234567890123456789012345678\", 38, 0},\n\t\t\t{\"99999999999999999999999999999999999999\", 38, 0},\n\t\t\t{\"-99999999999999999999999999999999999999\", 38, 0},\n\t\t}\n\n\t\tt.Run(\"arrow_disabled_scan_int64\", func(t *testing.T) {\n\t\t\tfor _, tc := range intTestcases {\n\t\t\t\trows := dbt.mustQuery(fmt.Sprintf(selectNumberSQL, tc.num, tc.prec, tc.sc))\n\t\t\t\tdefer rows.Close()\n\t\t\t\tif !rows.Next() {\n\t\t\t\t\tt.Error(\"failed to query\")\n\t\t\t\t}\n\t\t\t\tvar v int64\n\t\t\t\tif err := rows.Scan(&v); err == nil {\n\t\t\t\t\tt.Error(\"should fail to scan\")\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t\tt.Run(\"arrow_disabled_scan_string\", func(t *testing.T) {\n\t\t\tfor _, tc := range intTestcases {\n\t\t\t\trows := dbt.mustQuery(fmt.Sprintf(selectNumberSQL, tc.num, tc.prec, tc.sc))\n\t\t\t\tdefer rows.Close()\n\t\t\t\tif !rows.Next() {\n\t\t\t\t\tt.Error(\"failed to query\")\n\t\t\t\t}\n\t\t\t\tvar v string\n\t\t\t\tif err := rows.Scan(&v); err != nil {\n\t\t\t\t\tt.Errorf(\"failed to scan. 
%#v\", err)\n\t\t\t\t}\n\t\t\t\tif v != tc.num {\n\t\t\t\t\tt.Errorf(\"string value mismatch: expected %v, got %v\", tc.num, v)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\n\t\tdbt.mustExec(forceARROW)\n\n\t\tt.Run(\"arrow_enabled_scan_big_int\", func(t *testing.T) {\n\t\t\tfor _, tc := range intTestcases {\n\t\t\t\trows := dbt.mustQuery(fmt.Sprintf(selectNumberSQL, tc.num, tc.prec, tc.sc))\n\t\t\t\tdefer rows.Close()\n\t\t\t\tif !rows.Next() {\n\t\t\t\t\tt.Error(\"failed to query\")\n\t\t\t\t}\n\t\t\t\tvar v string\n\t\t\t\tif err := rows.Scan(&v); err != nil {\n\t\t\t\t\tt.Errorf(\"failed to scan. %#v\", err)\n\t\t\t\t}\n\t\t\t\tif !strings.EqualFold(v, tc.num) {\n\t\t\t\t\tt.Errorf(\"int value mismatch: expected %v, got %v\", tc.num, v)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t\tt.Run(\"arrow_high_precision_enabled_scan_big_int\", func(t *testing.T) {\n\t\t\tfor _, tc := range intTestcases {\n\t\t\t\trows := dbt.mustQueryContext(WithHigherPrecision(context.Background()), fmt.Sprintf(selectNumberSQL, tc.num, tc.prec, tc.sc))\n\t\t\t\tdefer rows.Close()\n\t\t\t\tif !rows.Next() {\n\t\t\t\t\tt.Error(\"failed to query\")\n\t\t\t\t}\n\t\t\t\tvar v *big.Int\n\t\t\t\tif err := rows.Scan(&v); err != nil {\n\t\t\t\t\tt.Errorf(\"failed to scan. %#v\", err)\n\t\t\t\t}\n\n\t\t\t\tb, ok := new(big.Int).SetString(tc.num, 10)\n\t\t\t\tif !ok {\n\t\t\t\t\tt.Errorf(\"failed to convert %v to big.Int.\", tc.num)\n\t\t\t\t}\n\t\t\t\tif v.Cmp(b) != 0 {\n\t\t\t\t\tt.Errorf(\"big.Int value mismatch: expected %v, got %v\", b, v)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t})\n}\n\n// TestArrowFloatPrecision tests the different variable types allowed in the\n// rows.Scan() method. 
Note that for lower precision types we do not attempt\n// to check the value as precision could be lost.\nfunc TestArrowFloatPrecision(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(forceJSON)\n\n\t\tfltTestcases := []struct {\n\t\t\tnum  string\n\t\t\tprec int\n\t\t\tsc   int\n\t\t}{\n\t\t\t{\"1.23\", 30, 2},\n\t\t\t{\"1.0000000000000000000000000000000000000\", 38, 37},\n\t\t\t{\"-1.0000000000000000000000000000000000000\", 38, 37},\n\t\t\t{\"1.2345678901234567890123456789012345678\", 38, 37},\n\t\t\t{\"-1.2345678901234567890123456789012345678\", 38, 37},\n\t\t\t{\"9.9999999999999999999999999999999999999\", 38, 37},\n\t\t\t{\"-9.9999999999999999999999999999999999999\", 38, 37},\n\t\t}\n\n\t\tt.Run(\"arrow_disabled_scan_float64\", func(t *testing.T) {\n\t\t\tfor _, tc := range fltTestcases {\n\t\t\t\trows := dbt.mustQuery(fmt.Sprintf(selectNumberSQL, tc.num, tc.prec, tc.sc))\n\t\t\t\tdefer rows.Close()\n\t\t\t\tif !rows.Next() {\n\t\t\t\t\tt.Error(\"failed to query\")\n\t\t\t\t}\n\t\t\t\tvar v float64\n\t\t\t\tif err := rows.Scan(&v); err != nil {\n\t\t\t\t\tt.Errorf(\"failed to scan. %#v\", err)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t\tt.Run(\"arrow_disabled_scan_float32\", func(t *testing.T) {\n\t\t\tfor _, tc := range fltTestcases {\n\t\t\t\trows := dbt.mustQuery(fmt.Sprintf(selectNumberSQL, tc.num, tc.prec, tc.sc))\n\t\t\t\tdefer rows.Close()\n\t\t\t\tif !rows.Next() {\n\t\t\t\t\tt.Error(\"failed to query\")\n\t\t\t\t}\n\t\t\t\tvar v float32\n\t\t\t\tif err := rows.Scan(&v); err != nil {\n\t\t\t\t\tt.Errorf(\"failed to scan. 
%#v\", err)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t\tt.Run(\"arrow_disabled_scan_string\", func(t *testing.T) {\n\t\t\tfor _, tc := range fltTestcases {\n\t\t\t\trows := dbt.mustQuery(fmt.Sprintf(selectNumberSQL, tc.num, tc.prec, tc.sc))\n\t\t\t\tdefer rows.Close()\n\t\t\t\tif !rows.Next() {\n\t\t\t\t\tt.Error(\"failed to query\")\n\t\t\t\t}\n\t\t\t\tvar v string\n\t\t\t\tif err := rows.Scan(&v); err != nil {\n\t\t\t\t\tt.Errorf(\"failed to scan. %#v\", err)\n\t\t\t\t}\n\t\t\t\tif !strings.EqualFold(v, tc.num) {\n\t\t\t\t\tt.Errorf(\"string value mismatch: expected %v, got %v\", tc.num, v)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\n\t\tdbt.mustExec(forceARROW)\n\n\t\tt.Run(\"arrow_enabled_scan_float64\", func(t *testing.T) {\n\t\t\tfor _, tc := range fltTestcases {\n\t\t\t\trows := dbt.mustQuery(fmt.Sprintf(selectNumberSQL, tc.num, tc.prec, tc.sc))\n\t\t\t\tdefer rows.Close()\n\t\t\t\tif !rows.Next() {\n\t\t\t\t\tt.Error(\"failed to query\")\n\t\t\t\t}\n\t\t\t\tvar v float64\n\t\t\t\tif err := rows.Scan(&v); err != nil {\n\t\t\t\t\tt.Errorf(\"failed to scan. %#v\", err)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t\tt.Run(\"arrow_enabled_scan_float32\", func(t *testing.T) {\n\t\t\tfor _, tc := range fltTestcases {\n\t\t\t\trows := dbt.mustQuery(fmt.Sprintf(selectNumberSQL, tc.num, tc.prec, tc.sc))\n\t\t\t\tdefer rows.Close()\n\t\t\t\tif !rows.Next() {\n\t\t\t\t\tt.Error(\"failed to query\")\n\t\t\t\t}\n\t\t\t\tvar v float32\n\t\t\t\tif err := rows.Scan(&v); err != nil {\n\t\t\t\t\tt.Errorf(\"failed to scan. %#v\", err)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t\tt.Run(\"arrow_enabled_scan_string\", func(t *testing.T) {\n\t\t\tfor _, tc := range fltTestcases {\n\t\t\t\trows := dbt.mustQuery(fmt.Sprintf(selectNumberSQL, tc.num, tc.prec, tc.sc))\n\t\t\t\tdefer rows.Close()\n\t\t\t\tif !rows.Next() {\n\t\t\t\t\tt.Error(\"failed to query\")\n\t\t\t\t}\n\t\t\t\tvar v string\n\t\t\t\tif err := rows.Scan(&v); err != nil {\n\t\t\t\t\tt.Errorf(\"failed to scan. 
%#v\", err)\n\t\t\t\t}\n\t\t\t\tif v != tc.num {\n\t\t\t\t\tt.Errorf(\"string value mismatch: expected %v, got %v\", tc.num, v)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t\tt.Run(\"arrow_high_precision_enabled_scan_big_float\", func(t *testing.T) {\n\t\t\tfor _, tc := range fltTestcases {\n\t\t\t\trows := dbt.mustQueryContext(WithHigherPrecision(context.Background()), fmt.Sprintf(selectNumberSQL, tc.num, tc.prec, tc.sc))\n\t\t\t\tdefer rows.Close()\n\t\t\t\tif !rows.Next() {\n\t\t\t\t\tt.Error(\"failed to query\")\n\t\t\t\t}\n\t\t\t\tvar v *big.Float\n\t\t\t\tif err := rows.Scan(&v); err != nil {\n\t\t\t\t\tt.Errorf(\"failed to scan. %#v\", err)\n\t\t\t\t}\n\n\t\t\t\tprec := v.Prec()\n\t\t\t\tb, ok := new(big.Float).SetPrec(prec).SetString(tc.num)\n\t\t\t\tif !ok {\n\t\t\t\t\tt.Errorf(\"failed to convert %v to big.Float.\", tc.num)\n\t\t\t\t}\n\t\t\t\tif v.Cmp(b) != 0 {\n\t\t\t\t\tt.Errorf(\"big.Float value mismatch: expected %v, got %v\", b, v)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t})\n}\n\nfunc TestArrowTimePrecision(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"CREATE TABLE t (col5 TIME(5), col6 TIME(6), col7 TIME(7), col8 TIME(8));\")\n\t\tdefer dbt.mustExec(\"DROP TABLE IF EXISTS t\")\n\t\tdbt.mustExec(\"INSERT INTO t VALUES ('23:59:59.99999', '23:59:59.999999', '23:59:59.9999999', '23:59:59.99999999');\")\n\n\t\trows := dbt.mustQuery(\"select * from t\")\n\t\tdefer rows.Close()\n\t\tvar c5, c6, c7, c8 time.Time\n\t\tfor rows.Next() {\n\t\t\tif err := rows.Scan(&c5, &c6, &c7, &c8); err != nil {\n\t\t\t\tt.Errorf(\"values were not scanned: %v\", err)\n\t\t\t}\n\t\t}\n\n\t\tnano := 999999990\n\t\texpected := time.Time{}.Add(23*time.Hour + 59*time.Minute + 59*time.Second + 99*time.Millisecond)\n\t\tif c8.Unix() != expected.Unix() || c8.Nanosecond() != nano {\n\t\t\tt.Errorf(\"the value did not match. 
expected: %v, got: %v\", expected, c8)\n\t\t}\n\t\tif c7.Unix() != expected.Unix() || c7.Nanosecond() != nano-(nano%1e2) {\n\t\t\tt.Errorf(\"the value did not match. expected: %v, got: %v\", expected, c7)\n\t\t}\n\t\tif c6.Unix() != expected.Unix() || c6.Nanosecond() != nano-(nano%1e3) {\n\t\t\tt.Errorf(\"the value did not match. expected: %v, got: %v\", expected, c6)\n\t\t}\n\t\tif c5.Unix() != expected.Unix() || c5.Nanosecond() != nano-(nano%1e4) {\n\t\t\tt.Errorf(\"the value did not match. expected: %v, got: %v\", expected, c5)\n\t\t}\n\n\t\tdbt.mustExec(`CREATE TABLE t_ntz (\n\t\t  col1 TIMESTAMP_NTZ(1),\n\t\t  col2 TIMESTAMP_NTZ(2),\n\t\t  col3 TIMESTAMP_NTZ(3),\n\t\t  col4 TIMESTAMP_NTZ(4),\n\t\t  col5 TIMESTAMP_NTZ(5),\n\t\t  col6 TIMESTAMP_NTZ(6),\n\t\t  col7 TIMESTAMP_NTZ(7),\n\t\t  col8 TIMESTAMP_NTZ(8)\n\t\t);`)\n\t\tdefer dbt.mustExec(\"DROP TABLE IF EXISTS t_ntz\")\n\t\tdbt.mustExec(`INSERT INTO t_ntz VALUES (\n\t\t  '9999-12-31T23:59:59.9',\n\t\t  '9999-12-31T23:59:59.99',\n\t\t  '9999-12-31T23:59:59.999',\n\t\t  '9999-12-31T23:59:59.9999',\n\t\t  '9999-12-31T23:59:59.99999',\n\t\t  '9999-12-31T23:59:59.999999',\n\t\t  '9999-12-31T23:59:59.9999999',\n\t\t  '9999-12-31T23:59:59.99999999'\n\t\t);`)\n\n\t\trows2 := dbt.mustQuery(\"select * from t_ntz\")\n\t\tdefer rows2.Close()\n\t\tvar c1, c2, c3, c4 time.Time\n\t\tfor rows2.Next() {\n\t\t\tif err := rows2.Scan(&c1, &c2, &c3, &c4, &c5, &c6, &c7, &c8); err != nil {\n\t\t\t\tt.Errorf(\"values were not scanned: %v\", err)\n\t\t\t}\n\t\t}\n\n\t\texpected = time.Date(9999, 12, 31, 23, 59, 59, 0, time.UTC)\n\t\tif c8.Unix() != expected.Unix() || c8.Nanosecond() != nano {\n\t\t\tt.Errorf(\"the value did not match. expected: %v, got: %v\", expected, c8)\n\t\t}\n\t\tif c7.Unix() != expected.Unix() || c7.Nanosecond() != nano-(nano%1e2) {\n\t\t\tt.Errorf(\"the value did not match. 
expected: %v, got: %v\", expected, c7)\n\t\t}\n\t\tif c6.Unix() != expected.Unix() || c6.Nanosecond() != nano-(nano%1e3) {\n\t\t\tt.Errorf(\"the value did not match. expected: %v, got: %v\", expected, c6)\n\t\t}\n\t\tif c5.Unix() != expected.Unix() || c5.Nanosecond() != nano-(nano%1e4) {\n\t\t\tt.Errorf(\"the value did not match. expected: %v, got: %v\", expected, c5)\n\t\t}\n\t\tif c4.Unix() != expected.Unix() || c4.Nanosecond() != nano-(nano%1e5) {\n\t\t\tt.Errorf(\"the value did not match. expected: %v, got: %v\", expected, c4)\n\t\t}\n\t\tif c3.Unix() != expected.Unix() || c3.Nanosecond() != nano-(nano%1e6) {\n\t\t\tt.Errorf(\"the value did not match. expected: %v, got: %v\", expected, c3)\n\t\t}\n\t\tif c2.Unix() != expected.Unix() || c2.Nanosecond() != nano-(nano%1e7) {\n\t\t\tt.Errorf(\"the value did not match. expected: %v, got: %v\", expected, c2)\n\t\t}\n\t\tif c1.Unix() != expected.Unix() || c1.Nanosecond() != nano-(nano%1e8) {\n\t\t\tt.Errorf(\"the value did not match. expected: %v, got: %v\", expected, c1)\n\t\t}\n\t})\n}\n\nfunc TestArrowVariousTypes(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trows := dbt.mustQueryContext(\n\t\t\tWithHigherPrecision(context.Background()), selectVariousTypes)\n\t\tdefer rows.Close()\n\t\tif !rows.Next() {\n\t\t\tdbt.Error(\"failed to query\")\n\t\t}\n\t\tif _, err := rows.Columns(); err != nil {\n\t\t\tdbt.Errorf(\"failed to get columns: %v\", err)\n\t\t}\n\t\tct, err := rows.ColumnTypes()\n\t\tif err != nil {\n\t\t\tdbt.Errorf(\"failed to get column types: %v\", err)\n\t\t}\n\t\tvar v1 *big.Float\n\t\tvar v2, v2a int\n\t\tvar v3 string\n\t\tvar v4 float64\n\t\tvar v5 []byte\n\t\tvar v6 bool\n\t\tif err = rows.Scan(&v1, &v2, &v2a, &v3, &v4, &v5, &v6); err != nil {\n\t\t\tdbt.Errorf(\"failed to scan: %#v\", err)\n\t\t}\n\t\tif v1.Cmp(big.NewFloat(1.0)) != 0 {\n\t\t\tdbt.Errorf(\"failed to scan. 
%#v\", *v1)\n\t\t}\n\t\tif ct[0].Name() != \"C1\" || ct[1].Name() != \"C2\" || ct[2].Name() != \"C2A\" || ct[3].Name() != \"C3\" || ct[4].Name() != \"C4\" || ct[5].Name() != \"C5\" || ct[6].Name() != \"C6\" {\n\t\t\tdbt.Errorf(\"failed to get column names: %#v\", ct)\n\t\t}\n\t\tif ct[0].ScanType() != reflect.TypeFor[*big.Float]() {\n\t\t\tdbt.Errorf(\"failed to get scan type. expected: %v, got: %v\", reflect.TypeFor[*big.Float](), ct[0].ScanType())\n\t\t}\n\t\tif ct[1].ScanType() != reflect.TypeFor[int64]() {\n\t\t\tdbt.Errorf(\"failed to get scan type. expected: %v, got: %v\", reflect.TypeFor[int64](), ct[1].ScanType())\n\t\t}\n\t\tif ct[2].ScanType() != reflect.TypeFor[*big.Int]() {\n\t\t\tdbt.Errorf(\"failed to get scan type. expected: %v, got: %v\", reflect.TypeFor[*big.Int](), ct[2].ScanType())\n\t\t}\n\t\tvar pr, sc int64\n\t\tvar cLen int64\n\t\tpr, sc = dbt.mustDecimalSize(ct[0])\n\t\tif pr != 30 || sc != 2 {\n\t\t\tdbt.Errorf(\"failed to get precision and scale. %#v\", ct[0])\n\t\t}\n\t\tdbt.mustFailLength(ct[0])\n\t\tif canNull := dbt.mustNullable(ct[0]); canNull {\n\t\t\tdbt.Errorf(\"failed to get nullable. %#v\", ct[0])\n\t\t}\n\t\tif v2 != 2 {\n\t\t\tdbt.Errorf(\"failed to scan. %#v\", v2)\n\t\t}\n\t\tpr, sc = dbt.mustDecimalSize(ct[1])\n\t\tif pr != 18 || sc != 0 {\n\t\t\tdbt.Errorf(\"failed to get precision and scale. %#v\", ct[1])\n\t\t}\n\t\tdbt.mustFailLength(ct[1])\n\t\tif canNull := dbt.mustNullable(ct[1]); canNull {\n\t\t\tdbt.Errorf(\"failed to get nullable. %#v\", ct[1])\n\t\t}\n\t\tif v2a != 22 {\n\t\t\tdbt.Errorf(\"failed to scan. %#v\", v2a)\n\t\t}\n\t\tdbt.mustFailLength(ct[2])\n\t\tif canNull := dbt.mustNullable(ct[2]); canNull {\n\t\t\tdbt.Errorf(\"failed to get nullable. %#v\", ct[2])\n\t\t}\n\t\tif v3 != \"t3\" {\n\t\t\tdbt.Errorf(\"failed to scan. 
%#v\", v3)\n\t\t}\n\t\tdbt.mustFailDecimalSize(ct[3])\n\t\tif cLen = dbt.mustLength(ct[3]); cLen != 2 {\n\t\t\tdbt.Errorf(\"failed to get length. %#v\", ct[3])\n\t\t}\n\t\tif canNull := dbt.mustNullable(ct[3]); canNull {\n\t\t\tdbt.Errorf(\"failed to get nullable. %#v\", ct[3])\n\t\t}\n\t\tif v4 != 4.2 {\n\t\t\tdbt.Errorf(\"failed to scan. %#v\", v4)\n\t\t}\n\t\tdbt.mustFailDecimalSize(ct[4])\n\t\tdbt.mustFailLength(ct[4])\n\t\tif canNull := dbt.mustNullable(ct[4]); canNull {\n\t\t\tdbt.Errorf(\"failed to get nullable. %#v\", ct[4])\n\t\t}\n\t\tif !bytes.Equal(v5, []byte{0xab, 0xcd}) {\n\t\t\tdbt.Errorf(\"failed to scan. %#v\", v5)\n\t\t}\n\t\tdbt.mustFailDecimalSize(ct[5])\n\t\tif cLen = dbt.mustLength(ct[5]); cLen != 8388608 { // BINARY\n\t\t\tdbt.Errorf(\"failed to get length. %#v\", ct[5])\n\t\t}\n\t\tif canNull := dbt.mustNullable(ct[5]); canNull {\n\t\t\tdbt.Errorf(\"failed to get nullable. %#v\", ct[5])\n\t\t}\n\t\tif !v6 {\n\t\t\tdbt.Errorf(\"failed to scan. %#v\", v6)\n\t\t}\n\t\tdbt.mustFailDecimalSize(ct[6])\n\t\tdbt.mustFailLength(ct[6])\n\t})\n}\n\nfunc TestArrowMemoryCleanedUp(t *testing.T) {\n\tmem := memory.NewCheckedAllocator(memory.NewGoAllocator())\n\tdefer mem.AssertSize(t, 0)\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tctx := WithArrowAllocator(\n\t\t\tcontext.Background(),\n\t\t\tmem,\n\t\t)\n\n\t\trows := dbt.mustQueryContext(ctx, \"select 1 UNION select 2 ORDER BY 1\")\n\t\tdefer rows.Close()\n\t\tvar v int\n\t\tassertTrueF(t, rows.Next())\n\t\tassertNilF(t, rows.Scan(&v))\n\t\tassertEqualE(t, v, 1)\n\t\tassertTrueF(t, rows.Next())\n\t\tassertNilF(t, rows.Scan(&v))\n\t\tassertEqualE(t, v, 2)\n\t\tassertFalseE(t, rows.Next())\n\t})\n}\n"
  },
  {
    "path": "arrowbatches/batches.go",
    "content": "package arrowbatches\n\nimport (\n\t\"cmp\"\n\t\"context\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/query\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/types\"\n\t\"time\"\n\n\tsf \"github.com/snowflakedb/gosnowflake/v2\"\n\tia \"github.com/snowflakedb/gosnowflake/v2/internal/arrow\"\n\n\t\"github.com/apache/arrow-go/v18/arrow\"\n\t\"github.com/apache/arrow-go/v18/arrow/memory\"\n)\n\n// ArrowBatch represents a chunk of data retrievable in arrow.Record format.\ntype ArrowBatch struct {\n\traw       ia.BatchRaw\n\trowTypes  []query.ExecResponseRowType\n\tallocator memory.Allocator\n\tctx       context.Context\n}\n\n// WithContext sets the context for subsequent Fetch calls on this batch.\nfunc (rb *ArrowBatch) WithContext(ctx context.Context) *ArrowBatch {\n\trb.ctx = ctx\n\treturn rb\n}\n\n// Fetch returns an array of arrow.Record representing this batch's data.\n// Records are transformed from Snowflake's internal format to standard Arrow types.\nfunc (rb *ArrowBatch) Fetch() (*[]arrow.Record, error) {\n\tvar rawRecords *[]arrow.Record\n\tctx := cmp.Or(rb.ctx, context.Background())\n\n\tif rb.raw.Records != nil {\n\t\trawRecords = rb.raw.Records\n\t} else if rb.raw.Download != nil {\n\t\trecs, rowCount, err := rb.raw.Download(ctx)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\trawRecords = recs\n\t\trb.raw.Records = recs\n\t\trb.raw.RowCount = rowCount\n\t}\n\n\tif rawRecords == nil || len(*rawRecords) == 0 {\n\t\tempty := make([]arrow.Record, 0)\n\t\treturn &empty, nil\n\t}\n\n\tvar transformed []arrow.Record\n\tfor i, rec := range *rawRecords {\n\t\tnewRec, err := arrowToRecord(ctx, rec, rb.allocator, rb.rowTypes, rb.raw.Location)\n\t\tif err != nil {\n\t\t\tfor _, t := range transformed {\n\t\t\t\tt.Release()\n\t\t\t}\n\t\t\tfor _, r := range (*rawRecords)[i:] {\n\t\t\t\tr.Release()\n\t\t\t}\n\t\t\trb.raw.Records = nil\n\t\t\treturn nil, err\n\t\t}\n\t\ttransformed = append(transformed, 
newRec)\n\t\trec.Release()\n\t}\n\trb.raw.Records = nil\n\trb.raw.RowCount = countArrowBatchRows(&transformed)\n\treturn &transformed, nil\n}\n\n// GetRowCount returns the number of rows in this batch.\nfunc (rb *ArrowBatch) GetRowCount() int {\n\treturn rb.raw.RowCount\n}\n\n// GetLocation returns the timezone location for this batch.\nfunc (rb *ArrowBatch) GetLocation() *time.Location {\n\treturn rb.raw.Location\n}\n\n// GetRowTypes returns the column metadata for this batch.\nfunc (rb *ArrowBatch) GetRowTypes() []query.ExecResponseRowType {\n\treturn rb.rowTypes\n}\n\n// ArrowSnowflakeTimestampToTime converts an original Snowflake timestamp to time.Time.\nfunc (rb *ArrowBatch) ArrowSnowflakeTimestampToTime(rec arrow.Record, colIdx int, recIdx int) *time.Time {\n\tscale := int(rb.rowTypes[colIdx].Scale)\n\tdbType := rb.rowTypes[colIdx].Type\n\treturn ArrowSnowflakeTimestampToTime(rec.Column(colIdx), types.GetSnowflakeType(dbType), scale, recIdx, rb.raw.Location)\n}\n\n// GetArrowBatches retrieves arrow batches from SnowflakeRows.\n// The rows must have been queried with arrowbatches.WithArrowBatches(ctx).\nfunc GetArrowBatches(rows sf.SnowflakeRows) ([]*ArrowBatch, error) {\n\tprovider, ok := rows.(ia.BatchDataProvider)\n\tif !ok {\n\t\treturn nil, &sf.SnowflakeError{\n\t\t\tNumber:  sf.ErrNotImplemented,\n\t\t\tMessage: \"rows do not support arrow batch data\",\n\t\t}\n\t}\n\n\tinfo, err := provider.GetArrowBatches()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tbatches := make([]*ArrowBatch, len(info.Batches))\n\tfor i, raw := range info.Batches {\n\t\tbatches[i] = &ArrowBatch{\n\t\t\traw:       raw,\n\t\t\trowTypes:  info.RowTypes,\n\t\t\tallocator: info.Allocator,\n\t\t\tctx:       info.Ctx,\n\t\t}\n\t}\n\treturn batches, nil\n}\n\nfunc countArrowBatchRows(recs *[]arrow.Record) (cnt int) {\n\tfor _, r := range *recs {\n\t\tcnt += int(r.NumRows())\n\t}\n\treturn\n}\n\n// GetAllocator returns the memory allocator for this batch.\nfunc (rb *ArrowBatch) 
GetAllocator() memory.Allocator {\n\treturn rb.allocator\n}\n"
  },
  {
    "path": "arrowbatches/batches_test.go",
    "content": "package arrowbatches\n\nimport (\n\t\"context\"\n\t\"crypto/rsa\"\n\t\"crypto/x509\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"encoding/pem\"\n\t\"errors\"\n\t\"fmt\"\n\t\"math\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/apache/arrow-go/v18/arrow\"\n\t\"github.com/apache/arrow-go/v18/arrow/array\"\n\t\"github.com/apache/arrow-go/v18/arrow/memory\"\n\n\tsf \"github.com/snowflakedb/gosnowflake/v2\"\n\tia \"github.com/snowflakedb/gosnowflake/v2/internal/arrow\"\n)\n\n// testConn holds a reusable database connection for running multiple queries.\ntype testConn struct {\n\tdb   *sql.DB\n\tconn *sql.Conn\n}\n\n// repoRoot walks up from the current working directory to find the directory\n// containing go.mod, which is the repository root.\nfunc repoRoot(t *testing.T) string {\n\tt.Helper()\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Fatalf(\"failed to get working directory: %v\", err)\n\t}\n\tfor {\n\t\tif _, err = os.Stat(filepath.Join(dir, \"go.mod\")); err == nil {\n\t\t\treturn dir\n\t\t}\n\t\tif !os.IsNotExist(err) {\n\t\t\tt.Fatalf(\"failed to stat go.mod in %q: %v\", dir, err)\n\t\t}\n\t\tparent := filepath.Dir(dir)\n\t\tif parent == dir {\n\t\t\tt.Fatal(\"could not find repository root (no go.mod found)\")\n\t\t}\n\t\tdir = parent\n\t}\n}\n\n// readPrivateKey reads an RSA private key from a PEM file. 
If the path is\n// relative it is resolved against the repository root so that tests in\n// sub-packages work with repo-root-relative paths.\nfunc readPrivateKey(t *testing.T, path string) *rsa.PrivateKey {\n\tt.Helper()\n\tif !filepath.IsAbs(path) {\n\t\tpath = filepath.Join(repoRoot(t), path)\n\t}\n\tdata, err := os.ReadFile(path)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to read private key file %q: %v\", path, err)\n\t}\n\tblock, _ := pem.Decode(data)\n\tif block == nil {\n\t\tt.Fatalf(\"failed to decode PEM block from %q\", path)\n\t}\n\tkey, err := x509.ParsePKCS8PrivateKey(block.Bytes)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to parse private key from %q: %v\", path, err)\n\t}\n\trsaKey, ok := key.(*rsa.PrivateKey)\n\tif !ok {\n\t\tt.Fatalf(\"private key in %q is not RSA (got %T)\", path, key)\n\t}\n\treturn rsaKey\n}\n\nfunc testConfig(t *testing.T) *sf.Config {\n\tt.Helper()\n\tconfigParams := []*sf.ConfigParam{\n\t\t{Name: \"Account\", EnvName: \"SNOWFLAKE_TEST_ACCOUNT\", FailOnMissing: true},\n\t\t{Name: \"User\", EnvName: \"SNOWFLAKE_TEST_USER\", FailOnMissing: true},\n\t\t{Name: \"Host\", EnvName: \"SNOWFLAKE_TEST_HOST\", FailOnMissing: false},\n\t\t{Name: \"Port\", EnvName: \"SNOWFLAKE_TEST_PORT\", FailOnMissing: false},\n\t\t{Name: \"Protocol\", EnvName: \"SNOWFLAKE_TEST_PROTOCOL\", FailOnMissing: false},\n\t\t{Name: \"Warehouse\", EnvName: \"SNOWFLAKE_TEST_WAREHOUSE\", FailOnMissing: false},\n\t}\n\tisJWT := os.Getenv(\"SNOWFLAKE_TEST_AUTHENTICATOR\") == \"SNOWFLAKE_JWT\"\n\tif !isJWT {\n\t\tconfigParams = append(configParams,\n\t\t\t&sf.ConfigParam{Name: \"Password\", EnvName: \"SNOWFLAKE_TEST_PASSWORD\", FailOnMissing: true},\n\t\t)\n\t}\n\tcfg, err := sf.GetConfigFromEnv(configParams)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to get config from environment: %v\", err)\n\t}\n\tif isJWT {\n\t\tprivKeyPath := os.Getenv(\"SNOWFLAKE_TEST_PRIVATE_KEY\")\n\t\tif privKeyPath == \"\" {\n\t\t\tt.Fatal(\"SNOWFLAKE_TEST_PRIVATE_KEY must be set for JWT 
authentication\")\n\t\t}\n\t\tcfg.PrivateKey = readPrivateKey(t, privKeyPath)\n\t\tcfg.Authenticator = sf.AuthTypeJwt\n\t}\n\ttz := \"UTC\"\n\tif cfg.Params == nil {\n\t\tcfg.Params = make(map[string]*string)\n\t}\n\tcfg.Params[\"timezone\"] = &tz\n\treturn cfg\n}\n\nfunc openTestConn(ctx context.Context, t *testing.T) *testConn {\n\tt.Helper()\n\tcfg := testConfig(t)\n\tdsn, err := sf.DSN(cfg)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to create DSN: %v\", err)\n\t}\n\tdb, err := sql.Open(\"snowflake\", dsn)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to open db: %v\", err)\n\t}\n\tconn, err := db.Conn(ctx)\n\tif err != nil {\n\t\tdb.Close()\n\t\tt.Fatalf(\"failed to get connection: %v\", err)\n\t}\n\treturn &testConn{db: db, conn: conn}\n}\n\nfunc (tc *testConn) close() {\n\ttc.conn.Close()\n\ttc.db.Close()\n}\n\n// queryRows executes a query on the existing connection and returns\n// SnowflakeRows plus a function to close just the rows.\nfunc (tc *testConn) queryRows(ctx context.Context, t *testing.T, query string) (sf.SnowflakeRows, func()) {\n\tt.Helper()\n\tvar rows driver.Rows\n\tvar err error\n\terr = tc.conn.Raw(func(x any) error {\n\t\tqueryer, ok := x.(driver.QueryerContext)\n\t\tif !ok {\n\t\t\treturn fmt.Errorf(\"connection does not implement QueryerContext\")\n\t\t}\n\t\trows, err = queryer.QueryContext(ctx, query, nil)\n\t\treturn err\n\t})\n\tif err != nil {\n\t\tt.Fatalf(\"failed to execute query: %v\", err)\n\t}\n\tsfRows, ok := rows.(sf.SnowflakeRows)\n\tif !ok {\n\t\trows.Close()\n\t\tt.Fatalf(\"rows do not implement SnowflakeRows\")\n\t}\n\treturn sfRows, func() { rows.Close() }\n}\n\n// queryRawRows is a convenience wrapper that opens a new connection,\n// runs a single query, and returns SnowflakeRows with a full cleanup.\nfunc queryRawRows(ctx context.Context, t *testing.T, query string) (sf.SnowflakeRows, func()) {\n\tt.Helper()\n\ttc := openTestConn(ctx, t)\n\tsfRows, closeRows := tc.queryRows(ctx, t, query)\n\treturn sfRows, func() 
{\n\t\tcloseRows()\n\t\ttc.close()\n\t}\n}\n\nfunc TestGetArrowBatches(t *testing.T) {\n\tctx := WithArrowBatches(context.Background())\n\n\tsfRows, cleanup := queryRawRows(ctx, t, \"SELECT 1 AS num, 'hello' AS str\")\n\tdefer cleanup()\n\n\tbatches, err := GetArrowBatches(sfRows)\n\tif err != nil {\n\t\tt.Fatalf(\"GetArrowBatches failed: %v\", err)\n\t}\n\tif len(batches) == 0 {\n\t\tt.Fatal(\"expected at least one batch\")\n\t}\n\n\trecords, err := batches[0].Fetch()\n\tif err != nil {\n\t\tt.Fatalf(\"Fetch failed: %v\", err)\n\t}\n\tif records == nil || len(*records) == 0 {\n\t\tt.Fatal(\"expected at least one record\")\n\t}\n\n\trec := (*records)[0]\n\tdefer rec.Release()\n\n\tif rec.NumCols() != 2 {\n\t\tt.Fatalf(\"expected 2 columns, got %d\", rec.NumCols())\n\t}\n\tif rec.NumRows() != 1 {\n\t\tt.Fatalf(\"expected 1 row, got %d\", rec.NumRows())\n\t}\n}\n\nfunc TestGetArrowBatchesHighPrecision(t *testing.T) {\n\tctx := sf.WithHigherPrecision(WithArrowBatches(context.Background()))\n\n\tsfRows, cleanup := queryRawRows(ctx, t, \"SELECT '0.1'::DECIMAL(38, 19) AS c\")\n\tdefer cleanup()\n\n\tbatches, err := GetArrowBatches(sfRows)\n\tif err != nil {\n\t\tt.Fatalf(\"GetArrowBatches failed: %v\", err)\n\t}\n\tif len(batches) == 0 {\n\t\tt.Fatal(\"expected at least one batch\")\n\t}\n\n\trecords, err := batches[0].Fetch()\n\tif err != nil {\n\t\tt.Fatalf(\"Fetch failed: %v\", err)\n\t}\n\tif records == nil || len(*records) == 0 {\n\t\tt.Fatal(\"expected at least one record\")\n\t}\n\n\trec := (*records)[0]\n\tdefer rec.Release()\n\n\tstrVal := rec.Column(0).ValueStr(0)\n\texpected := \"1000000000000000000\"\n\tif strVal != expected {\n\t\tt.Fatalf(\"expected %q, got %q\", expected, strVal)\n\t}\n}\n\nfunc TestGetArrowBatchesLargeResultSet(t *testing.T) {\n\tnumrows := 3000\n\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\tdefer pool.AssertSize(t, 0)\n\n\tctx := sf.WithArrowAllocator(WithArrowBatches(context.Background()), pool)\n\n\tquery := 
fmt.Sprintf(\"SELECT SEQ8(), RANDSTR(1000, RANDOM()) FROM TABLE(GENERATOR(ROWCOUNT=>%v))\", numrows)\n\tsfRows, cleanup := queryRawRows(ctx, t, query)\n\tdefer cleanup()\n\n\tbatches, err := GetArrowBatches(sfRows)\n\tif err != nil {\n\t\tt.Fatalf(\"GetArrowBatches failed: %v\", err)\n\t}\n\tif len(batches) == 0 {\n\t\tt.Fatal(\"expected at least one batch\")\n\t}\n\n\tmaxWorkers := 10\n\ttype count struct {\n\t\tmu  sync.Mutex\n\t\tval int\n\t}\n\tcnt := &count{}\n\tvar wg sync.WaitGroup\n\twork := make(chan int, len(batches))\n\n\tfor range maxWorkers {\n\t\twg.Add(1)\n\t\tgo func() {\n\t\t\tdefer wg.Done()\n\t\t\tfor i := range work {\n\t\t\t\trecs, fetchErr := batches[i].Fetch()\n\t\t\t\tif fetchErr != nil {\n\t\t\t\t\tt.Errorf(\"Fetch failed for batch %d: %v\", i, fetchErr)\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\tfor _, r := range *recs {\n\t\t\t\t\tcnt.mu.Lock()\n\t\t\t\t\tcnt.val += int(r.NumRows())\n\t\t\t\t\tcnt.mu.Unlock()\n\t\t\t\t\tr.Release()\n\t\t\t\t}\n\t\t\t}\n\t\t}()\n\t}\n\tfor i := range batches {\n\t\twork <- i\n\t}\n\tclose(work)\n\twg.Wait()\n\n\tif cnt.val != numrows {\n\t\tt.Fatalf(\"row count mismatch: expected %d, got %d\", numrows, cnt.val)\n\t}\n}\n\nfunc TestGetArrowBatchesWithTimestampOption(t *testing.T) {\n\tctx := WithTimestampOption(\n\t\tWithArrowBatches(context.Background()),\n\t\tUseOriginalTimestamp,\n\t)\n\n\tsfRows, cleanup := queryRawRows(ctx, t, \"SELECT TO_TIMESTAMP_NTZ('2024-01-15 13:45:30.123456789') AS ts\")\n\tdefer cleanup()\n\n\tbatches, err := GetArrowBatches(sfRows)\n\tif err != nil {\n\t\tt.Fatalf(\"GetArrowBatches failed: %v\", err)\n\t}\n\tif len(batches) == 0 {\n\t\tt.Fatal(\"expected at least one batch\")\n\t}\n\n\trecords, err := batches[0].Fetch()\n\tif err != nil {\n\t\tt.Fatalf(\"Fetch failed: %v\", err)\n\t}\n\tif records == nil || len(*records) == 0 {\n\t\tt.Fatal(\"expected at least one record\")\n\t}\n\n\trec := (*records)[0]\n\tdefer rec.Release()\n\n\tif rec.NumRows() != 1 {\n\t\tt.Fatalf(\"expected 1 
row, got %d\", rec.NumRows())\n\t}\n\tif rec.NumCols() != 1 {\n\t\tt.Fatalf(\"expected 1 column, got %d\", rec.NumCols())\n\t}\n}\n\nfunc TestGetArrowBatchesJSONResponseError(t *testing.T) {\n\tctx := WithArrowBatches(context.Background())\n\n\tcfg := testConfig(t)\n\n\tdsn, err := sf.DSN(cfg)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to create DSN: %v\", err)\n\t}\n\n\tdb, err := sql.Open(\"snowflake\", dsn)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to open db: %v\", err)\n\t}\n\tdefer db.Close()\n\n\tconn, err := db.Conn(ctx)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to get connection: %v\", err)\n\t}\n\tdefer conn.Close()\n\n\t_, err = conn.ExecContext(ctx, \"ALTER SESSION SET GO_QUERY_RESULT_FORMAT = json\")\n\tif err != nil {\n\t\tt.Fatalf(\"failed to set JSON format: %v\", err)\n\t}\n\n\tvar rows driver.Rows\n\terr = conn.Raw(func(x any) error {\n\t\tqueryer, ok := x.(driver.QueryerContext)\n\t\tif !ok {\n\t\t\treturn fmt.Errorf(\"connection does not implement QueryerContext\")\n\t\t}\n\t\trows, err = queryer.QueryContext(ctx, \"SELECT 'hello'\", nil)\n\t\treturn err\n\t})\n\tif err != nil {\n\t\tt.Fatalf(\"failed to execute query: %v\", err)\n\t}\n\tdefer rows.Close()\n\n\tsfRows, ok := rows.(sf.SnowflakeRows)\n\tif !ok {\n\t\tt.Fatal(\"rows do not implement SnowflakeRows\")\n\t}\n\n\t_, err = GetArrowBatches(sfRows)\n\tif err == nil {\n\t\tt.Fatal(\"expected error when using arrow batches with JSON response\")\n\t}\n\n\tvar se *sf.SnowflakeError\n\tif !errors.As(err, &se) {\n\t\tt.Fatalf(\"expected SnowflakeError, got %T: %v\", err, err)\n\t}\n\tif se.Number != sf.ErrNonArrowResponseInArrowBatches {\n\t\tt.Fatalf(\"expected error code %d, got %d\", sf.ErrNonArrowResponseInArrowBatches, se.Number)\n\t}\n}\n\n// TestTimestampConversionDistantDates tests all 10 timestamp scales (0-9)\n// because each scale exercises a mathematically distinct code path in\n// extractEpoch/extractFraction (converter.go). 
Past bugs have been\n// scale-specific: SNOW-526255 (time scale for arrow) and SNOW-2091309\n// (precision loss at scale 0). Do not reduce the scale range.\nfunc TestTimestampConversionDistantDates(t *testing.T) {\n\ttimestamps := [2]string{\n\t\t\"9999-12-12 23:59:59.999999999\",\n\t\t\"0001-01-01 00:00:00.000000000\",\n\t}\n\ttsTypes := [3]string{\"TIMESTAMP_NTZ\", \"TIMESTAMP_LTZ\", \"TIMESTAMP_TZ\"}\n\n\tprecisions := []struct {\n\t\tname        string\n\t\toption      ia.TimestampOption\n\t\tunit        arrow.TimeUnit\n\t\texpectError bool\n\t}{\n\t\t{\"second\", UseSecondTimestamp, arrow.Second, false},\n\t\t{\"millisecond\", UseMillisecondTimestamp, arrow.Millisecond, false},\n\t\t{\"microsecond\", UseMicrosecondTimestamp, arrow.Microsecond, false},\n\t\t{\"nanosecond\", UseNanosecondTimestamp, arrow.Nanosecond, true},\n\t}\n\n\tfor _, prec := range precisions {\n\t\tt.Run(prec.name, func(t *testing.T) {\n\t\t\tt.Parallel()\n\t\t\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\t\t\tdefer pool.AssertSize(t, 0)\n\n\t\t\tctx := sf.WithArrowAllocator(\n\t\t\t\tWithTimestampOption(WithArrowBatches(context.Background()), prec.option),\n\t\t\t\tpool,\n\t\t\t)\n\n\t\t\ttc := openTestConn(ctx, t)\n\t\t\tdefer tc.close()\n\n\t\t\tfor _, tsStr := range timestamps {\n\t\t\t\tfor _, tp := range tsTypes {\n\t\t\t\t\tfor scale := 0; scale <= 9; scale++ {\n\t\t\t\t\t\tt.Run(tp+\"(\"+strconv.Itoa(scale)+\")_\"+tsStr, func(t *testing.T) {\n\t\t\t\t\t\t\tquery := fmt.Sprintf(\"SELECT '%s'::%s(%v)\", tsStr, tp, scale)\n\t\t\t\t\t\t\tsfRows, closeRows := tc.queryRows(ctx, t, query)\n\t\t\t\t\t\t\tdefer closeRows()\n\n\t\t\t\t\t\t\tbatches, err := GetArrowBatches(sfRows)\n\t\t\t\t\t\t\tif err != nil {\n\t\t\t\t\t\t\t\tt.Fatalf(\"GetArrowBatches failed: %v\", err)\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif len(batches) == 0 {\n\t\t\t\t\t\t\t\tt.Fatal(\"expected at least one batch\")\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\trecords, err := batches[0].Fetch()\n\n\t\t\t\t\t\t\tif 
prec.expectError {\n\t\t\t\t\t\t\t\texpectedError := \"Cannot convert timestamp\"\n\t\t\t\t\t\t\t\tif err == nil {\n\t\t\t\t\t\t\t\t\tt.Fatalf(\"no error, expected: %v\", expectedError)\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tif !strings.Contains(err.Error(), expectedError) {\n\t\t\t\t\t\t\t\t\tt.Fatalf(\"improper error, expected: %v, got: %v\", expectedError, err.Error())\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\treturn\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tif err != nil {\n\t\t\t\t\t\t\t\tt.Fatalf(\"Fetch failed: %v\", err)\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif records == nil || len(*records) == 0 {\n\t\t\t\t\t\t\t\tt.Fatal(\"expected at least one record\")\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\trec := (*records)[0]\n\t\t\t\t\t\t\tdefer rec.Release()\n\n\t\t\t\t\t\t\tactual := rec.Column(0).(*array.Timestamp).TimestampValues()[0]\n\t\t\t\t\t\t\tactualYear := actual.ToTime(prec.unit).Year()\n\n\t\t\t\t\t\t\tts, err := time.Parse(\"2006-01-02 15:04:05\", tsStr)\n\t\t\t\t\t\t\tif err != nil {\n\t\t\t\t\t\t\t\tt.Fatalf(\"failed to parse time: %v\", err)\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\texp := ts.Truncate(time.Duration(math.Pow10(9 - scale)))\n\n\t\t\t\t\t\t\tif actualYear != exp.Year() {\n\t\t\t\t\t\t\t\tt.Fatalf(\"unexpected year, expected: %v, got: %v\", exp.Year(), actualYear)\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t})\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\n// TestTimestampConversionWithOriginalTimestamp tests all 10 timestamp scales\n// (0-9) because each scale exercises a mathematically distinct code path in\n// extractEpoch/extractFraction. 
See TestTimestampConversionDistantDates for\n// rationale on why the full scale range is required.\nfunc TestTimestampConversionWithOriginalTimestamp(t *testing.T) {\n\ttimestamps := [3]string{\n\t\t\"2000-10-10 10:10:10.123456789\",\n\t\t\"9999-12-12 23:59:59.999999999\",\n\t\t\"0001-01-01 00:00:00.000000000\",\n\t}\n\ttsTypes := [3]string{\"TIMESTAMP_NTZ\", \"TIMESTAMP_LTZ\", \"TIMESTAMP_TZ\"}\n\n\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\tdefer pool.AssertSize(t, 0)\n\n\tctx := sf.WithArrowAllocator(\n\t\tWithTimestampOption(WithArrowBatches(context.Background()), UseOriginalTimestamp),\n\t\tpool,\n\t)\n\n\ttc := openTestConn(ctx, t)\n\tdefer tc.close()\n\n\tfor _, tsStr := range timestamps {\n\t\tts, err := time.Parse(\"2006-01-02 15:04:05\", tsStr)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"failed to parse time: %v\", err)\n\t\t}\n\t\tfor _, tp := range tsTypes {\n\t\t\tt.Run(tp+\"_\"+tsStr, func(t *testing.T) {\n\t\t\t\t// Batch all 10 scales into a single multi-column query to reduce round trips.\n\t\t\t\tvar cols []string\n\t\t\t\tfor scale := 0; scale <= 9; scale++ {\n\t\t\t\t\tcols = append(cols, fmt.Sprintf(\"'%s'::%s(%v)\", tsStr, tp, scale))\n\t\t\t\t}\n\t\t\t\tquery := \"SELECT \" + strings.Join(cols, \", \")\n\t\t\t\tsfRows, closeRows := tc.queryRows(ctx, t, query)\n\t\t\t\tdefer closeRows()\n\n\t\t\t\tbatches, err := GetArrowBatches(sfRows)\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Fatalf(\"GetArrowBatches failed: %v\", err)\n\t\t\t\t}\n\t\t\t\tif len(batches) != 1 {\n\t\t\t\t\tt.Fatalf(\"expected 1 batch, got %d\", len(batches))\n\t\t\t\t}\n\n\t\t\t\trecords, err := batches[0].Fetch()\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Fatalf(\"Fetch failed: %v\", err)\n\t\t\t\t}\n\t\t\t\tif records == nil || len(*records) == 0 {\n\t\t\t\t\tt.Fatal(\"expected at least one record\")\n\t\t\t\t}\n\n\t\t\t\tfor scale := 0; scale <= 9; scale++ {\n\t\t\t\t\texp := ts.Truncate(time.Duration(math.Pow10(9 - scale)))\n\t\t\t\t\tfor _, r := range *records 
{\n\t\t\t\t\t\tdefer r.Release()\n\t\t\t\t\t\tact := batches[0].ArrowSnowflakeTimestampToTime(r, scale, 0)\n\t\t\t\t\t\tif act == nil {\n\t\t\t\t\t\t\tt.Fatalf(\"scale %d: unexpected nil, expected: %v\", scale, exp)\n\t\t\t\t\t\t} else if !exp.Equal(*act) {\n\t\t\t\t\t\t\tt.Fatalf(\"scale %d: unexpected result, expected: %v, got: %v\", scale, exp, *act)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "arrowbatches/context.go",
    "content": "package arrowbatches\n\nimport (\n\t\"context\"\n\n\tia \"github.com/snowflakedb/gosnowflake/v2/internal/arrow\"\n)\n\n// Timestamp option constants.\nconst (\n\tUseNanosecondTimestamp  = ia.UseNanosecondTimestamp\n\tUseMicrosecondTimestamp = ia.UseMicrosecondTimestamp\n\tUseMillisecondTimestamp = ia.UseMillisecondTimestamp\n\tUseSecondTimestamp      = ia.UseSecondTimestamp\n\tUseOriginalTimestamp    = ia.UseOriginalTimestamp\n)\n\n// WithArrowBatches returns a context that enables arrow batch mode for queries.\nfunc WithArrowBatches(ctx context.Context) context.Context {\n\treturn ia.EnableArrowBatches(ctx)\n}\n\n// WithTimestampOption returns a context that sets the timestamp conversion option\n// for arrow batches.\nfunc WithTimestampOption(ctx context.Context, option ia.TimestampOption) context.Context {\n\treturn ia.WithTimestampOption(ctx, option)\n}\n\n// WithUtf8Validation returns a context that enables UTF-8 validation for\n// string columns in arrow batches.\nfunc WithUtf8Validation(ctx context.Context) context.Context {\n\treturn ia.EnableUtf8Validation(ctx)\n}\n"
  },
  {
    "path": "arrowbatches/converter.go",
    "content": "package arrowbatches\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/query\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/types\"\n\t\"math\"\n\t\"math/big\"\n\t\"strings\"\n\t\"time\"\n\t\"unicode/utf8\"\n\n\tsf \"github.com/snowflakedb/gosnowflake/v2\"\n\tia \"github.com/snowflakedb/gosnowflake/v2/internal/arrow\"\n\n\t\"github.com/apache/arrow-go/v18/arrow\"\n\t\"github.com/apache/arrow-go/v18/arrow/array\"\n\t\"github.com/apache/arrow-go/v18/arrow/compute\"\n\t\"github.com/apache/arrow-go/v18/arrow/memory\"\n)\n\n// arrowToRecord transforms a raw arrow.Record from Snowflake into a record\n// with standard Arrow types (e.g., converting struct-based timestamps to\n// arrow.Timestamp, decimal128 to int64/float64, etc.)\nfunc arrowToRecord(ctx context.Context, record arrow.Record, pool memory.Allocator, rowType []query.ExecResponseRowType, loc *time.Location) (arrow.Record, error) {\n\ttimestampOption := ia.GetTimestampOption(ctx)\n\thigherPrecision := ia.HigherPrecisionEnabled(ctx)\n\n\ts, err := recordToSchema(record.Schema(), rowType, loc, timestampOption, higherPrecision)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvar cols []arrow.Array\n\tnumRows := record.NumRows()\n\tctxAlloc := compute.WithAllocator(ctx, pool)\n\n\tfor i, col := range record.Columns() {\n\t\tfieldMetadata := rowType[i].ToFieldMetadata()\n\n\t\tnewCol, err := arrowToRecordSingleColumn(ctxAlloc, s.Field(i), col, fieldMetadata, higherPrecision, timestampOption, pool, loc, numRows)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tcols = append(cols, newCol)\n\t\tdefer newCol.Release()\n\t}\n\tnewRecord := array.NewRecord(s, cols, numRows)\n\treturn newRecord, nil\n}\n\nfunc arrowToRecordSingleColumn(ctx context.Context, field arrow.Field, col arrow.Array, fieldMetadata query.FieldMetadata, higherPrecisionEnabled bool, timestampOption ia.TimestampOption, pool memory.Allocator, loc *time.Location, numRows int64) 
(arrow.Array, error) {\n\tvar err error\n\tnewCol := col\n\tsnowflakeType := types.GetSnowflakeType(fieldMetadata.Type)\n\tswitch snowflakeType {\n\tcase types.FixedType:\n\t\tif higherPrecisionEnabled {\n\t\t\tcol.Retain()\n\t\t} else if col.DataType().ID() == arrow.DECIMAL || col.DataType().ID() == arrow.DECIMAL256 {\n\t\t\tvar toType arrow.DataType\n\t\t\tif fieldMetadata.Scale == 0 {\n\t\t\t\ttoType = arrow.PrimitiveTypes.Int64\n\t\t\t} else {\n\t\t\t\ttoType = arrow.PrimitiveTypes.Float64\n\t\t\t}\n\t\t\tnewCol, err = compute.CastArray(ctx, col, compute.UnsafeCastOptions(toType))\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t} else if fieldMetadata.Scale != 0 && col.DataType().ID() != arrow.INT64 {\n\t\t\tresult, err := compute.Divide(ctx, compute.ArithmeticOptions{NoCheckOverflow: true},\n\t\t\t\t&compute.ArrayDatum{Value: newCol.Data()},\n\t\t\t\tcompute.NewDatum(math.Pow10(int(fieldMetadata.Scale))))\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tdefer result.Release()\n\t\t\tnewCol = result.(*compute.ArrayDatum).MakeArray()\n\t\t} else if fieldMetadata.Scale != 0 && col.DataType().ID() == arrow.INT64 {\n\t\t\tvalues := col.(*array.Int64).Int64Values()\n\t\t\tfloatValues := make([]float64, len(values))\n\t\t\tfor i, val := range values {\n\t\t\t\tfloatValues[i], _ = intToBigFloat(val, int64(fieldMetadata.Scale)).Float64()\n\t\t\t}\n\t\t\tbuilder := array.NewFloat64Builder(pool)\n\t\t\tbuilder.AppendValues(floatValues, nil)\n\t\t\tnewCol = builder.NewArray()\n\t\t\tbuilder.Release()\n\t\t} else {\n\t\t\tcol.Retain()\n\t\t}\n\tcase types.TimeType:\n\t\tnewCol, err = compute.CastArray(ctx, col, compute.SafeCastOptions(arrow.FixedWidthTypes.Time64ns))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\tcase types.TimestampNtzType, types.TimestampLtzType, types.TimestampTzType:\n\t\tif timestampOption == ia.UseOriginalTimestamp {\n\t\t\tcol.Retain()\n\t\t} else {\n\t\t\tvar unit arrow.TimeUnit\n\t\t\tswitch timestampOption 
{\n\t\t\tcase ia.UseMicrosecondTimestamp:\n\t\t\t\tunit = arrow.Microsecond\n\t\t\tcase ia.UseMillisecondTimestamp:\n\t\t\t\tunit = arrow.Millisecond\n\t\t\tcase ia.UseSecondTimestamp:\n\t\t\t\tunit = arrow.Second\n\t\t\tcase ia.UseNanosecondTimestamp:\n\t\t\t\tunit = arrow.Nanosecond\n\t\t\t}\n\t\t\tvar tb *array.TimestampBuilder\n\t\t\tif snowflakeType == types.TimestampLtzType {\n\t\t\t\ttb = array.NewTimestampBuilder(pool, &arrow.TimestampType{Unit: unit, TimeZone: loc.String()})\n\t\t\t} else {\n\t\t\t\ttb = array.NewTimestampBuilder(pool, &arrow.TimestampType{Unit: unit})\n\t\t\t}\n\t\t\tdefer tb.Release()\n\n\t\t\tfor i := 0; i < int(numRows); i++ {\n\t\t\t\tts := ArrowSnowflakeTimestampToTime(col, snowflakeType, int(fieldMetadata.Scale), i, loc)\n\t\t\t\tif ts != nil {\n\t\t\t\t\tvar ar arrow.Timestamp\n\t\t\t\t\tswitch timestampOption {\n\t\t\t\t\tcase ia.UseMicrosecondTimestamp:\n\t\t\t\t\t\tar = arrow.Timestamp(ts.UnixMicro())\n\t\t\t\t\tcase ia.UseMillisecondTimestamp:\n\t\t\t\t\t\tar = arrow.Timestamp(ts.UnixMilli())\n\t\t\t\t\tcase ia.UseSecondTimestamp:\n\t\t\t\t\t\tar = arrow.Timestamp(ts.Unix())\n\t\t\t\t\tcase ia.UseNanosecondTimestamp:\n\t\t\t\t\t\tar = arrow.Timestamp(ts.UnixNano())\n\t\t\t\t\t\tif ts.UTC().Year() != ar.ToTime(arrow.Nanosecond).Year() {\n\t\t\t\t\t\t\treturn nil, &sf.SnowflakeError{\n\t\t\t\t\t\t\t\tNumber:   sf.ErrTooHighTimestampPrecision,\n\t\t\t\t\t\t\t\tSQLState: sf.SQLStateInvalidDataTimeFormat,\n\t\t\t\t\t\t\t\tMessage:  fmt.Sprintf(\"Cannot convert timestamp %v in column %v to Arrow.Timestamp data type due to too high precision. 
Please use context with WithOriginalTimestamp.\", ts.UTC(), fieldMetadata.Name),\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\ttb.Append(ar)\n\t\t\t\t} else {\n\t\t\t\t\ttb.AppendNull()\n\t\t\t\t}\n\t\t\t}\n\t\t\tnewCol = tb.NewArray()\n\t\t}\n\tcase types.TextType:\n\t\tif stringCol, ok := col.(*array.String); ok {\n\t\t\tnewCol = arrowStringRecordToColumn(ctx, stringCol, pool, numRows)\n\t\t}\n\tcase types.ObjectType:\n\t\tif structCol, ok := col.(*array.Struct); ok {\n\t\t\tvar internalCols []arrow.Array\n\t\t\tfor i := 0; i < structCol.NumField(); i++ {\n\t\t\t\tinternalCol := structCol.Field(i)\n\t\t\t\tnewInternalCol, err := arrowToRecordSingleColumn(ctx, field.Type.(*arrow.StructType).Field(i), internalCol, fieldMetadata.Fields[i], higherPrecisionEnabled, timestampOption, pool, loc, numRows)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\tinternalCols = append(internalCols, newInternalCol)\n\t\t\t\tdefer newInternalCol.Release()\n\t\t\t}\n\t\t\tvar fieldNames []string\n\t\t\tfor _, f := range field.Type.(*arrow.StructType).Fields() {\n\t\t\t\tfieldNames = append(fieldNames, f.Name)\n\t\t\t}\n\t\t\tnullBitmap := memory.NewBufferBytes(structCol.NullBitmapBytes())\n\t\t\tnumberOfNulls := structCol.NullN()\n\t\t\treturn array.NewStructArrayWithNulls(internalCols, fieldNames, nullBitmap, numberOfNulls, 0)\n\t\t} else if stringCol, ok := col.(*array.String); ok {\n\t\t\tnewCol = arrowStringRecordToColumn(ctx, stringCol, pool, numRows)\n\t\t}\n\tcase types.ArrayType:\n\t\tif listCol, ok := col.(*array.List); ok {\n\t\t\tnewCol, err = arrowToRecordSingleColumn(ctx, field.Type.(*arrow.ListType).ElemField(), listCol.ListValues(), fieldMetadata.Fields[0], higherPrecisionEnabled, timestampOption, pool, loc, numRows)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tdefer newCol.Release()\n\t\t\tnewData := array.NewData(arrow.ListOf(newCol.DataType()), listCol.Len(), listCol.Data().Buffers(), 
[]arrow.ArrayData{newCol.Data()}, listCol.NullN(), 0)\n\t\t\tdefer newData.Release()\n\t\t\treturn array.NewListData(newData), nil\n\t\t} else if stringCol, ok := col.(*array.String); ok {\n\t\t\tnewCol = arrowStringRecordToColumn(ctx, stringCol, pool, numRows)\n\t\t}\n\tcase types.MapType:\n\t\tif mapCol, ok := col.(*array.Map); ok {\n\t\t\tkeyCol, err := arrowToRecordSingleColumn(ctx, field.Type.(*arrow.MapType).KeyField(), mapCol.Keys(), fieldMetadata.Fields[0], higherPrecisionEnabled, timestampOption, pool, loc, numRows)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tdefer keyCol.Release()\n\t\t\tvalueCol, err := arrowToRecordSingleColumn(ctx, field.Type.(*arrow.MapType).ItemField(), mapCol.Items(), fieldMetadata.Fields[1], higherPrecisionEnabled, timestampOption, pool, loc, numRows)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tdefer valueCol.Release()\n\n\t\t\tstructArr, err := array.NewStructArray([]arrow.Array{keyCol, valueCol}, []string{\"k\", \"v\"})\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tdefer structArr.Release()\n\t\t\tnewData := array.NewData(arrow.MapOf(keyCol.DataType(), valueCol.DataType()), mapCol.Len(), mapCol.Data().Buffers(), []arrow.ArrayData{structArr.Data()}, mapCol.NullN(), 0)\n\t\t\tdefer newData.Release()\n\t\t\treturn array.NewMapData(newData), nil\n\t\t} else if stringCol, ok := col.(*array.String); ok {\n\t\t\tnewCol = arrowStringRecordToColumn(ctx, stringCol, pool, numRows)\n\t\t}\n\tdefault:\n\t\tcol.Retain()\n\t}\n\treturn newCol, nil\n}\n\nfunc arrowStringRecordToColumn(\n\tctx context.Context,\n\tstringCol *array.String,\n\tmem memory.Allocator,\n\tnumRows int64,\n) arrow.Array {\n\tif ia.Utf8ValidationEnabled(ctx) && stringCol.DataType().ID() == arrow.STRING {\n\t\ttb := array.NewStringBuilder(mem)\n\t\tdefer tb.Release()\n\n\t\tfor i := 0; i < int(numRows); i++ {\n\t\t\tif stringCol.IsValid(i) {\n\t\t\t\tstringValue := stringCol.Value(i)\n\t\t\t\tif 
!utf8.ValidString(stringValue) {\n\t\t\t\t\tstringValue = strings.ToValidUTF8(stringValue, \"�\")\n\t\t\t\t}\n\t\t\t\ttb.Append(stringValue)\n\t\t\t} else {\n\t\t\t\ttb.AppendNull()\n\t\t\t}\n\t\t}\n\t\tarr := tb.NewArray()\n\t\treturn arr\n\t}\n\tstringCol.Retain()\n\treturn stringCol\n}\n\nfunc intToBigFloat(val int64, scale int64) *big.Float {\n\tf := new(big.Float).SetInt64(val)\n\ts := new(big.Float).SetInt(new(big.Int).Exp(big.NewInt(10), big.NewInt(scale), nil))\n\treturn new(big.Float).Quo(f, s)\n}\n\n// ArrowSnowflakeTimestampToTime converts original timestamp returned by Snowflake to time.Time.\nfunc ArrowSnowflakeTimestampToTime(\n\tcolumn arrow.Array,\n\tsfType types.SnowflakeType,\n\tscale int,\n\trecIdx int,\n\tloc *time.Location) *time.Time {\n\n\tif column.IsNull(recIdx) {\n\t\treturn nil\n\t}\n\tvar ret time.Time\n\tswitch sfType {\n\tcase types.TimestampNtzType:\n\t\tif column.DataType().ID() == arrow.STRUCT {\n\t\t\tstructData := column.(*array.Struct)\n\t\t\tepoch := structData.Field(0).(*array.Int64).Int64Values()\n\t\t\tfraction := structData.Field(1).(*array.Int32).Int32Values()\n\t\t\tret = time.Unix(epoch[recIdx], int64(fraction[recIdx])).UTC()\n\t\t} else {\n\t\t\tintData := column.(*array.Int64)\n\t\t\tvalue := intData.Value(recIdx)\n\t\t\tepoch := extractEpoch(value, scale)\n\t\t\tfraction := extractFraction(value, scale)\n\t\t\tret = time.Unix(epoch, fraction).UTC()\n\t\t}\n\tcase types.TimestampLtzType:\n\t\tif column.DataType().ID() == arrow.STRUCT {\n\t\t\tstructData := column.(*array.Struct)\n\t\t\tepoch := structData.Field(0).(*array.Int64).Int64Values()\n\t\t\tfraction := structData.Field(1).(*array.Int32).Int32Values()\n\t\t\tret = time.Unix(epoch[recIdx], int64(fraction[recIdx])).In(loc)\n\t\t} else {\n\t\t\tintData := column.(*array.Int64)\n\t\t\tvalue := intData.Value(recIdx)\n\t\t\tepoch := extractEpoch(value, scale)\n\t\t\tfraction := extractFraction(value, scale)\n\t\t\tret = time.Unix(epoch, 
fraction).In(loc)\n\t\t}\n\tcase types.TimestampTzType:\n\t\tstructData := column.(*array.Struct)\n\t\tif structData.NumField() == 2 {\n\t\t\tvalue := structData.Field(0).(*array.Int64).Int64Values()\n\t\t\ttimezone := structData.Field(1).(*array.Int32).Int32Values()\n\t\t\tepoch := extractEpoch(value[recIdx], scale)\n\t\t\tfraction := extractFraction(value[recIdx], scale)\n\t\t\tlocTz := sf.Location(int(timezone[recIdx]) - 1440)\n\t\t\tret = time.Unix(epoch, fraction).In(locTz)\n\t\t} else {\n\t\t\tepoch := structData.Field(0).(*array.Int64).Int64Values()\n\t\t\tfraction := structData.Field(1).(*array.Int32).Int32Values()\n\t\t\ttimezone := structData.Field(2).(*array.Int32).Int32Values()\n\t\t\tlocTz := sf.Location(int(timezone[recIdx]) - 1440)\n\t\t\tret = time.Unix(epoch[recIdx], int64(fraction[recIdx])).In(locTz)\n\t\t}\n\t}\n\treturn &ret\n}\n\nfunc extractEpoch(value int64, scale int) int64 {\n\treturn value / int64(math.Pow10(scale))\n}\n\nfunc extractFraction(value int64, scale int) int64 {\n\treturn (value % int64(math.Pow10(scale))) * int64(math.Pow10(9-scale))\n}\n"
  },
  {
    "path": "arrowbatches/converter_test.go",
    "content": "package arrowbatches\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/query\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/types\"\n\t\"math/big\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\tia \"github.com/snowflakedb/gosnowflake/v2/internal/arrow\"\n\n\t\"github.com/apache/arrow-go/v18/arrow\"\n\t\"github.com/apache/arrow-go/v18/arrow/array\"\n\t\"github.com/apache/arrow-go/v18/arrow/decimal128\"\n\t\"github.com/apache/arrow-go/v18/arrow/memory\"\n)\n\nvar decimalShift = new(big.Int).Exp(big.NewInt(2), big.NewInt(64), nil)\n\nfunc stringIntToDecimal(src string) (decimal128.Num, bool) {\n\tb, ok := new(big.Int).SetString(src, 10)\n\tif !ok {\n\t\treturn decimal128.Num{}, ok\n\t}\n\tvar high, low big.Int\n\thigh.QuoRem(b, decimalShift, &low)\n\treturn decimal128.New(high.Int64(), low.Uint64()), true\n}\n\nfunc decimalToBigInt(num decimal128.Num) *big.Int {\n\thigh := new(big.Int).SetInt64(num.HighBits())\n\tlow := new(big.Int).SetUint64(num.LowBits())\n\treturn new(big.Int).Add(new(big.Int).Mul(high, decimalShift), low)\n}\n\nfunc TestArrowToRecord(t *testing.T) {\n\tpool := memory.NewCheckedAllocator(memory.NewGoAllocator())\n\tdefer pool.AssertSize(t, 0)\n\tvar valids []bool\n\n\tlocalTime := time.Date(2019, 1, 1, 1, 17, 31, 123456789, time.FixedZone(\"-08:00\", -8*3600))\n\tlocalTimeFarIntoFuture := time.Date(9000, 2, 6, 14, 17, 31, 123456789, time.FixedZone(\"-08:00\", -8*3600))\n\n\tepochField := arrow.Field{Name: \"epoch\", Type: &arrow.Int64Type{}}\n\ttimezoneField := arrow.Field{Name: \"timezone\", Type: &arrow.Int32Type{}}\n\tfractionField := arrow.Field{Name: \"fraction\", Type: &arrow.Int32Type{}}\n\ttimestampTzStructWithoutFraction := arrow.StructOf(epochField, timezoneField)\n\ttimestampTzStructWithFraction := arrow.StructOf(epochField, fractionField, timezoneField)\n\ttimestampNtzStruct := arrow.StructOf(epochField, fractionField)\n\ttimestampLtzStruct := arrow.StructOf(epochField, 
fractionField)\n\n\ttype testObj struct {\n\t\tfield1 int\n\t\tfield2 string\n\t}\n\n\tfor _, tc := range []struct {\n\t\tlogical                          string\n\t\tphysical                         string\n\t\tsc                               *arrow.Schema\n\t\trowType                          query.ExecResponseRowType\n\t\tvalues                           any\n\t\texpected                         any\n\t\terror                            string\n\t\tarrowBatchesTimestampOption      ia.TimestampOption\n\t\tenableArrowBatchesUtf8Validation bool\n\t\twithHigherPrecision              bool\n\t\tnrows                            int\n\t\tbuilder                          array.Builder\n\t\tappend                           func(b array.Builder, vs any)\n\t\tcompare                          func(src any, expected any, rec arrow.Record) int\n\t}{\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"number\",\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.Int64Type{}}}, nil),\n\t\t\tvalues:   []int64{1, 2},\n\t\t\tnrows:    2,\n\t\t\tbuilder:  array.NewInt64Builder(pool),\n\t\t\tappend:   func(b array.Builder, vs any) { b.(*array.Int64Builder).AppendValues(vs.([]int64), valids) },\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"int64\",\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.Decimal128Type{Precision: 38, Scale: 0}}}, nil),\n\t\t\tvalues:   []string{\"10000000000000000000000000000000000000\", \"-12345678901234567890123456789012345678\"},\n\t\t\tnrows:    2,\n\t\t\tbuilder:  array.NewDecimal128Builder(pool, &arrow.Decimal128Type{Precision: 38, Scale: 0}),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, s := range vs.([]string) {\n\t\t\t\t\tnum, ok := stringIntToDecimal(s)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\tt.Fatalf(\"failed to convert to Int64\")\n\t\t\t\t\t}\n\t\t\t\t\tb.(*array.Decimal128Builder).Append(num)\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int 
{\n\t\t\t\tsrcvs := src.([]string)\n\t\t\t\tfor i, dec := range convertedRec.Column(0).(*array.Int64).Int64Values() {\n\t\t\t\t\tnum, ok := stringIntToDecimal(srcvs[i])\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t\tsrcDec := decimalToBigInt(num).Int64()\n\t\t\t\t\tif srcDec != dec {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:             \"fixed\",\n\t\t\tphysical:            \"number(38,0)\",\n\t\t\tsc:                  arrow.NewSchema([]arrow.Field{{Type: &arrow.Decimal128Type{Precision: 38, Scale: 0}}}, nil),\n\t\t\tvalues:              []string{\"10000000000000000000000000000000000000\", \"-12345678901234567890123456789012345678\"},\n\t\t\twithHigherPrecision: true,\n\t\t\tnrows:               2,\n\t\t\tbuilder:             array.NewDecimal128Builder(pool, &arrow.Decimal128Type{Precision: 38, Scale: 0}),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, s := range vs.([]string) {\n\t\t\t\t\tnum, ok := stringIntToDecimal(s)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\tt.Fatalf(\"failed to convert to Int64\")\n\t\t\t\t\t}\n\t\t\t\t\tb.(*array.Decimal128Builder).Append(num)\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]string)\n\t\t\t\tfor i, dec := range convertedRec.Column(0).(*array.Decimal128).Values() {\n\t\t\t\t\tsrcDec, ok := stringIntToDecimal(srcvs[i])\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t\tif srcDec != dec {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"float64\",\n\t\t\trowType:  query.ExecResponseRowType{Scale: 37},\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.Decimal128Type{Precision: 38, Scale: 37}}}, nil),\n\t\t\tvalues:   []string{\"1.2345678901234567890123456789012345678\", \"-9.999999999999999\"},\n\t\t\tnrows:    2,\n\t\t\tbuilder:  
array.NewDecimal128Builder(pool, &arrow.Decimal128Type{Precision: 38, Scale: 37}),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, s := range vs.([]string) {\n\t\t\t\t\tnum, err := decimal128.FromString(s, 38, 37)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tt.Fatalf(\"failed to convert to decimal: %s\", err)\n\t\t\t\t\t}\n\t\t\t\t\tb.(*array.Decimal128Builder).Append(num)\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]string)\n\t\t\t\tfor i, dec := range convertedRec.Column(0).(*array.Float64).Float64Values() {\n\t\t\t\t\tnum, err := decimal128.FromString(srcvs[i], 38, 37)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t\tsrcDec := num.ToFloat64(37)\n\t\t\t\t\tif srcDec != dec {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:             \"fixed\",\n\t\t\tphysical:            \"number(38,37)\",\n\t\t\trowType:             query.ExecResponseRowType{Scale: 37},\n\t\t\tsc:                  arrow.NewSchema([]arrow.Field{{Type: &arrow.Decimal128Type{Precision: 38, Scale: 37}}}, nil),\n\t\t\tvalues:              []string{\"1.2345678901234567890123456789012345678\", \"-9.999999999999999\"},\n\t\t\twithHigherPrecision: true,\n\t\t\tnrows:               2,\n\t\t\tbuilder:             array.NewDecimal128Builder(pool, &arrow.Decimal128Type{Precision: 38, Scale: 37}),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, s := range vs.([]string) {\n\t\t\t\t\tnum, err := decimal128.FromString(s, 38, 37)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tt.Fatalf(\"failed to convert to decimal: %s\", err)\n\t\t\t\t\t}\n\t\t\t\t\tb.(*array.Decimal128Builder).Append(num)\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]string)\n\t\t\t\tfor i, dec := range convertedRec.Column(0).(*array.Decimal128).Values() {\n\t\t\t\t\tsrcDec, err := 
decimal128.FromString(srcvs[i], 38, 37)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t\tif srcDec != dec {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"int8\",\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.Int8Type{}}}, nil),\n\t\t\tvalues:   []int8{1, 2},\n\t\t\tnrows:    2,\n\t\t\tbuilder:  array.NewInt8Builder(pool),\n\t\t\tappend:   func(b array.Builder, vs any) { b.(*array.Int8Builder).AppendValues(vs.([]int8), valids) },\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"int16\",\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.Int16Type{}}}, nil),\n\t\t\tvalues:   []int16{1, 2},\n\t\t\tnrows:    2,\n\t\t\tbuilder:  array.NewInt16Builder(pool),\n\t\t\tappend:   func(b array.Builder, vs any) { b.(*array.Int16Builder).AppendValues(vs.([]int16), valids) },\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"int32\",\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.Int32Type{}}}, nil),\n\t\t\tvalues:   []int32{1, 2},\n\t\t\tnrows:    2,\n\t\t\tbuilder:  array.NewInt32Builder(pool),\n\t\t\tappend:   func(b array.Builder, vs any) { b.(*array.Int32Builder).AppendValues(vs.([]int32), valids) },\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"int64\",\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.Int64Type{}}}, nil),\n\t\t\tvalues:   []int64{1, 2},\n\t\t\tnrows:    2,\n\t\t\tbuilder:  array.NewInt64Builder(pool),\n\t\t\tappend:   func(b array.Builder, vs any) { b.(*array.Int64Builder).AppendValues(vs.([]int64), valids) },\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"float8\",\n\t\t\trowType:  query.ExecResponseRowType{Scale: 1},\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.Int8Type{}}}, nil),\n\t\t\tvalues:   []int8{10, 16},\n\t\t\tnrows:    2,\n\t\t\tbuilder:  array.NewInt8Builder(pool),\n\t\t\tappend:   func(b array.Builder, vs 
any) { b.(*array.Int8Builder).AppendValues(vs.([]int8), valids) },\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]int8)\n\t\t\t\tfor i, f := range convertedRec.Column(0).(*array.Float64).Float64Values() {\n\t\t\t\t\trawFloat, _ := intToBigFloat(int64(srcvs[i]), 1).Float64()\n\t\t\t\t\tif rawFloat != f {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:             \"fixed\",\n\t\t\tphysical:            \"int8\",\n\t\t\trowType:             query.ExecResponseRowType{Scale: 1},\n\t\t\tsc:                  arrow.NewSchema([]arrow.Field{{Type: &arrow.Int8Type{}}}, nil),\n\t\t\tvalues:              []int8{10, 16},\n\t\t\twithHigherPrecision: true,\n\t\t\tnrows:               2,\n\t\t\tbuilder:             array.NewInt8Builder(pool),\n\t\t\tappend:              func(b array.Builder, vs any) { b.(*array.Int8Builder).AppendValues(vs.([]int8), valids) },\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]int8)\n\t\t\t\tfor i, f := range convertedRec.Column(0).(*array.Int8).Int8Values() {\n\t\t\t\t\tif srcvs[i] != f {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"float16\",\n\t\t\trowType:  query.ExecResponseRowType{Scale: 1},\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.Int16Type{}}}, nil),\n\t\t\tvalues:   []int16{20, 26},\n\t\t\tnrows:    2,\n\t\t\tbuilder:  array.NewInt16Builder(pool),\n\t\t\tappend:   func(b array.Builder, vs any) { b.(*array.Int16Builder).AppendValues(vs.([]int16), valids) },\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]int16)\n\t\t\t\tfor i, f := range convertedRec.Column(0).(*array.Float64).Float64Values() {\n\t\t\t\t\trawFloat, _ := intToBigFloat(int64(srcvs[i]), 1).Float64()\n\t\t\t\t\tif rawFloat != f {\n\t\t\t\t\t\treturn 
i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:             \"fixed\",\n\t\t\tphysical:            \"int16\",\n\t\t\trowType:             query.ExecResponseRowType{Scale: 1},\n\t\t\tsc:                  arrow.NewSchema([]arrow.Field{{Type: &arrow.Int16Type{}}}, nil),\n\t\t\tvalues:              []int16{20, 26},\n\t\t\twithHigherPrecision: true,\n\t\t\tnrows:               2,\n\t\t\tbuilder:             array.NewInt16Builder(pool),\n\t\t\tappend:              func(b array.Builder, vs any) { b.(*array.Int16Builder).AppendValues(vs.([]int16), valids) },\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]int16)\n\t\t\t\tfor i, f := range convertedRec.Column(0).(*array.Int16).Int16Values() {\n\t\t\t\t\tif srcvs[i] != f {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"float32\",\n\t\t\trowType:  query.ExecResponseRowType{Scale: 2},\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.Int32Type{}}}, nil),\n\t\t\tvalues:   []int32{200, 265},\n\t\t\tnrows:    2,\n\t\t\tbuilder:  array.NewInt32Builder(pool),\n\t\t\tappend:   func(b array.Builder, vs any) { b.(*array.Int32Builder).AppendValues(vs.([]int32), valids) },\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]int32)\n\t\t\t\tfor i, f := range convertedRec.Column(0).(*array.Float64).Float64Values() {\n\t\t\t\t\trawFloat, _ := intToBigFloat(int64(srcvs[i]), 2).Float64()\n\t\t\t\t\tif rawFloat != f {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:             \"fixed\",\n\t\t\tphysical:            \"int32\",\n\t\t\trowType:             query.ExecResponseRowType{Scale: 2},\n\t\t\tsc:                  arrow.NewSchema([]arrow.Field{{Type: &arrow.Int32Type{}}}, nil),\n\t\t\tvalues:              []int32{200, 
265},\n\t\t\twithHigherPrecision: true,\n\t\t\tnrows:               2,\n\t\t\tbuilder:             array.NewInt32Builder(pool),\n\t\t\tappend:              func(b array.Builder, vs any) { b.(*array.Int32Builder).AppendValues(vs.([]int32), valids) },\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]int32)\n\t\t\t\tfor i, f := range convertedRec.Column(0).(*array.Int32).Int32Values() {\n\t\t\t\t\tif srcvs[i] != f {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"float64\",\n\t\t\trowType:  query.ExecResponseRowType{Scale: 5},\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.Int64Type{}}}, nil),\n\t\t\tvalues:   []int64{12345, 234567},\n\t\t\tnrows:    2,\n\t\t\tbuilder:  array.NewInt64Builder(pool),\n\t\t\tappend:   func(b array.Builder, vs any) { b.(*array.Int64Builder).AppendValues(vs.([]int64), valids) },\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]int64)\n\t\t\t\tfor i, f := range convertedRec.Column(0).(*array.Float64).Float64Values() {\n\t\t\t\t\trawFloat, _ := intToBigFloat(srcvs[i], 5).Float64()\n\t\t\t\t\tif rawFloat != f {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:             \"fixed\",\n\t\t\tphysical:            \"int64\",\n\t\t\trowType:             query.ExecResponseRowType{Scale: 5},\n\t\t\tsc:                  arrow.NewSchema([]arrow.Field{{Type: &arrow.Int64Type{}}}, nil),\n\t\t\tvalues:              []int64{12345, 234567},\n\t\t\twithHigherPrecision: true,\n\t\t\tnrows:               2,\n\t\t\tbuilder:             array.NewInt64Builder(pool),\n\t\t\tappend:              func(b array.Builder, vs any) { b.(*array.Int64Builder).AppendValues(vs.([]int64), valids) },\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := 
src.([]int64)\n\t\t\t\tfor i, f := range convertedRec.Column(0).(*array.Int64).Int64Values() {\n\t\t\t\t\tif srcvs[i] != f {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical: \"boolean\",\n\t\t\tsc:      arrow.NewSchema([]arrow.Field{{Type: &arrow.BooleanType{}}}, nil),\n\t\t\tvalues:  []bool{true, false},\n\t\t\tnrows:   2,\n\t\t\tbuilder: array.NewBooleanBuilder(pool),\n\t\t\tappend:  func(b array.Builder, vs any) { b.(*array.BooleanBuilder).AppendValues(vs.([]bool), valids) },\n\t\t},\n\t\t{\n\t\t\tlogical:  \"real\",\n\t\t\tphysical: \"float\",\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.Float64Type{}}}, nil),\n\t\t\tvalues:   []float64{1, 2},\n\t\t\tnrows:    2,\n\t\t\tbuilder:  array.NewFloat64Builder(pool),\n\t\t\tappend:   func(b array.Builder, vs any) { b.(*array.Float64Builder).AppendValues(vs.([]float64), valids) },\n\t\t},\n\t\t{\n\t\t\tlogical:  \"text\",\n\t\t\tphysical: \"string\",\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.StringType{}}}, nil),\n\t\t\tvalues:   []string{\"foo\", \"bar\"},\n\t\t\tnrows:    2,\n\t\t\tbuilder:  array.NewStringBuilder(pool),\n\t\t\tappend:   func(b array.Builder, vs any) { b.(*array.StringBuilder).AppendValues(vs.([]string), valids) },\n\t\t},\n\t\t{\n\t\t\tlogical:                          \"text\",\n\t\t\tphysical:                         \"string with invalid utf8\",\n\t\t\tsc:                               arrow.NewSchema([]arrow.Field{{Type: &arrow.StringType{}}}, nil),\n\t\t\trowType:                          query.ExecResponseRowType{Type: \"TEXT\"},\n\t\t\tvalues:                           []string{\"\\xFF\", \"bar\", \"baz\\xFF\\xFF\"},\n\t\t\texpected:                         []string{\"�\", \"bar\", \"baz��\"},\n\t\t\tenableArrowBatchesUtf8Validation: true,\n\t\t\tnrows:                            2,\n\t\t\tbuilder:                          array.NewStringBuilder(pool),\n\t\t\tappend:                           func(b 
array.Builder, vs any) { b.(*array.StringBuilder).AppendValues(vs.([]string), valids) },\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tarr := convertedRec.Column(0).(*array.String)\n\t\t\t\tfor i := 0; i < arr.Len(); i++ {\n\t\t\t\t\tif expected.([]string)[i] != arr.Value(i) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical: \"binary\",\n\t\t\tsc:      arrow.NewSchema([]arrow.Field{{Type: &arrow.BinaryType{}}}, nil),\n\t\t\tvalues:  [][]byte{[]byte(\"foo\"), []byte(\"bar\")},\n\t\t\tnrows:   2,\n\t\t\tbuilder: array.NewBinaryBuilder(pool, arrow.BinaryTypes.Binary),\n\t\t\tappend:  func(b array.Builder, vs any) { b.(*array.BinaryBuilder).AppendValues(vs.([][]byte), valids) },\n\t\t},\n\t\t{\n\t\t\tlogical: \"date\",\n\t\t\tsc:      arrow.NewSchema([]arrow.Field{{Type: &arrow.Date32Type{}}}, nil),\n\t\t\tvalues:  []time.Time{time.Now(), localTime},\n\t\t\tnrows:   2,\n\t\t\tbuilder: array.NewDate32Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, d := range vs.([]time.Time) {\n\t\t\t\t\tb.(*array.Date32Builder).Append(arrow.Date32(d.Unix()))\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical: \"time\",\n\t\t\tsc:      arrow.NewSchema([]arrow.Field{{Type: arrow.FixedWidthTypes.Time64ns}}, nil),\n\t\t\tvalues:  []time.Time{time.Now(), time.Now()},\n\t\t\tnrows:   2,\n\t\t\tbuilder: array.NewTime64Builder(pool, arrow.FixedWidthTypes.Time64ns.(*arrow.Time64Type)),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tb.(*array.Time64Builder).Append(arrow.Time64(t.UnixNano()))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tarr := convertedRec.Column(0).(*array.Time64)\n\t\t\t\tfor i := 0; i < arr.Len(); i++ {\n\t\t\t\t\tif srcvs[i].UnixNano() != int64(arr.Value(i)) {\n\t\t\t\t\t\treturn 
i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:  \"timestamp_ntz\",\n\t\t\tphysical: \"int64\",\n\t\t\tvalues:   []time.Time{time.Now().Truncate(time.Millisecond), localTime.Truncate(time.Millisecond)},\n\t\t\tnrows:    2,\n\t\t\trowType:  query.ExecResponseRowType{Scale: 3},\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.Int64Type{}}}, nil),\n\t\t\tbuilder:  array.NewInt64Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tb.(*array.Int64Builder).Append(t.UnixMilli())\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i, t := range convertedRec.Column(0).(*array.Timestamp).TimestampValues() {\n\t\t\t\t\tif !srcvs[i].Equal(t.ToTime(arrow.Nanosecond)) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:  \"timestamp_ntz\",\n\t\t\tphysical: \"struct\",\n\t\t\tvalues:   []time.Time{time.Now(), localTime},\n\t\t\tnrows:    2,\n\t\t\trowType:  query.ExecResponseRowType{Scale: 9},\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: timestampNtzStruct}}, nil),\n\t\t\tbuilder:  array.NewStructBuilder(pool, timestampNtzStruct),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.Unix())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(t.Nanosecond()))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i, t := range convertedRec.Column(0).(*array.Timestamp).TimestampValues() {\n\t\t\t\t\tif !srcvs[i].Equal(t.ToTime(arrow.Nanosecond)) {\n\t\t\t\t\t\treturn 
i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:                     \"timestamp_ntz\",\n\t\t\tphysical:                    \"struct\",\n\t\t\tvalues:                      []time.Time{time.Now().Truncate(time.Microsecond), localTime.Truncate(time.Microsecond)},\n\t\t\tarrowBatchesTimestampOption: ia.UseMicrosecondTimestamp,\n\t\t\tnrows:                       2,\n\t\t\trowType:                     query.ExecResponseRowType{Scale: 9},\n\t\t\tsc:                          arrow.NewSchema([]arrow.Field{{Type: timestampNtzStruct}}, nil),\n\t\t\tbuilder:                     array.NewStructBuilder(pool, timestampNtzStruct),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.Unix())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(t.Nanosecond()))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i, t := range convertedRec.Column(0).(*array.Timestamp).TimestampValues() {\n\t\t\t\t\tif !srcvs[i].Equal(t.ToTime(arrow.Microsecond)) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:                     \"timestamp_ntz\",\n\t\t\tphysical:                    \"struct\",\n\t\t\tvalues:                      []time.Time{time.Now().Truncate(time.Millisecond), localTime.Truncate(time.Millisecond)},\n\t\t\tarrowBatchesTimestampOption: ia.UseMillisecondTimestamp,\n\t\t\tnrows:                       2,\n\t\t\trowType:                     query.ExecResponseRowType{Scale: 9},\n\t\t\tsc:                          arrow.NewSchema([]arrow.Field{{Type: timestampNtzStruct}}, nil),\n\t\t\tbuilder:                     array.NewStructBuilder(pool, timestampNtzStruct),\n\t\t\tappend: 
func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.Unix())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(t.Nanosecond()))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i, t := range convertedRec.Column(0).(*array.Timestamp).TimestampValues() {\n\t\t\t\t\tif !srcvs[i].Equal(t.ToTime(arrow.Millisecond)) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:                     \"timestamp_ntz\",\n\t\t\tphysical:                    \"struct\",\n\t\t\tvalues:                      []time.Time{time.Now().Truncate(time.Second), localTime.Truncate(time.Second)},\n\t\t\tarrowBatchesTimestampOption: ia.UseSecondTimestamp,\n\t\t\tnrows:                       2,\n\t\t\trowType:                     query.ExecResponseRowType{Scale: 9},\n\t\t\tsc:                          arrow.NewSchema([]arrow.Field{{Type: timestampNtzStruct}}, nil),\n\t\t\tbuilder:                     array.NewStructBuilder(pool, timestampNtzStruct),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.Unix())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(t.Nanosecond()))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i, t := range convertedRec.Column(0).(*array.Timestamp).TimestampValues() {\n\t\t\t\t\tif !srcvs[i].Equal(t.ToTime(arrow.Second)) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn 
-1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:  \"timestamp_ntz\",\n\t\t\tphysical: \"error\",\n\t\t\tvalues:   []time.Time{localTimeFarIntoFuture},\n\t\t\terror:    \"Cannot convert timestamp\",\n\t\t\tnrows:    1,\n\t\t\trowType:  query.ExecResponseRowType{Scale: 3},\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.Int64Type{}}}, nil),\n\t\t\tbuilder:  array.NewInt64Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tb.(*array.Int64Builder).Append(t.UnixMilli())\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int { return 0 },\n\t\t},\n\t\t{\n\t\t\tlogical:                     \"timestamp_ntz\",\n\t\t\tphysical:                    \"int64 with original timestamp\",\n\t\t\tvalues:                      []time.Time{time.Now().Truncate(time.Millisecond), localTime.Truncate(time.Millisecond), localTimeFarIntoFuture.Truncate(time.Millisecond)},\n\t\t\tarrowBatchesTimestampOption: ia.UseOriginalTimestamp,\n\t\t\tnrows:                       3,\n\t\t\trowType:                     query.ExecResponseRowType{Scale: 3},\n\t\t\tsc:                          arrow.NewSchema([]arrow.Field{{Type: &arrow.Int64Type{}}}, nil),\n\t\t\tbuilder:                     array.NewInt64Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tb.(*array.Int64Builder).Append(t.UnixMilli())\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i := 0; i < convertedRec.Column(0).Len(); i++ {\n\t\t\t\t\tts := ArrowSnowflakeTimestampToTime(convertedRec.Column(0), types.GetSnowflakeType(\"timestamp_ntz\"), 3, i, nil)\n\t\t\t\t\tif !srcvs[i].Equal(*ts) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:                     \"timestamp_ntz\",\n\t\t\tphysical:                    
\"struct with original timestamp\",\n\t\t\tvalues:                      []time.Time{time.Now(), localTime, localTimeFarIntoFuture},\n\t\t\tarrowBatchesTimestampOption: ia.UseOriginalTimestamp,\n\t\t\tnrows:                       3,\n\t\t\trowType:                     query.ExecResponseRowType{Scale: 9},\n\t\t\tsc:                          arrow.NewSchema([]arrow.Field{{Type: timestampNtzStruct}}, nil),\n\t\t\tbuilder:                     array.NewStructBuilder(pool, timestampNtzStruct),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.Unix())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(t.Nanosecond()))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i := 0; i < convertedRec.Column(0).Len(); i++ {\n\t\t\t\t\tts := ArrowSnowflakeTimestampToTime(convertedRec.Column(0), types.GetSnowflakeType(\"timestamp_ntz\"), 9, i, nil)\n\t\t\t\t\tif !srcvs[i].Equal(*ts) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:  \"timestamp_ltz\",\n\t\t\tphysical: \"int64\",\n\t\t\tvalues:   []time.Time{time.Now().Truncate(time.Millisecond), localTime.Truncate(time.Millisecond)},\n\t\t\tnrows:    2,\n\t\t\trowType:  query.ExecResponseRowType{Scale: 3},\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.Int64Type{}}}, nil),\n\t\t\tbuilder:  array.NewInt64Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tb.(*array.Int64Builder).Append(t.UnixMilli())\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i, t := range 
convertedRec.Column(0).(*array.Timestamp).TimestampValues() {\n\t\t\t\t\tif !srcvs[i].Equal(t.ToTime(arrow.Nanosecond)) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:  \"timestamp_ltz\",\n\t\t\tphysical: \"struct\",\n\t\t\tvalues:   []time.Time{time.Now(), localTime},\n\t\t\tnrows:    2,\n\t\t\trowType:  query.ExecResponseRowType{Scale: 9},\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: timestampNtzStruct}}, nil),\n\t\t\tbuilder:  array.NewStructBuilder(pool, timestampNtzStruct),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.Unix())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(t.Nanosecond()))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i, t := range convertedRec.Column(0).(*array.Timestamp).TimestampValues() {\n\t\t\t\t\tif !srcvs[i].Equal(t.ToTime(arrow.Nanosecond)) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:                     \"timestamp_ltz\",\n\t\t\tphysical:                    \"struct\",\n\t\t\tvalues:                      []time.Time{time.Now().Truncate(time.Microsecond), localTime.Truncate(time.Microsecond)},\n\t\t\tarrowBatchesTimestampOption: ia.UseMicrosecondTimestamp,\n\t\t\tnrows:                       2,\n\t\t\trowType:                     query.ExecResponseRowType{Scale: 9},\n\t\t\tsc:                          arrow.NewSchema([]arrow.Field{{Type: timestampNtzStruct}}, nil),\n\t\t\tbuilder:                     array.NewStructBuilder(pool, timestampNtzStruct),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, 
true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.Unix())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(t.Nanosecond()))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i, t := range convertedRec.Column(0).(*array.Timestamp).TimestampValues() {\n\t\t\t\t\tif !srcvs[i].Equal(t.ToTime(arrow.Microsecond)) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:                     \"timestamp_ltz\",\n\t\t\tphysical:                    \"struct\",\n\t\t\tvalues:                      []time.Time{time.Now().Truncate(time.Millisecond), localTime.Truncate(time.Millisecond)},\n\t\t\tarrowBatchesTimestampOption: ia.UseMillisecondTimestamp,\n\t\t\tnrows:                       2,\n\t\t\trowType:                     query.ExecResponseRowType{Scale: 9},\n\t\t\tsc:                          arrow.NewSchema([]arrow.Field{{Type: timestampNtzStruct}}, nil),\n\t\t\tbuilder:                     array.NewStructBuilder(pool, timestampNtzStruct),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.Unix())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(t.Nanosecond()))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i, t := range convertedRec.Column(0).(*array.Timestamp).TimestampValues() {\n\t\t\t\t\tif !srcvs[i].Equal(t.ToTime(arrow.Millisecond)) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:                     
\"timestamp_ltz\",\n\t\t\tphysical:                    \"struct\",\n\t\t\tvalues:                      []time.Time{time.Now().Truncate(time.Second), localTime.Truncate(time.Second)},\n\t\t\tarrowBatchesTimestampOption: ia.UseSecondTimestamp,\n\t\t\tnrows:                       2,\n\t\t\trowType:                     query.ExecResponseRowType{Scale: 9},\n\t\t\tsc:                          arrow.NewSchema([]arrow.Field{{Type: timestampNtzStruct}}, nil),\n\t\t\tbuilder:                     array.NewStructBuilder(pool, timestampNtzStruct),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.Unix())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(t.Nanosecond()))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i, t := range convertedRec.Column(0).(*array.Timestamp).TimestampValues() {\n\t\t\t\t\tif !srcvs[i].Equal(t.ToTime(arrow.Second)) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:  \"timestamp_ltz\",\n\t\t\tphysical: \"error\",\n\t\t\tvalues:   []time.Time{localTimeFarIntoFuture},\n\t\t\terror:    \"Cannot convert timestamp\",\n\t\t\tnrows:    1,\n\t\t\trowType:  query.ExecResponseRowType{Scale: 3},\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: &arrow.Int64Type{}}}, nil),\n\t\t\tbuilder:  array.NewInt64Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tb.(*array.Int64Builder).Append(t.UnixMilli())\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int { return 0 },\n\t\t},\n\t\t{\n\t\t\tlogical:                     \"timestamp_ltz\",\n\t\t\tphysical:                    
\"int64 with original timestamp\",\n\t\t\tvalues:                      []time.Time{time.Now().Truncate(time.Millisecond), localTime.Truncate(time.Millisecond), localTimeFarIntoFuture.Truncate(time.Millisecond)},\n\t\t\tarrowBatchesTimestampOption: ia.UseOriginalTimestamp,\n\t\t\tnrows:                       3,\n\t\t\trowType:                     query.ExecResponseRowType{Scale: 3},\n\t\t\tsc:                          arrow.NewSchema([]arrow.Field{{Type: &arrow.Int64Type{}}}, nil),\n\t\t\tbuilder:                     array.NewInt64Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tb.(*array.Int64Builder).Append(t.UnixMilli())\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i := 0; i < convertedRec.Column(0).Len(); i++ {\n\t\t\t\t\tts := ArrowSnowflakeTimestampToTime(convertedRec.Column(0), types.GetSnowflakeType(\"timestamp_ltz\"), 3, i, localTime.Location())\n\t\t\t\t\tif !srcvs[i].Equal(*ts) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:                     \"timestamp_ltz\",\n\t\t\tphysical:                    \"struct with original timestamp\",\n\t\t\tvalues:                      []time.Time{time.Now(), localTime, localTimeFarIntoFuture},\n\t\t\tarrowBatchesTimestampOption: ia.UseOriginalTimestamp,\n\t\t\tnrows:                       3,\n\t\t\trowType:                     query.ExecResponseRowType{Scale: 9},\n\t\t\tsc:                          arrow.NewSchema([]arrow.Field{{Type: timestampLtzStruct}}, nil),\n\t\t\tbuilder:                     array.NewStructBuilder(pool, timestampLtzStruct),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) 
{\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.Unix())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(t.Nanosecond()))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i := 0; i < convertedRec.Column(0).Len(); i++ {\n\t\t\t\t\tts := ArrowSnowflakeTimestampToTime(convertedRec.Column(0), types.GetSnowflakeType(\"timestamp_ltz\"), 9, i, localTime.Location())\n\t\t\t\t\tif !srcvs[i].Equal(*ts) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:  \"timestamp_tz\",\n\t\t\tphysical: \"struct2\",\n\t\t\tvalues:   []time.Time{time.Now().Truncate(time.Millisecond), localTime.Truncate(time.Millisecond)},\n\t\t\tnrows:    2,\n\t\t\trowType:  query.ExecResponseRowType{Scale: 3},\n\t\t\tsc:       arrow.NewSchema([]arrow.Field{{Type: timestampTzStructWithoutFraction}}, nil),\n\t\t\tbuilder:  array.NewStructBuilder(pool, timestampTzStructWithoutFraction),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.UnixMilli())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(0))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i, t := range convertedRec.Column(0).(*array.Timestamp).TimestampValues() {\n\t\t\t\t\tif !srcvs[i].Equal(t.ToTime(arrow.Nanosecond)) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:  \"timestamp_tz\",\n\t\t\tphysical: \"struct3\",\n\t\t\tvalues:   []time.Time{time.Now(), localTime},\n\t\t\tnrows:    2,\n\t\t\trowType:  query.ExecResponseRowType{Scale: 9},\n\t\t\tsc:       
arrow.NewSchema([]arrow.Field{{Type: timestampTzStructWithFraction}}, nil),\n\t\t\tbuilder:  array.NewStructBuilder(pool, timestampTzStructWithFraction),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.Unix())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(t.Nanosecond()))\n\t\t\t\t\tsb.FieldBuilder(2).(*array.Int32Builder).Append(int32(0))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i, t := range convertedRec.Column(0).(*array.Timestamp).TimestampValues() {\n\t\t\t\t\tif !srcvs[i].Equal(t.ToTime(arrow.Nanosecond)) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:                     \"timestamp_tz\",\n\t\t\tphysical:                    \"struct3\",\n\t\t\tvalues:                      []time.Time{time.Now().Truncate(time.Microsecond), localTime.Truncate(time.Microsecond)},\n\t\t\tarrowBatchesTimestampOption: ia.UseMicrosecondTimestamp,\n\t\t\tnrows:                       2,\n\t\t\trowType:                     query.ExecResponseRowType{Scale: 9},\n\t\t\tsc:                          arrow.NewSchema([]arrow.Field{{Type: timestampTzStructWithFraction}}, nil),\n\t\t\tbuilder:                     array.NewStructBuilder(pool, timestampTzStructWithFraction),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) 
{\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.Unix())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(t.Nanosecond()))\n\t\t\t\t\tsb.FieldBuilder(2).(*array.Int32Builder).Append(int32(0))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i, t := range convertedRec.Column(0).(*array.Timestamp).TimestampValues() {\n\t\t\t\t\tif !srcvs[i].Equal(t.ToTime(arrow.Microsecond)) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:                     \"timestamp_tz\",\n\t\t\tphysical:                    \"struct3\",\n\t\t\tvalues:                      []time.Time{time.Now().Truncate(time.Millisecond), localTime.Truncate(time.Millisecond)},\n\t\t\tarrowBatchesTimestampOption: ia.UseMillisecondTimestamp,\n\t\t\tnrows:                       2,\n\t\t\trowType:                     query.ExecResponseRowType{Scale: 9},\n\t\t\tsc:                          arrow.NewSchema([]arrow.Field{{Type: timestampTzStructWithFraction}}, nil),\n\t\t\tbuilder:                     array.NewStructBuilder(pool, timestampTzStructWithFraction),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.Unix())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(t.Nanosecond()))\n\t\t\t\t\tsb.FieldBuilder(2).(*array.Int32Builder).Append(int32(0))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i, t := range convertedRec.Column(0).(*array.Timestamp).TimestampValues() {\n\t\t\t\t\tif !srcvs[i].Equal(t.ToTime(arrow.Millisecond)) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn 
-1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:                     \"timestamp_tz\",\n\t\t\tphysical:                    \"struct3\",\n\t\t\tvalues:                      []time.Time{time.Now().Truncate(time.Second), localTime.Truncate(time.Second)},\n\t\t\tarrowBatchesTimestampOption: ia.UseSecondTimestamp,\n\t\t\tnrows:                       2,\n\t\t\trowType:                     query.ExecResponseRowType{Scale: 9},\n\t\t\tsc:                          arrow.NewSchema([]arrow.Field{{Type: timestampTzStructWithFraction}}, nil),\n\t\t\tbuilder:                     array.NewStructBuilder(pool, timestampTzStructWithFraction),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.Unix())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(t.Nanosecond()))\n\t\t\t\t\tsb.FieldBuilder(2).(*array.Int32Builder).Append(int32(0))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i, t := range convertedRec.Column(0).(*array.Timestamp).TimestampValues() {\n\t\t\t\t\tif !srcvs[i].Equal(t.ToTime(arrow.Second)) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:                     \"timestamp_tz\",\n\t\t\tphysical:                    \"struct2 with original timestamp\",\n\t\t\tvalues:                      []time.Time{time.Now().Truncate(time.Millisecond), localTime.Truncate(time.Millisecond), localTimeFarIntoFuture.Truncate(time.Millisecond)},\n\t\t\tarrowBatchesTimestampOption: ia.UseOriginalTimestamp,\n\t\t\tnrows:                       3,\n\t\t\trowType:                     query.ExecResponseRowType{Scale: 3},\n\t\t\tsc:                          arrow.NewSchema([]arrow.Field{{Type: 
timestampTzStructWithoutFraction}}, nil),\n\t\t\tbuilder:                     array.NewStructBuilder(pool, timestampTzStructWithoutFraction),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.UnixMilli())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(0))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i := 0; i < convertedRec.Column(0).Len(); i++ {\n\t\t\t\t\tts := ArrowSnowflakeTimestampToTime(convertedRec.Column(0), types.GetSnowflakeType(\"timestamp_tz\"), 3, i, nil)\n\t\t\t\t\tif !srcvs[i].Equal(*ts) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical:                     \"timestamp_tz\",\n\t\t\tphysical:                    \"struct3 with original timestamp\",\n\t\t\tvalues:                      []time.Time{time.Now(), localTime, localTimeFarIntoFuture},\n\t\t\tarrowBatchesTimestampOption: ia.UseOriginalTimestamp,\n\t\t\tnrows:                       3,\n\t\t\trowType:                     query.ExecResponseRowType{Scale: 9},\n\t\t\tsc:                          arrow.NewSchema([]arrow.Field{{Type: timestampTzStructWithFraction}}, nil),\n\t\t\tbuilder:                     array.NewStructBuilder(pool, timestampTzStructWithFraction),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) 
{\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.Unix())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(t.Nanosecond()))\n\t\t\t\t\tsb.FieldBuilder(2).(*array.Int32Builder).Append(int32(0))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, expected any, convertedRec arrow.Record) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i := 0; i < convertedRec.Column(0).Len(); i++ {\n\t\t\t\t\tts := ArrowSnowflakeTimestampToTime(convertedRec.Column(0), types.GetSnowflakeType(\"timestamp_tz\"), 9, i, nil)\n\t\t\t\t\tif !srcvs[i].Equal(*ts) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical: \"array\",\n\t\t\tvalues:  [][]string{{\"foo\", \"bar\"}, {\"baz\", \"quz\", \"quux\"}},\n\t\t\tnrows:   2,\n\t\t\tsc:      arrow.NewSchema([]arrow.Field{{Type: &arrow.StringType{}}}, nil),\n\t\t\tbuilder: array.NewStringBuilder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, a := range vs.([][]string) {\n\t\t\t\t\tb.(*array.StringBuilder).Append(fmt.Sprint(a))\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical: \"object\",\n\t\t\tvalues:  []testObj{{0, \"foo\"}, {1, \"bar\"}},\n\t\t\tnrows:   2,\n\t\t\tsc:      arrow.NewSchema([]arrow.Field{{Type: &arrow.StringType{}}}, nil),\n\t\t\tbuilder: array.NewStringBuilder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, o := range vs.([]testObj) {\n\t\t\t\t\tb.(*array.StringBuilder).Append(fmt.Sprint(o))\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t} {\n\t\ttestName := tc.logical\n\t\tif tc.physical != \"\" {\n\t\t\ttestName += \" \" + tc.physical\n\t\t}\n\t\tt.Run(testName, func(t *testing.T) {\n\t\t\tscope := memory.NewCheckedAllocatorScope(pool)\n\t\t\tdefer scope.CheckSize(t)\n\n\t\t\tb := tc.builder\n\t\t\tdefer b.Release()\n\t\t\ttc.append(b, tc.values)\n\t\t\tarr := b.NewArray()\n\t\t\tdefer arr.Release()\n\t\t\trawRec := array.NewRecord(tc.sc, []arrow.Array{arr}, int64(tc.nrows))\n\t\t\tdefer 
rawRec.Release()\n\n\t\t\tmeta := tc.rowType\n\t\t\tmeta.Type = tc.logical\n\n\t\t\tctx := context.Background()\n\t\t\tswitch tc.arrowBatchesTimestampOption {\n\t\t\tcase ia.UseOriginalTimestamp:\n\t\t\t\tctx = ia.WithTimestampOption(ctx, ia.UseOriginalTimestamp)\n\t\t\tcase ia.UseSecondTimestamp:\n\t\t\t\tctx = ia.WithTimestampOption(ctx, ia.UseSecondTimestamp)\n\t\t\tcase ia.UseMillisecondTimestamp:\n\t\t\t\tctx = ia.WithTimestampOption(ctx, ia.UseMillisecondTimestamp)\n\t\t\tcase ia.UseMicrosecondTimestamp:\n\t\t\t\tctx = ia.WithTimestampOption(ctx, ia.UseMicrosecondTimestamp)\n\t\t\tdefault:\n\t\t\t\tctx = ia.WithTimestampOption(ctx, ia.UseNanosecondTimestamp)\n\t\t\t}\n\n\t\t\tif tc.enableArrowBatchesUtf8Validation {\n\t\t\t\tctx = ia.EnableUtf8Validation(ctx)\n\t\t\t}\n\n\t\t\tif tc.withHigherPrecision {\n\t\t\t\tctx = ia.WithHigherPrecision(ctx)\n\t\t\t}\n\n\t\t\ttransformedRec, err := arrowToRecord(ctx, rawRec, pool, []query.ExecResponseRowType{meta}, localTime.Location())\n\t\t\tif err != nil {\n\t\t\t\tif tc.error == \"\" || !strings.Contains(err.Error(), tc.error) {\n\t\t\t\t\tt.Fatalf(\"error: %s\", err)\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tdefer transformedRec.Release()\n\t\t\t\tif tc.error != \"\" {\n\t\t\t\t\tt.Fatalf(\"expected error: %s\", tc.error)\n\t\t\t\t}\n\n\t\t\t\tif tc.compare != nil {\n\t\t\t\t\tidx := tc.compare(tc.values, tc.expected, transformedRec)\n\t\t\t\t\tif idx != -1 {\n\t\t\t\t\t\tt.Fatalf(\"error: column array value mismatch at index %v\", idx)\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tfor i, c := range transformedRec.Columns() {\n\t\t\t\t\t\trawCol := rawRec.Column(i)\n\t\t\t\t\t\tif rawCol != c {\n\t\t\t\t\t\t\tt.Fatalf(\"error: expected column %s, got column %s\", rawCol, c)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "arrowbatches/schema.go",
    "content": "package arrowbatches\n\nimport (\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/query\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/types\"\n\t\"time\"\n\n\tia \"github.com/snowflakedb/gosnowflake/v2/internal/arrow\"\n\n\t\"github.com/apache/arrow-go/v18/arrow\"\n)\n\nfunc recordToSchema(sc *arrow.Schema, rowType []query.ExecResponseRowType, loc *time.Location, timestampOption ia.TimestampOption, withHigherPrecision bool) (*arrow.Schema, error) {\n\tfields := recordToSchemaRecursive(sc.Fields(), rowType, loc, timestampOption, withHigherPrecision)\n\tmeta := sc.Metadata()\n\treturn arrow.NewSchema(fields, &meta), nil\n}\n\nfunc recordToSchemaRecursive(inFields []arrow.Field, rowType []query.ExecResponseRowType, loc *time.Location, timestampOption ia.TimestampOption, withHigherPrecision bool) []arrow.Field {\n\tvar outFields []arrow.Field\n\tfor i, f := range inFields {\n\t\tfieldMetadata := rowType[i].ToFieldMetadata()\n\t\tconverted, t := recordToSchemaSingleField(fieldMetadata, f, withHigherPrecision, timestampOption, loc)\n\n\t\tnewField := f\n\t\tif converted {\n\t\t\tnewField = arrow.Field{\n\t\t\t\tName:     f.Name,\n\t\t\t\tType:     t,\n\t\t\t\tNullable: f.Nullable,\n\t\t\t\tMetadata: f.Metadata,\n\t\t\t}\n\t\t}\n\t\toutFields = append(outFields, newField)\n\t}\n\treturn outFields\n}\n\nfunc recordToSchemaSingleField(fieldMetadata query.FieldMetadata, f arrow.Field, withHigherPrecision bool, timestampOption ia.TimestampOption, loc *time.Location) (bool, arrow.DataType) {\n\tt := f.Type\n\tconverted := true\n\tswitch types.GetSnowflakeType(fieldMetadata.Type) {\n\tcase types.FixedType:\n\t\tswitch f.Type.ID() {\n\t\tcase arrow.DECIMAL:\n\t\t\tif withHigherPrecision {\n\t\t\t\tconverted = false\n\t\t\t} else if fieldMetadata.Scale == 0 {\n\t\t\t\tt = &arrow.Int64Type{}\n\t\t\t} else {\n\t\t\t\tt = &arrow.Float64Type{}\n\t\t\t}\n\t\tdefault:\n\t\t\tif withHigherPrecision {\n\t\t\t\tconverted = false\n\t\t\t} else if 
fieldMetadata.Scale != 0 {\n\t\t\t\tt = &arrow.Float64Type{}\n\t\t\t} else {\n\t\t\t\tconverted = false\n\t\t\t}\n\t\t}\n\tcase types.TimeType:\n\t\tt = &arrow.Time64Type{Unit: arrow.Nanosecond}\n\tcase types.TimestampNtzType, types.TimestampTzType:\n\t\tswitch timestampOption {\n\t\tcase ia.UseOriginalTimestamp:\n\t\t\tconverted = false\n\t\tcase ia.UseMicrosecondTimestamp:\n\t\t\tt = &arrow.TimestampType{Unit: arrow.Microsecond}\n\t\tcase ia.UseMillisecondTimestamp:\n\t\t\tt = &arrow.TimestampType{Unit: arrow.Millisecond}\n\t\tcase ia.UseSecondTimestamp:\n\t\t\tt = &arrow.TimestampType{Unit: arrow.Second}\n\t\tdefault:\n\t\t\tt = &arrow.TimestampType{Unit: arrow.Nanosecond}\n\t\t}\n\tcase types.TimestampLtzType:\n\t\tswitch timestampOption {\n\t\tcase ia.UseOriginalTimestamp:\n\t\t\tconverted = false\n\t\tcase ia.UseMicrosecondTimestamp:\n\t\t\tt = &arrow.TimestampType{Unit: arrow.Microsecond, TimeZone: loc.String()}\n\t\tcase ia.UseMillisecondTimestamp:\n\t\t\tt = &arrow.TimestampType{Unit: arrow.Millisecond, TimeZone: loc.String()}\n\t\tcase ia.UseSecondTimestamp:\n\t\t\tt = &arrow.TimestampType{Unit: arrow.Second, TimeZone: loc.String()}\n\t\tdefault:\n\t\t\tt = &arrow.TimestampType{Unit: arrow.Nanosecond, TimeZone: loc.String()}\n\t\t}\n\tcase types.ObjectType:\n\t\tconverted = false\n\t\tif f.Type.ID() == arrow.STRUCT {\n\t\t\tvar internalFields []arrow.Field\n\t\t\tfor idx, internalField := range f.Type.(*arrow.StructType).Fields() {\n\t\t\t\tinternalConverted, convertedDataType := recordToSchemaSingleField(fieldMetadata.Fields[idx], internalField, withHigherPrecision, timestampOption, loc)\n\t\t\t\tconverted = converted || internalConverted\n\t\t\t\tif internalConverted {\n\t\t\t\t\tnewInternalField := arrow.Field{\n\t\t\t\t\t\tName:     internalField.Name,\n\t\t\t\t\t\tType:     convertedDataType,\n\t\t\t\t\t\tMetadata: internalField.Metadata,\n\t\t\t\t\t\tNullable: internalField.Nullable,\n\t\t\t\t\t}\n\t\t\t\t\tinternalFields = append(internalFields, 
newInternalField)\n\t\t\t\t} else {\n\t\t\t\t\tinternalFields = append(internalFields, internalField)\n\t\t\t\t}\n\t\t\t}\n\t\t\tt = arrow.StructOf(internalFields...)\n\t\t}\n\tcase types.ArrayType:\n\t\tif listType, ok := f.Type.(*arrow.ListType); ok {\n\t\t\t// Assign (not redeclare) converted so the element conversion result is actually returned.\n\t\t\tvar dataType arrow.DataType\n\t\t\tconverted, dataType = recordToSchemaSingleField(fieldMetadata.Fields[0], listType.ElemField(), withHigherPrecision, timestampOption, loc)\n\t\t\tif converted {\n\t\t\t\tt = arrow.ListOf(dataType)\n\t\t\t}\n\t\t} else {\n\t\t\tconverted = false\n\t\t}\n\tcase types.MapType:\n\t\tconvertedKey, keyDataType := recordToSchemaSingleField(fieldMetadata.Fields[0], f.Type.(*arrow.MapType).KeyField(), withHigherPrecision, timestampOption, loc)\n\t\tconvertedValue, valueDataType := recordToSchemaSingleField(fieldMetadata.Fields[1], f.Type.(*arrow.MapType).ItemField(), withHigherPrecision, timestampOption, loc)\n\t\tconverted = convertedKey || convertedValue\n\t\tif converted {\n\t\t\tt = arrow.MapOf(keyDataType, valueDataType)\n\t\t}\n\tdefault:\n\t\tconverted = false\n\t}\n\treturn converted, t\n}\n"
  },
  {
    "path": "assert_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"errors\"\n\t\"fmt\"\n\t\"math\"\n\t\"reflect\"\n\t\"regexp\"\n\t\"slices\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc assertNilE(t *testing.T, actual any, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateNil(actual, descriptions...))\n}\n\nfunc assertNilF(t *testing.T, actual any, descriptions ...string) {\n\tt.Helper()\n\tfatalOnNonEmpty(t, validateNil(actual, descriptions...))\n}\n\nfunc assertNotNilE(t *testing.T, actual any, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateNotNil(actual, descriptions...))\n}\n\nfunc assertNotNilF(t *testing.T, actual any, descriptions ...string) {\n\tt.Helper()\n\tfatalOnNonEmpty(t, validateNotNil(actual, descriptions...))\n}\n\nfunc assertErrIsF(t *testing.T, actual, expected error, descriptions ...string) {\n\tt.Helper()\n\tfatalOnNonEmpty(t, validateErrIs(actual, expected, descriptions...))\n}\n\nfunc assertErrIsE(t *testing.T, actual, expected error, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateErrIs(actual, expected, descriptions...))\n}\n\nfunc assertErrorsAsF(t *testing.T, err error, target any, descriptions ...string) {\n\tt.Helper()\n\tfatalOnNonEmpty(t, validateErrorsAs(err, target, descriptions...))\n}\n\nfunc assertEqualE(t *testing.T, actual any, expected any, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateEqual(actual, expected, descriptions...))\n}\n\nfunc assertEqualF(t *testing.T, actual any, expected any, descriptions ...string) {\n\tt.Helper()\n\tfatalOnNonEmpty(t, validateEqual(actual, expected, descriptions...))\n}\n\nfunc assertEqualIgnoringWhitespaceE(t *testing.T, actual string, expected string, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateEqualIgnoringWhitespace(actual, expected, descriptions...))\n}\n\nfunc assertEqualEpsilonE(t *testing.T, actual, expected, epsilon float64, descriptions ...string) 
{\n\tt.Helper()\n\terrorOnNonEmpty(t, validateEqualEpsilon(actual, expected, epsilon, descriptions...))\n}\n\nfunc assertDeepEqualE(t *testing.T, actual any, expected any, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateDeepEqual(actual, expected, descriptions...))\n}\n\nfunc assertNotEqualF(t *testing.T, actual any, expected any, descriptions ...string) {\n\tt.Helper()\n\tfatalOnNonEmpty(t, validateNotEqual(actual, expected, descriptions...))\n}\n\nfunc assertNotEqualE(t *testing.T, actual any, expected any, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateNotEqual(actual, expected, descriptions...))\n}\n\nfunc assertBytesEqualE(t *testing.T, actual []byte, expected []byte, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateBytesEqual(actual, expected, descriptions...))\n}\n\nfunc assertTrueF(t *testing.T, actual bool, descriptions ...string) {\n\tt.Helper()\n\tfatalOnNonEmpty(t, validateEqual(actual, true, descriptions...))\n}\n\nfunc assertTrueE(t *testing.T, actual bool, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateEqual(actual, true, descriptions...))\n}\n\nfunc assertFalseF(t *testing.T, actual bool, descriptions ...string) {\n\tt.Helper()\n\tfatalOnNonEmpty(t, validateEqual(actual, false, descriptions...))\n}\n\nfunc assertFalseE(t *testing.T, actual bool, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateEqual(actual, false, descriptions...))\n}\n\nfunc assertStringContainsE(t *testing.T, actual string, expectedToContain string, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateStringContains(actual, expectedToContain, descriptions...))\n}\n\nfunc assertStringContainsF(t *testing.T, actual string, expectedToContain string, descriptions ...string) {\n\tt.Helper()\n\tfatalOnNonEmpty(t, validateStringContains(actual, expectedToContain, descriptions...))\n}\n\nfunc assertEmptyStringE(t *testing.T, actual string, descriptions ...string) 
{\n\tt.Helper()\n\terrorOnNonEmpty(t, validateEmptyString(actual, descriptions...))\n}\n\nfunc assertHasPrefixF(t *testing.T, actual string, expectedPrefix string, descriptions ...string) {\n\tt.Helper()\n\tfatalOnNonEmpty(t, validateHasPrefix(actual, expectedPrefix, descriptions...))\n}\n\nfunc assertHasPrefixE(t *testing.T, actual string, expectedPrefix string, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateHasPrefix(actual, expectedPrefix, descriptions...))\n}\n\nfunc assertBetweenE(t *testing.T, value float64, min float64, max float64, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateValueBetween(value, min, max, descriptions...))\n}\n\nfunc assertBetweenInclusiveE(t *testing.T, value float64, min float64, max float64, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateValueBetweenInclusive(value, min, max, descriptions...))\n}\n\nfunc assertEmptyE[T any](t *testing.T, actual []T, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateEmpty(actual, descriptions...))\n}\n\nfunc fatalOnNonEmpty(t *testing.T, errMsg string) {\n\tif errMsg != \"\" {\n\t\tt.Helper()\n\t\tt.Fatal(formatErrorMessage(errMsg))\n\t}\n}\n\nfunc errorOnNonEmpty(t *testing.T, errMsg string) {\n\tif errMsg != \"\" {\n\t\tt.Helper()\n\t\tt.Error(formatErrorMessage(errMsg))\n\t}\n}\n\nfunc formatErrorMessage(errMsg string) string {\n\treturn fmt.Sprintf(\"[%s] %s\", time.Now().Format(time.RFC3339Nano), maskSecrets(errMsg))\n}\n\nfunc validateNil(actual any, descriptions ...string) string {\n\tif isNil(actual) {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\treturn fmt.Sprintf(\"expected \\\"%s\\\" to be nil but was not. %s\", maskSecrets(fmt.Sprintf(\"%v\", actual)), desc)\n}\n\nfunc validateNotNil(actual any, descriptions ...string) string {\n\tif !isNil(actual) {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\treturn fmt.Sprintf(\"expected value to be non-nil but was nil. 
%s\", desc)\n}\n\nfunc validateErrIs(actual, expected error, descriptions ...string) string {\n\tif errors.Is(actual, expected) {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\tactualStr := \"nil\"\n\texpectedStr := \"nil\"\n\tif actual != nil {\n\t\tactualStr = maskSecrets(actual.Error())\n\t}\n\tif expected != nil {\n\t\texpectedStr = maskSecrets(expected.Error())\n\t}\n\treturn fmt.Sprintf(\"expected %v to be %v. %s\", actualStr, expectedStr, desc)\n}\n\nfunc validateErrorsAs(err error, target any, descriptions ...string) string {\n\tif errors.As(err, target) {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\terrStr := \"nil\"\n\tif err != nil {\n\t\terrStr = maskSecrets(err.Error())\n\t}\n\ttargetType := reflect.TypeOf(target)\n\treturn fmt.Sprintf(\"expected error %v to be assignable to %v but was not. %s\", errStr, targetType, desc)\n}\n\nfunc validateEqual(actual any, expected any, descriptions ...string) string {\n\tif expected == actual {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\treturn fmt.Sprintf(\"expected \\\"%s\\\" to be equal to \\\"%s\\\" but was not. %s\",\n\t\tmaskSecrets(fmt.Sprintf(\"%v\", actual)),\n\t\tmaskSecrets(fmt.Sprintf(\"%v\", expected)),\n\t\tdesc)\n}\n\nfunc removeWhitespaces(s string) string {\n\tpattern, err := regexp.Compile(`\\s+`)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\treturn pattern.ReplaceAllString(s, \"\")\n}\n\nfunc validateEqualIgnoringWhitespace(actual string, expected string, descriptions ...string) string {\n\tif removeWhitespaces(expected) == removeWhitespaces(actual) {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\treturn fmt.Sprintf(\"expected \\\"%s\\\" to be equal to \\\"%s\\\" but was not. 
%s\",\n\t\tmaskSecrets(actual),\n\t\tmaskSecrets(expected),\n\t\tdesc)\n}\n\nfunc validateEqualEpsilon(actual, expected, epsilon float64, descriptions ...string) string {\n\tif math.Abs(actual-expected) < epsilon {\n\t\treturn \"\"\n\t}\n\treturn fmt.Sprintf(\"expected \\\"%f\\\" to be equal to \\\"%f\\\" within epsilon \\\"%f\\\" but was not. %s\", actual, expected, epsilon, joinDescriptions(descriptions...))\n}\n\nfunc validateDeepEqual(actual any, expected any, descriptions ...string) string {\n\tif reflect.DeepEqual(actual, expected) {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\treturn fmt.Sprintf(\"expected \\\"%s\\\" to be equal to \\\"%s\\\" but was not. %s\",\n\t\tmaskSecrets(fmt.Sprintf(\"%v\", actual)),\n\t\tmaskSecrets(fmt.Sprintf(\"%v\", expected)),\n\t\tdesc)\n}\n\nfunc validateNotEqual(actual any, expected any, descriptions ...string) string {\n\tif expected != actual {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\treturn fmt.Sprintf(\"expected \\\"%s\\\" not to be equal to \\\"%s\\\" but they were the same. %s\",\n\t\tmaskSecrets(fmt.Sprintf(\"%v\", actual)),\n\t\tmaskSecrets(fmt.Sprintf(\"%v\", expected)),\n\t\tdesc)\n}\n\nfunc validateBytesEqual(actual []byte, expected []byte, descriptions ...string) string {\n\tif bytes.Equal(actual, expected) {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\treturn fmt.Sprintf(\"expected \\\"%s\\\" to be equal to \\\"%s\\\" but was not. %s\",\n\t\tmaskSecrets(string(actual)),\n\t\tmaskSecrets(string(expected)),\n\t\tdesc)\n}\n\nfunc validateStringContains(actual string, expectedToContain string, descriptions ...string) string {\n\tif strings.Contains(actual, expectedToContain) {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\treturn fmt.Sprintf(\"expected \\\"%s\\\" to contain \\\"%s\\\" but did not. 
%s\",\n\t\tmaskSecrets(actual),\n\t\tmaskSecrets(expectedToContain),\n\t\tdesc)\n}\n\nfunc validateEmptyString(actual string, descriptions ...string) string {\n\tif actual == \"\" {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\treturn fmt.Sprintf(\"expected \\\"%s\\\" to be empty, but was not. %s\", maskSecrets(actual), desc)\n}\n\nfunc validateHasPrefix(actual string, expectedPrefix string, descriptions ...string) string {\n\tif strings.HasPrefix(actual, expectedPrefix) {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\treturn fmt.Sprintf(\"expected \\\"%s\\\" to start with \\\"%s\\\" but did not. %s\",\n\t\tmaskSecrets(actual),\n\t\tmaskSecrets(expectedPrefix),\n\t\tdesc)\n}\n\nfunc validateValueBetween(value float64, min float64, max float64, descriptions ...string) string {\n\tif value > min && value < max {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\treturn fmt.Sprintf(\"expected \\\"%s\\\" to be between \\\"%s\\\" and \\\"%s\\\" but was not. %s\",\n\t\tfmt.Sprintf(\"%f\", value),\n\t\tfmt.Sprintf(\"%f\", min),\n\t\tfmt.Sprintf(\"%f\", max),\n\t\tdesc)\n}\n\nfunc validateValueBetweenInclusive(value float64, min float64, max float64, descriptions ...string) string {\n\tif value >= min && value <= max {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\treturn fmt.Sprintf(\"expected \\\"%s\\\" to be between \\\"%s\\\" and \\\"%s\\\" inclusive but was not. %s\",\n\t\tfmt.Sprintf(\"%f\", value),\n\t\tfmt.Sprintf(\"%f\", min),\n\t\tfmt.Sprintf(\"%f\", max),\n\t\tdesc)\n}\n\nfunc validateEmpty[T any](value []T, descriptions ...string) string {\n\tif len(value) == 0 {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\treturn fmt.Sprintf(\"expected \\\"%v\\\" to be empty. 
%s\", maskSecrets(fmt.Sprintf(\"%v\", value)), desc)\n}\n\nfunc joinDescriptions(descriptions ...string) string {\n\treturn strings.Join(descriptions, \" \")\n}\n\nfunc isNil(value any) bool {\n\tif value == nil {\n\t\treturn true\n\t}\n\tval := reflect.ValueOf(value)\n\treturn slices.Contains([]reflect.Kind{reflect.Pointer, reflect.Slice, reflect.Map, reflect.Interface, reflect.Func}, val.Kind()) && val.IsNil()\n}\n"
  },
  {
    "path": "async.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net/url\"\n\t\"strconv\"\n\t\"time\"\n)\n\nfunc (sr *snowflakeRestful) processAsync(\n\tctx context.Context,\n\trespd *execResponse,\n\theaders map[string]string,\n\ttimeout time.Duration,\n\tcfg *Config) (*execResponse, error) {\n\t// placeholder object to return to user while retrieving results\n\trows := new(snowflakeRows)\n\tres := new(snowflakeResult)\n\tswitch resType := getResultType(ctx); resType {\n\tcase execResultType:\n\t\tres.queryID = respd.Data.QueryID\n\t\tres.status = QueryStatusInProgress\n\t\tres.errChannel = make(chan error)\n\t\trespd.Data.AsyncResult = res\n\tcase queryResultType:\n\t\trows.queryID = respd.Data.QueryID\n\t\trows.status = QueryStatusInProgress\n\t\trows.errChannel = make(chan error)\n\t\trows.ctx = ctx\n\t\trespd.Data.AsyncRows = rows\n\tdefault:\n\t\treturn respd, nil\n\t}\n\n\t// spawn goroutine to retrieve asynchronous results\n\tgo GoroutineWrapper(\n\t\tctx,\n\t\tfunc() {\n\t\t\terr := sr.getAsync(ctx, headers, sr.getFullURL(respd.Data.GetResultURL, nil), timeout, res, rows, cfg)\n\t\t\tif err != nil {\n\t\t\t\tlogger.WithContext(ctx).Errorf(\"error while calling getAsync. 
%v\", err)\n\t\t\t}\n\t\t},\n\t)\n\treturn respd, nil\n}\n\nfunc (sr *snowflakeRestful) getAsync(\n\tctx context.Context,\n\theaders map[string]string,\n\tURL *url.URL,\n\ttimeout time.Duration,\n\tres *snowflakeResult,\n\trows *snowflakeRows,\n\tcfg *Config) error {\n\tresType := getResultType(ctx)\n\tvar errChannel chan error\n\tsfError := &SnowflakeError{\n\t\tNumber: ErrAsync,\n\t}\n\tif resType == execResultType {\n\t\terrChannel = res.errChannel\n\t\tsfError.QueryID = res.queryID\n\t} else {\n\t\terrChannel = rows.errChannel\n\t\tsfError.QueryID = rows.queryID\n\t}\n\tdefer close(errChannel)\n\ttoken, _, _ := sr.TokenAccessor.GetTokens()\n\theaders[headerAuthorizationKey] = fmt.Sprintf(headerSnowflakeToken, token)\n\n\trespd, err := getQueryResultWithRetriesForAsyncMode(ctx, sr, URL, headers, timeout)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"error: %v\", err)\n\t\tsfError.Message = err.Error()\n\t\terrChannel <- sfError\n\t\treturn err\n\t}\n\n\tsc := &snowflakeConn{rest: sr, cfg: cfg, currentTimeProvider: defaultTimeProvider}\n\tif respd.Success {\n\t\tif resType == execResultType {\n\t\t\tres.insertID = -1\n\t\t\tif isDml(respd.Data.StatementTypeID) {\n\t\t\t\tres.affectedRows, err = updateRows(respd.Data)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t} else if isMultiStmt(&respd.Data) {\n\t\t\t\tr, err := sc.handleMultiExec(ctx, respd.Data)\n\t\t\t\tif err != nil {\n\t\t\t\t\tres.errChannel <- err\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tres.affectedRows, err = r.RowsAffected()\n\t\t\t\tif err != nil {\n\t\t\t\t\tres.errChannel <- err\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t\tres.queryID = respd.Data.QueryID\n\t\t\tres.errChannel <- nil // mark exec status complete\n\t\t} else {\n\t\t\trows.sc = sc\n\t\t\trows.queryID = respd.Data.QueryID\n\t\t\tif isMultiStmt(&respd.Data) {\n\t\t\t\tif err = sc.handleMultiQuery(ctx, respd.Data, rows); err != nil {\n\t\t\t\t\trows.errChannel <- err\n\t\t\t\t\treturn 
err\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\trows.addDownloader(populateChunkDownloader(ctx, sc, respd.Data))\n\t\t\t}\n\t\t\tif err = rows.ChunkDownloader.start(); err != nil {\n\t\t\t\trows.errChannel <- err\n\t\t\t\treturn err\n\t\t\t}\n\t\t\trows.errChannel <- nil // mark query status complete\n\t\t}\n\t} else {\n\t\tvar code int\n\t\tif respd.Code != \"\" {\n\t\t\tcode, err = strconv.Atoi(respd.Code)\n\t\t\tif err != nil {\n\t\t\t\tcode = -1\n\t\t\t}\n\t\t} else {\n\t\t\tcode = -1\n\t\t}\n\t\terrChannel <- &SnowflakeError{\n\t\t\tNumber:   code,\n\t\t\tSQLState: respd.Data.SQLState,\n\t\t\tMessage:  respd.Message,\n\t\t\tQueryID:  respd.Data.QueryID,\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc getQueryResultWithRetriesForAsyncMode(\n\tctx context.Context,\n\tsr *snowflakeRestful,\n\tURL *url.URL,\n\theaders map[string]string,\n\ttimeout time.Duration) (respd *execResponse, err error) {\n\tretry := 0\n\tretryPattern := []int32{1, 1, 2, 3, 4, 8, 10}\n\tretryPatternIndex := 0\n\tretryCountForSessionRenewal := 0\n\n\tfor {\n\t\tlogger.WithContext(ctx).Debugf(\"Retry count for get query result request in async mode: %v\", retry)\n\n\t\trespd, err = getExecResponse(ctx, sr, URL, headers, timeout)\n\t\tif err != nil {\n\t\t\treturn respd, err\n\t\t}\n\t\tif respd.Code == sessionExpiredCode {\n\t\t\t// Update the session token in the header and retry\n\t\t\ttoken, _, _ := sr.TokenAccessor.GetTokens()\n\t\t\tif token != \"\" && headers[headerAuthorizationKey] != fmt.Sprintf(headerSnowflakeToken, token) {\n\t\t\t\theaders[headerAuthorizationKey] = fmt.Sprintf(headerSnowflakeToken, token)\n\t\t\t\tlogger.WithContext(ctx).Debug(\"Session token has been updated.\")\n\t\t\t\tretry++\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// Renew the session token\n\t\t\tif err = sr.renewExpiredSessionToken(ctx, timeout, token); err != nil {\n\t\t\t\tlogger.WithContext(ctx).Errorf(\"failed to renew session token. 
err: %v\", err)\n\t\t\t\treturn respd, err\n\t\t\t}\n\t\t\tretryCountForSessionRenewal++\n\n\t\t\t// If this is the first response, go back to retry the query\n\t\t\t// since it failed due to session expiration\n\t\t\tlogger.WithContext(ctx).Debugf(\"retry count for session renewal: %v\", retryCountForSessionRenewal)\n\t\t\tif retryCountForSessionRenewal < 2 {\n\t\t\t\tretry++\n\t\t\t\tcontinue\n\t\t\t} else {\n\t\t\t\tlogger.WithContext(ctx).Errorf(\"failed to get query result with the renewed session token. err: %v\", err)\n\t\t\t\treturn respd, err\n\t\t\t}\n\t\t} else if respd.Code != queryInProgressAsyncCode {\n\t\t\t// If the query takes longer than 45 seconds to complete, the results are not returned.\n\t\t\t// If the query is still in progress after 45 seconds, retry the request to the /results endpoint.\n\t\t\t// For all other scenarios, continue processing the results response.\n\t\t\tbreak\n\t\t} else {\n\t\t\t// Sleep before retrying the get result request. Exponential backoff up to 5 seconds.\n\t\t\t// Once the 5-second backoff is reached, it keeps retrying with that sleep time.\n\t\t\tsleepTime := time.Millisecond * time.Duration(500*retryPattern[retryPatternIndex])\n\t\t\tlogger.WithContext(ctx).Debugf(\"Query execution still in progress. Response code: %v, message: %v. Sleeping for %v\", respd.Code, respd.Message, sleepTime)\n\t\t\ttime.Sleep(sleepTime)\n\t\t\tretry++\n\n\t\t\tif retryPatternIndex < len(retryPattern)-1 {\n\t\t\t\tretryPatternIndex++\n\t\t\t}\n\t\t}\n\t}\n\tif len(respd.Data.RowType) > 0 {\n\t\tlogger.Infof(\"[Server Response Validation]: RowType: %s, QueryResultFormat: %s\", respd.Data.RowType[0].Name, respd.Data.QueryResultFormat)\n\t}\n\treturn respd, nil\n}\n"
  },
  {
    "path": "async_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"fmt\"\n\t\"testing\"\n)\n\nfunc TestAsyncMode(t *testing.T) {\n\tctx := WithAsyncMode(context.Background())\n\tnumrows := 100000\n\tcnt := 0\n\tvar idx int\n\tvar v string\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trows := dbt.mustQueryContext(ctx, fmt.Sprintf(selectRandomGenerator, numrows))\n\t\tdefer rows.Close()\n\n\t\t// Next() will block and wait until results are available\n\t\tfor rows.Next() {\n\t\t\tif err := rows.Scan(&idx, &v); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tcnt++\n\t\t}\n\t\tlogger.Infof(\"NextResultSet: %v\", rows.NextResultSet())\n\n\t\tif cnt != numrows {\n\t\t\tt.Errorf(\"number of rows didn't match. expected: %v, got: %v\", numrows, cnt)\n\t\t}\n\n\t\tdbt.mustExec(\"create or replace table test_async_exec (value boolean)\")\n\t\tres := dbt.mustExecContext(ctx, \"insert into test_async_exec values (true)\")\n\t\tcount, err := res.RowsAffected()\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"res.RowsAffected() returned error: %v\", err)\n\t\t}\n\t\tif count != 1 {\n\t\t\tt.Fatalf(\"expected 1 affected row, got %d\", count)\n\t\t}\n\t})\n}\n\nfunc TestAsyncModePing(t *testing.T) {\n\tctx := WithAsyncMode(context.Background())\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdefer func() {\n\t\t\tif r := recover(); r != nil {\n\t\t\t\tt.Fatalf(\"panic during ping: %v\", r)\n\t\t\t}\n\t\t}()\n\t\terr := dbt.conn.PingContext(ctx)\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t})\n}\n\nfunc TestAsyncModeMultiStatement(t *testing.T) {\n\twithMultiStmtCtx := WithMultiStatement(context.Background(), 6)\n\tctx := WithAsyncMode(withMultiStmtCtx)\n\tmultiStmtQuery := \"begin;\\n\" +\n\t\t\"delete from test_multi_statement_async;\\n\" +\n\t\t\"insert into test_multi_statement_async values (1, 'a'), (2, 'b');\\n\" +\n\t\t\"select 1;\\n\" +\n\t\t\"select 2;\\n\" +\n\t\t\"rollback;\"\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"drop table if exists 
test_multi_statement_async\")\n\t\tdbt.mustExec(`create or replace table test_multi_statement_async(\n\t\t\tc1 number, c2 string) as select 10, 'z'`)\n\t\tdefer dbt.mustExec(\"drop table if exists test_multi_statement_async\")\n\n\t\tres := dbt.mustExecContext(ctx, multiStmtQuery)\n\t\tcount, err := res.RowsAffected()\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"res.RowsAffected() returned error: %v\", err)\n\t\t}\n\t\tif count != 3 {\n\t\t\tt.Fatalf(\"expected 3 affected rows, got %d\", count)\n\t\t}\n\t})\n}\n\nfunc TestAsyncModeCancel(t *testing.T) {\n\twithCancelCtx, cancel := context.WithCancel(context.Background())\n\tctx := WithAsyncMode(withCancelCtx)\n\tnumrows := 100000\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustQueryContext(ctx, fmt.Sprintf(selectRandomGenerator, numrows))\n\t\tcancel()\n\t})\n}\n\nfunc TestAsyncQueryFail(t *testing.T) {\n\tctx := WithAsyncMode(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trows := dbt.mustQueryContext(ctx, \"selectt 1\")\n\t\tdefer rows.Close()\n\n\t\tif rows.Next() {\n\t\t\tt.Fatal(\"should have no rows available\")\n\t\t} else {\n\t\t\tif err := rows.Err(); err == nil {\n\t\t\t\tt.Fatal(\"should return a syntax error\")\n\t\t\t}\n\t\t}\n\t})\n}\n\n// TestMultipleAsyncQueries validates that shorter async queries return before\n// longer ones. The TIMELIMIT values (30 and 10) must have sufficient separation\n// to avoid flaky ordering. 
Do not reduce these values significantly.\nfunc TestMultipleAsyncQueries(t *testing.T) {\n\tctx := WithAsyncMode(context.Background())\n\ts1 := \"foo\"\n\ts2 := \"bar\"\n\tch1 := make(chan string)\n\tch2 := make(chan string)\n\n\tdb := openDB(t)\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trows1, err := db.QueryContext(ctx, fmt.Sprintf(\"select distinct '%v' from table (generator(timelimit=>%v))\", s1, 30))\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"can't read rows1: %v\", err)\n\t\t}\n\t\tdefer rows1.Close()\n\t\trows2, err := db.QueryContext(ctx, fmt.Sprintf(\"select distinct '%v' from table (generator(timelimit=>%v))\", s2, 10))\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"can't read rows2: %v\", err)\n\t\t}\n\t\tdefer rows2.Close()\n\n\t\tgo retrieveRows(rows1, ch1)\n\t\tgo retrieveRows(rows2, ch2)\n\t\tselect {\n\t\tcase res := <-ch1:\n\t\t\tt.Fatalf(\"value %v should not have been called earlier.\", res)\n\t\tcase res := <-ch2:\n\t\t\tif res != s2 {\n\t\t\t\tt.Fatalf(\"query failed. expected: %v, got: %v\", s2, res)\n\t\t\t}\n\t\t}\n\t})\n}\n\nfunc retrieveRows(rows *sql.Rows, ch chan string) {\n\tvar s string\n\tfor rows.Next() {\n\t\tif err := rows.Scan(&s); err != nil {\n\t\t\tch <- err.Error()\n\t\t\tclose(ch)\n\t\t\treturn\n\t\t}\n\t}\n\tch <- s\n\tclose(ch)\n}\n\n// TestLongRunningAsyncQuery validates the retry logic for async queries that\n// exceed Snowflake's 45-second threshold. 
After 45 seconds, the /results\n// endpoint returns \"query in progress\" (code 333334) and the driver must retry.\n// The 50-second wait MUST exceed 45 seconds to exercise this code path.\nfunc TestLongRunningAsyncQuery(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tctx := WithMultiStatement(context.Background(), 0)\n\t\tquery := \"CALL SYSTEM$WAIT(50, 'SECONDS');use snowflake_sample_data\"\n\n\t\trows := dbt.mustQueryContext(WithAsyncMode(ctx), query)\n\t\tdefer rows.Close()\n\t\tvar v string\n\t\ti := 0\n\t\tfor {\n\t\t\tfor rows.Next() {\n\t\t\t\terr := rows.Scan(&v)\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Fatalf(\"failed to get result. err: %v\", err)\n\t\t\t\t}\n\t\t\t\tif v == \"\" {\n\t\t\t\t\tt.Fatal(\"should have returned a result\")\n\t\t\t\t}\n\t\t\t\tresults := []string{\"waited 50 seconds\", \"Statement executed successfully.\"}\n\t\t\t\tif v != results[i] {\n\t\t\t\t\tt.Fatalf(\"unexpected result returned. expected: %v, but got: %v\", results[i], v)\n\t\t\t\t}\n\t\t\t\ti++\n\t\t\t}\n\t\t\tif !rows.NextResultSet() {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t})\n}\n\nfunc TestLongRunningAsyncQueryFetchResultByID(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tqueryIDChan := make(chan string, 1)\n\t\tctx := WithAsyncMode(context.Background())\n\t\tctx = WithQueryIDChan(ctx, queryIDChan)\n\n\t\t// Run a long running query asynchronously\n\t\tgo dbt.mustExecContext(ctx, \"CALL SYSTEM$WAIT(50, 'SECONDS')\")\n\n\t\t// Get the query ID without waiting for the query to finish.\n\t\t// Note: queryID is a string, so a nil check would always pass;\n\t\t// compare against the empty string instead.\n\t\tqueryID := <-queryIDChan\n\t\tassertNotEqualF(t, queryID, \"\", \"expected a nonempty query ID\")\n\n\t\tctx = WithFetchResultByID(ctx, queryID)\n\t\trows := dbt.mustQueryContext(ctx, \"\")\n\t\tdefer rows.Close()\n\n\t\tvar v string\n\t\tassertTrueF(t, rows.Next())\n\t\terr := rows.Scan(&v)\n\t\tassertNilF(t, err, fmt.Sprintf(\"failed to get result. 
err: %v\", err))\n\tassertNotEqualF(t, v, \"\", \"should have returned a result\")\n\n\texpected := \"waited 50 seconds\"\n\tassertEqualF(t, v, expected, \"unexpected result returned\")\n\tassertFalseF(t, rows.NextResultSet())\n\t})\n}\n"
  },
  {
    "path": "auth.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"crypto/sha256\"\n\t\"crypto/x509\"\n\t\"encoding/base64\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"runtime\"\n\t\"slices\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\tsferrors \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\n\t\"github.com/golang-jwt/jwt/v5\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/compilation\"\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n\tinternalos \"github.com/snowflakedb/gosnowflake/v2/internal/os\"\n)\n\nconst (\n\tclientType = \"Go\"\n)\n\nconst (\n\tclientStoreTemporaryCredential = \"CLIENT_STORE_TEMPORARY_CREDENTIAL\"\n\tclientRequestMfaToken          = \"CLIENT_REQUEST_MFA_TOKEN\"\n\tidTokenAuthenticator           = \"ID_TOKEN\"\n)\n\n// AuthType indicates the type of authentication in Snowflake\ntype AuthType = sfconfig.AuthType\n\nconst (\n\t// AuthTypeSnowflake is the general username password authentication\n\tAuthTypeSnowflake = sfconfig.AuthTypeSnowflake\n\t// AuthTypeOAuth is the OAuth authentication\n\tAuthTypeOAuth = sfconfig.AuthTypeOAuth\n\t// AuthTypeExternalBrowser is to use a browser to access a federated identity provider and perform SSO authentication\n\tAuthTypeExternalBrowser = sfconfig.AuthTypeExternalBrowser\n\t// AuthTypeOkta is to use a native Okta URL to perform SSO authentication on Okta\n\tAuthTypeOkta = sfconfig.AuthTypeOkta\n\t// AuthTypeJwt is to use a JWT to perform authentication\n\tAuthTypeJwt = sfconfig.AuthTypeJwt\n\t// AuthTypeTokenAccessor is to use the provided token accessor and bypass authentication\n\tAuthTypeTokenAccessor = sfconfig.AuthTypeTokenAccessor\n\t// AuthTypeUsernamePasswordMFA is to use username and password with MFA\n\tAuthTypeUsernamePasswordMFA = sfconfig.AuthTypeUsernamePasswordMFA\n\t// AuthTypePat is to use a programmatic access token\n\tAuthTypePat = sfconfig.AuthTypePat\n\t// AuthTypeOAuthAuthorizationCode is to use 
browser-based OAuth2 flow\n\tAuthTypeOAuthAuthorizationCode = sfconfig.AuthTypeOAuthAuthorizationCode\n\t// AuthTypeOAuthClientCredentials is to use non-interactive OAuth2 flow\n\tAuthTypeOAuthClientCredentials = sfconfig.AuthTypeOAuthClientCredentials\n\t// AuthTypeWorkloadIdentityFederation is to use CSP identity for authentication\n\tAuthTypeWorkloadIdentityFederation = sfconfig.AuthTypeWorkloadIdentityFederation\n)\n\nfunc isOauthNativeFlow(authType AuthType) bool {\n\treturn authType == AuthTypeOAuthAuthorizationCode || authType == AuthTypeOAuthClientCredentials\n}\n\nvar refreshOAuthTokenErrorCodes = []string{\n\tstrconv.Itoa(ErrMissingAccessATokenButRefreshTokenPresent),\n\tinvalidOAuthAccessTokenCode,\n\texpiredOAuthAccessTokenCode,\n}\n\n// userAgent shows up in User-Agent HTTP header\nvar userAgent = fmt.Sprintf(\"%v/%v (%v-%v) %v/%v\",\n\tclientType,\n\tSnowflakeGoDriverVersion,\n\truntime.GOOS,\n\truntime.GOARCH,\n\truntime.Compiler,\n\truntime.Version())\n\ntype authRequestClientEnvironment struct {\n\tApplication             string            `json:\"APPLICATION\"`\n\tApplicationPath         string            `json:\"APPLICATION_PATH\"`\n\tOs                      string            `json:\"OS\"`\n\tOsVersion               string            `json:\"OS_VERSION\"`\n\tOsDetails               map[string]string `json:\"OS_DETAILS,omitempty\"`\n\tIsa                     string            `json:\"ISA,omitempty\"`\n\tOCSPMode                string            `json:\"OCSP_MODE\"`\n\tGoVersion               string            `json:\"GO_VERSION\"`\n\tOAuthType               string            `json:\"OAUTH_TYPE,omitempty\"`\n\tCertRevocationCheckMode string            `json:\"CERT_REVOCATION_CHECK_MODE,omitempty\"`\n\tPlatform                []string          `json:\"PLATFORM,omitempty\"`\n\tCoreVersion             string            `json:\"CORE_VERSION,omitempty\"`\n\tCoreLoadError           string            `json:\"CORE_LOAD_ERROR,omitempty\"`\n\tCoreFileName    
        string            `json:\"CORE_FILE_NAME,omitempty\"`\n\tCgoEnabled              bool              `json:\"CGO_ENABLED,omitempty\"`\n\tLinkingMode             string            `json:\"LINKING_MODE,omitempty\"`\n\tLibcFamily              string            `json:\"LIBC_FAMILY,omitempty\"`\n\tLibcVersion             string            `json:\"LIBC_VERSION,omitempty\"`\n}\n\ntype authRequestData struct {\n\tClientAppID             string                       `json:\"CLIENT_APP_ID\"`\n\tClientAppVersion        string                       `json:\"CLIENT_APP_VERSION\"`\n\tSvnRevision             string                       `json:\"SVN_REVISION\"`\n\tAccountName             string                       `json:\"ACCOUNT_NAME\"`\n\tLoginName               string                       `json:\"LOGIN_NAME,omitempty\"`\n\tPassword                string                       `json:\"PASSWORD,omitempty\"`\n\tRawSAMLResponse         string                       `json:\"RAW_SAML_RESPONSE,omitempty\"`\n\tExtAuthnDuoMethod       string                       `json:\"EXT_AUTHN_DUO_METHOD,omitempty\"`\n\tPasscode                string                       `json:\"PASSCODE,omitempty\"`\n\tAuthenticator           string                       `json:\"AUTHENTICATOR,omitempty\"`\n\tSessionParameters       map[string]any               `json:\"SESSION_PARAMETERS,omitempty\"`\n\tClientEnvironment       authRequestClientEnvironment `json:\"CLIENT_ENVIRONMENT\"`\n\tBrowserModeRedirectPort string                       `json:\"BROWSER_MODE_REDIRECT_PORT,omitempty\"`\n\tProofKey                string                       `json:\"PROOF_KEY,omitempty\"`\n\tToken                   string                       `json:\"TOKEN,omitempty\"`\n\tProvider                string                       `json:\"PROVIDER,omitempty\"`\n}\ntype authRequest struct {\n\tData authRequestData `json:\"data\"`\n}\n\ntype nameValueParameter struct {\n\tName  string `json:\"name\"`\n\tValue any    
`json:\"value\"`\n}\n\ntype authResponseSessionInfo struct {\n\tDatabaseName  string `json:\"databaseName\"`\n\tSchemaName    string `json:\"schemaName\"`\n\tWarehouseName string `json:\"warehouseName\"`\n\tRoleName      string `json:\"roleName\"`\n}\n\ntype authResponseMain struct {\n\tToken               string                  `json:\"token,omitempty\"`\n\tValidity            time.Duration           `json:\"validityInSeconds,omitempty\"`\n\tMasterToken         string                  `json:\"masterToken,omitempty\"`\n\tMasterValidity      time.Duration           `json:\"masterValidityInSeconds\"`\n\tMfaToken            string                  `json:\"mfaToken,omitempty\"`\n\tMfaTokenValidity    time.Duration           `json:\"mfaTokenValidityInSeconds\"`\n\tIDToken             string                  `json:\"idToken,omitempty\"`\n\tIDTokenValidity     time.Duration           `json:\"idTokenValidityInSeconds\"`\n\tDisplayUserName     string                  `json:\"displayUserName\"`\n\tServerVersion       string                  `json:\"serverVersion\"`\n\tFirstLogin          bool                    `json:\"firstLogin\"`\n\tRemMeToken          string                  `json:\"remMeToken\"`\n\tRemMeValidity       time.Duration           `json:\"remMeValidityInSeconds\"`\n\tHealthCheckInterval time.Duration           `json:\"healthCheckInterval\"`\n\tNewClientForUpgrade string                  `json:\"newClientForUpgrade\"`\n\tSessionID           int64                   `json:\"sessionId\"`\n\tParameters          []nameValueParameter    `json:\"parameters\"`\n\tSessionInfo         authResponseSessionInfo `json:\"sessionInfo\"`\n\tTokenURL            string                  `json:\"tokenUrl,omitempty\"`\n\tSSOURL              string                  `json:\"ssoUrl,omitempty\"`\n\tProofKey            string                  `json:\"proofKey,omitempty\"`\n}\n\ntype authResponse struct {\n\tData    authResponseMain `json:\"data\"`\n\tMessage string           
`json:\"message\"`\n\tCode    string           `json:\"code\"`\n\tSuccess bool             `json:\"success\"`\n}\n\nfunc postAuth(\n\tctx context.Context,\n\tsr *snowflakeRestful,\n\tclient *http.Client,\n\tparams *url.Values,\n\theaders map[string]string,\n\tbodyCreator bodyCreatorType,\n\ttimeout time.Duration) (\n\tdata *authResponse, err error) {\n\tparams.Set(requestIDKey, getOrGenerateRequestIDFromContext(ctx).String())\n\tparams.Set(requestGUIDKey, NewUUID().String())\n\n\tfullURL := sr.getFullURL(loginRequestPath, params)\n\tlogger.WithContext(ctx).Infof(\"full URL: %v\", fullURL)\n\tresp, err := sr.FuncAuthPost(ctx, client, fullURL, headers, bodyCreator, timeout, sr.MaxRetryCount)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer func() {\n\t\tif closeErr := resp.Body.Close(); closeErr != nil {\n\t\t\tlogger.WithContext(ctx).Errorf(\"failed to close HTTP response body for %v. err: %v\", fullURL, closeErr)\n\t\t}\n\t}()\n\tif resp.StatusCode == http.StatusOK {\n\t\tvar respd authResponse\n\t\terr = json.NewDecoder(resp.Body).Decode(&respd)\n\t\tif err != nil {\n\t\t\tlogger.WithContext(ctx).Errorf(\"failed to decode JSON. err: %v\", err)\n\t\t\treturn nil, err\n\t\t}\n\t\treturn &respd, nil\n\t}\n\tswitch resp.StatusCode {\n\tcase http.StatusBadGateway, http.StatusServiceUnavailable, http.StatusGatewayTimeout:\n\t\t// service availability or connectivity issue. Most likely server side issue.\n\t\treturn nil, &SnowflakeError{\n\t\t\tNumber:      ErrCodeServiceUnavailable,\n\t\t\tSQLState:    SQLStateConnectionWasNotEstablished,\n\t\t\tMessage:     sferrors.ErrMsgServiceUnavailable,\n\t\t\tMessageArgs: []any{resp.StatusCode, fullURL},\n\t\t}\n\tcase http.StatusUnauthorized, http.StatusForbidden:\n\t\t// failed to connect to db. 
account name may be wrong\n\t\treturn nil, &SnowflakeError{\n\t\t\tNumber:      ErrCodeFailedToConnect,\n\t\t\tSQLState:    SQLStateConnectionRejected,\n\t\t\tMessage:     sferrors.ErrMsgFailedToConnect,\n\t\t\tMessageArgs: []any{resp.StatusCode, fullURL},\n\t\t}\n\t}\n\tb, err := io.ReadAll(resp.Body)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"failed to extract HTTP response body. err: %v\", err)\n\t\treturn nil, err\n\t}\n\tlogger.WithContext(ctx).Infof(\"HTTP: %v, URL: %v, Body: %v\", resp.StatusCode, fullURL, string(b))\n\tlogger.WithContext(ctx).Infof(\"Header: %v\", resp.Header)\n\treturn nil, &SnowflakeError{\n\t\tNumber:      ErrFailedToAuth,\n\t\tSQLState:    SQLStateConnectionRejected,\n\t\tMessage:     sferrors.ErrMsgFailedToAuth,\n\t\tMessageArgs: []any{resp.StatusCode, fullURL},\n\t}\n}\n\n// Generates a map of headers needed to authenticate\n// with Snowflake.\nfunc getHeaders() map[string]string {\n\theaders := make(map[string]string)\n\theaders[httpHeaderContentType] = headerContentTypeApplicationJSON\n\theaders[httpHeaderAccept] = headerAcceptTypeApplicationSnowflake\n\theaders[httpClientAppID] = clientType\n\theaders[httpClientAppVersion] = SnowflakeGoDriverVersion\n\theaders[httpHeaderUserAgent] = userAgent\n\treturn headers\n}\n\n// Used to authenticate the user with Snowflake.\nfunc authenticate(\n\tctx context.Context,\n\tsc *snowflakeConn,\n\tsamlResponse []byte,\n\tproofKey []byte,\n) (resp *authResponseMain, err error) {\n\tif sc.cfg.Authenticator == AuthTypeTokenAccessor {\n\t\tlogger.WithContext(ctx).Info(\"Bypass authentication using existing token from token accessor\")\n\t\tsessionInfo := authResponseSessionInfo{\n\t\t\tDatabaseName:  sc.cfg.Database,\n\t\t\tSchemaName:    sc.cfg.Schema,\n\t\t\tWarehouseName: sc.cfg.Warehouse,\n\t\t\tRoleName:      sc.cfg.Role,\n\t\t}\n\t\ttoken, masterToken, sessionID := sc.cfg.TokenAccessor.GetTokens()\n\t\treturn &authResponseMain{\n\t\t\tToken:       token,\n\t\t\tMasterToken: 
masterToken,\n\t\t\tSessionID:   sessionID,\n\t\t\tSessionInfo: sessionInfo,\n\t\t}, nil\n\t}\n\n\theaders := getHeaders()\n\t// Get the current application path\n\tapplicationPath, err := os.Executable()\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Warnf(\"Failed to get executable path: %v\", err)\n\t\tapplicationPath = \"unknown\"\n\t}\n\n\toauthType := \"\"\n\tswitch sc.cfg.Authenticator {\n\tcase AuthTypeOAuthAuthorizationCode:\n\t\toauthType = \"OAUTH_AUTHORIZATION_CODE\"\n\tcase AuthTypeOAuthClientCredentials:\n\t\toauthType = \"OAUTH_CLIENT_CREDENTIALS\"\n\t}\n\n\tclientEnvironment := newAuthRequestClientEnvironment()\n\tclientEnvironment.Application = sc.cfg.Application\n\tclientEnvironment.ApplicationPath = applicationPath\n\tclientEnvironment.OAuthType = oauthType\n\tclientEnvironment.CertRevocationCheckMode = sc.cfg.CertRevocationCheckMode.String()\n\tclientEnvironment.Platform = getDetectedPlatforms()\n\n\tsessionParameters := make(map[string]any)\n\tfor k, v := range sc.syncParams.All() {\n\t\t// upper casing to normalize keys\n\t\tsessionParameters[strings.ToUpper(k)] = v\n\t}\n\n\tsessionParameters[sessionClientValidateDefaultParameters] = sc.cfg.ValidateDefaultParameters != ConfigBoolFalse\n\tif sc.cfg.ClientRequestMfaToken == ConfigBoolTrue {\n\t\tsessionParameters[clientRequestMfaToken] = true\n\t}\n\tif sc.cfg.ClientStoreTemporaryCredential == ConfigBoolTrue {\n\t\tsessionParameters[clientStoreTemporaryCredential] = true\n\t}\n\tbodyCreator := func() ([]byte, error) {\n\t\treturn createRequestBody(sc, sessionParameters, clientEnvironment, proofKey, samlResponse)\n\t}\n\n\tparams := &url.Values{}\n\tif sc.cfg.Database != \"\" {\n\t\tparams.Add(\"databaseName\", sc.cfg.Database)\n\t}\n\tif sc.cfg.Schema != \"\" {\n\t\tparams.Add(\"schemaName\", sc.cfg.Schema)\n\t}\n\tif sc.cfg.Warehouse != \"\" {\n\t\tparams.Add(\"warehouse\", sc.cfg.Warehouse)\n\t}\n\tif sc.cfg.Role != \"\" {\n\t\tparams.Add(\"roleName\", 
sc.cfg.Role)\n\t}\n\n\tlogger.WithContext(ctx).Infof(\"Information for Auth: Host: %v, User: %v, Authenticator: %v, Params: %v, Protocol: %v, Port: %v, LoginTimeout: %v\",\n\t\tsc.rest.Host, sc.cfg.User, sc.cfg.Authenticator.String(), params, sc.rest.Protocol, sc.rest.Port, sc.rest.LoginTimeout)\n\n\trespd, err := sc.rest.FuncPostAuth(ctx, sc.rest, sc.rest.getClientFor(sc.cfg.Authenticator), params, headers, bodyCreator, sc.rest.LoginTimeout)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif !respd.Success {\n\t\tlogger.WithContext(ctx).Error(\"Authentication FAILED\")\n\t\tsc.rest.TokenAccessor.SetTokens(\"\", \"\", -1)\n\t\tif sessionParameters[clientRequestMfaToken] == true {\n\t\t\tcredentialsStorage.deleteCredential(newMfaTokenSpec(sc.cfg.Host, sc.cfg.User))\n\t\t}\n\t\tif sessionParameters[clientStoreTemporaryCredential] == true && sc.cfg.Authenticator == AuthTypeExternalBrowser {\n\t\t\tcredentialsStorage.deleteCredential(newIDTokenSpec(sc.cfg.Host, sc.cfg.User))\n\t\t}\n\t\tif sessionParameters[clientStoreTemporaryCredential] == true && isOauthNativeFlow(sc.cfg.Authenticator) {\n\t\t\tcredentialsStorage.deleteCredential(newOAuthAccessTokenSpec(sc.cfg.OauthTokenRequestURL, sc.cfg.User))\n\t\t}\n\t\tcode, err := strconv.Atoi(respd.Code)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn nil, exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber:   code,\n\t\t\tSQLState: SQLStateConnectionRejected,\n\t\t\tMessage:  respd.Message,\n\t\t}, sc)\n\t}\n\tlogger.WithContext(ctx).Info(\"Authentication SUCCESS\")\n\tsc.rest.TokenAccessor.SetTokens(respd.Data.Token, respd.Data.MasterToken, respd.Data.SessionID)\n\tif sessionParameters[clientRequestMfaToken] == true {\n\t\ttoken := respd.Data.MfaToken\n\t\tcredentialsStorage.setCredential(newMfaTokenSpec(sc.cfg.Host, sc.cfg.User), token)\n\t}\n\tif sessionParameters[clientStoreTemporaryCredential] == true {\n\t\ttoken := respd.Data.IDToken\n\t\tcredentialsStorage.setCredential(newIDTokenSpec(sc.cfg.Host, 
sc.cfg.User), token)\n\t}\n\treturn &respd.Data, nil\n}\n\nfunc newAuthRequestClientEnvironment() authRequestClientEnvironment {\n\tvar coreVersion string\n\tvar coreLoadError string\n\n\t// Try to get minicore version, but don't block if it's not loaded yet\n\tif !compilation.MinicoreEnabled {\n\t\tlogger.Trace(\"minicore disabled at compile time\")\n\t\tcoreLoadError = \"Minicore is disabled at compile time (built with -tags minicore_disabled)\"\n\t} else if strings.EqualFold(os.Getenv(disableMinicoreEnv), \"true\") {\n\t\tlogger.Trace(\"minicore loading disabled\")\n\t\tcoreLoadError = \"Minicore is disabled with SF_DISABLE_MINICORE env variable\"\n\t} else if mc := getMiniCore(); mc != nil {\n\t\tvar err error\n\t\tcoreVersion, err = mc.FullVersion()\n\t\tif err != nil {\n\t\t\tlogger.Debugf(\"Minicore loading failed. %v\", err)\n\t\t\tvar mcErr *miniCoreError\n\t\t\tif errors.As(err, &mcErr) {\n\t\t\t\tcoreLoadError = fmt.Sprintf(\"Failed to load binary: %v\", mcErr.errorType)\n\t\t\t} else {\n\t\t\t\tcoreLoadError = \"Failed to load binary: unknown\"\n\t\t\t}\n\t\t}\n\t} else {\n\t\t// Minicore not loaded yet - this is expected during startup\n\t\tcoreVersion = \"\"\n\t\tcoreLoadError = \"Minicore is still loading\"\n\t\tlogger.Debugf(\"Minicore not yet loaded for client environment telemetry\")\n\t}\n\tlibcInfo := internalos.GetLibcInfo()\n\tlinkingMode, err := compilation.CheckDynamicLinking()\n\tif err != nil {\n\t\tlogger.Debugf(\"cannot determine if app is dynamically linked: %v\", err)\n\t}\n\treturn authRequestClientEnvironment{\n\t\tOs:            runtime.GOOS,\n\t\tOsVersion:     osVersion,\n\t\tOsDetails:     internalos.GetOsDetails(),\n\t\tIsa:           runtime.GOARCH,\n\t\tGoVersion:     runtime.Version(),\n\t\tCoreVersion:   coreVersion,\n\t\tCoreFileName:  getMiniCoreFileName(),\n\t\tCoreLoadError: coreLoadError,\n\t\tCgoEnabled:    compilation.CgoEnabled,\n\t\tLinkingMode:   linkingMode.String(),\n\t\tLibcFamily:    
libcInfo.Family,\n\t\tLibcVersion:   libcInfo.Version,\n\t}\n}\n\nfunc createRequestBody(sc *snowflakeConn, sessionParameters map[string]any,\n\tclientEnvironment authRequestClientEnvironment, proofKey []byte, samlResponse []byte,\n) ([]byte, error) {\n\trequestMain := authRequestData{\n\t\tClientAppID:       clientType,\n\t\tClientAppVersion:  SnowflakeGoDriverVersion,\n\t\tAccountName:       sc.cfg.Account,\n\t\tSessionParameters: sessionParameters,\n\t\tClientEnvironment: clientEnvironment,\n\t}\n\n\tswitch sc.cfg.Authenticator {\n\tcase AuthTypeExternalBrowser:\n\t\tif sc.idToken != \"\" {\n\t\t\trequestMain.Authenticator = idTokenAuthenticator\n\t\t\trequestMain.Token = sc.idToken\n\t\t\trequestMain.LoginName = sc.cfg.User\n\t\t} else {\n\t\t\trequestMain.ProofKey = string(proofKey)\n\t\t\trequestMain.Token = string(samlResponse)\n\t\t\trequestMain.LoginName = sc.cfg.User\n\t\t\trequestMain.Authenticator = AuthTypeExternalBrowser.String()\n\t\t}\n\tcase AuthTypeOAuth:\n\t\trequestMain.LoginName = sc.cfg.User\n\t\trequestMain.Authenticator = AuthTypeOAuth.String()\n\t\tvar err error\n\t\tif requestMain.Token, err = sfconfig.GetToken(sc.cfg); err != nil {\n\t\t\treturn nil, fmt.Errorf(\"failed to get OAuth token: %w\", err)\n\t\t}\n\tcase AuthTypeOkta:\n\t\tsamlResponse, err := authenticateBySAML(\n\t\t\tsc.ctx,\n\t\t\tsc.rest,\n\t\t\tsc.cfg.OktaURL,\n\t\t\tsc.cfg.Application,\n\t\t\tsc.cfg.Account,\n\t\t\tsc.cfg.User,\n\t\t\tsc.cfg.Password,\n\t\t\tsc.cfg.DisableSamlURLCheck)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\trequestMain.RawSAMLResponse = string(samlResponse)\n\tcase AuthTypeJwt:\n\t\trequestMain.Authenticator = AuthTypeJwt.String()\n\n\t\tjwtTokenString, err := prepareJWTToken(sc.cfg)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\trequestMain.Token = jwtTokenString\n\tcase AuthTypePat:\n\t\tlogger.WithContext(sc.ctx).Info(\"Programmatic access token\")\n\t\trequestMain.Authenticator = 
AuthTypePat.String()\n\t\trequestMain.LoginName = sc.cfg.User\n\t\tvar err error\n\t\tif requestMain.Token, err = sfconfig.GetToken(sc.cfg); err != nil {\n\t\t\treturn nil, fmt.Errorf(\"failed to get PAT token: %w\", err)\n\t\t}\n\tcase AuthTypeSnowflake:\n\t\tlogger.WithContext(sc.ctx).Debug(\"Username and password\")\n\t\trequestMain.LoginName = sc.cfg.User\n\t\trequestMain.Password = sc.cfg.Password\n\t\tswitch {\n\t\tcase sc.cfg.PasscodeInPassword:\n\t\t\trequestMain.ExtAuthnDuoMethod = \"passcode\"\n\t\tcase sc.cfg.Passcode != \"\":\n\t\t\trequestMain.Passcode = sc.cfg.Passcode\n\t\t\trequestMain.ExtAuthnDuoMethod = \"passcode\"\n\t\t}\n\tcase AuthTypeUsernamePasswordMFA:\n\t\tlogger.WithContext(sc.ctx).Debug(\"Username and password MFA\")\n\t\trequestMain.LoginName = sc.cfg.User\n\t\trequestMain.Password = sc.cfg.Password\n\t\tswitch {\n\t\tcase sc.mfaToken != \"\":\n\t\t\trequestMain.Token = sc.mfaToken\n\t\tcase sc.cfg.PasscodeInPassword:\n\t\t\trequestMain.ExtAuthnDuoMethod = \"passcode\"\n\t\tcase sc.cfg.Passcode != \"\":\n\t\t\trequestMain.Passcode = sc.cfg.Passcode\n\t\t\trequestMain.ExtAuthnDuoMethod = \"passcode\"\n\t\t}\n\tcase AuthTypeOAuthAuthorizationCode:\n\t\tlogger.WithContext(sc.ctx).Debug(\"OAuth authorization code\")\n\t\ttoken, err := authenticateByAuthorizationCode(sc)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\trequestMain.LoginName = sc.cfg.User\n\t\trequestMain.Token = token\n\tcase AuthTypeOAuthClientCredentials:\n\t\tlogger.WithContext(sc.ctx).Debug(\"OAuth client credentials\")\n\t\toauthClient, err := newOauthClient(sc.ctx, sc.cfg, sc)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\ttoken, err := oauthClient.authenticateByOAuthClientCredentials()\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\trequestMain.LoginName = sc.cfg.User\n\t\trequestMain.Token = token\n\tcase AuthTypeWorkloadIdentityFederation:\n\t\tlogger.WithContext(sc.ctx).Debug(\"Workload Identity Federation\")\n\t\twifAttestationProvider := 
createWifAttestationProvider(sc.ctx, sc.cfg, sc.telemetry)\n\t\twifAttestation, err := wifAttestationProvider.getAttestation(sc.cfg.WorkloadIdentityProvider)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif wifAttestation == nil {\n\t\t\treturn nil, errors.New(\"workload identity federation attestation is not available, please check your configuration\")\n\t\t}\n\t\trequestMain.Authenticator = AuthTypeWorkloadIdentityFederation.String()\n\t\trequestMain.Token = wifAttestation.Credential\n\t\trequestMain.Provider = wifAttestation.ProviderType\n\t}\n\n\tlogger.WithContext(sc.ctx).Debugf(\"Request body is created for the authentication. Authenticator: %s, User: %s, Account: %s\", sc.cfg.Authenticator.String(), sc.cfg.User, sc.cfg.Account)\n\n\tauthRequest := authRequest{\n\t\tData: requestMain,\n\t}\n\tjsonBody, err := json.Marshal(authRequest)\n\tif err != nil {\n\t\tlogger.WithContext(sc.ctx).Errorf(\"Failed to marshal JSON. err: %v\", err)\n\t\treturn nil, err\n\t}\n\treturn jsonBody, nil\n}\n\ntype oauthLockKey struct {\n\ttokenRequestURL string\n\tuser            string\n\tflowType        string\n}\n\nfunc newOAuthAuthorizationCodeLockKey(tokenRequestURL, user string) *oauthLockKey {\n\treturn &oauthLockKey{\n\t\ttokenRequestURL: tokenRequestURL,\n\t\tuser:            user,\n\t\tflowType:        \"authorization_code\",\n\t}\n}\n\nfunc newRefreshTokenLockKey(tokenRequestURL, user string) *oauthLockKey {\n\treturn &oauthLockKey{\n\t\ttokenRequestURL: tokenRequestURL,\n\t\tuser:            user,\n\t\tflowType:        \"refresh_token\",\n\t}\n}\n\nfunc (o *oauthLockKey) lockID() string {\n\treturn o.tokenRequestURL + \"|\" + o.user + \"|\" + o.flowType\n}\n\nfunc authenticateByAuthorizationCode(sc *snowflakeConn) (string, error) {\n\toauthClient, err := newOauthClient(sc.ctx, sc.cfg, sc)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tif !isEligibleForParallelLogin(sc.cfg, sc.cfg.ClientStoreTemporaryCredential) {\n\t\treturn 
oauthClient.authenticateByOAuthAuthorizationCode()\n\t}\n\n\tlockKey := newOAuthAuthorizationCodeLockKey(oauthClient.tokenURL(), sc.cfg.User)\n\tvalueAwaiter := valueAwaitHolder.get(lockKey)\n\tdefer valueAwaiter.resumeOne()\n\ttoken, err := awaitValue(valueAwaiter, func() (string, error) {\n\t\treturn credentialsStorage.getCredential(newOAuthAccessTokenSpec(oauthClient.tokenURL(), sc.cfg.User)), nil\n\t}, func(s string, err error) bool {\n\t\treturn s != \"\"\n\t}, func() string {\n\t\treturn \"\"\n\t})\n\tif err != nil || token != \"\" {\n\t\treturn token, err\n\t}\n\ttoken, err = oauthClient.authenticateByOAuthAuthorizationCode()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tvalueAwaiter.done()\n\treturn token, err\n}\n\n// Generate a JWT token in string given the configuration\nfunc prepareJWTToken(config *Config) (string, error) {\n\tif config.PrivateKey == nil {\n\t\treturn \"\", errors.New(\"trying to use keypair authentication, but PrivateKey was not provided in the driver config\")\n\t}\n\tlogger.Debug(\"preparing JWT for keypair authentication\")\n\tpubBytes, err := x509.MarshalPKIXPublicKey(config.PrivateKey.Public())\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\thash := sha256.Sum256(pubBytes)\n\n\taccountName := sfconfig.ExtractAccountName(config.Account)\n\tuserName := strings.ToUpper(config.User)\n\n\tissueAtTime := time.Now().UTC()\n\tjwtClaims := jwt.MapClaims{\n\t\t\"iss\": fmt.Sprintf(\"%s.%s.%s\", accountName, userName, \"SHA256:\"+base64.StdEncoding.EncodeToString(hash[:])),\n\t\t\"sub\": fmt.Sprintf(\"%s.%s\", accountName, userName),\n\t\t\"iat\": issueAtTime.Unix(),\n\t\t\"nbf\": time.Date(2015, 10, 10, 12, 0, 0, 0, time.UTC).Unix(),\n\t\t\"exp\": issueAtTime.Add(config.JWTExpireTimeout).Unix(),\n\t}\n\ttoken := jwt.NewWithClaims(jwt.SigningMethodRS256, jwtClaims)\n\n\ttokenString, err := token.SignedString(config.PrivateKey)\n\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tlogger.Debugf(\"successfully generated JWT with following 
claims: %v\", jwtClaims)\n\treturn tokenString, err\n}\n\ntype tokenLockKey struct {\n\tsnowflakeHost string\n\tuser          string\n\ttokenType     string\n}\n\nfunc newMfaTokenLockKey(snowflakeHost, user string) *tokenLockKey {\n\treturn &tokenLockKey{\n\t\tsnowflakeHost: snowflakeHost,\n\t\tuser:          user,\n\t\ttokenType:     \"MFA\",\n\t}\n}\n\nfunc newIDTokenLockKey(snowflakeHost, user string) *tokenLockKey {\n\treturn &tokenLockKey{\n\t\tsnowflakeHost: snowflakeHost,\n\t\tuser:          user,\n\t\ttokenType:     \"ID\",\n\t}\n}\n\nfunc (m *tokenLockKey) lockID() string {\n\treturn m.snowflakeHost + \"|\" + m.user + \"|\" + m.tokenType\n}\n\nfunc authenticateWithConfig(sc *snowflakeConn) error {\n\tvar authData *authResponseMain\n\tvar samlResponse []byte\n\tvar proofKey []byte\n\tvar err error\n\n\tmfaTokenLockKey := newMfaTokenLockKey(sc.cfg.Host, sc.cfg.User)\n\tidTokenLockKey := newIDTokenLockKey(sc.cfg.Host, sc.cfg.User)\n\n\tif sc.cfg.Authenticator == AuthTypeExternalBrowser || sc.cfg.Authenticator == AuthTypeOAuthAuthorizationCode || sc.cfg.Authenticator == AuthTypeOAuthClientCredentials {\n\t\tif (runtime.GOOS == \"windows\" || runtime.GOOS == \"darwin\") && sc.cfg.ClientStoreTemporaryCredential == sfconfig.BoolNotSet {\n\t\t\tsc.cfg.ClientStoreTemporaryCredential = ConfigBoolTrue\n\t\t}\n\t\tif sc.cfg.Authenticator == AuthTypeExternalBrowser {\n\t\t\tif isEligibleForParallelLogin(sc.cfg, sc.cfg.ClientStoreTemporaryCredential) {\n\t\t\t\tvalueAwaiter := valueAwaitHolder.get(idTokenLockKey)\n\t\t\t\tdefer valueAwaiter.resumeOne()\n\t\t\t\tsc.idToken, _ = awaitValue(valueAwaiter, func() (string, error) {\n\t\t\t\t\tcredential := credentialsStorage.getCredential(newIDTokenSpec(sc.cfg.Host, sc.cfg.User))\n\t\t\t\t\treturn credential, nil\n\t\t\t\t}, func(s string, err error) bool {\n\t\t\t\t\treturn s != \"\"\n\t\t\t\t}, func() string {\n\t\t\t\t\treturn \"\"\n\t\t\t\t})\n\t\t\t} else if sc.cfg.ClientStoreTemporaryCredential == ConfigBoolTrue 
{\n\t\t\t\tsc.idToken = credentialsStorage.getCredential(newIDTokenSpec(sc.cfg.Host, sc.cfg.User))\n\t\t\t}\n\t\t}\n\t\t// Disable console login by default\n\t\tif sc.cfg.DisableConsoleLogin == sfconfig.BoolNotSet {\n\t\t\tsc.cfg.DisableConsoleLogin = ConfigBoolTrue\n\t\t}\n\t}\n\n\tif sc.cfg.Authenticator == AuthTypeUsernamePasswordMFA {\n\t\tif (runtime.GOOS == \"windows\" || runtime.GOOS == \"darwin\") && sc.cfg.ClientRequestMfaToken == sfconfig.BoolNotSet {\n\t\t\tsc.cfg.ClientRequestMfaToken = ConfigBoolTrue\n\t\t}\n\t\tif isEligibleForParallelLogin(sc.cfg, sc.cfg.ClientRequestMfaToken) {\n\t\t\tvalueAwaiter := valueAwaitHolder.get(mfaTokenLockKey)\n\t\t\tdefer valueAwaiter.resumeOne()\n\t\t\tsc.mfaToken, _ = awaitValue(valueAwaiter, func() (string, error) {\n\t\t\t\tcredential := credentialsStorage.getCredential(newMfaTokenSpec(sc.cfg.Host, sc.cfg.User))\n\t\t\t\treturn credential, nil\n\t\t\t}, func(s string, err error) bool {\n\t\t\t\treturn s != \"\"\n\t\t\t}, func() string {\n\t\t\t\treturn \"\"\n\t\t\t})\n\t\t} else if sc.cfg.ClientRequestMfaToken == ConfigBoolTrue {\n\t\t\tsc.mfaToken = credentialsStorage.getCredential(newMfaTokenSpec(sc.cfg.Host, sc.cfg.User))\n\t\t}\n\t}\n\n\tlogger.WithContext(sc.ctx).Infof(\"Authenticating via %v\", sc.cfg.Authenticator.String())\n\tswitch sc.cfg.Authenticator {\n\tcase AuthTypeExternalBrowser:\n\t\tif sc.idToken == \"\" {\n\t\t\tsamlResponse, proofKey, err = authenticateByExternalBrowser(\n\t\t\t\tsc.ctx,\n\t\t\t\tsc.rest,\n\t\t\t\tsc.cfg.Authenticator.String(),\n\t\t\t\tsc.cfg.Application,\n\t\t\t\tsc.cfg.Account,\n\t\t\t\tsc.cfg.User,\n\t\t\t\tsc.cfg.ExternalBrowserTimeout,\n\t\t\t\tsc.cfg.DisableConsoleLogin)\n\t\t\tif err != nil {\n\t\t\t\tsc.cleanup()\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\tauthData, err = authenticate(\n\t\tsc.ctx,\n\t\tsc,\n\t\tsamlResponse,\n\t\tproofKey)\n\tif err != nil {\n\t\tvar se *SnowflakeError\n\t\tif errors.As(err, &se) && slices.Contains(refreshOAuthTokenErrorCodes, 
strconv.Itoa(se.Number)) {\n\t\t\tcredentialsStorage.deleteCredential(newOAuthAccessTokenSpec(sc.cfg.OauthTokenRequestURL, sc.cfg.User))\n\n\t\t\tif sc.cfg.Authenticator == AuthTypeOAuthAuthorizationCode {\n\t\t\t\tdoRefreshTokenWithLock(sc)\n\t\t\t}\n\n\t\t\t// if refreshing succeeds for authorization code, we will take a token from cache\n\t\t\t// if it fails, we will just run the full flow\n\t\t\tauthData, err = authenticate(sc.ctx, sc, nil, nil)\n\t\t}\n\t\tif err != nil {\n\t\t\tsc.cleanup()\n\t\t\treturn err\n\t\t}\n\t}\n\tif sc.cfg.Authenticator == AuthTypeUsernamePasswordMFA && isEligibleForParallelLogin(sc.cfg, sc.cfg.ClientRequestMfaToken) {\n\t\tvalueAwaiter := valueAwaitHolder.get(mfaTokenLockKey)\n\t\tvalueAwaiter.done()\n\t}\n\tif sc.cfg.Authenticator == AuthTypeExternalBrowser && isEligibleForParallelLogin(sc.cfg, sc.cfg.ClientStoreTemporaryCredential) {\n\t\tvalueAwaiter := valueAwaitHolder.get(idTokenLockKey)\n\t\tvalueAwaiter.done()\n\t}\n\tsc.populateSessionParameters(authData.Parameters)\n\tsc.configureTelemetry()\n\tsc.ctx = context.WithValue(sc.ctx, SFSessionIDKey, authData.SessionID)\n\treturn nil\n}\n\nfunc doRefreshTokenWithLock(sc *snowflakeConn) {\n\tif oauthClient, err := newOauthClient(sc.ctx, sc.cfg, sc); err != nil {\n\t\tlogger.Warnf(\"failed to create oauth client. %v\", err)\n\t} else {\n\t\tlockKey := newRefreshTokenLockKey(oauthClient.tokenURL(), sc.cfg.User)\n\t\tif _, err = getValueWithLock(chooseLockerForAuth(sc.cfg), lockKey, func() (string, error) {\n\t\t\tif err = oauthClient.refreshToken(); err != nil {\n\t\t\t\tlogger.Warnf(\"cannot refresh token. %v\", err)\n\t\t\t\tcredentialsStorage.deleteCredential(newOAuthRefreshTokenSpec(sc.cfg.OauthTokenRequestURL, sc.cfg.User))\n\t\t\t\treturn \"\", err\n\t\t\t}\n\t\t\treturn \"\", nil\n\t\t}); err != nil {\n\t\t\tlogger.Warnf(\"failed to refresh token with lock. 
%v\", err)\n\t\t}\n\t}\n}\n\nfunc chooseLockerForAuth(cfg *Config) locker {\n\tif cfg.SingleAuthenticationPrompt == ConfigBoolFalse {\n\t\treturn noopLocker\n\t}\n\tif cfg.User == \"\" {\n\t\treturn noopLocker\n\t}\n\treturn exclusiveLocker\n}\n\nfunc isEligibleForParallelLogin(cfg *Config, cacheEnabled ConfigBool) bool {\n\treturn cfg.SingleAuthenticationPrompt != ConfigBoolFalse && cfg.User != \"\" && cacheEnabled == ConfigBoolTrue\n}\n"
  },
  {
    "path": "auth_generic_test_methods_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"testing\"\n)\n\nfunc getAuthTestConfigFromEnv() (*Config, error) {\n\treturn GetConfigFromEnv([]*ConfigParam{\n\t\t{Name: \"Account\", EnvName: \"SNOWFLAKE_TEST_ACCOUNT\", FailOnMissing: true},\n\t\t{Name: \"User\", EnvName: \"SNOWFLAKE_AUTH_TEST_OKTA_USER\", FailOnMissing: true},\n\t\t{Name: \"Password\", EnvName: \"SNOWFLAKE_AUTH_TEST_OKTA_PASS\", FailOnMissing: true},\n\t\t{Name: \"Host\", EnvName: \"SNOWFLAKE_TEST_HOST\", FailOnMissing: false},\n\t\t{Name: \"Port\", EnvName: \"SNOWFLAKE_TEST_PORT\", FailOnMissing: false},\n\t\t{Name: \"Protocol\", EnvName: \"SNOWFLAKE_AUTH_TEST_PROTOCOL\", FailOnMissing: false},\n\t\t{Name: \"Role\", EnvName: \"SNOWFLAKE_TEST_ROLE\", FailOnMissing: false},\n\t\t{Name: \"Warehouse\", EnvName: \"SNOWFLAKE_TEST_WAREHOUSE\", FailOnMissing: false},\n\t})\n}\n\nfunc getAuthTestsConfig(t *testing.T, authMethod AuthType) (*Config, error) {\n\tcfg, err := getAuthTestConfigFromEnv()\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get config: %v\", err))\n\n\tcfg.Authenticator = authMethod\n\n\treturn cfg, nil\n}\n\nfunc isTestRunningInDockerContainer() bool {\n\treturn os.Getenv(\"AUTHENTICATION_TESTS_ENV\") == \"docker\"\n}\n"
  },
  {
    "path": "auth_oauth.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"cmp\"\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"html\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"golang.org/x/oauth2\"\n\t\"golang.org/x/oauth2/clientcredentials\"\n)\n\nconst (\n\toauthSuccessHTML = `<!DOCTYPE html><html><head><meta charset=\"UTF-8\"/>\n<title>OAuth for Snowflake</title></head>\n<body>\nOAuth authentication completed successfully.\n</body></html>`\n\tlocalApplicationClientCredentials = \"LOCAL_APPLICATION\"\n)\n\nvar defaultAuthorizationCodeProviderFactory = func() authorizationCodeProvider {\n\treturn &browserBasedAuthorizationCodeProvider{}\n}\n\ntype oauthClient struct {\n\tctx    context.Context\n\tcfg    *Config\n\tclient *http.Client\n\n\tport                int\n\tredirectURITemplate string\n\n\tauthorizationCodeProviderFactory func() authorizationCodeProvider\n}\n\nfunc newOauthClient(ctx context.Context, cfg *Config, sc *snowflakeConn) (*oauthClient, error) {\n\tport := 0\n\tif cfg.OauthRedirectURI != \"\" {\n\t\tlogger.Debugf(\"Using oauthRedirectUri from config: %v\", cfg.OauthRedirectURI)\n\t\turi, err := url.Parse(cfg.OauthRedirectURI)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tportStr := uri.Port()\n\t\tif portStr != \"\" {\n\t\t\tif port, err = strconv.Atoi(portStr); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\t}\n\n\tredirectURITemplate := \"\"\n\tif cfg.OauthRedirectURI == \"\" {\n\t\tredirectURITemplate = \"http://127.0.0.1:%v\"\n\t}\n\tlogger.Debugf(\"Redirect URI template: %v, port: %v\", redirectURITemplate, port)\n\n\ttransport, err := newTransportFactory(cfg, sc.telemetry).createTransport(transportConfigFor(transportTypeOAuth))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tclient := &http.Client{\n\t\tTransport: transport,\n\t}\n\treturn &oauthClient{\n\t\tctx:                              context.WithValue(ctx, oauth2.HTTPClient, 
client),\n\t\tcfg:                              cfg,\n\t\tclient:                           client,\n\t\tport:                             port,\n\t\tredirectURITemplate:              redirectURITemplate,\n\t\tauthorizationCodeProviderFactory: defaultAuthorizationCodeProviderFactory,\n\t}, nil\n}\n\ntype oauthBrowserResult struct {\n\taccessToken  string\n\trefreshToken string\n\terr          error\n}\n\nfunc (oauthClient *oauthClient) authenticateByOAuthAuthorizationCode() (string, error) {\n\taccessTokenSpec := oauthClient.accessTokenSpec()\n\tif oauthClient.cfg.ClientStoreTemporaryCredential == ConfigBoolTrue {\n\t\tif accessToken := credentialsStorage.getCredential(accessTokenSpec); accessToken != \"\" {\n\t\t\tlogger.Debugf(\"Access token retrieved from cache\")\n\t\t\treturn accessToken, nil\n\t\t}\n\t\tif refreshToken := credentialsStorage.getCredential(oauthClient.refreshTokenSpec()); refreshToken != \"\" {\n\t\t\treturn \"\", &SnowflakeError{Number: ErrMissingAccessATokenButRefreshTokenPresent}\n\t\t}\n\t}\n\tlogger.Debugf(\"Access token not present in cache, running full auth code flow\")\n\n\tresultChan := make(chan oauthBrowserResult, 1)\n\ttcpListener, callbackPort, err := oauthClient.setupListener()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tdefer func() {\n\t\tlogger.Debug(\"Closing tcp listener\")\n\t\tif err := tcpListener.Close(); err != nil {\n\t\t\tlogger.Warnf(\"error while closing TCP listener. 
%v\", err)\n\t\t}\n\t}()\n\tgo GoroutineWrapper(oauthClient.ctx, func() {\n\t\tresultChan <- oauthClient.doAuthenticateByOAuthAuthorizationCode(tcpListener, callbackPort)\n\t})\n\tselect {\n\tcase <-time.After(oauthClient.cfg.ExternalBrowserTimeout):\n\t\treturn \"\", errors.New(\"authentication via browser timed out\")\n\tcase result := <-resultChan:\n\t\tif oauthClient.cfg.ClientStoreTemporaryCredential == ConfigBoolTrue {\n\t\t\tlogger.Debug(\"saving oauth access token in cache\")\n\t\t\tcredentialsStorage.setCredential(oauthClient.accessTokenSpec(), result.accessToken)\n\t\t\tcredentialsStorage.setCredential(oauthClient.refreshTokenSpec(), result.refreshToken)\n\t\t}\n\t\treturn result.accessToken, result.err\n\t}\n}\n\nfunc (oauthClient *oauthClient) doAuthenticateByOAuthAuthorizationCode(tcpListener *net.TCPListener, callbackPort int) oauthBrowserResult {\n\tauthCodeProvider := oauthClient.authorizationCodeProviderFactory()\n\n\tsuccessChan := make(chan []byte)\n\terrChan := make(chan error)\n\tresponseBodyChan := make(chan string, 2)\n\tcloseListenerChan := make(chan bool, 2)\n\n\tdefer func() {\n\t\tcloseListenerChan <- true\n\t\tclose(successChan)\n\t\tclose(errChan)\n\t\tclose(responseBodyChan)\n\t\tclose(closeListenerChan)\n\t}()\n\n\tlogger.Debugf(\"opening socket on port %v\", callbackPort)\n\tdefer func(tcpListener *net.TCPListener) {\n\t\t<-closeListenerChan\n\t}(tcpListener)\n\n\tgo handleOAuthSocket(tcpListener, successChan, errChan, responseBodyChan, closeListenerChan)\n\n\toauth2cfg := oauthClient.buildAuthorizationCodeConfig(callbackPort)\n\tcodeVerifier := authCodeProvider.createCodeVerifier()\n\tstate := authCodeProvider.createState()\n\tauthorizationURL := oauth2cfg.AuthCodeURL(state, oauth2.S256ChallengeOption(codeVerifier))\n\tif err := authCodeProvider.run(authorizationURL); err != nil {\n\t\tresponseBodyChan <- err.Error()\n\t\tcloseListenerChan <- true\n\t\treturn oauthBrowserResult{\"\", \"\", err}\n\t}\n\n\terr := <-errChan\n\tif err 
!= nil {\n\t\tresponseBodyChan <- err.Error()\n\t\treturn oauthBrowserResult{\"\", \"\", err}\n\t}\n\tcodeReqBytes := <-successChan\n\n\tcodeReq, err := http.ReadRequest(bufio.NewReader(bytes.NewReader(codeReqBytes)))\n\tif err != nil {\n\t\tresponseBodyChan <- err.Error()\n\t\treturn oauthBrowserResult{\"\", \"\", err}\n\t}\n\tlogger.Debugf(\"Received authorization code from %v\", oauthClient.authorizationURL())\n\ttokenResponse, err := oauthClient.exchangeAccessToken(codeReq, state, oauth2cfg, codeVerifier, responseBodyChan)\n\tif err != nil {\n\t\treturn oauthBrowserResult{\"\", \"\", err}\n\t}\n\tlogger.Debugf(\"Received token from %v\", oauthClient.tokenURL())\n\treturn oauthBrowserResult{tokenResponse.AccessToken, tokenResponse.RefreshToken, err}\n}\n\nfunc (oauthClient *oauthClient) setupListener() (*net.TCPListener, int, error) {\n\ttcpListener, err := createLocalTCPListener(oauthClient.port)\n\tif err != nil {\n\t\treturn nil, 0, err\n\t}\n\tcallbackPort := tcpListener.Addr().(*net.TCPAddr).Port\n\tlogger.Debugf(\"oauthClient.port: %v, callbackPort: %v\", oauthClient.port, callbackPort)\n\treturn tcpListener, callbackPort, nil\n}\n\nfunc (oauthClient *oauthClient) exchangeAccessToken(codeReq *http.Request, state string, oauth2cfg *oauth2.Config, codeVerifier string, responseBodyChan chan string) (*oauth2.Token, error) {\n\tqueryParams := codeReq.URL.Query()\n\terrorMsg := queryParams.Get(\"error\")\n\tif errorMsg != \"\" {\n\t\terrorDesc := queryParams.Get(\"error_description\")\n\t\terrMsg := fmt.Sprintf(\"error while getting authentication from oauth: %v. 
Details: %v\", errorMsg, errorDesc)\n\t\tresponseBodyChan <- html.EscapeString(errMsg)\n\t\treturn nil, errors.New(errMsg)\n\t}\n\n\treceivedState := queryParams.Get(\"state\")\n\tif state != receivedState {\n\t\terrMsg := \"invalid oauth state received\"\n\t\tresponseBodyChan <- errMsg\n\t\treturn nil, errors.New(errMsg)\n\t}\n\n\tcode := queryParams.Get(\"code\")\n\topts := []oauth2.AuthCodeOption{oauth2.VerifierOption(codeVerifier)}\n\tif oauthClient.cfg.EnableSingleUseRefreshTokens {\n\t\topts = append(opts, oauth2.SetAuthURLParam(\"enable_single_use_refresh_tokens\", \"true\"))\n\t}\n\ttoken, err := oauth2cfg.Exchange(oauthClient.ctx, code, opts...)\n\tif err != nil {\n\t\tresponseBodyChan <- err.Error()\n\t\treturn nil, err\n\t}\n\tresponseBodyChan <- oauthSuccessHTML\n\treturn token, nil\n}\n\nfunc (oauthClient *oauthClient) buildAuthorizationCodeConfig(callbackPort int) *oauth2.Config {\n\tclientID, clientSecret := oauthClient.cfg.OauthClientID, oauthClient.cfg.OauthClientSecret\n\tif oauthClient.eligibleForDefaultClientCredentials() {\n\t\tclientID, clientSecret = localApplicationClientCredentials, localApplicationClientCredentials\n\t}\n\toauthClient.logIfHTTPInUse(oauthClient.authorizationURL())\n\toauthClient.logIfHTTPInUse(oauthClient.tokenURL())\n\treturn &oauth2.Config{\n\t\tClientID:     clientID,\n\t\tClientSecret: clientSecret,\n\t\tRedirectURL:  oauthClient.buildRedirectURI(callbackPort),\n\t\tScopes:       oauthClient.buildScopes(),\n\t\tEndpoint: oauth2.Endpoint{\n\t\t\tAuthURL:   oauthClient.authorizationURL(),\n\t\t\tTokenURL:  oauthClient.tokenURL(),\n\t\t\tAuthStyle: oauth2.AuthStyleInHeader,\n\t\t},\n\t}\n}\n\nfunc (oauthClient *oauthClient) eligibleForDefaultClientCredentials() bool {\n\treturn oauthClient.cfg.OauthClientID == \"\" && oauthClient.cfg.OauthClientSecret == \"\" && oauthClient.isSnowflakeAsIDP()\n}\n\nfunc (oauthClient *oauthClient) isSnowflakeAsIDP() bool {\n\treturn (oauthClient.cfg.OauthAuthorizationURL == \"\" || 
strings.Contains(oauthClient.cfg.OauthAuthorizationURL, oauthClient.cfg.Host)) &&\n\t\t(oauthClient.cfg.OauthTokenRequestURL == \"\" || strings.Contains(oauthClient.cfg.OauthTokenRequestURL, oauthClient.cfg.Host))\n}\n\nfunc (oauthClient *oauthClient) authorizationURL() string {\n\treturn cmp.Or(oauthClient.cfg.OauthAuthorizationURL, oauthClient.defaultAuthorizationURL())\n}\n\nfunc (oauthClient *oauthClient) defaultAuthorizationURL() string {\n\treturn fmt.Sprintf(\"%v://%v:%v/oauth/authorize\", oauthClient.cfg.Protocol, oauthClient.cfg.Host, oauthClient.cfg.Port)\n}\n\nfunc (oauthClient *oauthClient) tokenURL() string {\n\treturn cmp.Or(oauthClient.cfg.OauthTokenRequestURL, oauthClient.defaultTokenURL())\n}\n\nfunc (oauthClient *oauthClient) defaultTokenURL() string {\n\treturn fmt.Sprintf(\"%v://%v:%v/oauth/token-request\", oauthClient.cfg.Protocol, oauthClient.cfg.Host, oauthClient.cfg.Port)\n}\n\nfunc (oauthClient *oauthClient) buildRedirectURI(port int) string {\n\tif oauthClient.cfg.OauthRedirectURI != \"\" {\n\t\treturn oauthClient.cfg.OauthRedirectURI\n\t}\n\treturn fmt.Sprintf(oauthClient.redirectURITemplate, port)\n}\n\nfunc (oauthClient *oauthClient) buildScopes() []string {\n\tif oauthClient.cfg.OauthScope == \"\" {\n\t\treturn []string{\"session:role:\" + oauthClient.cfg.Role}\n\t}\n\tscopes := strings.Split(oauthClient.cfg.OauthScope, \" \")\n\tfor i, scope := range scopes {\n\t\tscopes[i] = strings.TrimSpace(scope)\n\t}\n\treturn scopes\n}\n\nfunc handleOAuthSocket(tcpListener *net.TCPListener, successChan chan []byte, errChan chan error, responseBodyChan chan string, closeListenerChan chan bool) {\n\tconn, err := tcpListener.AcceptTCP()\n\tif err != nil {\n\t\tlogger.Warnf(\"error creating socket. %v\", err)\n\t\treturn\n\t}\n\tdefer func() {\n\t\tif err := conn.Close(); err != nil {\n\t\t\tlogger.Warnf(\"error while closing connection (%v -> %v). 
%v\", conn.LocalAddr(), conn.RemoteAddr(), err)\n\t\t}\n\t}()\n\tvar buf [bufSize]byte\n\tcodeResp := bytes.NewBuffer(nil)\n\tfor {\n\t\treadBytes, err := conn.Read(buf[:])\n\t\tif err == io.EOF {\n\t\t\tbreak\n\t\t}\n\t\tif err != nil {\n\t\t\terrChan <- err\n\t\t\treturn\n\t\t}\n\t\tcodeResp.Write(buf[0:readBytes])\n\t\tif readBytes < bufSize {\n\t\t\tbreak\n\t\t}\n\t}\n\n\terrChan <- nil\n\tsuccessChan <- codeResp.Bytes()\n\n\tresponseBody := <-responseBodyChan\n\trespToBrowser, err := buildResponse(responseBody)\n\tif err != nil {\n\t\tlogger.Warnf(\"cannot create response to browser. %v\", err)\n\t}\n\t_, err = conn.Write(respToBrowser.Bytes())\n\tif err != nil {\n\t\tlogger.Warnf(\"cannot write response to browser. %v\", err)\n\t}\n\tcloseListenerChan <- true\n}\n\ntype authorizationCodeProvider interface {\n\trun(authorizationURL string) error\n\tcreateState() string\n\tcreateCodeVerifier() string\n}\n\ntype browserBasedAuthorizationCodeProvider struct {\n}\n\nfunc (provider *browserBasedAuthorizationCodeProvider) run(authorizationURL string) error {\n\treturn openBrowser(authorizationURL)\n}\n\nfunc (provider *browserBasedAuthorizationCodeProvider) createState() string {\n\treturn NewUUID().String()\n}\n\nfunc (provider *browserBasedAuthorizationCodeProvider) createCodeVerifier() string {\n\treturn oauth2.GenerateVerifier()\n}\n\nfunc (oauthClient *oauthClient) authenticateByOAuthClientCredentials() (string, error) {\n\taccessTokenSpec := oauthClient.accessTokenSpec()\n\tif oauthClient.cfg.ClientStoreTemporaryCredential == ConfigBoolTrue {\n\t\tif accessToken := credentialsStorage.getCredential(accessTokenSpec); accessToken != \"\" {\n\t\t\treturn accessToken, nil\n\t\t}\n\t}\n\toauth2Cfg, err := oauthClient.buildClientCredentialsConfig()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\ttoken, err := oauth2Cfg.Token(oauthClient.ctx)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tif oauthClient.cfg.ClientStoreTemporaryCredential == ConfigBoolTrue 
{\n\t\tcredentialsStorage.setCredential(accessTokenSpec, token.AccessToken)\n\t}\n\treturn token.AccessToken, nil\n}\n\nfunc (oauthClient *oauthClient) buildClientCredentialsConfig() (*clientcredentials.Config, error) {\n\tif oauthClient.cfg.OauthTokenRequestURL == \"\" {\n\t\treturn nil, errors.New(\"client credentials flow requires tokenRequestURL\")\n\t}\n\treturn &clientcredentials.Config{\n\t\tClientID:     oauthClient.cfg.OauthClientID,\n\t\tClientSecret: oauthClient.cfg.OauthClientSecret,\n\t\tTokenURL:     oauthClient.cfg.OauthTokenRequestURL,\n\t\tScopes:       oauthClient.buildScopes(),\n\t}, nil\n}\n\nfunc (oauthClient *oauthClient) refreshToken() error {\n\tif oauthClient.cfg.ClientStoreTemporaryCredential != ConfigBoolTrue {\n\t\tlogger.Debug(\"credentials storage is disabled, cannot use refresh tokens\")\n\t\treturn nil\n\t}\n\trefreshTokenSpec := newOAuthRefreshTokenSpec(oauthClient.cfg.OauthTokenRequestURL, oauthClient.cfg.User)\n\trefreshToken := credentialsStorage.getCredential(refreshTokenSpec)\n\tif refreshToken == \"\" {\n\t\tlogger.Debug(\"no refresh token in cache, full flow must be run\")\n\t\treturn nil\n\t}\n\tbody := url.Values{}\n\tbody.Add(\"grant_type\", \"refresh_token\")\n\tbody.Add(\"refresh_token\", refreshToken)\n\tbody.Add(\"scope\", strings.Join(oauthClient.buildScopes(), \" \"))\n\treq, err := http.NewRequest(\"POST\", oauthClient.tokenURL(), strings.NewReader(body.Encode()))\n\tif err != nil {\n\t\treturn err\n\t}\n\treq.SetBasicAuth(oauthClient.cfg.OauthClientID, oauthClient.cfg.OauthClientSecret)\n\treq.Header.Add(\"Content-Type\", \"application/x-www-form-urlencoded\")\n\tresp, err := oauthClient.client.Do(req)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer func() {\n\t\tif err := resp.Body.Close(); err != nil {\n\t\t\tlogger.Warnf(\"error while closing response body for %v. 
%v\", req.URL, err)\n\t\t}\n\t}()\n\tif resp.StatusCode != 200 {\n\t\trespBody, err := io.ReadAll(resp.Body)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tcredentialsStorage.deleteCredential(refreshTokenSpec)\n\t\treturn errors.New(string(respBody))\n\t}\n\tvar tokenResponse tokenExchangeResponseBody\n\tif err = json.NewDecoder(resp.Body).Decode(&tokenResponse); err != nil {\n\t\treturn err\n\t}\n\taccessTokenSpec := oauthClient.accessTokenSpec()\n\tcredentialsStorage.setCredential(accessTokenSpec, tokenResponse.AccessToken)\n\tif tokenResponse.RefreshToken != \"\" {\n\t\tcredentialsStorage.setCredential(refreshTokenSpec, tokenResponse.RefreshToken)\n\t}\n\treturn nil\n}\n\ntype tokenExchangeResponseBody struct {\n\tAccessToken  string `json:\"access_token,omitempty\"`\n\tRefreshToken string `json:\"refresh_token\"`\n}\n\nfunc (oauthClient *oauthClient) accessTokenSpec() *secureTokenSpec {\n\treturn newOAuthAccessTokenSpec(oauthClient.tokenURL(), oauthClient.cfg.User)\n}\n\nfunc (oauthClient *oauthClient) refreshTokenSpec() *secureTokenSpec {\n\treturn newOAuthRefreshTokenSpec(oauthClient.tokenURL(), oauthClient.cfg.User)\n}\n\nfunc (oauthClient *oauthClient) logIfHTTPInUse(u string) {\n\tparsed, err := url.Parse(u)\n\tif err != nil {\n\t\tlogger.Warnf(\"Cannot parse URL: %v. %v\", u, err)\n\t\treturn\n\t}\n\tif parsed.Scheme == \"http\" {\n\t\tlogger.Warnf(\"OAuth URL uses insecure HTTP protocol: %v\", u)\n\t}\n}\n"
  },
  {
    "path": "auth_oauth_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"errors\"\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n\t\"io\"\n\t\"net/http\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n\n\t\"golang.org/x/oauth2\"\n)\n\nfunc TestUnitOAuthAuthorizationCode(t *testing.T) {\n\tskipOnMac(t, \"keychain requires password\")\n\troundTripper := newCountingRoundTripper(createTestNoRevocationTransport())\n\thttpClient := &http.Client{\n\t\tTransport: roundTripper,\n\t}\n\tcfg := &Config{\n\t\tUser:                           \"testUser\",\n\t\tRole:                           \"ANALYST\",\n\t\tOauthClientID:                  \"testClientId\",\n\t\tOauthClientSecret:              \"testClientSecret\",\n\t\tOauthAuthorizationURL:          wiremock.baseURL() + \"/oauth/authorize\",\n\t\tOauthTokenRequestURL:           wiremock.baseURL() + \"/oauth/token\",\n\t\tOauthRedirectURI:               \"http://localhost:1234/snowflake/oauth-redirect\",\n\t\tTransporter:                    roundTripper,\n\t\tClientStoreTemporaryCredential: ConfigBoolTrue,\n\t\tExternalBrowserTimeout:         time.Duration(sfconfig.DefaultExternalBrowserTimeout),\n\t}\n\tclient, err := newOauthClient(context.WithValue(context.Background(), oauth2.HTTPClient, httpClient), cfg, &snowflakeConn{})\n\tassertNilF(t, err)\n\taccessTokenSpec := newOAuthAccessTokenSpec(wiremock.connectionConfig().OauthTokenRequestURL, wiremock.connectionConfig().User)\n\trefreshTokenSpec := newOAuthRefreshTokenSpec(wiremock.connectionConfig().OauthTokenRequestURL, wiremock.connectionConfig().User)\n\n\tt.Run(\"Success\", func(t *testing.T) {\n\t\tcredentialsStorage.deleteCredential(accessTokenSpec)\n\t\tcredentialsStorage.deleteCredential(refreshTokenSpec)\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/oauth2/authorization_code/successful_flow.json\"))\n\t\tauthCodeProvider := &nonInteractiveAuthorizationCodeProvider{t: t}\n\t\tclient.authorizationCodeProviderFactory = 
func() authorizationCodeProvider {\n\t\t\treturn authCodeProvider\n\t\t}\n\t\ttoken, err := client.authenticateByOAuthAuthorizationCode()\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, token, \"access-token-123\")\n\t\ttime.Sleep(100 * time.Millisecond)\n\t\tauthCodeProvider.assertResponseBodyContains(\"OAuth authentication completed successfully.\")\n\t})\n\n\tt.Run(\"Store access token in cache\", func(t *testing.T) {\n\t\tskipOnMissingHome(t)\n\t\troundTripper.reset()\n\t\tcredentialsStorage.deleteCredential(accessTokenSpec)\n\t\tcredentialsStorage.deleteCredential(refreshTokenSpec)\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/oauth2/authorization_code/successful_flow.json\"))\n\t\tauthCodeProvider := &nonInteractiveAuthorizationCodeProvider{t: t}\n\t\tclient.authorizationCodeProviderFactory = func() authorizationCodeProvider {\n\t\t\treturn authCodeProvider\n\t\t}\n\t\t_, err = client.authenticateByOAuthAuthorizationCode()\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, credentialsStorage.getCredential(accessTokenSpec), \"access-token-123\")\n\t})\n\n\tt.Run(\"Use cache for consecutive calls\", func(t *testing.T) {\n\t\tskipOnMissingHome(t)\n\t\troundTripper.reset()\n\t\tcredentialsStorage.setCredential(accessTokenSpec, \"access-token-123\")\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/oauth2/authorization_code/successful_flow.json\"))\n\t\tauthCodeProvider := &nonInteractiveAuthorizationCodeProvider{t: t}\n\t\tfor range 3 {\n\t\t\tclient, err := newOauthClient(context.WithValue(context.Background(), oauth2.HTTPClient, httpClient), cfg, &snowflakeConn{})\n\t\t\tassertNilF(t, err)\n\t\t\tclient.authorizationCodeProviderFactory = func() authorizationCodeProvider {\n\t\t\t\treturn authCodeProvider\n\t\t\t}\n\t\t\t_, err = client.authenticateByOAuthAuthorizationCode()\n\t\t\tassertNilF(t, err)\n\t\t}\n\t\tassertEqualE(t, authCodeProvider.responseBody, \"\")\n\t\tassertEqualE(t, roundTripper.postReqCount[cfg.OauthTokenRequestURL], 
0)\n\t})\n\n\tt.Run(\"InvalidState\", func(t *testing.T) {\n\t\tcredentialsStorage.deleteCredential(accessTokenSpec)\n\t\tcredentialsStorage.deleteCredential(refreshTokenSpec)\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/oauth2/authorization_code/successful_flow.json\"))\n\t\tauthCodeProvider := &nonInteractiveAuthorizationCodeProvider{\n\t\t\ttamperWithState: true,\n\t\t\tt:               t,\n\t\t}\n\t\tclient.authorizationCodeProviderFactory = func() authorizationCodeProvider {\n\t\t\treturn authCodeProvider\n\t\t}\n\t\t_, err = client.authenticateByOAuthAuthorizationCode()\n\t\tassertEqualE(t, err.Error(), \"invalid oauth state received\")\n\t\ttime.Sleep(100 * time.Millisecond)\n\t\tauthCodeProvider.assertResponseBodyContains(\"invalid oauth state received\")\n\t})\n\n\tt.Run(\"ErrorFromIdPWhileGettingCode\", func(t *testing.T) {\n\t\tcredentialsStorage.deleteCredential(accessTokenSpec)\n\t\tcredentialsStorage.deleteCredential(refreshTokenSpec)\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/oauth2/authorization_code/error_from_idp.json\"))\n\t\tauthCodeProvider := &nonInteractiveAuthorizationCodeProvider{t: t}\n\t\tclient.authorizationCodeProviderFactory = func() authorizationCodeProvider {\n\t\t\treturn authCodeProvider\n\t\t}\n\t\t_, err = client.authenticateByOAuthAuthorizationCode()\n\t\tassertEqualE(t, err.Error(), \"error while getting authentication from oauth: some error. Details: some error desc\")\n\t\ttime.Sleep(100 * time.Millisecond)\n\t\tauthCodeProvider.assertResponseBodyContains(\"error while getting authentication from oauth: some error. 
Details: some error desc\")\n\t})\n\n\tt.Run(\"ErrorFromProviderWhileGettingCode\", func(t *testing.T) {\n\t\tauthCodeProvider := &nonInteractiveAuthorizationCodeProvider{\n\t\t\ttriggerError: \"test error\",\n\t\t}\n\t\tclient.authorizationCodeProviderFactory = func() authorizationCodeProvider {\n\t\t\treturn authCodeProvider\n\t\t}\n\t\t_, err = client.authenticateByOAuthAuthorizationCode()\n\t\tassertEqualE(t, err.Error(), \"test error\")\n\t})\n\n\tt.Run(\"InvalidCode\", func(t *testing.T) {\n\t\tcredentialsStorage.deleteCredential(accessTokenSpec)\n\t\tcredentialsStorage.deleteCredential(refreshTokenSpec)\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/oauth2/authorization_code/invalid_code.json\"))\n\t\tauthCodeProvider := &nonInteractiveAuthorizationCodeProvider{t: t}\n\t\tclient.authorizationCodeProviderFactory = func() authorizationCodeProvider {\n\t\t\treturn authCodeProvider\n\t\t}\n\t\t_, err = client.authenticateByOAuthAuthorizationCode()\n\t\tassertNotNilE(t, err)\n\t\tassertEqualE(t, err.(*oauth2.RetrieveError).ErrorCode, \"invalid_grant\")\n\t\tassertEqualE(t, err.(*oauth2.RetrieveError).ErrorDescription, \"The authorization code is invalid or has expired.\")\n\t\ttime.Sleep(100 * time.Millisecond)\n\t\tauthCodeProvider.assertResponseBodyContains(\"invalid_grant\")\n\t})\n\n\tt.Run(\"timeout\", func(t *testing.T) {\n\t\tcredentialsStorage.deleteCredential(accessTokenSpec)\n\t\tcredentialsStorage.deleteCredential(refreshTokenSpec)\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/oauth2/authorization_code/successful_flow.json\"))\n\t\tclient.cfg.ExternalBrowserTimeout = 2 * time.Second\n\t\tauthCodeProvider := &nonInteractiveAuthorizationCodeProvider{\n\t\t\tsleepTime:    3 * time.Second,\n\t\t\ttriggerError: \"timed out\",\n\t\t\tt:            t,\n\t\t}\n\t\tclient.authorizationCodeProviderFactory = func() authorizationCodeProvider {\n\t\t\treturn authCodeProvider\n\t\t}\n\t\t_, err = 
client.authenticateByOAuthAuthorizationCode()\n\t\tassertNotNilE(t, err)\n\t\tassertStringContainsE(t, err.Error(), \"timed out\")\n\t\ttime.Sleep(2 * time.Second) // awaiting timeout\n\t})\n}\n\nfunc TestUnitOAuthClientCredentials(t *testing.T) {\n\tskipOnMac(t, \"keychain requires password\")\n\tcacheTokenSpec := newOAuthAccessTokenSpec(wiremock.connectionConfig().OauthTokenRequestURL, wiremock.connectionConfig().User)\n\tcrt := newCountingRoundTripper(createTestNoRevocationTransport())\n\thttpClient := http.Client{\n\t\tTransport: crt,\n\t}\n\tcfgFactory := func() *Config {\n\t\treturn &Config{\n\t\t\tUser:                           \"testUser\",\n\t\t\tRole:                           \"ANALYST\",\n\t\t\tOauthClientID:                  \"testClientId\",\n\t\t\tOauthClientSecret:              \"testClientSecret\",\n\t\t\tOauthTokenRequestURL:           wiremock.baseURL() + \"/oauth/token\",\n\t\t\tTransporter:                    crt,\n\t\t\tClientStoreTemporaryCredential: ConfigBoolTrue,\n\t\t}\n\t}\n\tclient, err := newOauthClient(context.WithValue(context.Background(), oauth2.HTTPClient, httpClient), cfgFactory(), &snowflakeConn{})\n\tassertNilF(t, err)\n\n\tt.Run(\"success\", func(t *testing.T) {\n\t\tcredentialsStorage.deleteCredential(cacheTokenSpec)\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/oauth2/client_credentials/successful_flow.json\"))\n\t\ttoken, err := client.authenticateByOAuthClientCredentials()\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, token, \"access-token-123\")\n\t})\n\n\tt.Run(\"should store token in cache\", func(t *testing.T) {\n\t\tskipOnMissingHome(t)\n\t\tcrt.reset()\n\t\tcredentialsStorage.deleteCredential(cacheTokenSpec)\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/oauth2/client_credentials/successful_flow.json\"))\n\t\ttoken, err := client.authenticateByOAuthClientCredentials()\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, token, \"access-token-123\")\n\n\t\tclient, err := 
newOauthClient(context.Background(), cfgFactory(), &snowflakeConn{})\n\t\tassertNilF(t, err)\n\t\ttoken, err = client.authenticateByOAuthClientCredentials()\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, token, \"access-token-123\")\n\n\t\tassertEqualE(t, crt.postReqCount[cfgFactory().OauthTokenRequestURL], 1)\n\t})\n\n\tt.Run(\"consecutive calls should take token from cache\", func(t *testing.T) {\n\t\tskipOnMissingHome(t)\n\t\tcrt.reset()\n\t\tcredentialsStorage.setCredential(cacheTokenSpec, \"access-token-123\")\n\t\tfor range 3 {\n\t\t\tclient, err := newOauthClient(context.Background(), cfgFactory(), &snowflakeConn{})\n\t\t\tassertNilF(t, err)\n\t\t\ttoken, err := client.authenticateByOAuthClientCredentials()\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, token, \"access-token-123\")\n\t\t}\n\t\tassertEqualE(t, crt.postReqCount[cfgFactory().OauthTokenRequestURL], 0)\n\t})\n\n\tt.Run(\"disabling cache\", func(t *testing.T) {\n\t\tskipOnMissingHome(t)\n\t\tcfg := cfgFactory()\n\t\tcfg.ClientStoreTemporaryCredential = ConfigBoolFalse\n\t\tcredentialsStorage.deleteCredential(cacheTokenSpec)\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/oauth2/client_credentials/successful_flow.json\"))\n\t\tclient, err := newOauthClient(context.Background(), cfg, &snowflakeConn{})\n\t\tassertNilF(t, err)\n\t\ttoken, err := client.authenticateByOAuthClientCredentials()\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, token, \"access-token-123\")\n\n\t\tclient, err = newOauthClient(context.Background(), cfg, &snowflakeConn{})\n\t\tassertNilF(t, err)\n\t\ttoken, err = client.authenticateByOAuthClientCredentials()\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, token, \"access-token-123\")\n\n\t\tassertEqualE(t, crt.postReqCount[cfg.OauthTokenRequestURL], 2)\n\t})\n\n\tt.Run(\"invalid_client\", func(t *testing.T) {\n\t\tcredentialsStorage.deleteCredential(cacheTokenSpec)\n\t\twiremock.registerMappings(t, 
newWiremockMapping(\"auth/oauth2/client_credentials/invalid_client.json\"))\n\t\t_, err = client.authenticateByOAuthClientCredentials()\n\t\tassertNotNilF(t, err)\n\t\toauth2Err := err.(*oauth2.RetrieveError)\n\t\tassertEqualE(t, oauth2Err.ErrorCode, \"invalid_client\")\n\t\tassertEqualE(t, oauth2Err.ErrorDescription, \"The client secret supplied for a confidential client is invalid.\")\n\t})\n}\n\nfunc TestAuthorizationCodeFlow(t *testing.T) {\n\tif runningOnGithubAction() && runningOnLinux() {\n\t\tt.Skip(\"Github blocks writing to file system\")\n\t}\n\tskipOnMac(t, \"keychain requires password\")\n\tcurrentDefaultAuthorizationCodeProviderFactory := defaultAuthorizationCodeProviderFactory\n\tdefer func() {\n\t\tdefaultAuthorizationCodeProviderFactory = currentDefaultAuthorizationCodeProviderFactory\n\t}()\n\tdefaultAuthorizationCodeProviderFactory = func() authorizationCodeProvider {\n\t\treturn &nonInteractiveAuthorizationCodeProvider{\n\t\t\tt:  t,\n\t\t\tmu: sync.Mutex{},\n\t\t}\n\t}\n\troundTripper := newCountingRoundTripper(createTestNoRevocationTransport())\n\n\tt.Run(\"successful flow\", func(t *testing.T) {\n\t\twiremock.registerMappings(t,\n\t\t\tnewWiremockMapping(\"auth/oauth2/authorization_code/successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"))\n\t\tcfg := wiremock.connectionConfig()\n\t\tcfg.Role = \"ANALYST\"\n\t\tcfg.Authenticator = AuthTypeOAuthAuthorizationCode\n\t\tcfg.OauthRedirectURI = \"http://localhost:1234/snowflake/oauth-redirect\"\n\t\tcfg.Transporter = roundTripper\n\t\toauthAccessTokenSpec := newOAuthAccessTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\toauthRefreshTokenSpec := newOAuthRefreshTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\tcredentialsStorage.deleteCredential(oauthAccessTokenSpec)\n\t\tcredentialsStorage.deleteCredential(oauthRefreshTokenSpec)\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tdb := 
sql.OpenDB(connector)\n\t\trunSmokeQuery(t, db)\n\t})\n\n\tt.Run(\"successful flow with multiple threads\", func(t *testing.T) {\n\t\tfor _, singleAuthenticationPrompt := range []ConfigBool{ConfigBoolFalse, ConfigBoolTrue, configBoolNotSet} {\n\t\t\tt.Run(\"singleAuthenticationPrompt=\"+singleAuthenticationPrompt.String(), func(t *testing.T) {\n\t\t\t\tcurrentDefaultAuthorizationCodeProviderFactory := defaultAuthorizationCodeProviderFactory\n\t\t\t\tdefer func() {\n\t\t\t\t\tdefaultAuthorizationCodeProviderFactory = currentDefaultAuthorizationCodeProviderFactory\n\t\t\t\t}()\n\t\t\t\tdefaultAuthorizationCodeProviderFactory = func() authorizationCodeProvider {\n\t\t\t\t\treturn &nonInteractiveAuthorizationCodeProvider{\n\t\t\t\t\t\tt:         t,\n\t\t\t\t\t\tmu:        sync.Mutex{},\n\t\t\t\t\t\tsleepTime: 500 * time.Millisecond,\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\troundTripper.reset()\n\t\t\t\twiremock.registerMappings(t,\n\t\t\t\t\tnewWiremockMapping(\"auth/oauth2/authorization_code/successful_flow.json\"),\n\t\t\t\t\tnewWiremockMapping(\"auth/oauth2/login_request.json\"),\n\t\t\t\t\tnewWiremockMapping(\"select1.json\"),\n\t\t\t\t\tnewWiremockMapping(\"close_session.json\"))\n\t\t\t\tcfg := wiremock.connectionConfig()\n\t\t\t\tcfg.Role = \"ANALYST\"\n\t\t\t\tcfg.Authenticator = AuthTypeOAuthAuthorizationCode\n\t\t\t\tcfg.Transporter = roundTripper\n\t\t\t\tcfg.SingleAuthenticationPrompt = singleAuthenticationPrompt\n\t\t\t\toauthAccessTokenSpec := newOAuthAccessTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\t\t\toauthRefreshTokenSpec := newOAuthRefreshTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\t\t\tcredentialsStorage.deleteCredential(oauthAccessTokenSpec)\n\t\t\t\tcredentialsStorage.deleteCredential(oauthRefreshTokenSpec)\n\t\t\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\t\t\tdb := sql.OpenDB(connector)\n\t\t\t\tinitPoolWithSize(t, db, 20)\n\t\t\t\tif 
singleAuthenticationPrompt == ConfigBoolFalse {\n\t\t\t\t\tassertTrueE(t, roundTripper.postReqCount[cfg.OauthTokenRequestURL] > 1)\n\t\t\t\t} else {\n\t\t\t\t\tassertEqualE(t, roundTripper.postReqCount[cfg.OauthTokenRequestURL], 1)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n\n\tt.Run(\"successful flow with single-use refresh token enabled\", func(t *testing.T) {\n\t\twiremock.registerMappings(t,\n\t\t\tnewWiremockMapping(\"auth/oauth2/authorization_code/successful_flow_with_single_use_refresh_token.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"))\n\t\tcfg := wiremock.connectionConfig()\n\t\tcfg.Role = \"ANALYST\"\n\t\tcfg.Authenticator = AuthTypeOAuthAuthorizationCode\n\t\tcfg.OauthRedirectURI = \"http://localhost:1234/snowflake/oauth-redirect\"\n\t\tcfg.Transporter = roundTripper\n\t\tcfg.EnableSingleUseRefreshTokens = true\n\t\toauthAccessTokenSpec := newOAuthAccessTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\toauthRefreshTokenSpec := newOAuthRefreshTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\tcredentialsStorage.deleteCredential(oauthAccessTokenSpec)\n\t\tcredentialsStorage.deleteCredential(oauthRefreshTokenSpec)\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tdb := sql.OpenDB(connector)\n\t\trunSmokeQuery(t, db)\n\t})\n\n\tt.Run(\"should use cached access token\", func(t *testing.T) {\n\t\troundTripper.reset()\n\t\twiremock.registerMappings(t,\n\t\t\tnewWiremockMapping(\"auth/oauth2/authorization_code/successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"))\n\t\tcfg := wiremock.connectionConfig()\n\t\tcfg.Role = \"ANALYST\"\n\t\tcfg.Authenticator = AuthTypeOAuthAuthorizationCode\n\t\tcfg.OauthRedirectURI = \"http://localhost:1234/snowflake/oauth-redirect\"\n\t\tcfg.Transporter = roundTripper\n\t\toauthAccessTokenSpec := newOAuthAccessTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\toauthRefreshTokenSpec 
:= newOAuthRefreshTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\tcredentialsStorage.deleteCredential(oauthAccessTokenSpec)\n\t\tcredentialsStorage.deleteCredential(oauthRefreshTokenSpec)\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tdb := sql.OpenDB(connector)\n\t\tconn1, err := db.Conn(context.Background())\n\t\tassertNilF(t, err)\n\t\tdefer conn1.Close()\n\t\tconn2, err := db.Conn(context.Background())\n\t\tassertNilF(t, err)\n\t\tdefer conn2.Close()\n\t\trunSmokeQueryWithConn(t, conn1)\n\t\trunSmokeQueryWithConn(t, conn2)\n\t\tassertEqualE(t, roundTripper.postReqCount[cfg.OauthTokenRequestURL], 1)\n\t})\n\n\tt.Run(\"should update cache with new token when the old one expired if refresh token is missing\", func(t *testing.T) {\n\t\troundTripper.reset()\n\t\twiremock.registerMappings(t,\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request_with_expired_access_token.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/authorization_code/successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"))\n\t\tcfg := wiremock.connectionConfig()\n\t\tcfg.Role = \"ANALYST\"\n\t\tcfg.Authenticator = AuthTypeOAuthAuthorizationCode\n\t\tcfg.OauthRedirectURI = \"http://localhost:1234/snowflake/oauth-redirect\"\n\t\tcfg.Transporter = roundTripper\n\t\toauthAccessTokenSpec := newOAuthAccessTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\toauthRefreshTokenSpec := newOAuthRefreshTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\tcredentialsStorage.setCredential(oauthAccessTokenSpec, \"expired-token\")\n\t\tcredentialsStorage.deleteCredential(oauthRefreshTokenSpec)\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tdb := sql.OpenDB(connector)\n\t\trunSmokeQuery(t, db)\n\t\tassertEqualE(t, roundTripper.postReqCount[cfg.OauthTokenRequestURL], 1)\n\t\tassertEqualE(t, credentialsStorage.getCredential(oauthAccessTokenSpec), \"access-token-123\")\n\t})\n\n\tt.Run(\"if access token is missing 
and refresh token is present, should run refresh token flow\", func(t *testing.T) {\n\t\troundTripper.reset()\n\t\tcfg := wiremock.connectionConfig()\n\t\tcfg.OauthScope = \"session:role:ANALYST offline_access\"\n\t\tcfg.Authenticator = AuthTypeOAuthAuthorizationCode\n\t\tcfg.OauthRedirectURI = \"http://localhost:1234/snowflake/oauth-redirect\"\n\t\tcfg.Transporter = roundTripper\n\t\toauthAccessTokenSpec := newOAuthAccessTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\toauthRefreshTokenSpec := newOAuthRefreshTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\tcredentialsStorage.deleteCredential(oauthAccessTokenSpec)\n\t\tcredentialsStorage.setCredential(oauthRefreshTokenSpec, \"refresh-token-123\")\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/oauth2/login_request_with_expired_access_token.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/refresh_token/successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/authorization_code/successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"))\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tdb := sql.OpenDB(connector)\n\t\trunSmokeQuery(t, db)\n\t\tassertEqualE(t, roundTripper.postReqCount[cfg.OauthTokenRequestURL], 1) // only refresh token\n\t\tassertEqualE(t, credentialsStorage.getCredential(oauthAccessTokenSpec), \"access-token-123\")\n\t\tassertEqualE(t, credentialsStorage.getCredential(oauthRefreshTokenSpec), \"refresh-token-123a\")\n\t})\n\n\tt.Run(\"if access token is expired and refresh token is present, should run refresh token flow\", func(t *testing.T) {\n\t\troundTripper.reset()\n\t\tcfg := wiremock.connectionConfig()\n\t\tcfg.OauthScope = \"session:role:ANALYST offline_access\"\n\t\tcfg.Authenticator = AuthTypeOAuthAuthorizationCode\n\t\tcfg.OauthRedirectURI = \"http://localhost:1234/snowflake/oauth-redirect\"\n\t\tcfg.Transporter = roundTripper\n\t\toauthAccessTokenSpec := 
newOAuthAccessTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\toauthRefreshTokenSpec := newOAuthRefreshTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\tcredentialsStorage.setCredential(oauthAccessTokenSpec, \"expired-token\")\n\t\tcredentialsStorage.setCredential(oauthRefreshTokenSpec, \"refresh-token-123\")\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/oauth2/login_request_with_expired_access_token.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/refresh_token/successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/authorization_code/successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"))\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tdb := sql.OpenDB(connector)\n\t\trunSmokeQuery(t, db)\n\t\tassertEqualE(t, roundTripper.postReqCount[cfg.OauthTokenRequestURL], 1) // only refresh token\n\t\tassertEqualE(t, credentialsStorage.getCredential(oauthAccessTokenSpec), \"access-token-123\")\n\t\tassertEqualE(t, credentialsStorage.getCredential(oauthRefreshTokenSpec), \"refresh-token-123a\")\n\t})\n\n\tt.Run(\"if new refresh token is not returned, should keep old one\", func(t *testing.T) {\n\t\troundTripper.reset()\n\t\tcfg := wiremock.connectionConfig()\n\t\tcfg.OauthScope = \"session:role:ANALYST offline_access\"\n\t\tcfg.Authenticator = AuthTypeOAuthAuthorizationCode\n\t\tcfg.OauthRedirectURI = \"http://localhost:1234/snowflake/oauth-redirect\"\n\t\tcfg.Transporter = roundTripper\n\t\toauthAccessTokenSpec := newOAuthAccessTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\toauthRefreshTokenSpec := newOAuthRefreshTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\tcredentialsStorage.setCredential(oauthAccessTokenSpec, \"expired-token\")\n\t\tcredentialsStorage.setCredential(oauthRefreshTokenSpec, \"refresh-token-123\")\n\t\twiremock.registerMappings(t, 
newWiremockMapping(\"auth/oauth2/login_request_with_expired_access_token.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/refresh_token/successful_flow_without_new_refresh_token.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/authorization_code/successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"))\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tdb := sql.OpenDB(connector)\n\t\trunSmokeQuery(t, db)\n\t\tassertEqualE(t, roundTripper.postReqCount[cfg.OauthTokenRequestURL], 1) // only refresh token\n\t\tassertEqualE(t, credentialsStorage.getCredential(oauthAccessTokenSpec), \"access-token-123\")\n\t\tassertEqualE(t, credentialsStorage.getCredential(oauthRefreshTokenSpec), \"refresh-token-123\")\n\t})\n\n\tt.Run(\"if refreshing token failed, run normal flow\", func(t *testing.T) {\n\t\troundTripper.reset()\n\t\tcfg := wiremock.connectionConfig()\n\t\tcfg.OauthScope = \"session:role:ANALYST offline_access\"\n\t\tcfg.Authenticator = AuthTypeOAuthAuthorizationCode\n\t\tcfg.OauthRedirectURI = \"http://localhost:1234/snowflake/oauth-redirect\"\n\t\tcfg.Transporter = roundTripper\n\t\toauthAccessTokenSpec := newOAuthAccessTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\toauthRefreshTokenSpec := newOAuthRefreshTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\tcredentialsStorage.setCredential(oauthAccessTokenSpec, \"expired-token\")\n\t\tcredentialsStorage.setCredential(oauthRefreshTokenSpec, \"expired-refresh-token\")\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/oauth2/login_request_with_expired_access_token.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/refresh_token/invalid_refresh_token.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/authorization_code/successful_flow_with_offline_access.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"))\n\t\tconnector := NewConnector(SnowflakeDriver{}, 
*cfg)\n\t\tdb := sql.OpenDB(connector)\n\t\trunSmokeQuery(t, db)\n\t\tassertEqualE(t, roundTripper.postReqCount[cfg.OauthTokenRequestURL], 2) // refresh token fails, then authorization code\n\t\tassertEqualE(t, credentialsStorage.getCredential(oauthAccessTokenSpec), \"access-token-123\")\n\t\tassertEqualE(t, credentialsStorage.getCredential(oauthRefreshTokenSpec), \"refresh-token-123\")\n\t})\n\n\tt.Run(\"if secure storage is disabled, run normal flow\", func(t *testing.T) {\n\t\troundTripper.reset()\n\t\tcfg := wiremock.connectionConfig()\n\t\tcfg.OauthScope = \"session:role:ANALYST offline_access\"\n\t\tcfg.Authenticator = AuthTypeOAuthAuthorizationCode\n\t\tcfg.OauthRedirectURI = \"http://localhost:1234/snowflake/oauth-redirect\"\n\t\tcfg.Transporter = roundTripper\n\t\tcfg.ClientStoreTemporaryCredential = ConfigBoolFalse\n\t\toauthAccessTokenSpec := newOAuthAccessTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\toauthRefreshTokenSpec := newOAuthRefreshTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\t\tcredentialsStorage.setCredential(oauthAccessTokenSpec, \"old-access-token\")\n\t\tcredentialsStorage.setCredential(oauthRefreshTokenSpec, \"old-refresh-token\")\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/oauth2/authorization_code/successful_flow_with_offline_access.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"))\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tdb := sql.OpenDB(connector)\n\t\trunSmokeQuery(t, db)\n\t\tassertEqualE(t, roundTripper.postReqCount[cfg.OauthTokenRequestURL], 1) // only access token request\n\t\tassertEqualE(t, credentialsStorage.getCredential(oauthAccessTokenSpec), \"old-access-token\")\n\t\tassertEqualE(t, credentialsStorage.getCredential(oauthRefreshTokenSpec), \"old-refresh-token\")\n\t})\n}\n\nfunc TestClientCredentialsFlow(t *testing.T) {\n\tif runningOnGithubAction() && runningOnLinux() {\n\t\tt.Skip(\"GitHub blocks writing to 
file system\")\n\t}\n\tskipOnMac(t, \"keychain requires password\")\n\tcurrentDefaultAuthorizationCodeProviderFactory := defaultAuthorizationCodeProviderFactory\n\tdefer func() {\n\t\tdefaultAuthorizationCodeProviderFactory = currentDefaultAuthorizationCodeProviderFactory\n\t}()\n\tdefaultAuthorizationCodeProviderFactory = func() authorizationCodeProvider {\n\t\treturn &nonInteractiveAuthorizationCodeProvider{\n\t\t\tt:  t,\n\t\t\tmu: sync.Mutex{},\n\t\t}\n\t}\n\troundTripper := newCountingRoundTripper(createTestNoRevocationTransport())\n\n\tcfg := wiremock.connectionConfig()\n\tcfg.Role = \"ANALYST\"\n\tcfg.Authenticator = AuthTypeOAuthClientCredentials\n\tcfg.Transporter = roundTripper\n\n\toauthAccessTokenSpec := newOAuthAccessTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\toauthRefreshTokenSpec := newOAuthRefreshTokenSpec(cfg.OauthTokenRequestURL, cfg.User)\n\n\tt.Run(\"successful flow\", func(t *testing.T) {\n\t\tcredentialsStorage.deleteCredential(oauthAccessTokenSpec)\n\t\twiremock.registerMappings(t,\n\t\t\tnewWiremockMapping(\"auth/oauth2/client_credentials/successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"))\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tdb := sql.OpenDB(connector)\n\t\trunSmokeQuery(t, db)\n\t})\n\n\tt.Run(\"should use cached access token\", func(t *testing.T) {\n\t\troundTripper.reset()\n\t\twiremock.registerMappings(t,\n\t\t\tnewWiremockMapping(\"auth/oauth2/client_credentials/successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"))\n\t\tcredentialsStorage.deleteCredential(oauthAccessTokenSpec)\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tdb := sql.OpenDB(connector)\n\t\tconn1, err := db.Conn(context.Background())\n\t\tassertNilF(t, err)\n\t\tdefer conn1.Close()\n\t\tconn2, err := db.Conn(context.Background())\n\t\tassertNilF(t, err)\n\t\tdefer 
conn2.Close()\n\t\trunSmokeQueryWithConn(t, conn1)\n\t\trunSmokeQueryWithConn(t, conn2)\n\t\tassertEqualE(t, roundTripper.postReqCount[cfg.OauthTokenRequestURL], 1)\n\t})\n\n\tt.Run(\"should update cache with new token when the old one expired\", func(t *testing.T) {\n\t\troundTripper.reset()\n\t\twiremock.registerMappings(t,\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request_with_expired_access_token.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/client_credentials/successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"))\n\n\t\tcredentialsStorage.setCredential(oauthAccessTokenSpec, \"expired-token\")\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tdb := sql.OpenDB(connector)\n\t\trunSmokeQuery(t, db)\n\t\tassertEqualE(t, roundTripper.postReqCount[cfg.OauthTokenRequestURL], 1)\n\t\tassertEqualE(t, credentialsStorage.getCredential(oauthAccessTokenSpec), \"access-token-123\")\n\t})\n\n\tt.Run(\"should not use refresh token, but ask for fresh access token\", func(t *testing.T) {\n\t\troundTripper.reset()\n\t\twiremock.registerMappings(t,\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request_with_expired_access_token.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/client_credentials/successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"))\n\n\t\tcredentialsStorage.setCredential(oauthAccessTokenSpec, \"expired-token\")\n\t\tcredentialsStorage.setCredential(oauthRefreshTokenSpec, \"refresh-token-123\")\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tdb := sql.OpenDB(connector)\n\t\trunSmokeQuery(t, db)\n\t\tassertEqualE(t, roundTripper.postReqCount[cfg.OauthTokenRequestURL], 1)\n\t\tassertEqualE(t, credentialsStorage.getCredential(oauthAccessTokenSpec), \"access-token-123\")\n\t\tassertEqualE(t, credentialsStorage.getCredential(oauthRefreshTokenSpec), 
\"refresh-token-123\")\n\t})\n\n\tt.Run(\"should not use access token if token cache is disabled\", func(t *testing.T) {\n\t\troundTripper.reset()\n\t\twiremock.registerMappings(t,\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request_with_expired_access_token.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/client_credentials/successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"auth/oauth2/login_request.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"))\n\n\t\tcredentialsStorage.setCredential(oauthAccessTokenSpec, \"access-token-123\")\n\t\tcfg.ClientStoreTemporaryCredential = ConfigBoolFalse\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tdb := sql.OpenDB(connector)\n\t\trunSmokeQuery(t, db)\n\t\tassertEqualE(t, roundTripper.postReqCount[cfg.OauthTokenRequestURL], 1)\n\t\tassertEqualE(t, credentialsStorage.getCredential(oauthAccessTokenSpec), \"access-token-123\")\n\t})\n}\n\nfunc TestEligibleForDefaultClientCredentials(t *testing.T) {\n\ttests := []struct {\n\t\tname        string\n\t\toauthClient *oauthClient\n\t\texpected    bool\n\t}{\n\t\t{\n\t\t\tname: \"Client credentials not supplied and Snowflake as IdP\",\n\t\t\toauthClient: &oauthClient{\n\t\t\t\tcfg: &Config{\n\t\t\t\t\tHost:                  \"example.snowflakecomputing.com\",\n\t\t\t\t\tOauthClientID:         \"\",\n\t\t\t\t\tOauthClientSecret:     \"\",\n\t\t\t\t\tOauthAuthorizationURL: \"https://example.snowflakecomputing.com/oauth/authorize\",\n\t\t\t\t\tOauthTokenRequestURL:  \"https://example.snowflakecomputing.com/oauth/token\",\n\t\t\t\t},\n\t\t\t},\n\t\t\texpected: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Client credentials not supplied and empty URLs (defaults to Snowflake)\",\n\t\t\toauthClient: &oauthClient{\n\t\t\t\tcfg: &Config{\n\t\t\t\t\tHost:                  \"example.snowflakecomputing.com\",\n\t\t\t\t\tOauthClientID:         \"\",\n\t\t\t\t\tOauthClientSecret:     \"\",\n\t\t\t\t\tOauthAuthorizationURL: \"\",\n\t\t\t\t\tOauthTokenRequestURL:  
\"\",\n\t\t\t\t},\n\t\t\t},\n\t\t\texpected: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Client credentials supplied\",\n\t\t\toauthClient: &oauthClient{\n\t\t\t\tcfg: &Config{\n\t\t\t\t\tHost:                  \"example.snowflakecomputing.com\",\n\t\t\t\t\tOauthClientID:         \"testClientID\",\n\t\t\t\t\tOauthClientSecret:     \"testClientSecret\",\n\t\t\t\t\tOauthAuthorizationURL: \"https://example.snowflakecomputing.com/oauth/authorize\",\n\t\t\t\t\tOauthTokenRequestURL:  \"https://example.snowflakecomputing.com/oauth/token\",\n\t\t\t\t},\n\t\t\t},\n\t\t\texpected: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Only client ID supplied\",\n\t\t\toauthClient: &oauthClient{\n\t\t\t\tcfg: &Config{\n\t\t\t\t\tHost:                  \"example.snowflakecomputing.com\",\n\t\t\t\t\tOauthClientID:         \"testClientID\",\n\t\t\t\t\tOauthClientSecret:     \"\",\n\t\t\t\t\tOauthAuthorizationURL: \"https://example.snowflakecomputing.com/oauth/authorize\",\n\t\t\t\t\tOauthTokenRequestURL:  \"https://example.snowflakecomputing.com/oauth/token\",\n\t\t\t\t},\n\t\t\t},\n\t\t\texpected: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Non-Snowflake IdP\",\n\t\t\toauthClient: &oauthClient{\n\t\t\t\tcfg: &Config{\n\t\t\t\t\tHost:                  \"example.snowflakecomputing.com\",\n\t\t\t\t\tOauthClientID:         \"\",\n\t\t\t\t\tOauthClientSecret:     \"\",\n\t\t\t\t\tOauthAuthorizationURL: \"https://example.com/oauth/authorize\",\n\t\t\t\t\tOauthTokenRequestURL:  \"https://example.com/oauth/token\",\n\t\t\t\t},\n\t\t\t},\n\t\t\texpected: false,\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tresult := test.oauthClient.eligibleForDefaultClientCredentials()\n\t\t\tif result != test.expected {\n\t\t\t\tt.Errorf(\"expected %v, got %v\", test.expected, result)\n\t\t\t}\n\t\t})\n\t}\n}\n\ntype nonInteractiveAuthorizationCodeProvider struct {\n\tt               *testing.T\n\ttamperWithState bool\n\ttriggerError    string\n\tresponseBody    string\n\tmu          
    sync.Mutex\n\tsleepTime       time.Duration\n}\n\nfunc (provider *nonInteractiveAuthorizationCodeProvider) run(authorizationURL string) error {\n\tif provider.sleepTime != 0 {\n\t\ttime.Sleep(provider.sleepTime)\n\t\tif provider.triggerError != \"\" {\n\t\t\treturn errors.New(provider.triggerError)\n\t\t}\n\t}\n\tif provider.triggerError != \"\" {\n\t\treturn errors.New(provider.triggerError)\n\t}\n\tgo func() {\n\t\tresp, err := http.Get(authorizationURL)\n\t\tassertNilF(provider.t, err)\n\t\tassertEqualE(provider.t, resp.StatusCode, http.StatusOK)\n\t\trespBody, err := io.ReadAll(resp.Body)\n\t\tassertNilF(provider.t, err)\n\t\tprovider.mu.Lock()\n\t\tdefer provider.mu.Unlock()\n\t\tprovider.responseBody = string(respBody)\n\t}()\n\treturn nil\n}\n\nfunc (provider *nonInteractiveAuthorizationCodeProvider) createState() string {\n\tif provider.tamperWithState {\n\t\treturn \"invalidState\"\n\t}\n\treturn \"testState\"\n}\n\nfunc (provider *nonInteractiveAuthorizationCodeProvider) createCodeVerifier() string {\n\treturn \"testCodeVerifier\"\n}\n\nfunc (provider *nonInteractiveAuthorizationCodeProvider) assertResponseBodyContains(str string) {\n\tprovider.mu.Lock()\n\tdefer provider.mu.Unlock()\n\tassertStringContainsE(provider.t, provider.responseBody, str)\n}\n"
  },
  {
    "path": "auth_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"cmp\"\n\t\"context\"\n\t\"crypto/rand\"\n\t\"crypto/rsa\"\n\t\"database/sql\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/golang-jwt/jwt/v5\"\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n)\n\nfunc TestUnitPostAuth(t *testing.T) {\n\tsr := &snowflakeRestful{\n\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t\tFuncAuthPost:  postAuthTestAfterRenew,\n\t}\n\tvar err error\n\tbodyCreator := func() ([]byte, error) {\n\t\treturn []byte{0x12, 0x34}, nil\n\t}\n\t_, err = postAuth(context.Background(), sr, sr.Client, &url.Values{}, make(map[string]string), bodyCreator, 0)\n\tassertNilF(t, err, \"postAuth should succeed after token renewal\")\n\tsr.FuncAuthPost = postAuthTestError\n\t_, err = postAuth(context.Background(), sr, sr.Client, &url.Values{}, make(map[string]string), bodyCreator, 0)\n\tassertNotNilF(t, err, \"should have failed to auth with a generic error\")\n\tsr.FuncAuthPost = postAuthTestAppBadGatewayError\n\t_, err = postAuth(context.Background(), sr, sr.Client, &url.Values{}, make(map[string]string), bodyCreator, 0)\n\tassertNotNilF(t, err, \"should have failed to auth with a bad gateway error\")\n\tsr.FuncAuthPost = postAuthTestAppForbiddenError\n\t_, err = postAuth(context.Background(), sr, sr.Client, &url.Values{}, make(map[string]string), bodyCreator, 0)\n\tassertNotNilF(t, err, \"should have failed to auth with a forbidden error\")\n\tsr.FuncAuthPost = postAuthTestAppUnexpectedError\n\t_, err = postAuth(context.Background(), sr, sr.Client, &url.Values{}, make(map[string]string), bodyCreator, 0)\n\tassertNotNilF(t, err, \"should have failed to auth with an unexpected error\")\n}\n\nfunc postAuthFailServiceIssue(_ context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, _ bodyCreatorType, _ time.Duration) 
(*authResponse, error) {\n\treturn nil, &SnowflakeError{\n\t\tNumber: ErrCodeServiceUnavailable,\n\t}\n}\n\nfunc postAuthFailWrongAccount(_ context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, _ bodyCreatorType, _ time.Duration) (*authResponse, error) {\n\treturn nil, &SnowflakeError{\n\t\tNumber: ErrCodeFailedToConnect,\n\t}\n}\n\nfunc postAuthFailUnknown(_ context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, _ bodyCreatorType, _ time.Duration) (*authResponse, error) {\n\treturn nil, &SnowflakeError{\n\t\tNumber: ErrFailedToAuth,\n\t}\n}\n\nfunc postAuthSuccessWithErrorCode(_ context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, _ bodyCreatorType, _ time.Duration) (*authResponse, error) {\n\treturn &authResponse{\n\t\tSuccess: false,\n\t\tCode:    \"98765\",\n\t\tMessage: \"wrong!\",\n\t}, nil\n}\n\nfunc postAuthSuccessWithInvalidErrorCode(_ context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, _ bodyCreatorType, _ time.Duration) (*authResponse, error) {\n\treturn &authResponse{\n\t\tSuccess: false,\n\t\tCode:    \"abcdef\",\n\t\tMessage: \"wrong!\",\n\t}, nil\n}\n\nfunc postAuthSuccess(_ context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, _ bodyCreatorType, _ time.Duration) (*authResponse, error) {\n\treturn &authResponse{\n\t\tSuccess: true,\n\t\tData: authResponseMain{\n\t\t\tToken:       \"t\",\n\t\t\tMasterToken: \"m\",\n\t\t\tSessionInfo: authResponseSessionInfo{\n\t\t\t\tDatabaseName: \"dbn\",\n\t\t\t},\n\t\t},\n\t}, nil\n}\n\nfunc postAuthCheckSAMLResponse(_ context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, bodyCreator bodyCreatorType, _ time.Duration) (*authResponse, error) {\n\tvar ar authRequest\n\tjsonBody, err := bodyCreator()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif err = json.Unmarshal(jsonBody, &ar); err != nil 
{\n\t\treturn nil, err\n\t}\n\tif ar.Data.RawSAMLResponse == \"\" {\n\t\treturn nil, errors.New(\"SAML response is empty\")\n\t}\n\treturn &authResponse{\n\t\tSuccess: true,\n\t\tData: authResponseMain{\n\t\t\tToken:       \"t\",\n\t\t\tMasterToken: \"m\",\n\t\t\tSessionInfo: authResponseSessionInfo{\n\t\t\t\tDatabaseName: \"dbn\",\n\t\t\t},\n\t\t},\n\t}, nil\n}\n\n// Checks that the request body generated when authenticating with OAuth\n// contains all the necessary values.\nfunc postAuthCheckOAuth(\n\t_ context.Context,\n\t_ *snowflakeRestful,\n\t_ *http.Client,\n\t_ *url.Values, _ map[string]string,\n\tbodyCreator bodyCreatorType,\n\t_ time.Duration,\n) (*authResponse, error) {\n\tvar ar authRequest\n\tjsonBody, _ := bodyCreator()\n\tif err := json.Unmarshal(jsonBody, &ar); err != nil {\n\t\treturn nil, err\n\t}\n\tif ar.Data.Authenticator != AuthTypeOAuth.String() {\n\t\treturn nil, errors.New(\"Authenticator is not OAUTH\")\n\t}\n\tif ar.Data.Token == \"\" {\n\t\treturn nil, errors.New(\"Token is empty\")\n\t}\n\tif ar.Data.LoginName == \"\" {\n\t\treturn nil, errors.New(\"Login name is empty\")\n\t}\n\treturn &authResponse{\n\t\tSuccess: true,\n\t\tData: authResponseMain{\n\t\t\tToken:       \"t\",\n\t\t\tMasterToken: \"m\",\n\t\t\tSessionInfo: authResponseSessionInfo{\n\t\t\t\tDatabaseName: \"dbn\",\n\t\t\t},\n\t\t},\n\t}, nil\n}\n\nfunc postAuthCheckPasscode(_ context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, bodyCreator bodyCreatorType, _ time.Duration) (*authResponse, error) {\n\tvar ar authRequest\n\tjsonBody, _ := bodyCreator()\n\tif err := json.Unmarshal(jsonBody, &ar); err != nil {\n\t\treturn nil, err\n\t}\n\tif ar.Data.Passcode != \"987654321\" || ar.Data.ExtAuthnDuoMethod != \"passcode\" {\n\t\treturn nil, fmt.Errorf(\"passcode didn't match. 
expected: 987654321, got: %v, duo: %v\", ar.Data.Passcode, ar.Data.ExtAuthnDuoMethod)\n\t}\n\treturn &authResponse{\n\t\tSuccess: true,\n\t\tData: authResponseMain{\n\t\t\tToken:       \"t\",\n\t\t\tMasterToken: \"m\",\n\t\t\tSessionInfo: authResponseSessionInfo{\n\t\t\t\tDatabaseName: \"dbn\",\n\t\t\t},\n\t\t},\n\t}, nil\n}\n\nfunc postAuthCheckPasscodeInPassword(_ context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, bodyCreator bodyCreatorType, _ time.Duration) (*authResponse, error) {\n\tvar ar authRequest\n\tjsonBody, _ := bodyCreator()\n\tif err := json.Unmarshal(jsonBody, &ar); err != nil {\n\t\treturn nil, err\n\t}\n\tif ar.Data.Passcode != \"\" || ar.Data.ExtAuthnDuoMethod != \"passcode\" {\n\t\treturn nil, fmt.Errorf(\"passcode must be empty, got: %v, duo: %v\", ar.Data.Passcode, ar.Data.ExtAuthnDuoMethod)\n\t}\n\treturn &authResponse{\n\t\tSuccess: true,\n\t\tData: authResponseMain{\n\t\t\tToken:       \"t\",\n\t\t\tMasterToken: \"m\",\n\t\t\tSessionInfo: authResponseSessionInfo{\n\t\t\t\tDatabaseName: \"dbn\",\n\t\t\t},\n\t\t},\n\t}, nil\n}\n\nfunc postAuthCheckUsernamePasswordMfa(_ context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, bodyCreator bodyCreatorType, _ time.Duration) (*authResponse, error) {\n\tvar ar authRequest\n\tjsonBody, _ := bodyCreator()\n\tif err := json.Unmarshal(jsonBody, &ar); err != nil {\n\t\treturn nil, err\n\t}\n\n\tif ar.Data.SessionParameters[\"CLIENT_REQUEST_MFA_TOKEN\"] != true {\n\t\treturn nil, fmt.Errorf(\"expected client_request_mfa_token to be true but was %v\", ar.Data.SessionParameters[\"CLIENT_REQUEST_MFA_TOKEN\"])\n\t}\n\treturn &authResponse{\n\t\tSuccess: true,\n\t\tData: authResponseMain{\n\t\t\tToken:       \"t\",\n\t\t\tMasterToken: \"m\",\n\t\t\tMfaToken:    \"mockedMfaToken\",\n\t\t\tSessionInfo: authResponseSessionInfo{\n\t\t\t\tDatabaseName: \"dbn\",\n\t\t\t},\n\t\t},\n\t}, nil\n}\n\nfunc postAuthCheckUsernamePasswordMfaToken(_ 
context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, bodyCreator bodyCreatorType, _ time.Duration) (*authResponse, error) {\n\tvar ar authRequest\n\tjsonBody, _ := bodyCreator()\n\tif err := json.Unmarshal(jsonBody, &ar); err != nil {\n\t\treturn nil, err\n\t}\n\n\tif ar.Data.Token != \"mockedMfaToken\" {\n\t\treturn nil, fmt.Errorf(\"unexpected mfa token: %v\", ar.Data.Token)\n\t}\n\treturn &authResponse{\n\t\tSuccess: true,\n\t\tData: authResponseMain{\n\t\t\tToken:       \"t\",\n\t\t\tMasterToken: \"m\",\n\t\t\tMfaToken:    \"mockedMfaToken\",\n\t\t\tSessionInfo: authResponseSessionInfo{\n\t\t\t\tDatabaseName: \"dbn\",\n\t\t\t},\n\t\t},\n\t}, nil\n}\n\nfunc postAuthCheckUsernamePasswordMfaFailed(_ context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, bodyCreator bodyCreatorType, _ time.Duration) (*authResponse, error) {\n\tvar ar authRequest\n\tjsonBody, _ := bodyCreator()\n\tif err := json.Unmarshal(jsonBody, &ar); err != nil {\n\t\treturn nil, err\n\t}\n\n\tif ar.Data.Token != \"mockedMfaToken\" {\n\t\treturn nil, fmt.Errorf(\"unexpected mfa token: %v\", ar.Data.Token)\n\t}\n\treturn &authResponse{\n\t\tSuccess: false,\n\t\tData:    authResponseMain{},\n\t\tMessage: \"auth failed\",\n\t\tCode:    \"260008\",\n\t}, nil\n}\n\nfunc postAuthCheckExternalBrowser(_ context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, bodyCreator bodyCreatorType, _ time.Duration) (*authResponse, error) {\n\tvar ar authRequest\n\tjsonBody, _ := bodyCreator()\n\tif err := json.Unmarshal(jsonBody, &ar); err != nil {\n\t\treturn nil, err\n\t}\n\n\tif ar.Data.SessionParameters[\"CLIENT_STORE_TEMPORARY_CREDENTIAL\"] != true {\n\t\treturn nil, fmt.Errorf(\"expected client_store_temporary_credential to be true but was %v\", ar.Data.SessionParameters[\"CLIENT_STORE_TEMPORARY_CREDENTIAL\"])\n\t}\n\treturn &authResponse{\n\t\tSuccess: true,\n\t\tData: authResponseMain{\n\t\t\tToken:     
  \"t\",\n\t\t\tMasterToken: \"m\",\n\t\t\tIDToken:     \"mockedIDToken\",\n\t\t\tSessionInfo: authResponseSessionInfo{\n\t\t\t\tDatabaseName: \"dbn\",\n\t\t\t},\n\t\t},\n\t}, nil\n}\n\nfunc postAuthCheckExternalBrowserToken(_ context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, bodyCreator bodyCreatorType, _ time.Duration) (*authResponse, error) {\n\tvar ar authRequest\n\tjsonBody, _ := bodyCreator()\n\tif err := json.Unmarshal(jsonBody, &ar); err != nil {\n\t\treturn nil, err\n\t}\n\n\tif ar.Data.Token != \"mockedIDToken\" {\n\t\treturn nil, fmt.Errorf(\"unexpected id token: %v\", ar.Data.Token)\n\t}\n\treturn &authResponse{\n\t\tSuccess: true,\n\t\tData: authResponseMain{\n\t\t\tToken:       \"t\",\n\t\t\tMasterToken: \"m\",\n\t\t\tIDToken:     \"mockedIDToken\",\n\t\t\tSessionInfo: authResponseSessionInfo{\n\t\t\t\tDatabaseName: \"dbn\",\n\t\t\t},\n\t\t},\n\t}, nil\n}\n\nfunc postAuthCheckExternalBrowserFailed(_ context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, bodyCreator bodyCreatorType, _ time.Duration) (*authResponse, error) {\n\tvar ar authRequest\n\tjsonBody, _ := bodyCreator()\n\tif err := json.Unmarshal(jsonBody, &ar); err != nil {\n\t\treturn nil, err\n\t}\n\n\tif ar.Data.SessionParameters[\"CLIENT_STORE_TEMPORARY_CREDENTIAL\"] != true {\n\t\treturn nil, fmt.Errorf(\"expected client_store_temporary_credential to be true but was %v\", ar.Data.SessionParameters[\"CLIENT_STORE_TEMPORARY_CREDENTIAL\"])\n\t}\n\treturn &authResponse{\n\t\tSuccess: false,\n\t\tData:    authResponseMain{},\n\t\tMessage: \"auth failed\",\n\t\tCode:    \"260008\",\n\t}, nil\n}\n\ntype restfulTestWrapper struct {\n\tt *testing.T\n}\n\nfunc (rtw restfulTestWrapper) postAuthOktaWithNewToken(_ context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, bodyCreator bodyCreatorType, _ time.Duration) (*authResponse, error) {\n\tvar ar authRequest\n\n\tcfg := 
&Config{\n\t\tAuthenticator: AuthTypeOkta,\n\t}\n\n\t// Retry 3 times and success\n\tclient := &fakeHTTPClient{\n\t\tcnt:        3,\n\t\tsuccess:    true,\n\t\tstatusCode: 429,\n\t\tt:          rtw.t,\n\t}\n\n\turlPtr, err := url.Parse(\"https://fakeaccountretrylogin.snowflakecomputing.com:443/login-request?request_guid=testguid\")\n\tif err != nil {\n\t\treturn &authResponse{}, err\n\t}\n\n\tbody := func() ([]byte, error) {\n\t\tjsonBody, _ := bodyCreator()\n\t\tif err := json.Unmarshal(jsonBody, &ar); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn jsonBody, err\n\t}\n\n\t_, err = newRetryHTTP(context.Background(), client, emptyRequest, urlPtr, make(map[string]string), 60*time.Second, 3, defaultTimeProvider, cfg).doPost().setBodyCreator(body).execute()\n\tif err != nil {\n\t\treturn &authResponse{}, err\n\t}\n\n\treturn &authResponse{\n\t\tSuccess: true,\n\t\tData: authResponseMain{\n\t\t\tToken:       \"t\",\n\t\t\tMasterToken: \"m\",\n\t\t\tMfaToken:    \"mockedMfaToken\",\n\t\t\tSessionInfo: authResponseSessionInfo{\n\t\t\t\tDatabaseName: \"dbn\",\n\t\t\t},\n\t\t},\n\t}, nil\n}\n\nfunc getDefaultSnowflakeConn() *snowflakeConn {\n\tsc := &snowflakeConn{\n\t\trest: &snowflakeRestful{\n\t\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t\t},\n\t\tcfg: &Config{\n\t\t\tAccount:            \"a\",\n\t\t\tUser:               \"u\",\n\t\t\tPassword:           \"p\",\n\t\t\tDatabase:           \"d\",\n\t\t\tSchema:             \"s\",\n\t\t\tWarehouse:          \"w\",\n\t\t\tRole:               \"r\",\n\t\t\tRegion:             \"\",\n\t\t\tPasscodeInPassword: false,\n\t\t\tPasscode:           \"\",\n\t\t\tApplication:        \"testapp\",\n\t\t},\n\t\ttelemetry: &snowflakeTelemetry{enabled: false},\n\t}\n\treturn sc\n}\n\nfunc TestUnitAuthenticateWithTokenAccessor(t *testing.T) {\n\texpectedSessionID := int64(123)\n\texpectedMasterToken := \"master_token\"\n\texpectedToken := \"auth_token\"\n\n\tta := getSimpleTokenAccessor()\n\tta.SetTokens(expectedToken, 
expectedMasterToken, expectedSessionID)\n\tsc := getDefaultSnowflakeConn()\n\tsc.cfg.Authenticator = AuthTypeTokenAccessor\n\tsc.cfg.TokenAccessor = ta\n\tsr := &snowflakeRestful{\n\t\tFuncPostAuth:  postAuthFailServiceIssue,\n\t\tTokenAccessor: ta,\n\t}\n\tsc.rest = sr\n\n\t// FuncPostAuth is set to fail, but AuthTypeTokenAccessor should not even make a call to FuncPostAuth\n\tresp, err := authenticate(context.Background(), sc, []byte{}, []byte{})\n\tif err != nil {\n\t\tt.Fatalf(\"should not have failed, err %v\", err)\n\t}\n\n\tif resp.SessionID != expectedSessionID {\n\t\tt.Fatalf(\"Expected session id %v but got %v\", expectedSessionID, resp.SessionID)\n\t}\n\tif resp.Token != expectedToken {\n\t\tt.Fatalf(\"Expected token %v but got %v\", expectedToken, resp.Token)\n\t}\n\tif resp.MasterToken != expectedMasterToken {\n\t\tt.Fatalf(\"Expected master token %v but got %v\", expectedMasterToken, resp.MasterToken)\n\t}\n\tif resp.SessionInfo.DatabaseName != sc.cfg.Database {\n\t\tt.Fatalf(\"Expected database %v but got %v\", sc.cfg.Database, resp.SessionInfo.DatabaseName)\n\t}\n\tif resp.SessionInfo.WarehouseName != sc.cfg.Warehouse {\n\t\tt.Fatalf(\"Expected warehouse %v but got %v\", sc.cfg.Warehouse, resp.SessionInfo.WarehouseName)\n\t}\n\tif resp.SessionInfo.RoleName != sc.cfg.Role {\n\t\tt.Fatalf(\"Expected role %v but got %v\", sc.cfg.Role, resp.SessionInfo.RoleName)\n\t}\n\tif resp.SessionInfo.SchemaName != sc.cfg.Schema {\n\t\tt.Fatalf(\"Expected schema %v but got %v\", sc.cfg.Schema, resp.SessionInfo.SchemaName)\n\t}\n}\n\nfunc TestUnitAuthenticate(t *testing.T) {\n\tvar err error\n\tvar driverErr *SnowflakeError\n\tvar ok bool\n\n\tta := getSimpleTokenAccessor()\n\tsc := getDefaultSnowflakeConn()\n\tsr := &snowflakeRestful{\n\t\tFuncPostAuth:  postAuthFailServiceIssue,\n\t\tTokenAccessor: ta,\n\t}\n\tsc.rest = sr\n\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tif err == nil {\n\t\tt.Fatal(\"should have 
failed.\")\n\t}\n\tdriverErr, ok = err.(*SnowflakeError)\n\tif !ok || driverErr.Number != ErrCodeServiceUnavailable {\n\t\tt.Fatalf(\"Snowflake error is expected. err: %v\", driverErr)\n\t}\n\tsr.FuncPostAuth = postAuthFailWrongAccount\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tif err == nil {\n\t\tt.Fatal(\"should have failed.\")\n\t}\n\tdriverErr, ok = err.(*SnowflakeError)\n\tif !ok || driverErr.Number != ErrCodeFailedToConnect {\n\t\tt.Fatalf(\"Snowflake error is expected. err: %v\", driverErr)\n\t}\n\tsr.FuncPostAuth = postAuthFailUnknown\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tif err == nil {\n\t\tt.Fatal(\"should have failed.\")\n\t}\n\tdriverErr, ok = err.(*SnowflakeError)\n\tif !ok || driverErr.Number != ErrFailedToAuth {\n\t\tt.Fatalf(\"Snowflake error is expected. err: %v\", driverErr)\n\t}\n\tta.SetTokens(\"bad-token\", \"bad-master-token\", 1)\n\tsr.FuncPostAuth = postAuthSuccessWithErrorCode\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tif err == nil {\n\t\tt.Fatal(\"should have failed.\")\n\t}\n\tnewToken, newMasterToken, newSessionID := ta.GetTokens()\n\tif newToken != \"\" || newMasterToken != \"\" || newSessionID != -1 {\n\t\tt.Fatalf(\"failed auth should have reset tokens: %v %v %v\", newToken, newMasterToken, newSessionID)\n\t}\n\tdriverErr, ok = err.(*SnowflakeError)\n\tif !ok || driverErr.Number != 98765 {\n\t\tt.Fatalf(\"Snowflake error is expected. 
err: %v\", driverErr)\n\t}\n\tta.SetTokens(\"bad-token\", \"bad-master-token\", 1)\n\tsr.FuncPostAuth = postAuthSuccessWithInvalidErrorCode\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tif err == nil {\n\t\tt.Fatal(\"should have failed.\")\n\t}\n\toldToken, oldMasterToken, oldSessionID := ta.GetTokens()\n\tif oldToken != \"\" || oldMasterToken != \"\" || oldSessionID != -1 {\n\t\tt.Fatalf(\"failed auth should have reset tokens: %v %v %v\", oldToken, oldMasterToken, oldSessionID)\n\t}\n\tsr.FuncPostAuth = postAuthSuccess\n\tvar resp *authResponseMain\n\tresp, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tif err != nil {\n\t\tt.Fatalf(\"failed to auth. err: %v\", err)\n\t}\n\tif resp.SessionInfo.DatabaseName != \"dbn\" {\n\t\tt.Fatalf(\"failed to get response from auth\")\n\t}\n\tnewToken, newMasterToken, newSessionID = ta.GetTokens()\n\tif newToken == oldToken {\n\t\tt.Fatalf(\"new token was not set: %v\", newToken)\n\t}\n\tif newMasterToken == oldMasterToken {\n\t\tt.Fatalf(\"new master token was not set: %v\", newMasterToken)\n\t}\n\tif newSessionID == oldSessionID {\n\t\tt.Fatalf(\"new session id was not set: %v\", newSessionID)\n\t}\n}\n\nfunc TestUnitAuthenticateSaml(t *testing.T) {\n\tvar err error\n\tsr := &snowflakeRestful{\n\t\tProtocol:         \"https\",\n\t\tHost:             \"abc.com\",\n\t\tPort:             443,\n\t\tFuncPostAuthSAML: postAuthSAMLAuthSuccess,\n\t\tFuncPostAuthOKTA: postAuthOKTASuccess,\n\t\tFuncGetSSO:       getSSOSuccess,\n\t\tFuncPostAuth:     postAuthCheckSAMLResponse,\n\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t}\n\tsc := getDefaultSnowflakeConn()\n\tsc.cfg.Authenticator = AuthTypeOkta\n\tsc.cfg.OktaURL = &url.URL{\n\t\tScheme: \"https\",\n\t\tHost:   \"abc.com\",\n\t}\n\tsc.rest = sr\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tassertNilF(t, err, \"failed to run.\")\n}\n\n// Unit test for OAuth.\nfunc TestUnitAuthenticateOAuth(t *testing.T) 
{\n\tvar err error\n\tsr := &snowflakeRestful{\n\t\tFuncPostAuth:  postAuthCheckOAuth,\n\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t}\n\tsc := getDefaultSnowflakeConn()\n\tsc.cfg.Token = \"oauthToken\"\n\tsc.cfg.Authenticator = AuthTypeOAuth\n\tsc.rest = sr\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tassertNilF(t, err, \"failed to run.\")\n}\n\nfunc TestUnitAuthenticatePasscode(t *testing.T) {\n\tvar err error\n\tsr := &snowflakeRestful{\n\t\tFuncPostAuth:  postAuthCheckPasscode,\n\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t}\n\tsc := getDefaultSnowflakeConn()\n\tsc.cfg.Passcode = \"987654321\"\n\tsc.rest = sr\n\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tassertNilF(t, err, \"failed to run.\")\n\tsr.FuncPostAuth = postAuthCheckPasscodeInPassword\n\tsc.rest = sr\n\tsc.cfg.PasscodeInPassword = true\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tassertNilF(t, err, \"failed to run.\")\n}\n\n// Test the JWT flow in the local environment against the validation function in Go\nfunc TestUnitAuthenticateJWT(t *testing.T) {\n\tvar err error\n\n\t// Generate a fresh private key for this unit test only\n\tlocalTestKey, err := rsa.GenerateKey(rand.Reader, 2048)\n\tassertNilF(t, err, \"Failed to generate test private key\")\n\n\t// Create custom JWT verification function that uses the local key\n\tpostAuthCheckLocalJWTToken := func(_ context.Context, _ *snowflakeRestful, _ *http.Client, _ *url.Values, _ map[string]string, bodyCreator bodyCreatorType, _ time.Duration) (*authResponse, error) {\n\t\tvar ar authRequest\n\t\tjsonBody, _ := bodyCreator()\n\t\tif err := json.Unmarshal(jsonBody, &ar); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif ar.Data.Authenticator != AuthTypeJwt.String() {\n\t\t\treturn nil, errors.New(\"Authenticator is not JWT\")\n\t\t}\n\n\t\ttokenString := ar.Data.Token\n\n\t\t// Validate token using the local test key's public key\n\t\t_, err := jwt.Parse(tokenString, func(token *jwt.Token) (any, error) {\n\t\t\tif _, ok := token.Method.(*jwt.SigningMethodRSA); !ok {\n\t\t\t\treturn nil, fmt.Errorf(\"Unexpected signing method: %v\", token.Header[\"alg\"])\n\t\t\t}\n\t\t\treturn localTestKey.Public(), nil // Use local key for verification\n\t\t})\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\treturn &authResponse{\n\t\t\tSuccess: true,\n\t\t\tData: authResponseMain{\n\t\t\t\tToken:       \"t\",\n\t\t\t\tMasterToken: \"m\",\n\t\t\t\tSessionInfo: authResponseSessionInfo{\n\t\t\t\t\tDatabaseName: \"dbn\",\n\t\t\t\t},\n\t\t\t},\n\t\t}, nil\n\t}\n\n\tsr := &snowflakeRestful{\n\t\tFuncPostAuth:  postAuthCheckLocalJWTToken, // Use local verification function\n\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t}\n\tsc := getDefaultSnowflakeConn()\n\tsc.cfg.Authenticator = AuthTypeJwt\n\tsc.cfg.JWTExpireTimeout = time.Duration(sfconfig.DefaultJWTTimeout)\n\tsc.cfg.PrivateKey = 
localTestKey\n\tsc.rest = sr\n\n\t// A valid JWT token should pass\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tassertNilF(t, err, \"failed to run.\")\n\n\t// An invalid JWT token should not pass\n\tinvalidPrivateKey, err := rsa.GenerateKey(rand.Reader, 2048)\n\tassertNilE(t, err)\n\tsc.cfg.PrivateKey = invalidPrivateKey\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tassertNotNilF(t, err, \"invalid token passed\")\n}\n\nfunc TestUnitAuthenticateUsernamePasswordMfa(t *testing.T) {\n\tvar err error\n\tsr := &snowflakeRestful{\n\t\tFuncPostAuth:  postAuthCheckUsernamePasswordMfa,\n\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t}\n\tsc := getDefaultSnowflakeConn()\n\tsc.cfg.Authenticator = AuthTypeUsernamePasswordMFA\n\tsc.cfg.ClientRequestMfaToken = ConfigBoolTrue\n\tsc.rest = sr\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tassertNilF(t, err, \"failed to run.\")\n\n\tsr.FuncPostAuth = postAuthCheckUsernamePasswordMfaToken\n\tsc.mfaToken = \"mockedMfaToken\"\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tassertNilF(t, err, \"failed to run.\")\n\n\tsr.FuncPostAuth = postAuthCheckUsernamePasswordMfaFailed\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tassertNotNilF(t, err, \"should have failed\")\n}\n\nfunc TestUnitAuthenticateWithConfigMFA(t *testing.T) {\n\tvar err error\n\tsr := &snowflakeRestful{\n\t\tFuncPostAuth:  postAuthCheckUsernamePasswordMfa,\n\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t}\n\tsc := getDefaultSnowflakeConn()\n\tsc.cfg.Authenticator = AuthTypeUsernamePasswordMFA\n\tsc.cfg.ClientRequestMfaToken = ConfigBoolTrue\n\tsc.rest = sr\n\tsc.ctx = context.Background()\n\terr = authenticateWithConfig(sc)\n\tassertNilF(t, err, \"failed to run.\")\n}\n\n// This test covers two groups of scenarios:\n// a) singleAuthenticationPrompt=true - all authenticating threads start at once,\n// but due to the locking mechanism only one should reach Wiremock without an MFA token.\n// b) singleAuthenticationPrompt=false - there is no locking, so all threads rush ahead,\n// but only the first one is served a correct response by Wiremock (simulating a user confirming MFA only once).\n// The remaining threads should return an error.\nfunc TestMfaParallelLogin(t *testing.T) {\n\tskipOnMissingHome(t)\n\tskipOnMac(t, \"interactive keyring access not available on macOS runners\")\n\tcfg := wiremock.connectionConfig()\n\ttokenSpec := newMfaTokenSpec(cfg.Host, cfg.User)\n\n\tfor _, singleAuthenticationPrompt := range []ConfigBool{ConfigBoolTrue, ConfigBoolFalse} {\n\t\tt.Run(\"starts without mfa token, singleAuthenticationPrompt=\"+singleAuthenticationPrompt.String(), func(t *testing.T) {\n\t\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/mfa/parallel_login_successful_flow.json\"),\n\t\t\t\tnewWiremockMapping(\"select1.json\"),\n\t\t\t\tnewWiremockMapping(\"close_session.json\"))\n\t\t\tcfg := wiremock.connectionConfig()\n\t\t\tcfg.Authenticator = AuthTypeUsernamePasswordMFA\n\t\t\tcfg.SingleAuthenticationPrompt = singleAuthenticationPrompt\n\t\t\tcfg.ClientRequestMfaToken = ConfigBoolTrue\n\t\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\t\tdb := sql.OpenDB(connector)\n\t\t\tdefer db.Close()\n\t\t\tcredentialsStorage.deleteCredential(tokenSpec)\n\t\t\terrs := initPoolWithSizeAndReturnErrors(db, 20)\n\t\t\tif singleAuthenticationPrompt == ConfigBoolTrue {\n\t\t\t\tassertEqualE(t, len(errs), 0)\n\t\t\t} else {\n\t\t\t\t// all but the one that actually retrieves the MFA token should fail\n\t\t\t\tassertEqualE(t, len(errs), 19)\n\t\t\t}\n\t\t})\n\n\t\tt.Run(\"starts without mfa token, first attempt fails, singleAuthenticationPrompt=\"+singleAuthenticationPrompt.String(), 
func(t *testing.T) {\n\t\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/mfa/parallel_login_first_fails_then_successful_flow.json\"),\n\t\t\t\tnewWiremockMapping(\"select1.json\"),\n\t\t\t\tnewWiremockMapping(\"close_session.json\"))\n\t\t\tcfg := wiremock.connectionConfig()\n\t\t\tcfg.Authenticator = AuthTypeUsernamePasswordMFA\n\t\t\tcfg.SingleAuthenticationPrompt = singleAuthenticationPrompt\n\t\t\tcfg.ClientRequestMfaToken = ConfigBoolTrue\n\t\t\tcredentialsStorage.deleteCredential(tokenSpec)\n\t\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\t\tdb := sql.OpenDB(connector)\n\t\t\tdefer db.Close()\n\t\t\terrs := initPoolWithSizeAndReturnErrors(db, 20)\n\t\t\tif singleAuthenticationPrompt == ConfigBoolTrue {\n\t\t\t\tassertEqualF(t, len(errs), 1)\n\t\t\t\tassertStringContainsE(t, errs[0].Error(), \"MFA with TOTP is required\")\n\t\t\t} else {\n\t\t\t\tassertEqualE(t, len(errs), 19)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestUnitAuthenticateWithConfigOkta(t *testing.T) {\n\tvar err error\n\tsr := &snowflakeRestful{\n\t\tProtocol:         \"https\",\n\t\tHost:             \"abc.com\",\n\t\tPort:             443,\n\t\tFuncPostAuthSAML: postAuthSAMLAuthSuccess,\n\t\tFuncPostAuthOKTA: postAuthOKTASuccess,\n\t\tFuncGetSSO:       getSSOSuccess,\n\t\tFuncPostAuth:     postAuthCheckSAMLResponse,\n\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t}\n\tsc := getDefaultSnowflakeConn()\n\tsc.cfg.Authenticator = AuthTypeOkta\n\tsc.cfg.OktaURL = &url.URL{\n\t\tScheme: \"https\",\n\t\tHost:   \"abc.com\",\n\t}\n\tsc.rest = sr\n\tsc.ctx = context.Background()\n\n\terr = authenticateWithConfig(sc)\n\tassertNilE(t, err, \"expected to have no error.\")\n\n\tsr.FuncPostAuthSAML = postAuthSAMLError\n\terr = authenticateWithConfig(sc)\n\tassertNotNilF(t, err, \"should have failed at FuncPostAuthSAML.\")\n\tassertEqualE(t, err.Error(), \"failed to get SAML response\")\n}\n\nfunc TestUnitAuthenticateWithExternalBrowserParallel(t *testing.T) 
{\n\tskipOnMissingHome(t)\n\tskipOnMac(t, \"interactive keyring access not available on macOS runners\")\n\tt.Run(\"no ID token cached\", func(t *testing.T) {\n\t\torigSamlResponseProvider := defaultSamlResponseProvider\n\t\tdefer func() { defaultSamlResponseProvider = origSamlResponseProvider }()\n\t\tdefaultSamlResponseProvider = func() samlResponseProvider {\n\t\t\treturn &nonInteractiveSamlResponseProvider{t: t}\n\t\t}\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/external_browser/successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"),\n\t\t\tnewWiremockMapping(\"close_session.json\"))\n\t\tcfg := wiremock.connectionConfig()\n\t\tcfg.Authenticator = AuthTypeExternalBrowser\n\t\tcfg.ClientStoreTemporaryCredential = ConfigBoolTrue\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tcredentialsStorage.deleteCredential(newIDTokenSpec(cfg.Host, cfg.User))\n\t\tdb := sql.OpenDB(connector)\n\t\tdefer db.Close()\n\t\trunSmokeQuery(t, db)\n\t\tassertEqualE(t, credentialsStorage.getCredential(newIDTokenSpec(cfg.Host, cfg.User)), \"test-id-token\")\n\t})\n\n\tt.Run(\"ID token cached\", func(t *testing.T) {\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/external_browser/successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"),\n\t\t\tnewWiremockMapping(\"close_session.json\"))\n\t\tcfg := wiremock.connectionConfig()\n\t\tcfg.Authenticator = AuthTypeExternalBrowser\n\t\tcfg.ClientStoreTemporaryCredential = ConfigBoolTrue\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tcredentialsStorage.setCredential(newIDTokenSpec(cfg.Host, cfg.User), \"test-id-token\")\n\t\tdb := sql.OpenDB(connector)\n\t\tdefer db.Close()\n\t\trunSmokeQuery(t, db)\n\t})\n\n\tt.Run(\"first connection retrieves ID token, second request uses cached ID token\", func(t *testing.T) {\n\t\torigSamlResponseProvider := defaultSamlResponseProvider\n\t\tdefer func() { defaultSamlResponseProvider = origSamlResponseProvider 
}()\n\t\tdefaultSamlResponseProvider = func() samlResponseProvider {\n\t\t\treturn &nonInteractiveSamlResponseProvider{t: t}\n\t\t}\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/external_browser/parallel_login_successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"),\n\t\t\tnewWiremockMapping(\"close_session.json\"))\n\t\tcfg := wiremock.connectionConfig()\n\t\tcfg.Authenticator = AuthTypeExternalBrowser\n\t\tcfg.ClientStoreTemporaryCredential = ConfigBoolTrue\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tcredentialsStorage.deleteCredential(newIDTokenSpec(cfg.Host, cfg.User))\n\t\tdb := sql.OpenDB(connector)\n\t\tdefer db.Close()\n\t\tconn1, err := db.Conn(context.Background())\n\t\tassertNilF(t, err)\n\t\tdefer conn1.Close()\n\t\trunSmokeQueryWithConn(t, conn1)\n\t\tconn2, err := db.Conn(context.Background())\n\t\tassertNilF(t, err)\n\t\tdefer conn2.Close()\n\t\trunSmokeQueryWithConn(t, conn2)\n\t})\n\n\tt.Run(\"first connection retrieves ID token, remaining ones wait and reuse\", func(t *testing.T) {\n\t\torigSamlResponseProvider := defaultSamlResponseProvider\n\t\tdefer func() { defaultSamlResponseProvider = origSamlResponseProvider }()\n\t\tdefaultSamlResponseProvider = func() samlResponseProvider {\n\t\t\treturn &nonInteractiveSamlResponseProvider{t: t}\n\t\t}\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/external_browser/parallel_login_successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"),\n\t\t\tnewWiremockMapping(\"close_session.json\"))\n\t\tcfg := wiremock.connectionConfig()\n\t\tcfg.Authenticator = AuthTypeExternalBrowser\n\t\tcfg.ClientStoreTemporaryCredential = ConfigBoolTrue\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tcredentialsStorage.deleteCredential(newIDTokenSpec(cfg.Host, cfg.User))\n\t\tdb := sql.OpenDB(connector)\n\t\tdefer db.Close()\n\t\terrs := initPoolWithSizeAndReturnErrors(db, 20)\n\t\tassertEqualE(t, len(errs), 0)\n\t})\n\n\tt.Run(\"first connection 
fails, second retrieves ID token, remaining ones wait and reuse\", func(t *testing.T) {\n\t\torigSamlResponseProvider := defaultSamlResponseProvider\n\t\tdefer func() { defaultSamlResponseProvider = origSamlResponseProvider }()\n\t\tdefaultSamlResponseProvider = func() samlResponseProvider {\n\t\t\treturn &nonInteractiveSamlResponseProvider{t: t}\n\t\t}\n\t\twiremock.registerMappings(t, newWiremockMapping(\"auth/external_browser/parallel_login_first_fails_then_successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"select1.json\"),\n\t\t\tnewWiremockMapping(\"close_session.json\"))\n\t\tcfg := wiremock.connectionConfig()\n\t\tcfg.Authenticator = AuthTypeExternalBrowser\n\t\tcfg.ClientStoreTemporaryCredential = ConfigBoolTrue\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tcredentialsStorage.deleteCredential(newIDTokenSpec(cfg.Host, cfg.User))\n\t\tdb := sql.OpenDB(connector)\n\t\tdefer db.Close()\n\t\terrs := initPoolWithSizeAndReturnErrors(db, 20)\n\t\tassertEqualE(t, len(errs), 1)\n\t})\n}\n\nfunc TestUnitAuthenticateWithConfigExternalBrowserWithFailedSAMLResponse(t *testing.T) {\n\tvar err error\n\tsr := &snowflakeRestful{\n\t\tFuncPostAuthSAML: postAuthSAMLError,\n\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t}\n\tsc := getDefaultSnowflakeConn()\n\tsc.cfg.Authenticator = AuthTypeExternalBrowser\n\tsc.cfg.ExternalBrowserTimeout = time.Duration(sfconfig.DefaultExternalBrowserTimeout)\n\tsc.rest = sr\n\tsc.ctx = context.Background()\n\terr = authenticateWithConfig(sc)\n\tassertNotNilF(t, err, \"should have failed at FuncPostAuthSAML.\")\n\tassertEqualE(t, err.Error(), \"failed to get SAML response\")\n}\n\nfunc TestUnitAuthenticateExternalBrowser(t *testing.T) {\n\tvar err error\n\tsr := &snowflakeRestful{\n\t\tFuncPostAuth:  postAuthCheckExternalBrowser,\n\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t}\n\tsc := getDefaultSnowflakeConn()\n\tsc.cfg.Authenticator = AuthTypeExternalBrowser\n\tsc.cfg.ClientStoreTemporaryCredential = 
ConfigBoolTrue\n\tsc.rest = sr\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tassertNilF(t, err, \"failed to run.\")\n\n\tsr.FuncPostAuth = postAuthCheckExternalBrowserToken\n\tsc.idToken = \"mockedIDToken\"\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tassertNilF(t, err, \"failed to run.\")\n\n\tsr.FuncPostAuth = postAuthCheckExternalBrowserFailed\n\t_, err = authenticate(context.Background(), sc, []byte{}, []byte{})\n\tassertNotNilF(t, err, \"should have failed\")\n}\n\n// To run this test you need to set environment variables in parameters.json to a user with MFA authentication enabled.\n// Set any other snowflake_test variables needed for database, schema, and role for this user.\nfunc TestUsernamePasswordMfaCaching(t *testing.T) {\n\tt.Skip(\"manual test for MFA token caching\")\n\n\tconfig, err := ParseDSN(dsn)\n\tassertNilF(t, err, \"Failed to parse dsn\")\n\t// connect with MFA authentication\n\tuser := os.Getenv(\"SNOWFLAKE_TEST_MFA_USER\")\n\tpassword := os.Getenv(\"SNOWFLAKE_TEST_MFA_PASSWORD\")\n\tconfig.User = user\n\tconfig.Password = password\n\tconfig.Authenticator = AuthTypeUsernamePasswordMFA\n\tif runtime.GOOS == \"linux\" {\n\t\tconfig.ClientRequestMfaToken = ConfigBoolTrue\n\t}\n\tconnector := NewConnector(SnowflakeDriver{}, *config)\n\tdb := sql.OpenDB(connector)\n\tfor range 3 {\n\t\t// should only be prompted to authenticate the first time around.\n\t\t_, err := db.Query(\"select current_user()\")\n\t\tassertNilF(t, err)\n\t}\n}\n\nfunc TestUsernamePasswordMfaCachingWithPasscode(t *testing.T) {\n\tt.Skip(\"manual test for MFA token caching\")\n\n\tconfig, err := ParseDSN(dsn)\n\tassertNilF(t, err, \"Failed to parse dsn\")\n\t// connect with MFA authentication\n\tuser := os.Getenv(\"SNOWFLAKE_TEST_MFA_USER\")\n\tpassword := 
os.Getenv(\"SNOWFLAKE_TEST_MFA_PASSWORD\")\n\tconfig.User = user\n\tconfig.Password = password\n\tconfig.Passcode = \"\" // fill with your passcode from DUO app\n\tconfig.Authenticator = AuthTypeUsernamePasswordMFA\n\tif runtime.GOOS == \"linux\" {\n\t\tconfig.ClientRequestMfaToken = ConfigBoolTrue\n\t}\n\tconnector := NewConnector(SnowflakeDriver{}, *config)\n\tdb := sql.OpenDB(connector)\n\tfor range 3 {\n\t\t// should only be prompted to authenticate the first time around.\n\t\t_, err := db.Query(\"select current_user()\")\n\t\tassertNilF(t, err)\n\t}\n}\n\nfunc TestUsernamePasswordMfaCachingWithPasscodeInPassword(t *testing.T) {\n\tt.Skip(\"manual test for MFA token caching\")\n\n\tconfig, err := ParseDSN(dsn)\n\tassertNilF(t, err, \"Failed to parse dsn\")\n\t// connect with MFA authentication\n\tuser := os.Getenv(\"SNOWFLAKE_TEST_MFA_USER\")\n\tpassword := os.Getenv(\"SNOWFLAKE_TEST_MFA_PASSWORD\")\n\tconfig.User = user\n\tconfig.Password = password + \"\" // fill with your passcode from DUO app\n\tconfig.PasscodeInPassword = true\n\tconnector := NewConnector(SnowflakeDriver{}, *config)\n\tdb := sql.OpenDB(connector)\n\tfor range 3 {\n\t\t// should only be prompted to authenticate the first time around.\n\t\t_, err := db.Query(\"select current_user()\")\n\t\tassertNilF(t, err)\n\t}\n}\n\n// To run this test you need to set environment variables in parameters.json to a user with MFA authentication enabled.\n// Set any other snowflake_test variables needed for database, schema, and role for this user.\nfunc TestDisableUsernamePasswordMfaCaching(t *testing.T) {\n\tt.Skip(\"manual test for disabling MFA token caching\")\n\n\tconfig, err := ParseDSN(dsn)\n\tassertNilF(t, err, \"Failed to parse dsn\")\n\t// connect with MFA authentication\n\tuser := os.Getenv(\"SNOWFLAKE_TEST_MFA_USER\")\n\tpassword := os.Getenv(\"SNOWFLAKE_TEST_MFA_PASSWORD\")\n\tconfig.User = user\n\tconfig.Password = 
password\n\tconfig.Authenticator = AuthTypeUsernamePasswordMFA\n\t// disable MFA token caching\n\tconfig.ClientRequestMfaToken = ConfigBoolFalse\n\tconnector := NewConnector(SnowflakeDriver{}, *config)\n\tdb := sql.OpenDB(connector)\n\tfor range 3 {\n\t\t// should be prompted to authenticate 3 times.\n\t\t_, err := db.Query(\"select current_user()\")\n\t\tassertNilF(t, err)\n\t}\n}\n\n// To run this test you need to set the SNOWFLAKE_TEST_EXT_BROWSER_USER environment variable to an external browser user.\n// Set any other snowflake_test variables needed for database, schema, and role for this user.\nfunc TestExternalBrowserCaching(t *testing.T) {\n\tt.Skip(\"manual test for external browser token caching\")\n\n\tconfig, err := ParseDSN(dsn)\n\tassertNilF(t, err, \"Failed to parse dsn\")\n\t// connect with external browser authentication\n\tuser := os.Getenv(\"SNOWFLAKE_TEST_EXT_BROWSER_USER\")\n\tconfig.User = user\n\tconfig.Authenticator = AuthTypeExternalBrowser\n\tif runtime.GOOS == \"linux\" {\n\t\tconfig.ClientStoreTemporaryCredential = ConfigBoolTrue\n\t}\n\tconnector := NewConnector(SnowflakeDriver{}, *config)\n\tdb := sql.OpenDB(connector)\n\tfor range 3 {\n\t\t// should only be prompted to authenticate the first time around.\n\t\t_, err := db.Query(\"select current_user()\")\n\t\tassertNilF(t, err)\n\t}\n}\n\n// To run this test you need to set the SNOWFLAKE_TEST_EXT_BROWSER_USER environment variable to an external browser user.\n// Set any other snowflake_test variables needed for database, schema, and role for this user.\nfunc TestDisableExternalBrowserCaching(t *testing.T) {\n\tt.Skip(\"manual test for disabling external browser token caching\")\n\n\tconfig, err := ParseDSN(dsn)\n\tassertNilF(t, err, \"Failed to parse dsn\")\n\t// connect with external browser authentication\n\tuser := os.Getenv(\"SNOWFLAKE_TEST_EXT_BROWSER_USER\")\n\tconfig.User = user\n\tconfig.Authenticator = 
AuthTypeExternalBrowser\n\t// disable external browser token caching\n\tconfig.ClientStoreTemporaryCredential = ConfigBoolFalse\n\tconnector := NewConnector(SnowflakeDriver{}, *config)\n\tdb := sql.OpenDB(connector)\n\tfor range 3 {\n\t\t// should be prompted to authenticate 3 times.\n\t\t_, err := db.Query(\"select current_user()\")\n\t\tassertNilF(t, err)\n\t}\n}\n\nfunc TestOktaRetryWithNewToken(t *testing.T) {\n\texpectedMasterToken := \"m\"\n\texpectedToken := \"t\"\n\texpectedMfaToken := \"mockedMfaToken\"\n\texpectedDatabaseName := \"dbn\"\n\n\tsr := &snowflakeRestful{\n\t\tProtocol:         \"https\",\n\t\tHost:             \"abc.com\",\n\t\tPort:             443,\n\t\tFuncPostAuthSAML: postAuthSAMLAuthSuccess,\n\t\tFuncPostAuthOKTA: postAuthOKTASuccess,\n\t\tFuncGetSSO:       getSSOSuccess,\n\t\tFuncPostAuth:     restfulTestWrapper{t: t}.postAuthOktaWithNewToken,\n\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t}\n\tsc := getDefaultSnowflakeConn()\n\tsc.cfg.Authenticator = AuthTypeOkta\n\tsc.cfg.OktaURL = &url.URL{\n\t\tScheme: \"https\",\n\t\tHost:   \"abc.com\",\n\t}\n\tsc.rest = sr\n\tsc.ctx = context.Background()\n\n\tauthResponse, err := authenticate(context.Background(), sc, []byte{0x12, 0x34}, []byte{0x56, 0x78})\n\tassertNilF(t, err, \"should not have failed to run authenticate()\")\n\tassertEqualF(t, authResponse.MasterToken, expectedMasterToken)\n\tassertEqualF(t, authResponse.Token, expectedToken)\n\tassertEqualF(t, authResponse.MfaToken, expectedMfaToken)\n\tassertEqualF(t, authResponse.SessionInfo.DatabaseName, expectedDatabaseName)\n}\n\nfunc TestContextPropagatedToAuthWhenUsingOpen(t *testing.T) {\n\tdb, err := sql.Open(\"snowflake\", dsn)\n\tassertNilF(t, err)\n\tdefer db.Close()\n\n\tctx, cancel := context.WithTimeout(context.Background(), 20*time.Millisecond)\n\t_, err = db.QueryContext(ctx, \"SELECT 1\")\n\tassertNotNilF(t, err)\n\tassertStringContainsE(t, err.Error(), \"context deadline 
exceeded\")\n\tcancel()\n}\n\nfunc TestContextPropagatedToAuthWhenUsingOpenDB(t *testing.T) {\n\tcfg, err := ParseDSN(dsn)\n\tassertNilF(t, err)\n\tconnector := NewConnector(&SnowflakeDriver{}, *cfg)\n\tdb := sql.OpenDB(connector)\n\tdefer db.Close()\n\n\tctx, cancel := context.WithTimeout(context.Background(), 20*time.Millisecond)\n\t_, err = db.QueryContext(ctx, \"SELECT 1\")\n\tassertNotNilF(t, err)\n\tassertStringContainsE(t, err.Error(), \"context deadline exceeded\")\n\tcancel()\n}\n\nfunc TestPatSuccessfulFlow(t *testing.T) {\n\tcfg := wiremock.connectionConfig()\n\tcfg.Authenticator = AuthTypePat\n\tcfg.Token = \"some PAT\"\n\twiremock.registerMappings(t,\n\t\twiremockMapping{filePath: \"auth/pat/successful_flow.json\"},\n\t\twiremockMapping{filePath: \"select1.json\"},\n\t)\n\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\tdb := sql.OpenDB(connector)\n\trows, err := db.Query(\"SELECT 1\")\n\tassertNilF(t, err)\n\tvar v int\n\tassertTrueE(t, rows.Next())\n\tassertNilF(t, rows.Scan(&v))\n\tassertEqualE(t, v, 1)\n}\n\nfunc TestPatTokenRotation(t *testing.T) {\n\tdir := t.TempDir()\n\ttokenFilePath := filepath.Join(dir, \"tokenFile\")\n\tassertNilF(t, os.WriteFile(tokenFilePath, []byte(\"some PAT\"), 0644))\n\n\tcfg := wiremock.connectionConfig()\n\tcfg.Authenticator = AuthTypePat\n\tcfg.TokenFilePath = tokenFilePath\n\twiremock.registerMappings(t,\n\t\twiremockMapping{filePath: \"auth/pat/reading_fresh_token.json\"},\n\t)\n\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\tdb := sql.OpenDB(connector)\n\t_, err := db.Conn(context.Background())\n\tassertNilF(t, err)\n\n\tassertNilF(t, os.WriteFile(tokenFilePath, []byte(\"some PAT 2\"), 0644))\n\t_, err = db.Conn(context.Background())\n\tassertNilF(t, err)\n}\n\nfunc TestPatInvalidToken(t *testing.T) {\n\twiremock.registerMappings(t,\n\t\twiremockMapping{filePath: \"auth/pat/invalid_token.json\"},\n\t)\n\tcfg := wiremock.connectionConfig()\n\tcfg.Authenticator = AuthTypePat\n\tcfg.Token = \"some 
PAT\"\n\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\tdb := sql.OpenDB(connector)\n\t_, err := db.Query(\"SELECT 1\")\n\tassertNotNilF(t, err)\n\tvar se *SnowflakeError\n\tassertErrorsAsF(t, err, &se)\n\tassertEqualE(t, se.Number, 394400)\n\tassertEqualE(t, se.Message, \"Programmatic access token is invalid.\")\n}\n\nfunc TestWithOauthAuthorizationCodeFlowManual(t *testing.T) {\n\tt.Skip(\"manual test\")\n\tfor _, provider := range []string{\"OKTA\", \"SNOWFLAKE\"} {\n\t\tt.Run(provider, func(t *testing.T) {\n\t\t\tcfg, err := GetConfigFromEnv([]*ConfigParam{\n\t\t\t\t{Name: \"OAuthClientId\", EnvName: \"SNOWFLAKE_TEST_OAUTH_\" + provider + \"_CLIENT_ID\", FailOnMissing: true},\n\t\t\t\t{Name: \"OAuthClientSecret\", EnvName: \"SNOWFLAKE_TEST_OAUTH_\" + provider + \"_CLIENT_SECRET\", FailOnMissing: true},\n\t\t\t\t{Name: \"OAuthAuthorizationURL\", EnvName: \"SNOWFLAKE_TEST_OAUTH_\" + provider + \"_AUTHORIZATION_URL\", FailOnMissing: false},\n\t\t\t\t{Name: \"OAuthTokenRequestURL\", EnvName: \"SNOWFLAKE_TEST_OAUTH_\" + provider + \"_TOKEN_REQUEST_URL\", FailOnMissing: false},\n\t\t\t\t{Name: \"OAuthRedirectURI\", EnvName: \"SNOWFLAKE_TEST_OAUTH_\" + provider + \"_REDIRECT_URI\", FailOnMissing: false},\n\t\t\t\t{Name: \"OAuthScope\", EnvName: \"SNOWFLAKE_TEST_OAUTH_\" + provider + \"_SCOPE\", FailOnMissing: false},\n\t\t\t\t{Name: \"User\", EnvName: \"SNOWFLAKE_TEST_OAUTH_\" + provider + \"_USER\", FailOnMissing: true},\n\t\t\t\t{Name: \"Role\", EnvName: \"SNOWFLAKE_TEST_OAUTH_\" + provider + \"_ROLE\", FailOnMissing: true},\n\t\t\t\t{Name: \"Account\", EnvName: \"SNOWFLAKE_TEST_ACCOUNT\", FailOnMissing: true},\n\t\t\t})\n\t\t\tassertNilF(t, err)\n\t\t\tcfg.Authenticator = AuthTypeOAuthAuthorizationCode\n\t\t\ttokenRequestURL := cmp.Or(cfg.OauthTokenRequestURL, fmt.Sprintf(\"https://%v.snowflakecomputing.com:443/oauth/token-request\", cfg.Account))\n\t\t\tcredentialsStorage.deleteCredential(newOAuthAccessTokenSpec(tokenRequestURL, 
cfg.User))\n\t\t\tcredentialsStorage.deleteCredential(newOAuthRefreshTokenSpec(tokenRequestURL, cfg.User))\n\t\t\tconnector := NewConnector(&SnowflakeDriver{}, *cfg)\n\t\t\tdb := sql.OpenDB(connector)\n\t\t\tdefer db.Close()\n\t\t\tconn1, err := db.Conn(context.Background())\n\t\t\tassertNilF(t, err)\n\t\t\tdefer conn1.Close()\n\t\t\trunSmokeQueryWithConn(t, conn1)\n\t\t\tconn2, err := db.Conn(context.Background())\n\t\t\tassertNilF(t, err)\n\t\t\tdefer conn2.Close()\n\t\t\trunSmokeQueryWithConn(t, conn2)\n\t\t\tcredentialsStorage.setCredential(newOAuthAccessTokenSpec(cfg.OauthTokenRequestURL, cfg.User), \"expired-token\")\n\t\t\tconn3, err := db.Conn(context.Background())\n\t\t\tassertNilF(t, err)\n\t\t\tdefer conn3.Close()\n\t\t\trunSmokeQueryWithConn(t, conn3)\n\t\t})\n\t}\n}\n\nfunc TestWithOAuthClientCredentialsFlowManual(t *testing.T) {\n\tt.Skip(\"manual test\")\n\tcfg, err := GetConfigFromEnv([]*ConfigParam{\n\t\t{Name: \"OAuthClientId\", EnvName: \"SNOWFLAKE_TEST_OAUTH_OKTA_CLIENT_ID\", FailOnMissing: true},\n\t\t{Name: \"OAuthClientSecret\", EnvName: \"SNOWFLAKE_TEST_OAUTH_OKTA_CLIENT_SECRET\", FailOnMissing: true},\n\t\t{Name: \"OAuthTokenRequestURL\", EnvName: \"SNOWFLAKE_TEST_OAUTH_OKTA_TOKEN_REQUEST_URL\", FailOnMissing: true},\n\t\t{Name: \"Role\", EnvName: \"SNOWFLAKE_TEST_OAUTH_OKTA_ROLE\", FailOnMissing: true},\n\t\t{Name: \"Account\", EnvName: \"SNOWFLAKE_TEST_ACCOUNT\", FailOnMissing: true},\n\t})\n\tassertNilF(t, err)\n\tcfg.Authenticator = AuthTypeOAuthClientCredentials\n\tconnector := NewConnector(&SnowflakeDriver{}, *cfg)\n\tdb := sql.OpenDB(connector)\n\tdefer db.Close()\n\trunSmokeQuery(t, db)\n}\n"
  },
  {
    "path": "auth_wif.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"crypto/sha256\"\n\t\"encoding/base64\"\n\t\"encoding/hex\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"os\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/aws/aws-sdk-go-v2/aws\"\n\tv4 \"github.com/aws/aws-sdk-go-v2/aws/signer/v4\"\n\t\"github.com/aws/aws-sdk-go-v2/config\"\n\t\"github.com/aws/aws-sdk-go-v2/credentials\"\n\t\"github.com/aws/aws-sdk-go-v2/service/sts\"\n\t\"github.com/golang-jwt/jwt/v5\"\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n)\n\nconst (\n\tawsWif   wifProviderType = \"AWS\"\n\tgcpWif   wifProviderType = \"GCP\"\n\tazureWif wifProviderType = \"AZURE\"\n\toidcWif  wifProviderType = \"OIDC\"\n\n\tgcpMetadataFlavorHeaderName  = \"Metadata-Flavor\"\n\tgcpMetadataFlavor            = \"Google\"\n\tdefaultMetadataServiceBase   = \"http://169.254.169.254\"\n\tdefaultGcpIamCredentialsBase = \"https://iamcredentials.googleapis.com\"\n\tsnowflakeAudience            = \"snowflakecomputing.com\"\n)\n\ntype wifProviderType string\n\ntype wifAttestation struct {\n\tProviderType string            `json:\"providerType\"`\n\tCredential   string            `json:\"credential\"`\n\tMetadata     map[string]string `json:\"metadata\"`\n}\n\ntype wifAttestationCreator interface {\n\tcreateAttestation() (*wifAttestation, error)\n}\n\ntype wifAttestationProvider struct {\n\tcontext      context.Context\n\tcfg          *Config\n\tawsCreator   wifAttestationCreator\n\tgcpCreator   wifAttestationCreator\n\tazureCreator wifAttestationCreator\n\toidcCreator  wifAttestationCreator\n}\n\nfunc createWifAttestationProvider(ctx context.Context, cfg *Config, telemetry *snowflakeTelemetry) *wifAttestationProvider {\n\treturn &wifAttestationProvider{\n\t\tcontext: ctx,\n\t\tcfg:     cfg,\n\t\tawsCreator: &awsIdentityAttestationCreator{\n\t\t\tcfg:                       cfg,\n\t\t\tattestationServiceFactory: 
createDefaultAwsAttestationMetadataProvider,\n\t\t\tctx:                       ctx,\n\t\t},\n\t\tgcpCreator: &gcpIdentityAttestationCreator{\n\t\t\tcfg:                    cfg,\n\t\t\ttelemetry:              telemetry,\n\t\t\tmetadataServiceBaseURL: defaultMetadataServiceBase,\n\t\t\tiamCredentialsURL:      defaultGcpIamCredentialsBase,\n\t\t},\n\t\tazureCreator: &azureIdentityAttestationCreator{\n\t\t\tazureAttestationMetadataProvider: &defaultAzureAttestationMetadataProvider{},\n\t\t\tcfg:                              cfg,\n\t\t\ttelemetry:                        telemetry,\n\t\t\tworkloadIdentityEntraResource:    determineEntraResource(cfg),\n\t\t\tazureMetadataServiceBaseURL:      defaultMetadataServiceBase,\n\t\t},\n\t\toidcCreator: &oidcIdentityAttestationCreator{token: func() (string, error) { return sfconfig.GetToken(cfg) }},\n\t}\n}\n\nfunc (p *wifAttestationProvider) getAttestation(identityProvider string) (*wifAttestation, error) {\n\tswitch strings.ToUpper(identityProvider) {\n\tcase string(awsWif):\n\t\treturn p.awsCreator.createAttestation()\n\tcase string(gcpWif):\n\t\treturn p.gcpCreator.createAttestation()\n\tcase string(azureWif):\n\t\treturn p.azureCreator.createAttestation()\n\tcase string(oidcWif):\n\t\treturn p.oidcCreator.createAttestation()\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"unknown WorkloadIdentityProvider specified: %s. 
Valid values are: %s, %s, %s, %s\", identityProvider, awsWif, gcpWif, azureWif, oidcWif)\n\t}\n}\n\ntype awsAttestastationMetadataProviderFactory func(ctx context.Context, cfg *Config) awsAttestationMetadataProvider\n\ntype awsIdentityAttestationCreator struct {\n\tcfg                       *Config\n\tattestationServiceFactory awsAttestastationMetadataProviderFactory\n\tctx                       context.Context\n}\n\ntype gcpIdentityAttestationCreator struct {\n\tcfg                    *Config\n\ttelemetry              *snowflakeTelemetry\n\tmetadataServiceBaseURL string\n\tiamCredentialsURL      string\n}\n\ntype oidcIdentityAttestationCreator struct {\n\ttoken func() (string, error)\n}\n\ntype awsAttestationMetadataProvider interface {\n\tawsCredentials() (aws.Credentials, error)\n\tawsCredentialsViaRoleChaining() (aws.Credentials, error)\n\tawsRegion() string\n}\n\ntype defaultAwsAttestationMetadataProvider struct {\n\tctx    context.Context\n\tcfg    *Config\n\tawsCfg aws.Config\n}\n\nfunc createDefaultAwsAttestationMetadataProvider(ctx context.Context, cfg *Config) awsAttestationMetadataProvider {\n\tawsCfg, err := config.LoadDefaultConfig(ctx, config.WithEC2IMDSRegion())\n\tif err != nil {\n\t\tlogger.Debugf(\"Unable to load AWS config: %v\", err)\n\t\treturn nil\n\t}\n\treturn &defaultAwsAttestationMetadataProvider{\n\t\tawsCfg: awsCfg,\n\t\tcfg:    cfg,\n\t\tctx:    ctx,\n\t}\n}\n\nfunc (s *defaultAwsAttestationMetadataProvider) awsCredentials() (aws.Credentials, error) {\n\treturn s.awsCfg.Credentials.Retrieve(s.ctx)\n}\n\nfunc (s *defaultAwsAttestationMetadataProvider) awsCredentialsViaRoleChaining() (aws.Credentials, error) {\n\tcreds, err := s.awsCredentials()\n\tif err != nil {\n\t\treturn aws.Credentials{}, err\n\t}\n\tfor _, roleArn := range s.cfg.WorkloadIdentityImpersonationPath {\n\t\tif creds, err = s.assumeRole(creds, roleArn); err != nil {\n\t\t\treturn aws.Credentials{}, err\n\t\t}\n\t}\n\treturn creds, nil\n}\n\nfunc (s 
*defaultAwsAttestationMetadataProvider) assumeRole(creds aws.Credentials, roleArn string) (aws.Credentials, error) {\n\tlogger.Debugf(\"assuming role %v\", roleArn)\n\tawsCfg := s.awsCfg\n\tawsCfg.Credentials = credentials.StaticCredentialsProvider{Value: creds}\n\tawsCfg.Region = s.awsRegion()\n\tstsClient := sts.NewFromConfig(awsCfg)\n\n\trole, err := stsClient.AssumeRole(s.ctx, &sts.AssumeRoleInput{\n\t\tRoleArn:         aws.String(roleArn),\n\t\tRoleSessionName: aws.String(\"identity-federation-session\"),\n\t})\n\tif err != nil {\n\t\tlogger.Debugf(\"failed to assume role %v: %v\", roleArn, err)\n\t\treturn aws.Credentials{}, err\n\t}\n\n\treturn aws.Credentials{\n\t\tAccessKeyID:     *role.Credentials.AccessKeyId,\n\t\tSecretAccessKey: *role.Credentials.SecretAccessKey,\n\t\tSessionToken:    *role.Credentials.SessionToken,\n\t\tExpires:         *role.Credentials.Expiration,\n\t}, nil\n}\n\nfunc (s *defaultAwsAttestationMetadataProvider) awsRegion() string {\n\treturn s.awsCfg.Region\n}\n\nfunc (c *awsIdentityAttestationCreator) createAttestation() (*wifAttestation, error) {\n\tlogger.Debug(\"Creating AWS identity attestation...\")\n\n\tattestationService := c.attestationServiceFactory(c.ctx, c.cfg)\n\tif attestationService == nil {\n\t\treturn nil, errors.New(\"AWS attestation service could not be created\")\n\t}\n\n\tvar creds aws.Credentials\n\tvar err error\n\n\tif len(c.cfg.WorkloadIdentityImpersonationPath) == 0 {\n\t\tif creds, err = attestationService.awsCredentials(); err != nil {\n\t\t\tlogger.Debugf(\"error while getting aws credentials. %v\", err)\n\t\t\treturn nil, err\n\t\t}\n\t} else {\n\t\tif creds, err = attestationService.awsCredentialsViaRoleChaining(); err != nil {\n\t\t\tlogger.Debugf(\"error while getting aws credentials via role chaining. 
%v\", err)\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tif creds.AccessKeyID == \"\" || creds.SecretAccessKey == \"\" {\n\t\treturn nil, fmt.Errorf(\"no AWS credentials were found\")\n\t}\n\n\tregion := attestationService.awsRegion()\n\tif region == \"\" {\n\t\treturn nil, fmt.Errorf(\"no AWS region was found\")\n\t}\n\n\tstsHostname := stsHostname(region)\n\treq, err := c.createStsRequest(stsHostname)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\terr = c.signRequestWithSigV4(c.ctx, req, creds, region)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tcredential, err := c.createBase64EncodedRequestCredential(req)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &wifAttestation{\n\t\tProviderType: string(awsWif),\n\t\tCredential:   credential,\n\t\tMetadata:     map[string]string{},\n\t}, nil\n}\n\nfunc stsHostname(region string) string {\n\tvar domain string\n\tif strings.HasPrefix(region, \"cn-\") {\n\t\tdomain = \"amazonaws.com.cn\"\n\t} else {\n\t\tdomain = \"amazonaws.com\"\n\t}\n\treturn fmt.Sprintf(\"sts.%s.%s\", region, domain)\n}\n\nfunc (c *awsIdentityAttestationCreator) createStsRequest(hostname string) (*http.Request, error) {\n\turl := fmt.Sprintf(\"https://%s?Action=GetCallerIdentity&Version=2011-06-15\", hostname)\n\treq, err := http.NewRequest(\"POST\", url, nil)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treq.Header.Set(\"Host\", hostname)\n\treq.Header.Set(\"X-Snowflake-Audience\", \"snowflakecomputing.com\")\n\treturn req, nil\n}\n\nfunc (c *awsIdentityAttestationCreator) signRequestWithSigV4(ctx context.Context, req *http.Request, creds aws.Credentials, region string) error {\n\tsigner := v4.NewSigner()\n\t// as per docs of SignHTTP, the payload hash must be present even if the payload is empty\n\tpayloadHash := hex.EncodeToString(sha256.New().Sum(nil))\n\treturn signer.SignHTTP(ctx, creds, req, payloadHash, \"sts\", region, time.Now())\n}\n\nfunc (c *awsIdentityAttestationCreator) createBase64EncodedRequestCredential(req 
*http.Request) (string, error) {\n\theaders := make(map[string]string)\n\tfor key, values := range req.Header {\n\t\theaders[key] = values[0]\n\t}\n\n\tassertion := map[string]any{\n\t\t\"url\":     req.URL.String(),\n\t\t\"method\":  req.Method,\n\t\t\"headers\": headers,\n\t}\n\n\tassertionJSON, err := json.Marshal(assertion)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\treturn base64.StdEncoding.EncodeToString(assertionJSON), nil\n}\n\nfunc (c *gcpIdentityAttestationCreator) createAttestation() (*wifAttestation, error) {\n\tlogger.Debugf(\"Creating GCP identity attestation...\")\n\tif len(c.cfg.WorkloadIdentityImpersonationPath) == 0 {\n\t\treturn c.createGcpIdentityTokenFromMetadataService()\n\t}\n\treturn c.createGcpIdentityViaImpersonation()\n}\n\nfunc (c *gcpIdentityAttestationCreator) createGcpIdentityTokenFromMetadataService() (*wifAttestation, error) {\n\treq, err := c.createTokenRequest()\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to create GCP token request: %w\", err)\n\t}\n\ttoken := fetchTokenFromMetadataService(req, c.cfg, c.telemetry)\n\tif token == \"\" {\n\t\treturn nil, fmt.Errorf(\"no GCP token was found\")\n\t}\n\tsub, _, err := extractSubIssWithoutVerifyingSignature(token)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"could not extract claims from token: %v\", err)\n\t}\n\treturn &wifAttestation{\n\t\tProviderType: string(gcpWif),\n\t\tCredential:   token,\n\t\tMetadata:     map[string]string{\"sub\": sub},\n\t}, nil\n}\n\nfunc (c *gcpIdentityAttestationCreator) createTokenRequest() (*http.Request, error) {\n\turi := fmt.Sprintf(\"%s/computeMetadata/v1/instance/service-accounts/default/identity?audience=%s\",\n\t\tc.metadataServiceBaseURL, snowflakeAudience)\n\treq, err := http.NewRequest(\"GET\", uri, nil)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to create HTTP request: %v\", err)\n\t}\n\treq.Header.Set(gcpMetadataFlavorHeaderName, gcpMetadataFlavor)\n\treturn req, nil\n}\n\nfunc (c 
*gcpIdentityAttestationCreator) createGcpIdentityViaImpersonation() (*wifAttestation, error) {\n\t// initialize transport\n\ttransport, err := newTransportFactory(c.cfg, c.telemetry).createTransport(transportConfigFor(transportTypeWIF))\n\tif err != nil {\n\t\tlogger.Debugf(\"Failed to create HTTP transport: %v\", err)\n\t\treturn nil, err\n\t}\n\tclient := &http.Client{Transport: transport}\n\n\t// fetch access token for impersonation\n\taccessToken, err := c.fetchServiceToken(client)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// map paths to full service account paths\n\tvar fullServiceAccountPaths []string\n\tfor _, path := range c.cfg.WorkloadIdentityImpersonationPath {\n\t\tfullServiceAccountPaths = append(fullServiceAccountPaths, fmt.Sprintf(\"projects/-/serviceAccounts/%s\", path))\n\t}\n\ttargetServiceAccount := fullServiceAccountPaths[len(fullServiceAccountPaths)-1]\n\tdelegates := fullServiceAccountPaths[:len(fullServiceAccountPaths)-1]\n\n\t// fetch impersonated token\n\timpersonationToken, err := c.fetchImpersonatedToken(targetServiceAccount, delegates, accessToken, client)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// create attestation\n\tsub, _, err := extractSubIssWithoutVerifyingSignature(impersonationToken)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"could not extract claims from token: %v\", err)\n\t}\n\treturn &wifAttestation{\n\t\tProviderType: string(gcpWif),\n\t\tCredential:   impersonationToken,\n\t\tMetadata:     map[string]string{\"sub\": sub},\n\t}, nil\n}\n\nfunc (c *gcpIdentityAttestationCreator) fetchServiceToken(client *http.Client) (string, error) {\n\t// initialize and do request\n\treq, err := http.NewRequest(\"GET\", c.metadataServiceBaseURL+\"/computeMetadata/v1/instance/service-accounts/default/token\", nil)\n\tif err != nil {\n\t\tlogger.Debugf(\"cannot create token request for impersonation. 
%v\", err)\n\t\treturn \"\", err\n\t}\n\treq.Header.Set(gcpMetadataFlavorHeaderName, gcpMetadataFlavor)\n\tresp, err := client.Do(req)\n\tif err != nil {\n\t\tlogger.Debugf(\"cannot fetch token for impersonation. %v\", err)\n\t\treturn \"\", err\n\t}\n\tdefer func(body io.ReadCloser) {\n\t\tif err = body.Close(); err != nil {\n\t\t\tlogger.Debugf(\"cannot close token response body for impersonation. %v\", err)\n\t\t}\n\t}(resp.Body)\n\n\t// if it is not 200, do not parse the response\n\tif resp.StatusCode != http.StatusOK {\n\t\treturn \"\", fmt.Errorf(\"token response status is %v, not parsing\", resp.StatusCode)\n\t}\n\n\t// parse response and extract access token\n\taccessTokenResponse := struct {\n\t\tAccessToken string `json:\"access_token\"`\n\t}{}\n\tif err = json.NewDecoder(resp.Body).Decode(&accessTokenResponse); err != nil {\n\t\tlogger.Debugf(\"cannot decode token for impersonation. %v\", err)\n\t\treturn \"\", err\n\t}\n\taccessToken := accessTokenResponse.AccessToken\n\treturn accessToken, nil\n}\n\nfunc (c *gcpIdentityAttestationCreator) fetchImpersonatedToken(targetServiceAccount string, delegates []string, accessToken string, client *http.Client) (string, error) {\n\t// prepare the request\n\turl := fmt.Sprintf(\"%v/v1/%v:generateIdToken\", c.iamCredentialsURL, targetServiceAccount)\n\tbody := struct {\n\t\tDelegates []string `json:\"delegates,omitempty\"`\n\t\tAudience  string   `json:\"audience\"`\n\t}{\n\t\tDelegates: delegates,\n\t\tAudience:  snowflakeAudience,\n\t}\n\tpayload := new(bytes.Buffer)\n\tif err := json.NewEncoder(payload).Encode(body); err != nil {\n\t\tlogger.Debugf(\"cannot encode impersonation request body. %v\", err)\n\t\treturn \"\", err\n\t}\n\treq, err := http.NewRequest(\"POST\", url, payload)\n\tif err != nil {\n\t\tlogger.Debugf(\"cannot create token request for impersonation. 
%v\", err)\n\t\treturn \"\", err\n\t}\n\treq.Header.Set(\"Authorization\", \"Bearer \"+accessToken)\n\treq.Header.Set(\"Content-Type\", \"application/json\")\n\n\t// send the request\n\tresp, err := client.Do(req)\n\tif err != nil {\n\t\tlogger.Debugf(\"cannot call impersonation service. %v\", err)\n\t\treturn \"\", err\n\t}\n\tdefer func(body io.ReadCloser) {\n\t\tif err = body.Close(); err != nil {\n\t\t\tlogger.Debugf(\"cannot close token response body for impersonation. %v\", err)\n\t\t}\n\t}(resp.Body)\n\n\t// handle the response\n\tif resp.StatusCode != http.StatusOK {\n\t\treturn \"\", fmt.Errorf(\"response status is %v, not parsing\", resp.StatusCode)\n\t}\n\ttokenResponse := struct {\n\t\tToken string `json:\"token\"`\n\t}{}\n\tif err = json.NewDecoder(resp.Body).Decode(&tokenResponse); err != nil {\n\t\tlogger.Debugf(\"cannot decode token response. %v\", err)\n\t\treturn \"\", err\n\t}\n\treturn tokenResponse.Token, nil\n}\n\nfunc fetchTokenFromMetadataService(req *http.Request, cfg *Config, telemetry *snowflakeTelemetry) string {\n\ttransport, err := newTransportFactory(cfg, telemetry).createTransport(transportConfigFor(transportTypeWIF))\n\tif err != nil {\n\t\tlogger.Debugf(\"Failed to create HTTP transport: %v\", err)\n\t\treturn \"\"\n\t}\n\tclient := &http.Client{Transport: transport}\n\tresp, err := client.Do(req)\n\tif err != nil {\n\t\tlogger.Debugf(\"Metadata server request was not successful: %v\", err)\n\t\treturn \"\"\n\t}\n\tdefer func() {\n\t\tif err = resp.Body.Close(); err != nil {\n\t\t\tlogger.Debugf(\"Failed to close response body: %v\", err)\n\t\t}\n\t}()\n\tbody, err := io.ReadAll(resp.Body)\n\tif err != nil {\n\t\tlogger.Debugf(\"Failed to read response body: %v\", err)\n\t\treturn \"\"\n\t}\n\treturn string(body)\n}\n\nfunc extractSubIssWithoutVerifyingSignature(token string) (subject string, issuer string, err error) {\n\tclaims, err := extractClaimsMap(token)\n\tif err != nil {\n\t\treturn \"\", \"\", err\n\t}\n\tissuerClaim, ok 
:= claims[\"iss\"]\n\tif !ok {\n\t\treturn \"\", \"\", errors.New(\"missing issuer claim in JWT token\")\n\t}\n\tsubjectClaim, ok := claims[\"sub\"]\n\tif !ok {\n\t\treturn \"\", \"\", errors.New(\"missing sub claim in JWT token\")\n\t}\n\tsubject, ok = subjectClaim.(string)\n\tif !ok {\n\t\treturn \"\", \"\", errors.New(\"sub claim is not a string in JWT token\")\n\t}\n\tissuer, ok = issuerClaim.(string)\n\tif !ok {\n\t\treturn \"\", \"\", errors.New(\"iss claim is not a string in JWT token\")\n\t}\n\treturn\n}\n\n// extractClaimsMap parses a JWT token and returns its claims as a map.\n// It does not verify the token signature.\nfunc extractClaimsMap(token string) (map[string]any, error) {\n\tparser := jwt.NewParser(jwt.WithoutClaimsValidation())\n\tclaims := jwt.MapClaims{}\n\t_, _, err := parser.ParseUnverified(token, claims)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to extract JWT claims from token: %w\", err)\n\t}\n\treturn claims, nil\n}\n\nfunc (c *oidcIdentityAttestationCreator) createAttestation() (*wifAttestation, error) {\n\tlogger.Debugf(\"Creating OIDC identity attestation...\")\n\ttoken, err := c.token()\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to get OIDC token: %w\", err)\n\t}\n\tif token == \"\" {\n\t\treturn nil, fmt.Errorf(\"no OIDC token was specified\")\n\t}\n\tsub, iss, err := extractSubIssWithoutVerifyingSignature(token)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif sub == \"\" || iss == \"\" {\n\t\treturn nil, errors.New(\"missing sub or iss claim in JWT token\")\n\t}\n\treturn &wifAttestation{\n\t\tProviderType: string(oidcWif),\n\t\tCredential:   token,\n\t\tMetadata:     map[string]string{\"sub\": sub},\n\t}, nil\n}\n\n// azureAttestationMetadataProvider defines the interface for Azure attestation services\ntype azureAttestationMetadataProvider interface {\n\tidentityEndpoint() string\n\tidentityHeader() string\n\tclientID() string\n}\n\ntype defaultAzureAttestationMetadataProvider struct{}\n\nfunc (p 
*defaultAzureAttestationMetadataProvider) identityEndpoint() string {\n\treturn os.Getenv(\"IDENTITY_ENDPOINT\")\n}\n\nfunc (p *defaultAzureAttestationMetadataProvider) identityHeader() string {\n\treturn os.Getenv(\"IDENTITY_HEADER\")\n}\n\nfunc (p *defaultAzureAttestationMetadataProvider) clientID() string {\n\treturn os.Getenv(\"MANAGED_IDENTITY_CLIENT_ID\")\n}\n\ntype azureIdentityAttestationCreator struct {\n\tazureAttestationMetadataProvider azureAttestationMetadataProvider\n\tcfg                              *Config\n\ttelemetry                        *snowflakeTelemetry\n\tworkloadIdentityEntraResource    string\n\tazureMetadataServiceBaseURL      string\n}\n\n// createAttestation creates an attestation using Azure identity\nfunc (a *azureIdentityAttestationCreator) createAttestation() (*wifAttestation, error) {\n\tlogger.Debug(\"Creating Azure identity attestation...\")\n\n\tidentityEndpoint := a.azureAttestationMetadataProvider.identityEndpoint()\n\tvar request *http.Request\n\tvar err error\n\n\tif identityEndpoint == \"\" {\n\t\trequest, err = a.azureVMIdentityRequest()\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"failed to create Azure VM identity request: %v\", err)\n\t\t}\n\t} else {\n\t\tidentityHeader := a.azureAttestationMetadataProvider.identityHeader()\n\t\tif identityHeader == \"\" {\n\t\t\treturn nil, fmt.Errorf(\"managed identity is not enabled on this Azure function\")\n\t\t}\n\t\trequest, err = a.azureFunctionsIdentityRequest(\n\t\t\tidentityEndpoint,\n\t\t\tidentityHeader,\n\t\t\ta.azureAttestationMetadataProvider.clientID(),\n\t\t)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"failed to create Azure Functions identity request: %v\", err)\n\t\t}\n\t}\n\n\ttokenJSON := fetchTokenFromMetadataService(request, a.cfg, a.telemetry)\n\tif tokenJSON == \"\" {\n\t\treturn nil, fmt.Errorf(\"could not fetch Azure token\")\n\t}\n\n\ttoken, err := extractTokenFromJSON(tokenJSON)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to 
extract token from JSON: %v\", err)\n\t}\n\tif token == \"\" {\n\t\treturn nil, fmt.Errorf(\"no access token found in Azure response\")\n\t}\n\n\tsub, iss, err := extractSubIssWithoutVerifyingSignature(token)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to extract sub and iss claims from token: %v\", err)\n\t}\n\tif sub == \"\" || iss == \"\" {\n\t\treturn nil, fmt.Errorf(\"missing sub or iss claim in JWT token\")\n\t}\n\n\treturn &wifAttestation{\n\t\tProviderType: string(azureWif),\n\t\tCredential:   token,\n\t\tMetadata:     map[string]string{\"sub\": sub, \"iss\": iss},\n\t}, nil\n}\n\nfunc determineEntraResource(config *Config) string {\n\tif config != nil && config.WorkloadIdentityEntraResource != \"\" {\n\t\treturn config.WorkloadIdentityEntraResource\n\t}\n\t// default resource if none specified\n\treturn \"api://fd3f753b-eed3-462c-b6a7-a4b5bb650aad\"\n}\n\nfunc extractTokenFromJSON(tokenJSON string) (string, error) {\n\tvar response struct {\n\t\tAccessToken string `json:\"access_token\"`\n\t}\n\n\terr := json.Unmarshal([]byte(tokenJSON), &response)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\treturn response.AccessToken, nil\n}\n\nfunc (a *azureIdentityAttestationCreator) azureFunctionsIdentityRequest(identityEndpoint, identityHeader, managedIdentityClientID string) (*http.Request, error) {\n\tqueryParams := fmt.Sprintf(\"api-version=2019-08-01&resource=%s\", a.workloadIdentityEntraResource)\n\tif managedIdentityClientID != \"\" {\n\t\tqueryParams += fmt.Sprintf(\"&client_id=%s\", managedIdentityClientID)\n\t}\n\n\turl := fmt.Sprintf(\"%s?%s\", identityEndpoint, queryParams)\n\treq, err := http.NewRequest(\"GET\", url, nil)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treq.Header.Add(\"X-IDENTITY-HEADER\", identityHeader)\n\n\treturn req, nil\n}\n\nfunc (a *azureIdentityAttestationCreator) azureVMIdentityRequest() (*http.Request, error) {\n\turlWithoutQuery := a.azureMetadataServiceBaseURL + 
\"/metadata/identity/oauth2/token?\"\n\tqueryParams := fmt.Sprintf(\"api-version=2018-02-01&resource=%s\", a.workloadIdentityEntraResource)\n\n\turl := urlWithoutQuery + queryParams\n\treq, err := http.NewRequest(\"GET\", url, nil)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treq.Header.Add(\"Metadata\", \"true\")\n\n\treturn req, nil\n}\n"
  },
  {
    "path": "auth_wif_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"encoding/base64\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"os/exec\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/aws/aws-sdk-go-v2/aws\"\n)\n\ntype mockWifAttestationCreator struct {\n\tproviderType wifProviderType\n\treturnError  error\n}\n\nfunc (m *mockWifAttestationCreator) createAttestation() (*wifAttestation, error) {\n\tif m.returnError != nil {\n\t\treturn nil, m.returnError\n\t}\n\treturn &wifAttestation{\n\t\tProviderType: string(m.providerType),\n\t}, nil\n}\n\nfunc TestGetAttestation(t *testing.T) {\n\tawsError := errors.New(\"aws attestation error\")\n\tgcpError := errors.New(\"gcp attestation error\")\n\tazureError := errors.New(\"azure attestation error\")\n\toidcError := errors.New(\"oidc attestation error\")\n\n\tprovider := &wifAttestationProvider{\n\t\tcontext:      context.Background(),\n\t\tawsCreator:   &mockWifAttestationCreator{providerType: awsWif},\n\t\tgcpCreator:   &mockWifAttestationCreator{providerType: gcpWif},\n\t\tazureCreator: &mockWifAttestationCreator{providerType: azureWif},\n\t\toidcCreator:  &mockWifAttestationCreator{providerType: oidcWif},\n\t}\n\n\tproviderWithErrors := &wifAttestationProvider{\n\t\tcontext:      context.Background(),\n\t\tawsCreator:   &mockWifAttestationCreator{providerType: awsWif, returnError: awsError},\n\t\tgcpCreator:   &mockWifAttestationCreator{providerType: gcpWif, returnError: gcpError},\n\t\tazureCreator: &mockWifAttestationCreator{providerType: azureWif, returnError: azureError},\n\t\toidcCreator:  &mockWifAttestationCreator{providerType: oidcWif, returnError: oidcError},\n\t}\n\n\ttests := []struct {\n\t\tname             string\n\t\tprovider         *wifAttestationProvider\n\t\tidentityProvider string\n\t\texpectedResult   *wifAttestation\n\t\texpectedError    error\n\t}{\n\t\t{\n\t\t\tname:             \"AWS success\",\n\t\t\tprovider:         provider,\n\t\t\tidentityProvider: 
\"AWS\",\n\t\t\texpectedResult:   &wifAttestation{ProviderType: string(awsWif)},\n\t\t\texpectedError:    nil,\n\t\t},\n\t\t{\n\t\t\tname:             \"AWS error\",\n\t\t\tprovider:         providerWithErrors,\n\t\t\tidentityProvider: \"AWS\",\n\t\t\texpectedResult:   nil,\n\t\t\texpectedError:    awsError,\n\t\t},\n\t\t{\n\t\t\tname:             \"GCP success\",\n\t\t\tprovider:         provider,\n\t\t\tidentityProvider: \"GCP\",\n\t\t\texpectedResult:   &wifAttestation{ProviderType: string(gcpWif)},\n\t\t\texpectedError:    nil,\n\t\t},\n\t\t{\n\t\t\tname:             \"GCP error\",\n\t\t\tprovider:         providerWithErrors,\n\t\t\tidentityProvider: \"GCP\",\n\t\t\texpectedResult:   nil,\n\t\t\texpectedError:    gcpError,\n\t\t},\n\t\t{\n\t\t\tname:             \"AZURE success\",\n\t\t\tprovider:         provider,\n\t\t\tidentityProvider: \"AZURE\",\n\t\t\texpectedResult:   &wifAttestation{ProviderType: string(azureWif)},\n\t\t\texpectedError:    nil,\n\t\t},\n\t\t{\n\t\t\tname:             \"AZURE error\",\n\t\t\tprovider:         providerWithErrors,\n\t\t\tidentityProvider: \"AZURE\",\n\t\t\texpectedResult:   nil,\n\t\t\texpectedError:    azureError,\n\t\t},\n\t\t{\n\t\t\tname:             \"OIDC success\",\n\t\t\tprovider:         provider,\n\t\t\tidentityProvider: \"OIDC\",\n\t\t\texpectedResult:   &wifAttestation{ProviderType: string(oidcWif)},\n\t\t\texpectedError:    nil,\n\t\t},\n\t\t{\n\t\t\tname:             \"OIDC error\",\n\t\t\tprovider:         providerWithErrors,\n\t\t\tidentityProvider: \"OIDC\",\n\t\t\texpectedResult:   nil,\n\t\t\texpectedError:    oidcError,\n\t\t},\n\t\t{\n\t\t\tname:             \"Unknown provider\",\n\t\t\tprovider:         provider,\n\t\t\tidentityProvider: \"UNKNOWN\",\n\t\t\texpectedResult:   nil,\n\t\t\texpectedError:    errors.New(\"unknown WorkloadIdentityProvider specified: UNKNOWN. 
Valid values are: AWS, GCP, AZURE, OIDC\"),\n\t\t},\n\t\t{\n\t\t\tname:             \"Empty provider\",\n\t\t\tprovider:         provider,\n\t\t\tidentityProvider: \"\",\n\t\t\texpectedResult:   nil,\n\t\t\texpectedError:    errors.New(\"unknown WorkloadIdentityProvider specified: . Valid values are: AWS, GCP, AZURE, OIDC\"),\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tattestation, err := test.provider.getAttestation(test.identityProvider)\n\t\t\tif test.expectedError != nil {\n\t\t\t\tassertNilE(t, attestation)\n\t\t\t\tassertNotNilF(t, err)\n\t\t\t\tassertEqualE(t, test.expectedError.Error(), err.Error())\n\t\t\t} else if test.expectedResult != nil {\n\t\t\t\tassertNilE(t, err)\n\t\t\t\tassertNotNilF(t, attestation)\n\t\t\t\tassertEqualE(t, test.expectedResult.ProviderType, attestation.ProviderType)\n\t\t\t} else {\n\t\t\t\tt.Fatal(\"test case must specify either expectedError or expectedResult\")\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestAwsIdentityAttestationCreator(t *testing.T) {\n\ttests := []struct {\n\t\tname             string\n\t\tconfig           Config\n\t\tattestationSvc   awsAttestationMetadataProvider\n\t\texpectedError    error\n\t\texpectedProvider string\n\t\texpectedStsHost  string\n\t}{\n\t\t{\n\t\t\tname:           \"No attestation service\",\n\t\t\tattestationSvc: nil,\n\t\t\texpectedError:  fmt.Errorf(\"AWS attestation service could not be created\"),\n\t\t},\n\t\t{\n\t\t\tname: \"No AWS credentials\",\n\t\t\tattestationSvc: &mockAwsAttestationMetadataProvider{\n\t\t\t\tcreds:  aws.Credentials{},\n\t\t\t\tregion: \"us-west-2\",\n\t\t\t},\n\t\t\texpectedError: fmt.Errorf(\"no AWS credentials were found\"),\n\t\t},\n\t\t{\n\t\t\tname: \"No AWS region\",\n\t\t\tattestationSvc: &mockAwsAttestationMetadataProvider{\n\t\t\t\tcreds:  mockCreds,\n\t\t\t\tregion: \"\",\n\t\t\t},\n\t\t\texpectedError: fmt.Errorf(\"no AWS region was found\"),\n\t\t},\n\t\t{\n\t\t\tname: \"Successful 
attestation\",\n\t\t\tattestationSvc: &mockAwsAttestationMetadataProvider{\n\t\t\t\tcreds:  mockCreds,\n\t\t\t\tregion: \"us-west-2\",\n\t\t\t},\n\t\t\texpectedProvider: \"AWS\",\n\t\t\texpectedStsHost:  \"sts.us-west-2.amazonaws.com\",\n\t\t},\n\t\t{\n\t\t\tname: \"Successful attestation for CN region\",\n\t\t\tattestationSvc: &mockAwsAttestationMetadataProvider{\n\t\t\t\tcreds:  mockCreds,\n\t\t\t\tregion: \"cn-northwest-1\",\n\t\t\t},\n\t\t\texpectedProvider: \"AWS\",\n\t\t\texpectedStsHost:  \"sts.cn-northwest-1.amazonaws.com.cn\",\n\t\t},\n\t\t{\n\t\t\tname: \"Successful attestation with single role chaining\",\n\t\t\tconfig: Config{\n\t\t\t\tWorkloadIdentityImpersonationPath: []string{\"arn:aws:iam::123456789012:role/test-role\"},\n\t\t\t},\n\t\t\tattestationSvc: &mockAwsAttestationMetadataProvider{\n\t\t\t\tcreds: mockCreds,\n\t\t\t\tchainingCreds: aws.Credentials{\n\t\t\t\t\tAccessKeyID:     \"chainedAccessKey\",\n\t\t\t\t\tSecretAccessKey: \"chainedSecretKey\",\n\t\t\t\t\tSessionToken:    \"chainedSessionToken\",\n\t\t\t\t},\n\t\t\t\tregion:          \"us-east-1\",\n\t\t\t\tuseRoleChaining: true,\n\t\t\t},\n\t\t\texpectedProvider: \"AWS\",\n\t\t\texpectedStsHost:  \"sts.us-east-1.amazonaws.com\",\n\t\t},\n\t\t{\n\t\t\tname: \"Successful attestation with multiple role chaining\",\n\t\t\tconfig: Config{\n\t\t\t\tWorkloadIdentityImpersonationPath: []string{\n\t\t\t\t\t\"arn:aws:iam::123456789012:role/role1\",\n\t\t\t\t\t\"arn:aws:iam::123456789012:role/role2\",\n\t\t\t\t\t\"arn:aws:iam::123456789012:role/role3\",\n\t\t\t\t},\n\t\t\t},\n\t\t\tattestationSvc: &mockAwsAttestationMetadataProvider{\n\t\t\t\tcreds: mockCreds,\n\t\t\t\tchainingCreds: aws.Credentials{\n\t\t\t\t\tAccessKeyID:     \"finalRoleAccessKey\",\n\t\t\t\t\tSecretAccessKey: \"finalRoleSecretKey\",\n\t\t\t\t\tSessionToken:    \"finalRoleSessionToken\",\n\t\t\t\t},\n\t\t\t\tregion:          \"us-west-2\",\n\t\t\t\tuseRoleChaining: true,\n\t\t\t},\n\t\t\texpectedProvider: 
\"AWS\",\n\t\t\texpectedStsHost:  \"sts.us-west-2.amazonaws.com\",\n\t\t},\n\t\t{\n\t\t\tname: \"Role chaining with no credentials\",\n\t\t\tconfig: Config{\n\t\t\t\tWorkloadIdentityImpersonationPath: []string{\"arn:aws:iam::123456789012:role/test-role\"},\n\t\t\t},\n\t\t\tattestationSvc: &mockAwsAttestationMetadataProvider{\n\t\t\t\tcreds:           aws.Credentials{},\n\t\t\t\tregion:          \"us-west-2\",\n\t\t\t\tuseRoleChaining: true,\n\t\t\t},\n\t\t\texpectedError: fmt.Errorf(\"no AWS credentials were found\"),\n\t\t},\n\t\t{\n\t\t\tname: \"Role chaining with no region\",\n\t\t\tconfig: Config{\n\t\t\t\tWorkloadIdentityImpersonationPath: []string{\"arn:aws:iam::123456789012:role/test-role\"},\n\t\t\t},\n\t\t\tattestationSvc: &mockAwsAttestationMetadataProvider{\n\t\t\t\tcreds: aws.Credentials{\n\t\t\t\t\tAccessKeyID:     \"chainedAccessKey\",\n\t\t\t\t\tSecretAccessKey: \"chainedSecretKey\",\n\t\t\t\t\tSessionToken:    \"chainedSessionToken\",\n\t\t\t\t},\n\t\t\t\tregion:          \"\",\n\t\t\t\tuseRoleChaining: true,\n\t\t\t},\n\t\t\texpectedError: fmt.Errorf(\"no AWS region was found\"),\n\t\t},\n\t\t{\n\t\t\tname: \"Role chaining failure\",\n\t\t\tconfig: Config{\n\t\t\t\tWorkloadIdentityImpersonationPath: []string{\"arn:aws:iam::123456789012:role/test-role\"},\n\t\t\t},\n\t\t\tattestationSvc: &mockAwsAttestationMetadataProvider{\n\t\t\t\tcreds:           mockCreds,\n\t\t\t\tregion:          \"us-west-2\",\n\t\t\t\tchainingError:   fmt.Errorf(\"failed to assume role: AccessDenied\"),\n\t\t\t\tuseRoleChaining: true,\n\t\t\t},\n\t\t\texpectedError: fmt.Errorf(\"failed to assume role: AccessDenied\"),\n\t\t},\n\t\t{\n\t\t\tname: \"Cross-account role chaining\",\n\t\t\tconfig: Config{\n\t\t\t\tWorkloadIdentityImpersonationPath: []string{\n\t\t\t\t\t\"arn:aws:iam::111111111111:role/cross-account-role\",\n\t\t\t\t\t\"arn:aws:iam::222222222222:role/target-role\",\n\t\t\t\t},\n\t\t\t},\n\t\t\tattestationSvc: &mockAwsAttestationMetadataProvider{\n\t\t\t\tcreds: 
mockCreds,\n\t\t\t\tchainingCreds: aws.Credentials{\n\t\t\t\t\tAccessKeyID:     \"crossAccountAccessKey\",\n\t\t\t\t\tSecretAccessKey: \"crossAccountSecretKey\",\n\t\t\t\t\tSessionToken:    \"crossAccountSessionToken\",\n\t\t\t\t},\n\t\t\t\tregion:          \"us-east-1\",\n\t\t\t\tuseRoleChaining: true,\n\t\t\t},\n\t\t\texpectedProvider: \"AWS\",\n\t\t\texpectedStsHost:  \"sts.us-east-1.amazonaws.com\",\n\t\t},\n\t\t{\n\t\t\tname: \"Role chaining in CN region\",\n\t\t\tconfig: Config{\n\t\t\t\tWorkloadIdentityImpersonationPath: []string{\"arn:aws-cn:iam::123456789012:role/cn-role\"},\n\t\t\t},\n\t\t\tattestationSvc: &mockAwsAttestationMetadataProvider{\n\t\t\t\tcreds: mockCreds,\n\t\t\t\tchainingCreds: aws.Credentials{\n\t\t\t\t\tAccessKeyID:     \"cnRoleAccessKey\",\n\t\t\t\t\tSecretAccessKey: \"cnRoleSecretKey\",\n\t\t\t\t\tSessionToken:    \"cnRoleSessionToken\",\n\t\t\t\t},\n\t\t\t\tregion:          \"cn-north-1\",\n\t\t\t\tuseRoleChaining: true,\n\t\t\t},\n\t\t\texpectedProvider: \"AWS\",\n\t\t\texpectedStsHost:  \"sts.cn-north-1.amazonaws.com.cn\",\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tcreator := &awsIdentityAttestationCreator{\n\t\t\t\tattestationServiceFactory: func(ctx context.Context, cfg *Config) awsAttestationMetadataProvider {\n\t\t\t\t\treturn test.attestationSvc\n\t\t\t\t},\n\t\t\t\tctx: context.Background(),\n\t\t\t\tcfg: &test.config,\n\t\t\t}\n\t\t\tattestation, err := creator.createAttestation()\n\t\t\tif test.expectedError != nil {\n\t\t\t\tassertNilF(t, attestation)\n\t\t\t\tassertNotNilE(t, err)\n\t\t\t\tassertEqualE(t, test.expectedError.Error(), err.Error())\n\t\t\t} else {\n\t\t\t\tassertNilE(t, err)\n\t\t\t\tassertNotNilE(t, attestation)\n\t\t\t\tassertNotNilE(t, attestation.Credential)\n\t\t\t\tassertEqualE(t, test.expectedProvider, attestation.ProviderType)\n\t\t\t\tdecoded, err := base64.StdEncoding.DecodeString(attestation.Credential)
\n\t\t\t\tassertNilF(t, err, \"Failed to decode credential\")\n\t\t\t\tvar credentialMap map[string]any\n\t\t\t\terr = json.Unmarshal(decoded, &credentialMap)\n\t\t\t\tassertNilF(t, err, \"Failed to unmarshal credential JSON\")\n\t\t\t\tassertEqualE(t, fmt.Sprintf(\"https://%s?Action=GetCallerIdentity&Version=2011-06-15\", test.expectedStsHost), credentialMap[\"url\"])\n\t\t\t}\n\t\t})\n\t}\n}\n\ntype mockAwsAttestationMetadataProvider struct {\n\tcreds           aws.Credentials\n\tregion          string\n\tchainingCreds   aws.Credentials\n\tchainingError   error\n\tuseRoleChaining bool\n}\n\nvar mockCreds = aws.Credentials{\n\tAccessKeyID:     \"mockAccessKey\",\n\tSecretAccessKey: \"mockSecretKey\",\n\tSessionToken:    \"mockSessionToken\",\n}\n\nfunc (m *mockAwsAttestationMetadataProvider) awsCredentials() (aws.Credentials, error) {\n\treturn m.creds, nil\n}\n\nfunc (m *mockAwsAttestationMetadataProvider) awsCredentialsViaRoleChaining() (aws.Credentials, error) {\n\tif m.chainingError != nil {\n\t\treturn aws.Credentials{}, m.chainingError\n\t}\n\tif m.chainingCreds.AccessKeyID != \"\" {\n\t\treturn m.chainingCreds, nil\n\t}\n\treturn m.creds, nil\n}\n\nfunc (m *mockAwsAttestationMetadataProvider) awsRegion() string {\n\treturn m.region\n}\n\nfunc TestGcpIdentityAttestationCreator(t *testing.T) {\n\ttests := []struct {\n\t\tname                string\n\t\twiremockMappingPath string\n\t\tconfig              Config\n\t\texpectedError       error\n\t\texpectedSub         string\n\t}{\n\t\t{\n\t\t\tname:                \"Successful flow\",\n\t\t\twiremockMappingPath: \"auth/wif/gcp/successful_flow.json\",\n\t\t\texpectedError:       nil,\n\t\t\texpectedSub:         \"some-subject\",\n\t\t},\n\t\t{\n\t\t\tname:                \"Successful impersonation flow\",\n\t\t\twiremockMappingPath: \"auth/wif/gcp/successful_impersionation_flow.json\",\n\t\t\tconfig: Config{\n\t\t\t\tWorkloadIdentityImpersonationPath: 
[]string{\n\t\t\t\t\t\"delegate1\",\n\t\t\t\t\t\"delegate2\",\n\t\t\t\t\t\"targetServiceAccount\",\n\t\t\t\t},\n\t\t\t},\n\t\t\texpectedError: nil,\n\t\t\texpectedSub:   \"some-impersonated-subject\",\n\t\t},\n\t\t{\n\t\t\tname:                \"No GCP credential - http error\",\n\t\t\twiremockMappingPath: \"auth/wif/gcp/http_error.json\",\n\t\t\texpectedError:       fmt.Errorf(\"no GCP token was found\"),\n\t\t\texpectedSub:         \"\",\n\t\t},\n\t\t{\n\t\t\tname:                \"missing issuer claim\",\n\t\t\twiremockMappingPath: \"auth/wif/gcp/missing_issuer_claim.json\",\n\t\t\texpectedError:       fmt.Errorf(\"could not extract claims from token: missing issuer claim in JWT token\"),\n\t\t\texpectedSub:         \"\",\n\t\t},\n\t\t{\n\t\t\tname:                \"missing sub claim\",\n\t\t\twiremockMappingPath: \"auth/wif/gcp/missing_sub_claim.json\",\n\t\t\texpectedError:       fmt.Errorf(\"could not extract claims from token: missing sub claim in JWT token\"),\n\t\t\texpectedSub:         \"\",\n\t\t},\n\t\t{\n\t\t\tname:                \"unparsable token\",\n\t\t\twiremockMappingPath: \"auth/wif/gcp/unparsable_token.json\",\n\t\t\texpectedError:       fmt.Errorf(\"could not extract claims from token: unable to extract JWT claims from token: token is malformed: token contains an invalid number of segments\"),\n\t\t\texpectedSub:         \"\",\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tcreator := &gcpIdentityAttestationCreator{\n\t\t\t\tcfg:                    &test.config,\n\t\t\t\tmetadataServiceBaseURL: wiremock.baseURL(),\n\t\t\t\tiamCredentialsURL:      wiremock.baseURL(),\n\t\t\t}\n\t\t\twiremock.registerMappings(t, wiremockMapping{filePath: test.wiremockMappingPath})\n\t\t\tattestation, err := creator.createAttestation()\n\n\t\t\tif test.expectedError != nil {\n\t\t\t\tassertNilF(t, attestation)\n\t\t\t\tassertNotNilF(t, err)\n\t\t\t\tassertEqualE(t, test.expectedError.Error(), err.Error())\n\t\t\t} 
else {\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertNotNilF(t, attestation)\n\t\t\t\tassertEqualE(t, string(gcpWif), attestation.ProviderType)\n\t\t\t\tassertEqualE(t, test.expectedSub, attestation.Metadata[\"sub\"])\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestOidcIdentityAttestationCreator(t *testing.T) {\n\tconst (\n\t\t/*\n\t\t * {\n\t\t *   \"sub\": \"some-subject\",\n\t\t *   \"iat\": 1743761213,\n\t\t *   \"exp\": 1743764813,\n\t\t *   \"aud\": \"www.example.com\"\n\t\t * }\n\t\t */\n\t\tmissingIssuerClaimToken = \"eyJ0eXAiOiJhdCtqd3QiLCJhbGciOiJFUzI1NiIsImtpZCI6ImU2M2I5NzA1OTRiY2NmZTAxMDlkOTg4OWM2MDk3OWEwIn0.eyJzdWIiOiJzb21lLXN1YmplY3QiLCJpYXQiOjE3NDM3NjEyMTMsImV4cCI6MTc0Mzc2NDgxMywiYXVkIjoid3d3LmV4YW1wbGUuY29tIn0.H6sN6kjA82EuijFcv-yCJTqau5qvVTCsk0ZQ4gvFQMkB7c71XPs4lkwTa7ZlNNlx9e6TpN1CVGnpCIRDDAZaDw\" // pragma: allowlist secret\n\t\t/*\n\t\t * {\n\t\t *   \"iss\": \"https://accounts.google.com\",\n\t\t *   \"iat\": 1743761213,\n\t\t *   \"exp\": 1743764813,\n\t\t *   \"aud\": \"www.example.com\"\n\t\t * }\n\t\t */\n\t\tmissingSubClaimToken = \"eyJ0eXAiOiJhdCtqd3QiLCJhbGciOiJFUzI1NiIsImtpZCI6ImU2M2I5NzA1OTRiY2NmZTAxMDlkOTg4OWM2MDk3OWEwIn0.eyJpc3MiOiJodHRwczovL2FjY291bnRzLmdvb2dsZS5jb20iLCJpYXQiOjE3NDM3NjEyMTMsImV4cCI6MTc0Mzc2NDgxMywiYXVkIjoid3d3LmV4YW1wbGUuY29tIn0.w0njdpfWFETVK8Ktq9GdvuKRQJjvhOplcSyvQ_zHHwBUSMapqO1bjEWBx5VhGkdECZIGS1VY7db_IOqT45yOMA\" // pragma: allowlist secret\n\t\t/*\n\t\t * {\n\t\t *     \"iss\": \"https://oidc.eks.us-east-2.amazonaws.com/id/3B869BC5D12CEB5515358621D8085D58\",\n\t\t *     \"iat\": 1743692017,\n\t\t *     \"exp\": 1775228014,\n\t\t *     \"aud\": \"www.example.com\",\n\t\t *     \"sub\": \"system:serviceaccount:poc-namespace:oidc-sa\"\n\t\t * }\n\t\t */\n\t\tvalidToken      = 
\"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwczovL29pZGMuZWtzLnVzLWVhc3QtMi5hbWF6b25hd3MuY29tL2lkLzNCODY5QkM1RDEyQ0VCNTUxNTM1ODYyMUQ4MDg1RDU4IiwiaWF0IjoxNzQ0Mjg3ODc4LCJleHAiOjE3NzU4MjM4NzgsImF1ZCI6Ind3dy5leGFtcGxlLmNvbSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpwb2MtbmFtZXNwYWNlOm9pZGMtc2EifQ.a8H6KRIF1XmM8lkqL6kR8ccInr7wAzQrbKd3ZHFgiEg\" // pragma: allowlist secret\n\t\tunparsableToken = \"unparsable_token\"\n\t\temptyToken      = \"\"\n\t)\n\n\ttype testCase struct {\n\t\tname          string\n\t\ttoken         string\n\t\texpectedError error\n\t\texpectedSub   string\n\t}\n\n\ttests := []testCase{\n\t\t{\n\t\t\tname:          \"no token input\",\n\t\t\ttoken:         emptyToken,\n\t\t\texpectedError: fmt.Errorf(\"no OIDC token was specified\"),\n\t\t},\n\t\t{\n\t\t\tname:          \"valid token returns proper attestation\",\n\t\t\ttoken:         validToken,\n\t\t\texpectedError: nil,\n\t\t\texpectedSub:   \"system:serviceaccount:poc-namespace:oidc-sa\",\n\t\t},\n\t\t{\n\t\t\tname:          \"missing issuer returns error\",\n\t\t\ttoken:         missingIssuerClaimToken,\n\t\t\texpectedError: errors.New(\"missing issuer claim in JWT token\"),\n\t\t},\n\t\t{\n\t\t\tname:          \"missing sub returns error\",\n\t\t\ttoken:         missingSubClaimToken,\n\t\t\texpectedError: errors.New(\"missing sub claim in JWT token\"),\n\t\t},\n\t\t{\n\t\t\tname:          \"unparsable token returns error\",\n\t\t\ttoken:         unparsableToken,\n\t\t\texpectedError: errors.New(\"unable to extract JWT claims from token: token is malformed: token contains an invalid number of segments\"),\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tcreator := &oidcIdentityAttestationCreator{token: func() (string, error) {\n\t\t\t\treturn test.token, nil\n\t\t\t}}\n\t\t\tattestation, err := creator.createAttestation()\n\n\t\t\tif test.expectedError != nil {\n\t\t\t\tassertNotNilE(t, err)\n\t\t\t\tassertNilF(t, 
attestation)\n\t\t\t\tassertEqualE(t, test.expectedError.Error(), err.Error())\n\t\t\t} else {\n\t\t\t\tassertNilE(t, err)\n\t\t\t\tassertNotNilE(t, attestation)\n\t\t\t\tassertEqualE(t, string(oidcWif), attestation.ProviderType)\n\t\t\t\tassertEqualE(t, test.expectedSub, attestation.Metadata[\"sub\"])\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestAzureIdentityAttestationCreator(t *testing.T) {\n\ttests := []struct {\n\t\tname                string\n\t\twiremockMappingPath string\n\t\tmetadataProvider    *mockAzureAttestationMetadataProvider\n\t\tcfg                 *Config\n\t\texpectedIss         string\n\t\texpectedError       error\n\t}{\n\t\t/*\n\t\t * {\n\t\t *     \"iss\": \"https://sts.windows.net/fa15d692-e9c7-4460-a743-29f29522229/\",\n\t\t *     \"sub\": \"77213E30-E8CB-4595-B1B6-5F050E8308FD\"\n\t\t * }\n\t\t */\n\t\t{\n\t\t\tname:                \"Successful flow\",\n\t\t\twiremockMappingPath: \"auth/wif/azure/successful_flow_basic.json\",\n\t\t\tmetadataProvider:    azureVMMetadataProvider(),\n\t\t\texpectedIss:         \"https://sts.windows.net/fa15d692-e9c7-4460-a743-29f29522229/\",\n\t\t\texpectedError:       nil,\n\t\t},\n\t\t/*\n\t\t * {\n\t\t *     \"iss\": \"https://login.microsoftonline.com/fa15d692-e9c7-4460-a743-29f29522229/\",\n\t\t *     \"sub\": \"77213E30-E8CB-4595-B1B6-5F050E8308FD\"\n\t\t * }\n\t\t */\n\t\t{\n\t\t\tname:                \"Successful flow v2 issuer\",\n\t\t\twiremockMappingPath: \"auth/wif/azure/successful_flow_v2_issuer.json\",\n\t\t\tmetadataProvider:    azureVMMetadataProvider(),\n\t\t\texpectedIss:         \"https://login.microsoftonline.com/fa15d692-e9c7-4460-a743-29f29522229/\",\n\t\t\texpectedError:       nil,\n\t\t},\n\t\t/*\n\t\t * {\n\t\t *     \"iss\": \"https://sts.windows.net/fa15d692-e9c7-4460-a743-29f29522229/\",\n\t\t *     \"sub\": \"77213E30-E8CB-4595-B1B6-5F050E8308FD\"\n\t\t * }\n\t\t */\n\t\t{\n\t\t\tname:                \"Successful flow azure functions\",\n\t\t\twiremockMappingPath: 
\"auth/wif/azure/successful_flow_azure_functions.json\",\n\t\t\tmetadataProvider:    azureFunctionsMetadataProvider(),\n\t\t\texpectedIss:         \"https://sts.windows.net/fa15d692-e9c7-4460-a743-29f29522229/\",\n\t\t\texpectedError:       nil,\n\t\t},\n\t\t/*\n\t\t * {\n\t\t *     \"iss\": \"https://login.microsoftonline.com/fa15d692-e9c7-4460-a743-29f29522229/\",\n\t\t *     \"sub\": \"77213E30-E8CB-4595-B1B6-5F050E8308FD\"\n\t\t * }\n\t\t */\n\t\t{\n\t\t\tname:                \"Successful flow azure functions v2 issuer\",\n\t\t\twiremockMappingPath: \"auth/wif/azure/successful_flow_azure_functions_v2_issuer.json\",\n\t\t\tmetadataProvider:    azureFunctionsMetadataProvider(),\n\t\t\texpectedIss:         \"https://login.microsoftonline.com/fa15d692-e9c7-4460-a743-29f29522229/\",\n\t\t\texpectedError:       nil,\n\t\t},\n\t\t/*\n\t\t * {\n\t\t *     \"iss\": \"https://sts.windows.net/fa15d692-e9c7-4460-a743-29f29522229/\",\n\t\t *     \"sub\": \"77213E30-E8CB-4595-B1B6-5F050E8308FD\"\n\t\t * }\n\t\t */\n\t\t{\n\t\t\tname:                \"Successful flow azure functions no client ID\",\n\t\t\twiremockMappingPath: \"auth/wif/azure/successful_flow_azure_functions_no_client_id.json\",\n\t\t\tmetadataProvider: &mockAzureAttestationMetadataProvider{\n\t\t\t\tidentityEndpointValue: wiremock.baseURL() + \"/metadata/identity/endpoint/from/env\",\n\t\t\t\tidentityHeaderValue:   \"some-identity-header-from-env\",\n\t\t\t\tclientIDValue:         \"\",\n\t\t\t},\n\t\t\texpectedIss:   \"https://sts.windows.net/fa15d692-e9c7-4460-a743-29f29522229/\",\n\t\t\texpectedError: nil,\n\t\t},\n\t\t/*\n\t\t * {\n\t\t *     \"iss\": \"https://sts.windows.net/fa15d692-e9c7-4460-a743-29f29522229/\",\n\t\t *     \"sub\": \"77213E30-E8CB-4595-B1B6-5F050E8308FD\"\n\t\t * }\n\t\t */\n\t\t{\n\t\t\tname:                \"Successful flow azure functions custom Entra resource\",\n\t\t\twiremockMappingPath: 
\"auth/wif/azure/successful_flow_azure_functions_custom_entra_resource.json\",\n\t\t\tmetadataProvider:    azureFunctionsMetadataProvider(),\n\t\t\tcfg:                 &Config{WorkloadIdentityEntraResource: \"api://1111111-2222-3333-44444-55555555\"},\n\t\t\texpectedIss:         \"https://sts.windows.net/fa15d692-e9c7-4460-a743-29f29522229/\",\n\t\t\texpectedError:       nil,\n\t\t},\n\t\t{\n\t\t\tname:                \"Non-json response\",\n\t\t\twiremockMappingPath: \"auth/wif/azure/non_json_response.json\",\n\t\t\tmetadataProvider:    azureVMMetadataProvider(),\n\t\t\texpectedError:       fmt.Errorf(\"failed to extract token from JSON: invalid character 'o' in literal null (expecting 'u')\"),\n\t\t},\n\t\t{\n\t\t\tname: \"Identity endpoint but no identity header\",\n\t\t\tmetadataProvider: &mockAzureAttestationMetadataProvider{\n\t\t\t\tidentityEndpointValue: wiremock.baseURL() + \"/metadata/identity/endpoint/from/env\",\n\t\t\t\tidentityHeaderValue:   \"\",\n\t\t\t\tclientIDValue:         \"managed-client-id-from-env\",\n\t\t\t},\n\t\t\texpectedError: fmt.Errorf(\"managed identity is not enabled on this Azure function\"),\n\t\t},\n\t\t{\n\t\t\tname:                \"Unparsable token\",\n\t\t\twiremockMappingPath: \"auth/wif/azure/unparsable_token.json\",\n\t\t\tmetadataProvider:    azureVMMetadataProvider(),\n\t\t\texpectedError:       fmt.Errorf(\"failed to extract sub and iss claims from token: unable to extract JWT claims from token: token is malformed: token contains an invalid number of segments\"),\n\t\t},\n\t\t{\n\t\t\tname:                \"HTTP error\",\n\t\t\tmetadataProvider:    azureVMMetadataProvider(),\n\t\t\twiremockMappingPath: \"auth/wif/azure/http_error.json\",\n\t\t\texpectedError:       fmt.Errorf(\"could not fetch Azure token\"),\n\t\t},\n\t\t{\n\t\t\tname:                \"Missing sub or iss claim\",\n\t\t\twiremockMappingPath: \"auth/wif/azure/missing_issuer_claim.json\",\n\t\t\tmetadataProvider:    
azureVMMetadataProvider(),\n\t\t\texpectedError:       fmt.Errorf(\"failed to extract sub and iss claims from token: missing issuer claim in JWT token\"),\n\t\t},\n\t\t{\n\t\t\tname:                \"Missing sub claim\",\n\t\t\twiremockMappingPath: \"auth/wif/azure/missing_sub_claim.json\",\n\t\t\tmetadataProvider:    azureVMMetadataProvider(),\n\t\t\texpectedError:       fmt.Errorf(\"failed to extract sub and iss claims from token: missing sub claim in JWT token\"),\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tif test.wiremockMappingPath != \"\" {\n\t\t\t\twiremock.registerMappings(t, wiremockMapping{filePath: test.wiremockMappingPath})\n\t\t\t}\n\t\t\tcreator := &azureIdentityAttestationCreator{\n\t\t\t\tcfg:                              test.cfg,\n\t\t\t\tazureMetadataServiceBaseURL:      wiremock.baseURL(),\n\t\t\t\tazureAttestationMetadataProvider: test.metadataProvider,\n\t\t\t\tworkloadIdentityEntraResource:    determineEntraResource(test.cfg),\n\t\t\t}\n\t\t\tattestation, err := creator.createAttestation()\n\n\t\t\tif test.expectedError != nil {\n\t\t\t\tassertNilF(t, attestation)\n\t\t\t\tassertNotNilE(t, err)\n\t\t\t\tassertEqualE(t, test.expectedError.Error(), err.Error())\n\t\t\t} else {\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertNotNilF(t, attestation)\n\t\t\t\tassertEqualE(t, string(azureWif), attestation.ProviderType)\n\t\t\t\tassertEqualE(t, test.expectedIss, attestation.Metadata[\"iss\"])\n\t\t\t\tassertEqualE(t, \"77213E30-E8CB-4595-B1B6-5F050E8308FD\", attestation.Metadata[\"sub\"])\n\t\t\t}\n\t\t})\n\t}\n}\n\ntype mockAzureAttestationMetadataProvider struct {\n\tidentityEndpointValue string\n\tidentityHeaderValue   string\n\tclientIDValue         string\n}\n\nfunc (m *mockAzureAttestationMetadataProvider) identityEndpoint() string {\n\treturn m.identityEndpointValue\n}\n\nfunc (m *mockAzureAttestationMetadataProvider) identityHeader() string {\n\treturn m.identityHeaderValue\n}\n\nfunc (m 
*mockAzureAttestationMetadataProvider) clientID() string {\n\treturn m.clientIDValue\n}\n\nfunc azureFunctionsMetadataProvider() *mockAzureAttestationMetadataProvider {\n\treturn &mockAzureAttestationMetadataProvider{\n\t\tidentityEndpointValue: wiremock.baseURL() + \"/metadata/identity/endpoint/from/env\",\n\t\tidentityHeaderValue:   \"some-identity-header-from-env\",\n\t\tclientIDValue:         \"managed-client-id-from-env\",\n\t}\n}\n\nfunc azureVMMetadataProvider() *mockAzureAttestationMetadataProvider {\n\treturn &mockAzureAttestationMetadataProvider{\n\t\tidentityEndpointValue: \"\",\n\t\tidentityHeaderValue:   \"\",\n\t\tclientIDValue:         \"\",\n\t}\n}\n\n// Running this test locally:\n// * Push branch to repository\n// * Set PARAMETERS_SECRET\n// * Run ci/test_wif.sh\nfunc TestWorkloadIdentityAuthOnCloudVM(t *testing.T) {\n\taccount := os.Getenv(\"SNOWFLAKE_TEST_WIF_ACCOUNT\")\n\thost := os.Getenv(\"SNOWFLAKE_TEST_WIF_HOST\")\n\tprovider := os.Getenv(\"SNOWFLAKE_TEST_WIF_PROVIDER\")\n\tt.Log(\"provider = \" + provider)\n\tif account == \"\" || host == \"\" || provider == \"\" {\n\t\tt.Skip(\"Test can run only on cloud VM with env variables set\")\n\t}\n\ttestCases := []struct {\n\t\tname             string\n\t\tskip             func() (bool, string)\n\t\tsetupCfg         func(*testing.T, *Config)\n\t\texpectedUsername string\n\t}{\n\t\t{\n\t\t\tname: \"provider=\" + provider,\n\t\t\tsetupCfg: func(_ *testing.T, config *Config) {\n\t\t\t\tif provider != \"GCP+OIDC\" {\n\t\t\t\t\tconfig.WorkloadIdentityProvider = provider\n\t\t\t\t} else {\n\t\t\t\t\tconfig.WorkloadIdentityProvider = \"OIDC\"\n\t\t\t\t\tconfig.Token = func() string {\n\t\t\t\t\t\tcmd := exec.Command(\"wget\", \"-O\", \"-\", \"--header=Metadata-Flavor: Google\", \"http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/identity?audience=snowflakecomputing.com\")\n\t\t\t\t\t\toutput, err := cmd.Output()\n\t\t\t\t\t\tif err != nil {\n\t\t\t\t\t\t\tt.Fatalf(\"error 
executing GCP metadata request: %v\", err)\n\t\t\t\t\t\t}\n\t\t\t\t\t\ttoken := strings.TrimSpace(string(output))\n\t\t\t\t\t\tif token == \"\" {\n\t\t\t\t\t\t\tt.Fatal(\"failed to retrieve GCP access token: empty response\")\n\t\t\t\t\t\t}\n\t\t\t\t\t\treturn token\n\t\t\t\t\t}()\n\t\t\t\t}\n\t\t\t},\n\t\t\texpectedUsername: os.Getenv(\"SNOWFLAKE_TEST_WIF_USERNAME\"),\n\t\t},\n\t\t{\n\t\t\tname: \"provider=\" + provider + \",impersonation\",\n\t\t\tskip: func() (bool, string) {\n\t\t\t\tif provider != \"AWS\" && provider != \"GCP\" {\n\t\t\t\t\treturn true, \"Impersonation is supported only on AWS and GCP\"\n\t\t\t\t}\n\t\t\t\treturn false, \"\"\n\t\t\t},\n\t\t\tsetupCfg: func(t *testing.T, config *Config) {\n\t\t\t\tconfig.WorkloadIdentityProvider = provider\n\t\t\t\timpersonationPath := os.Getenv(\"SNOWFLAKE_TEST_WIF_IMPERSONATION_PATH\")\n\t\t\t\tassertNotEqualF(t, impersonationPath, \"\", \"SNOWFLAKE_TEST_WIF_IMPERSONATION_PATH is not set\")\n\t\t\t\tconfig.WorkloadIdentityImpersonationPath = strings.Split(impersonationPath, \",\")\n\t\t\t\tassertNotEqualF(t, os.Getenv(\"SNOWFLAKE_TEST_WIF_USERNAME_IMPERSONATION\"), \"\", \"SNOWFLAKE_TEST_WIF_USERNAME_IMPERSONATION is not set\")\n\t\t\t},\n\t\t\texpectedUsername: os.Getenv(\"SNOWFLAKE_TEST_WIF_USERNAME_IMPERSONATION\"),\n\t\t},\n\t}\n\tfor _, tc := range testCases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tif tc.skip != nil {\n\t\t\t\tif skip, msg := tc.skip(); skip {\n\t\t\t\t\tt.Skip(msg)\n\t\t\t\t}\n\t\t\t}\n\t\t\tconfig := &Config{\n\t\t\t\tAccount:       account,\n\t\t\t\tHost:          host,\n\t\t\t\tAuthenticator: AuthTypeWorkloadIdentityFederation,\n\t\t\t}\n\t\t\ttc.setupCfg(t, config)\n\t\t\tconnector := NewConnector(SnowflakeDriver{}, *config)\n\t\t\tdb := sql.OpenDB(connector)\n\t\t\tdefer db.Close()\n\t\t\tcurrentUser := runSelectCurrentUser(t, db)\n\t\t\tassertEqualE(t, currentUser, tc.expectedUsername)\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "auth_with_external_browser_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"fmt\"\n\t\"log\"\n\t\"os/exec\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc TestExternalBrowserSuccessful(t *testing.T) {\n\tcfg := setupExternalBrowserTest(t)\n\tvar wg sync.WaitGroup\n\twg.Add(2)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tprovideExternalBrowserCredentials(t, externalBrowserType.Success, cfg.User, cfg.Password)\n\t}()\n\tgo func() {\n\t\tdefer wg.Done()\n\t\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\t\tassertNilE(t, err, fmt.Sprintf(\"Connection failed due to %v\", err))\n\t}()\n\twg.Wait()\n}\n\nfunc TestExternalBrowserFailed(t *testing.T) {\n\tcfg := setupExternalBrowserTest(t)\n\tcfg.ExternalBrowserTimeout = time.Duration(10) * time.Second\n\tvar wg sync.WaitGroup\n\twg.Add(2)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tprovideExternalBrowserCredentials(t, externalBrowserType.Fail, \"FakeAccount\", \"NotARealPassword\")\n\t}()\n\tgo func() {\n\t\tdefer wg.Done()\n\t\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\t\tassertNotNilF(t, err)\n\t\tassertEqualE(t, err.Error(), \"authentication timed out\")\n\t}()\n\twg.Wait()\n}\n\nfunc TestExternalBrowserTimeout(t *testing.T) {\n\tcfg := setupExternalBrowserTest(t)\n\tcfg.ExternalBrowserTimeout = time.Duration(1) * time.Second\n\tvar wg sync.WaitGroup\n\twg.Add(2)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tprovideExternalBrowserCredentials(t, externalBrowserType.Timeout, cfg.User, cfg.Password)\n\t}()\n\tgo func() {\n\t\tdefer wg.Done()\n\t\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\t\tassertNotNilF(t, err)\n\t\tassertEqualE(t, err.Error(), \"authentication timed out\")\n\t}()\n\twg.Wait()\n}\n\nfunc TestExternalBrowserMismatchUser(t *testing.T) {\n\tcfg := setupExternalBrowserTest(t)\n\tcorrectUsername := cfg.User\n\tcfg.User = \"fakeAccount\"\n\tvar wg sync.WaitGroup\n\n\twg.Add(2)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tprovideExternalBrowserCredentials(t, 
externalBrowserType.Success, correctUsername, cfg.Password)\n\t}()\n\tgo func() {\n\t\tdefer wg.Done()\n\t\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\t\tvar snowflakeErr *SnowflakeError\n\t\tassertErrorsAsF(t, err, &snowflakeErr)\n\t\tassertEqualE(t, snowflakeErr.Number, 390191, fmt.Sprintf(\"Expected 390191, but got %v\", snowflakeErr.Number))\n\t}()\n\twg.Wait()\n}\n\nfunc TestClientStoreCredentials(t *testing.T) {\n\tcfg := setupExternalBrowserTest(t)\n\tcfg.ClientStoreTemporaryCredential = 1\n\tcfg.ExternalBrowserTimeout = time.Duration(10) * time.Second\n\n\tt.Run(\"Obtains the ID token from the server and saves it on the local storage\", func(t *testing.T) {\n\t\tcleanupBrowserProcesses(t)\n\t\tvar wg sync.WaitGroup\n\t\twg.Add(2)\n\t\tgo func() {\n\t\t\tdefer wg.Done()\n\t\t\tprovideExternalBrowserCredentials(t, externalBrowserType.Success, cfg.User, cfg.Password)\n\t\t}()\n\t\tgo func() {\n\t\t\tdefer wg.Done()\n\t\t\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\t\t\tassertNilE(t, err, fmt.Sprintf(\"Connection failed: err %v\", err))\n\t\t}()\n\t\twg.Wait()\n\t})\n\n\tt.Run(\"Verify validation of ID token if option enabled\", func(t *testing.T) {\n\t\tcleanupBrowserProcesses(t)\n\t\tcfg.ClientStoreTemporaryCredential = 1\n\t\tdb := getDbHandlerFromConfig(t, cfg)\n\t\tconn, err := db.Conn(context.Background())\n\t\tassertNilF(t, err, fmt.Sprintf(\"Failed to connect to Snowflake. err: %v\", err))\n\t\tdefer conn.Close()\n\t\trows, err := conn.QueryContext(context.Background(), \"SELECT 1\")\n\t\tassertNilF(t, err, fmt.Sprintf(\"Failed to run a query. 
err: %v\", err))\n\t\trows.Close()\n\t})\n\n\tt.Run(\"Verify validation of idToken if option disabled\", func(t *testing.T) {\n\t\tcleanupBrowserProcesses(t)\n\t\tcfg.ClientStoreTemporaryCredential = 0\n\t\tdb := getDbHandlerFromConfig(t, cfg)\n\t\t_, err := db.Conn(context.Background())\n\t\tassertNotNilF(t, err)\n\t\tassertEqualE(t, err.Error(), \"authentication timed out\", fmt.Sprintf(\"Expected timeout, but got %v\", err))\n\t})\n}\n\ntype ExternalBrowserProcessResult struct {\n\tSuccess               string\n\tFail                  string\n\tTimeout               string\n\tOauthOktaSuccess      string\n\tOauthSnowflakeSuccess string\n}\n\nvar externalBrowserType = ExternalBrowserProcessResult{\n\tSuccess:               \"success\",\n\tFail:                  \"fail\",\n\tTimeout:               \"timeout\",\n\tOauthOktaSuccess:      \"externalOauthOktaSuccess\",\n\tOauthSnowflakeSuccess: \"internalOauthSnowflakeSuccess\",\n}\n\nfunc cleanupBrowserProcesses(t *testing.T) {\n\tif isTestRunningInDockerContainer() {\n\t\tconst cleanBrowserProcessesPath = \"/externalbrowser/cleanBrowserProcesses.js\"\n\t\t_, err := exec.Command(\"node\", cleanBrowserProcessesPath).CombinedOutput()\n\t\tassertNilE(t, err, fmt.Sprintf(\"failed to execute command: %v\", err))\n\t}\n}\n\nfunc provideExternalBrowserCredentials(t *testing.T, ExternalBrowserProcess string, user string, password string) {\n\tif isTestRunningInDockerContainer() {\n\t\tconst provideBrowserCredentialsPath = \"/externalbrowser/provideBrowserCredentials.js\"\n\t\toutput, err := exec.Command(\"node\", provideBrowserCredentialsPath, ExternalBrowserProcess, user, password).CombinedOutput()\n\t\tlog.Printf(\"Output: %s\\n\", output)\n\t\tassertNilE(t, err, fmt.Sprintf(\"failed to execute command: %v\", err))\n\t}\n}\n\nfunc verifyConnectionToSnowflakeAuthTests(t *testing.T, cfg *Config) (err error) {\n\tdsn, err := DSN(cfg)\n\tassertNilE(t, err, \"failed to create DSN from Config\")\n\n\tdb, err := 
sql.Open(\"snowflake\", dsn)\n\tassertNilE(t, err, \"failed to open Snowflake DB connection\")\n\tdefer db.Close()\n\n\trows, err := db.Query(\"SELECT 1\")\n\tif err != nil {\n\t\tlog.Printf(\"failed to run query 'SELECT 1', err: %v\", err)\n\t\treturn err\n\t}\n\tdefer rows.Close()\n\tassertTrueE(t, rows.Next(), \"failed to get result\", \"there were no results for query 'SELECT 1'\")\n\n\treturn err\n}\n\nfunc setupExternalBrowserTest(t *testing.T) *Config {\n\tskipAuthTests(t, \"Skipping External Browser tests\")\n\tcleanupBrowserProcesses(t)\n\tcfg, err := getAuthTestsConfig(t, AuthTypeExternalBrowser)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get config: %v\", err))\n\treturn cfg\n}\n"
  },
  {
    "path": "auth_with_keypair_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"crypto/rsa\"\n\t\"fmt\"\n\t\"golang.org/x/crypto/ssh\"\n\t\"os\"\n\t\"testing\"\n)\n\nfunc TestKeypairSuccessful(t *testing.T) {\n\tcfg := setupKeyPairTest(t)\n\tcfg.PrivateKey = loadRsaPrivateKeyForKeyPair(t, \"SNOWFLAKE_AUTH_TEST_PRIVATE_KEY_PATH\")\n\n\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tassertNilE(t, err, fmt.Sprintf(\"failed to connect. err: %v\", err))\n}\n\nfunc TestKeypairInvalidKey(t *testing.T) {\n\tcfg := setupKeyPairTest(t)\n\tcfg.PrivateKey = loadRsaPrivateKeyForKeyPair(t, \"SNOWFLAKE_AUTH_TEST_INVALID_PRIVATE_KEY_PATH\")\n\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tvar snowflakeErr *SnowflakeError\n\tassertErrorsAsF(t, err, &snowflakeErr)\n\tassertEqualE(t, snowflakeErr.Number, 390144, fmt.Sprintf(\"Expected 390144, but got %v\", snowflakeErr.Number))\n}\n\nfunc setupKeyPairTest(t *testing.T) *Config {\n\tskipAuthTests(t, \"Skipping KeyPair tests\")\n\tcfg, err := getAuthTestsConfig(t, AuthTypeJwt)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get config: %v\", err))\n\n\treturn cfg\n}\n\nfunc loadRsaPrivateKeyForKeyPair(t *testing.T, envName string) *rsa.PrivateKey {\n\tfilePath, err := GetFromEnv(envName, true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get env: %v\", err))\n\n\tbytes, err := os.ReadFile(filePath)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to read file: %v\", err))\n\n\tkey, err := ssh.ParseRawPrivateKey(bytes)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to parse private key: %v\", err))\n\n\treturn key.(*rsa.PrivateKey)\n}\n"
  },
  {
    "path": "auth_with_mfa_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"log\"\n\t\"os/exec\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestMfaSuccessful(t *testing.T) {\n\tcfg := setupMfaTest(t)\n\n\t// Enable MFA token caching\n\tcfg.ClientRequestMfaToken = ConfigBoolTrue\n\n\t// Provide your own TOTP code/codes here, to test manually\n\t// totpKeys := []string{\"222222\", \"333333\", \"444444\"}\n\n\ttotpKeys := getTOTPCodes(t)\n\n\tverifyConnectionToSnowflakeUsingTotpCodes(t, cfg, totpKeys)\n\tlog.Printf(\"Testing MFA token caching with second connection...\")\n\n\t// Clear the passcode to force use of cached MFA token\n\tcfg.Passcode = \"\"\n\n\t// Attempt to connect using cached MFA token\n\tcacheErr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tassertNilF(t, cacheErr, \"Failed to connect with cached MFA token\")\n}\n\nfunc setupMfaTest(t *testing.T) *Config {\n\tskipAuthTests(t, \"Skipping MFA tests\")\n\tcfg, err := getAuthTestsConfig(t, AuthTypeUsernamePasswordMFA)\n\tassertNilF(t, err, \"failed to get config\")\n\n\tcfg.User, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_MFA_USER\", true)\n\tassertNilF(t, err, \"failed to get MFA user from environment\")\n\n\tcfg.Password, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_MFA_PASSWORD\", true)\n\tassertNilF(t, err, \"failed to get MFA password from environment\")\n\n\treturn cfg\n}\n\nfunc getTOTPCodes(t *testing.T) []string {\n\tif isTestRunningInDockerContainer() {\n\t\tconst provideTotpPath = \"/externalbrowser/totpGenerator.js\"\n\t\toutput, err := exec.Command(\"node\", provideTotpPath).CombinedOutput()\n\t\tassertNilF(t, err, fmt.Sprintf(\"failed to execute command: %v\", err))\n\t\ttotpCodes := strings.Fields(string(output))\n\t\treturn totpCodes\n\t}\n\treturn []string{}\n}\n\nfunc verifyConnectionToSnowflakeUsingTotpCodes(t *testing.T, cfg *Config, totpKeys []string) {\n\tif len(totpKeys) == 0 {\n\t\tt.Fatal(\"no TOTP codes provided\")\n\t}\n\n\tvar lastError error\n\n\tfor i, totpKey := range totpKeys 
{\n\t\tcfg.Passcode = totpKey\n\n\t\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\t\tif err == nil {\n\t\t\treturn\n\t\t}\n\n\t\tlastError = err\n\t\terrorMsg := err.Error()\n\n\t\tlog.Printf(\"TOTP code %d failed: %v\", i+1, errorMsg)\n\n\t\tvar snowflakeErr *SnowflakeError\n\t\tif errors.As(err, &snowflakeErr) && (snowflakeErr.Number == 394633 || snowflakeErr.Number == 394507) {\n\t\t\tlog.Printf(\"MFA error detected (%d), trying next code...\", snowflakeErr.Number)\n\t\t\tcontinue\n\t\t} else {\n\t\t\tlog.Printf(\"Non-MFA error detected: %v\", errorMsg)\n\t\t\tbreak\n\t\t}\n\t}\n\n\tassertNilF(t, lastError, \"failed to connect with any TOTP code\")\n}\n"
  },
  {
    "path": "auth_with_oauth_okta_authorization_code_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"fmt\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc TestOauthOktaAuthorizationCodeSuccessful(t *testing.T) {\n\tcfg := setupOauthOktaAuthorizationCodeTest(t)\n\tvar wg sync.WaitGroup\n\twg.Add(2)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tprovideExternalBrowserCredentials(t, externalBrowserType.OauthOktaSuccess, cfg.User, cfg.Password)\n\t}()\n\tgo func() {\n\t\tdefer wg.Done()\n\t\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\t\tassertNilE(t, err, fmt.Sprintf(\"Connection failed due to %v\", err))\n\t}()\n\twg.Wait()\n}\n\nfunc TestOauthOktaAuthorizationCodeMismatchedUsername(t *testing.T) {\n\tcfg := setupOauthOktaAuthorizationCodeTest(t)\n\tuser := cfg.User\n\n\tvar wg sync.WaitGroup\n\twg.Add(2)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tprovideExternalBrowserCredentials(t, externalBrowserType.OauthOktaSuccess, user, cfg.Password)\n\t}()\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tcfg.User = \"fakeUser@snowflake.com\"\n\t\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\t\tvar snowflakeErr *SnowflakeError\n\t\tassertErrorsAsF(t, err, &snowflakeErr)\n\t\tassertEqualE(t, snowflakeErr.Number, 390309, fmt.Sprintf(\"Expected 390309, but got %v\", snowflakeErr.Number))\n\t}()\n\twg.Wait()\n}\n\nfunc TestOauthOktaAuthorizationCodeOktaTimeout(t *testing.T) {\n\tcfg := setupOauthOktaAuthorizationCodeTest(t)\n\tcfg.ExternalBrowserTimeout = time.Duration(1) * time.Second\n\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tassertNotNilF(t, err, \"should fail due to timeout\")\n\tassertEqualE(t, err.Error(), \"authentication via browser timed out\", fmt.Sprintf(\"Expected timeout, but got %v\", err))\n}\n\nfunc TestOauthOktaAuthorizationCodeUsingTokenCache(t *testing.T) {\n\tcfg := setupOauthOktaAuthorizationCodeTest(t)\n\tcfg.ClientStoreTemporaryCredential = 1\n\tvar wg sync.WaitGroup\n\twg.Add(2)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tprovideExternalBrowserCredentials(t, 
externalBrowserType.OauthOktaSuccess, cfg.User, cfg.Password)\n\t}()\n\tgo func() {\n\t\tdefer wg.Done()\n\t\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\t\tassertNilE(t, err, fmt.Sprintf(\"Connection failed due to %v\", err))\n\t}()\n\twg.Wait()\n\n\tcleanupBrowserProcesses(t)\n\tcfg.ExternalBrowserTimeout = time.Duration(1) * time.Second\n\n\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tassertNilE(t, err, fmt.Sprintf(\"Connection failed due to %v\", err))\n}\n\nfunc setupOauthOktaAuthorizationCodeTest(t *testing.T) *Config {\n\tskipAuthTests(t, \"Skipping Okta Authorization Code tests\")\n\tcfg, err := getAuthTestsConfig(t, AuthTypeOAuthAuthorizationCode)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get config: %v\", err))\n\n\tcleanupBrowserProcesses(t)\n\n\tcfg.OauthClientID, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_EXTERNAL_OAUTH_OKTA_CLIENT_ID\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.OauthClientSecret, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_EXTERNAL_OAUTH_OKTA_CLIENT_SECRET\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.OauthRedirectURI, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_EXTERNAL_OAUTH_OKTA_REDIRECT_URI\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.OauthAuthorizationURL, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_EXTERNAL_OAUTH_OKTA_AUTH_URL\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.OauthTokenRequestURL, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_EXTERNAL_OAUTH_OKTA_TOKEN\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.Role, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_ROLE\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\treturn cfg\n\n}\n"
  },
  {
    "path": "auth_with_oauth_okta_client_credentials_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestOauthOktaClientCredentialsSuccessful(t *testing.T) {\n\tcfg := setupOauthOktaClientCredentialsTest(t)\n\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tassertNilE(t, err, fmt.Sprintf(\"failed to connect. err: %v\", err))\n}\n\nfunc TestOauthOktaClientCredentialsMismatchedUsername(t *testing.T) {\n\tcfg := setupOauthOktaClientCredentialsTest(t)\n\tcfg.User = \"invalidUser\"\n\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\n\tvar snowflakeErr *SnowflakeError\n\tassertErrorsAsF(t, err, &snowflakeErr)\n\tassertEqualE(t, snowflakeErr.Number, 390309, fmt.Sprintf(\"Expected 390309, but got %v\", snowflakeErr.Number))\n}\n\nfunc TestOauthOktaClientCredentialsUnauthorized(t *testing.T) {\n\tcfg := setupOauthOktaClientCredentialsTest(t)\n\tcfg.OauthClientID = \"invalidClientID\"\n\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tassertNotNilF(t, err, \"Expected an error but got nil\")\n\tassertTrueF(t, strings.Contains(err.Error(), \"invalid_client\"), fmt.Sprintf(\"Expected error to contain 'invalid_client', but got: %v\", err.Error()))\n}\n\nfunc setupOauthOktaClientCredentialsTest(t *testing.T) *Config {\n\tskipAuthTests(t, \"Skipping Okta Client Credentials tests\")\n\n\tcfg, err := getAuthTestsConfig(t, AuthTypeOAuthClientCredentials)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get config: %v\", err))\n\n\tcfg.OauthClientID, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_EXTERNAL_OAUTH_OKTA_CLIENT_ID\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.OauthClientSecret, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_EXTERNAL_OAUTH_OKTA_CLIENT_SECRET\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.OauthTokenRequestURL, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_EXTERNAL_OAUTH_OKTA_TOKEN\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.User, err 
= GetFromEnv(\"SNOWFLAKE_AUTH_TEST_EXTERNAL_OAUTH_OKTA_CLIENT_ID\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.Role, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_ROLE\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\treturn cfg\n}\n"
  },
  {
    "path": "auth_with_oauth_snowflake_authorization_code_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"fmt\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc TestOauthSnowflakeAuthorizationCodeSuccessful(t *testing.T) {\n\tcfg := setupOauthSnowflakeAuthorizationCodeTest(t)\n\tbrowserCfg, err := getOauthSnowflakeAuthorizationCodeTestCredentials()\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get browser config: %v\", err))\n\n\tvar wg sync.WaitGroup\n\twg.Add(2)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tprovideExternalBrowserCredentials(t, externalBrowserType.OauthSnowflakeSuccess, browserCfg.User, browserCfg.Password)\n\t}()\n\tgo func() {\n\t\tdefer wg.Done()\n\t\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\t\tassertNilE(t, err, fmt.Sprintf(\"Connection failed due to %v\", err))\n\t}()\n\twg.Wait()\n}\n\nfunc TestOauthSnowflakeAuthorizationCodeMismatchedUsername(t *testing.T) {\n\tcfg := setupOauthSnowflakeAuthorizationCodeTest(t)\n\tbrowserCfg, err := getOauthSnowflakeAuthorizationCodeTestCredentials()\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get browser config: %v\", err))\n\n\tvar wg sync.WaitGroup\n\twg.Add(2)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tprovideExternalBrowserCredentials(t, externalBrowserType.OauthSnowflakeSuccess, browserCfg.User, browserCfg.Password)\n\t}()\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tcfg.User = \"fakeUser@snowflake.com\"\n\t\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\t\tvar snowflakeErr *SnowflakeError\n\t\tassertErrorsAsF(t, err, &snowflakeErr)\n\t\tassertEqualE(t, snowflakeErr.Number, 390309, fmt.Sprintf(\"Expected 390309, but got %v\", snowflakeErr.Number))\n\t}()\n\twg.Wait()\n}\n\nfunc TestOauthSnowflakeAuthorizationCodeTimeout(t *testing.T) {\n\tcfg := setupOauthSnowflakeAuthorizationCodeTest(t)\n\tcfg.ExternalBrowserTimeout = time.Duration(1) * time.Second\n\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tassertNotNilF(t, err, \"should failed due to timeout\")\n\tassertEqualE(t, err.Error(), \"authentication via browser timed out\", 
fmt.Sprintf(\"Expecteed timeout, but got %v\", err))\n}\n\nfunc TestOauthSnowflakeAuthorizationCodeUsingTokenCache(t *testing.T) {\n\tcfg := setupOauthSnowflakeAuthorizationCodeTest(t)\n\tbrowserCfg, err := getOauthSnowflakeAuthorizationCodeTestCredentials()\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get browser config: %v\", err))\n\n\tcfg.ClientStoreTemporaryCredential = 1\n\tvar wg sync.WaitGroup\n\twg.Add(2)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tprovideExternalBrowserCredentials(t, externalBrowserType.OauthSnowflakeSuccess, browserCfg.User, browserCfg.Password)\n\t}()\n\tgo func() {\n\t\tdefer wg.Done()\n\t\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\t\tassertNilE(t, err, fmt.Sprintf(\"Connection failed due to %v\", err))\n\t}()\n\twg.Wait()\n\n\tcleanupBrowserProcesses(t)\n\tcfg.ExternalBrowserTimeout = time.Duration(1) * time.Second\n\n\terr = verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tassertNilE(t, err, fmt.Sprintf(\"Connection failed due to %v\", err))\n}\n\nfunc TestOauthSnowflakeAuthorizationCodeWithoutTokenCache(t *testing.T) {\n\tcfg := setupOauthSnowflakeAuthorizationCodeTest(t)\n\tbrowserCfg, err := getOauthSnowflakeAuthorizationCodeTestCredentials()\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get browser config: %v\", err))\n\tcfg.ClientStoreTemporaryCredential = 2\n\n\tvar wg sync.WaitGroup\n\tcfg.DisableQueryContextCache = true\n\n\twg.Add(2)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tprovideExternalBrowserCredentials(t, externalBrowserType.OauthSnowflakeSuccess, browserCfg.User, browserCfg.Password)\n\t}()\n\tgo func() {\n\t\tdefer wg.Done()\n\t\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\t\tassertNilE(t, err, fmt.Sprintf(\"Connection failed due to %v\", err))\n\t}()\n\twg.Wait()\n\n\tcleanupBrowserProcesses(t)\n\tcfg.ExternalBrowserTimeout = time.Duration(1) * time.Second\n\n\terr = verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tassertNotNilF(t, err, \"Expected an error but got nil\")\n\tassertEqualE(t, 
err.Error(), \"authentication via browser timed out\", fmt.Sprintf(\"Expecteed timeout, but got %v\", err))\n}\n\nfunc setupOauthSnowflakeAuthorizationCodeTest(t *testing.T) *Config {\n\tskipAuthTests(t, \"Skipping Snowflake Authorization Code tests\")\n\n\tcfg, err := getAuthTestsConfig(t, AuthTypeOAuthAuthorizationCode)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get config: %v\", err))\n\n\tcleanupBrowserProcesses(t)\n\n\tcfg.OauthClientID, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_INTERNAL_OAUTH_SNOWFLAKE_CLIENT_ID\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.OauthClientSecret, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_INTERNAL_OAUTH_SNOWFLAKE_CLIENT_SECRET\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.OauthRedirectURI, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_INTERNAL_OAUTH_SNOWFLAKE_REDIRECT_URI\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.User, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_EXTERNAL_OAUTH_OKTA_CLIENT_ID\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.Role, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_ROLE\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.ClientStoreTemporaryCredential = 2\n\treturn cfg\n}\n\nfunc getOauthSnowflakeAuthorizationCodeTestCredentials() (*Config, error) {\n\treturn GetConfigFromEnv([]*ConfigParam{\n\t\t{Name: \"User\", EnvName: \"SNOWFLAKE_AUTH_TEST_EXTERNAL_OAUTH_OKTA_CLIENT_ID\", FailOnMissing: true},\n\t\t{Name: \"Password\", EnvName: \"SNOWFLAKE_AUTH_TEST_EXTERNAL_OAUTH_OKTA_USER_PASSWORD\", FailOnMissing: true},\n\t})\n}\n"
  },
  {
    "path": "auth_with_oauth_snowflake_authorization_code_wildcards_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"fmt\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc TestOauthSnowflakeAuthorizationCodeWildcardsSuccessful(t *testing.T) {\n\tcfg := setupOauthSnowflakeAuthorizationCodeWildcardsTest(t)\n\tbrowserCfg, err := getOauthSnowflakeAuthorizationCodeTestCredentials()\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get browser config: %v\", err))\n\n\tvar wg sync.WaitGroup\n\twg.Add(2)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tprovideExternalBrowserCredentials(t, externalBrowserType.OauthSnowflakeSuccess, browserCfg.User, browserCfg.Password)\n\t}()\n\tgo func() {\n\t\tdefer wg.Done()\n\t\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\t\tassertNilE(t, err, fmt.Sprintf(\"Connection failed due to %v\", err))\n\t}()\n\twg.Wait()\n}\n\nfunc TestOauthSnowflakeAuthorizationCodeWildcardsMismatchedUsername(t *testing.T) {\n\tcfg := setupOauthSnowflakeAuthorizationCodeWildcardsTest(t)\n\tbrowserCfg, err := getOauthSnowflakeAuthorizationCodeTestCredentials()\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get browser config: %v\", err))\n\n\tvar wg sync.WaitGroup\n\twg.Add(2)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tprovideExternalBrowserCredentials(t, externalBrowserType.OauthSnowflakeSuccess, browserCfg.User, browserCfg.Password)\n\t}()\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tcfg.User = \"fakeUser@snowflake.com\"\n\t\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\t\tvar snowflakeErr *SnowflakeError\n\t\tassertErrorsAsF(t, err, &snowflakeErr)\n\t\tassertEqualE(t, snowflakeErr.Number, 390309, fmt.Sprintf(\"Expected 390309, but got %v\", snowflakeErr.Number))\n\t}()\n\twg.Wait()\n}\n\nfunc TestOauthSnowflakeAuthorizationWildcardsCodeTimeout(t *testing.T) {\n\tcfg := setupOauthSnowflakeAuthorizationCodeWildcardsTest(t)\n\tcfg.ExternalBrowserTimeout = time.Duration(1) * time.Second\n\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tassertNotNilF(t, err, \"should failed due to timeout\")\n\tassertEqualE(t, 
err.Error(), \"authentication via browser timed out\", fmt.Sprintf(\"Expecteed timeout, but got %v\", err))\n}\n\nfunc TestOauthSnowflakeAuthorizationCodeWildcardsWithoutTokenCache(t *testing.T) {\n\tcfg := setupOauthSnowflakeAuthorizationCodeWildcardsTest(t)\n\tbrowserCfg, err := getOauthSnowflakeAuthorizationCodeTestCredentials()\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get browser config: %v\", err))\n\tcfg.ClientStoreTemporaryCredential = 2\n\n\tvar wg sync.WaitGroup\n\tcfg.DisableQueryContextCache = true\n\n\twg.Add(2)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tprovideExternalBrowserCredentials(t, externalBrowserType.OauthSnowflakeSuccess, browserCfg.User, browserCfg.Password)\n\t}()\n\tgo func() {\n\t\tdefer wg.Done()\n\t\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\t\tassertNilE(t, err, fmt.Sprintf(\"Connection failed due to %v\", err))\n\t}()\n\twg.Wait()\n\n\tcleanupBrowserProcesses(t)\n\tcfg.ExternalBrowserTimeout = time.Duration(1) * time.Second\n\n\terr = verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tassertNotNilF(t, err, \"Expected an error but got nil\")\n\tassertEqualE(t, err.Error(), \"authentication via browser timed out\", fmt.Sprintf(\"Expecteed timeout, but got %v\", err))\n}\n\nfunc setupOauthSnowflakeAuthorizationCodeWildcardsTest(t *testing.T) *Config {\n\tskipAuthTests(t, \"Skipping Snowflake Authorization Code tests\")\n\n\tcfg, err := getAuthTestsConfig(t, AuthTypeOAuthAuthorizationCode)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get config: %v\", err))\n\n\tcleanupBrowserProcesses(t)\n\n\tcfg.OauthClientID, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_INTERNAL_OAUTH_SNOWFLAKE_WILDCARDS_CLIENT_ID\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.OauthClientSecret, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_INTERNAL_OAUTH_SNOWFLAKE_WILDCARDS_CLIENT_SECRET\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.User, err = 
GetFromEnv(\"SNOWFLAKE_AUTH_TEST_EXTERNAL_OAUTH_OKTA_CLIENT_ID\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.Role, err = GetFromEnv(\"SNOWFLAKE_AUTH_TEST_INTERNAL_OAUTH_SNOWFLAKE_ROLE\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to setup config: %v\", err))\n\n\tcfg.ClientStoreTemporaryCredential = 2\n\treturn cfg\n}\n"
  },
  {
    "path": "auth_with_oauth_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestOauthSuccessful(t *testing.T) {\n\tcfg := setupOauthTest(t)\n\ttoken, err := getOauthTestToken(t, cfg)\n\tassertNilE(t, err, fmt.Sprintf(\"failed to get token. err: %v\", err))\n\tcfg.Token = token\n\terr = verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tassertNilE(t, err, fmt.Sprintf(\"failed to connect. err: %v\", err))\n}\n\nfunc TestOauthInvalidToken(t *testing.T) {\n\tcfg := setupOauthTest(t)\n\tcfg.Token = \"invalid_token\"\n\n\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\n\tvar snowflakeErr *SnowflakeError\n\tassertErrorsAsF(t, err, &snowflakeErr)\n\tassertEqualE(t, snowflakeErr.Number, 390303, fmt.Sprintf(\"Expected 390303, but got %v\", snowflakeErr.Number))\n}\n\nfunc TestOauthMismatchedUser(t *testing.T) {\n\tcfg := setupOauthTest(t)\n\ttoken, err := getOauthTestToken(t, cfg)\n\tassertNilE(t, err, fmt.Sprintf(\"failed to get token. err: %v\", err))\n\tcfg.Token = token\n\tcfg.User = \"fakeaccount\"\n\n\terr = verifyConnectionToSnowflakeAuthTests(t, cfg)\n\n\tvar snowflakeErr *SnowflakeError\n\tassertErrorsAsF(t, err, &snowflakeErr)\n\tassertEqualE(t, snowflakeErr.Number, 390309, fmt.Sprintf(\"Expected 390309, but got %v\", snowflakeErr.Number))\n}\n\nfunc setupOauthTest(t *testing.T) *Config {\n\tskipAuthTests(t, \"Skipping OAuth tests\")\n\tcfg, err := getAuthTestsConfig(t, AuthTypeOAuth)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to connect. 
err: %v\", err))\n\n\treturn cfg\n}\n\nfunc getOauthTestToken(t *testing.T, cfg *Config) (string, error) {\n\n\tclient := &http.Client{}\n\n\tauthURL, err := GetFromEnv(\"SNOWFLAKE_AUTH_TEST_OAUTH_URL\", true)\n\tassertNilF(t, err, \"SNOWFLAKE_AUTH_TEST_OAUTH_URL is not set\")\n\n\toauthClientID, err := GetFromEnv(\"SNOWFLAKE_AUTH_TEST_OAUTH_CLIENT_ID\", true)\n\tassertNilF(t, err, \"SNOWFLAKE_AUTH_TEST_OAUTH_CLIENT_ID is not set\")\n\n\toauthClientSecret, err := GetFromEnv(\"SNOWFLAKE_AUTH_TEST_OAUTH_CLIENT_SECRET\", true)\n\tassertNilF(t, err, \"SNOWFLAKE_AUTH_TEST_OAUTH_CLIENT_SECRET is not set\")\n\n\tinputData := formData(cfg)\n\n\treq, err := http.NewRequest(\"POST\", authURL, strings.NewReader(inputData.Encode()))\n\tassertNilF(t, err, fmt.Sprintf(\"Request failed %v\", err))\n\treq.Header.Set(\"Content-Type\", \"application/x-www-form-urlencoded;charset=UTF-8\")\n\treq.SetBasicAuth(oauthClientID, oauthClientSecret)\n\tresp, err := client.Do(req)\n\n\tassertNilF(t, err, fmt.Sprintf(\"Response failed %v\", err))\n\n\tif resp.StatusCode != http.StatusOK {\n\t\treturn \"\", fmt.Errorf(\"failed to get access token, status code: %d\", resp.StatusCode)\n\t}\n\n\tdefer resp.Body.Close()\n\n\tvar response OAuthTokenResponse\n\tif err := json.NewDecoder(resp.Body).Decode(&response); err != nil {\n\t\treturn \"\", fmt.Errorf(\"failed to decode response: %v\", err)\n\t}\n\n\treturn response.Token, err\n}\n\nfunc formData(cfg *Config) url.Values {\n\tdata := url.Values{}\n\tdata.Set(\"username\", cfg.User)\n\tdata.Set(\"password\", cfg.Password)\n\tdata.Set(\"grant_type\", \"password\")\n\tdata.Set(\"scope\", fmt.Sprintf(\"session:role:%s\", strings.ToLower(cfg.Role)))\n\n\treturn data\n\n}\n\ntype OAuthTokenResponse struct {\n\tType       string `json:\"token_type\"`\n\tExpiration int    `json:\"expires_in\"`\n\tToken      string `json:\"access_token\"`\n\tScope      string `json:\"scope\"`\n}\n"
  },
  {
    "path": "auth_with_okta_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"fmt\"\n\t\"net/url\"\n\t\"testing\"\n)\n\nfunc TestOktaSuccessful(t *testing.T) {\n\tcfg := setupOktaTest(t)\n\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tassertNilE(t, err, fmt.Sprintf(\"failed to connect. err: %v\", err))\n}\n\nfunc TestOktaWrongCredentials(t *testing.T) {\n\tcfg := setupOktaTest(t)\n\tcfg.Password = \"fakePassword\"\n\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\n\tvar snowflakeErr *SnowflakeError\n\tassertErrorsAsF(t, err, &snowflakeErr)\n\tassertEqualE(t, snowflakeErr.Number, 261006, fmt.Sprintf(\"Expected 261006, but got %v\", snowflakeErr.Number))\n}\n\nfunc TestOktaWrongAuthenticator(t *testing.T) {\n\tcfg := setupOktaTest(t)\n\tinvalidAddress, err := url.Parse(\"https://fake-account-0000.okta.com\")\n\tassertNilF(t, err, fmt.Sprintf(\"failed to parse: %v\", err))\n\n\tcfg.OktaURL = invalidAddress\n\terr = verifyConnectionToSnowflakeAuthTests(t, cfg)\n\n\tvar snowflakeErr *SnowflakeError\n\tassertErrorsAsF(t, err, &snowflakeErr)\n\tassertEqualE(t, snowflakeErr.Number, 390139, fmt.Sprintf(\"Expected 390139, but got %v\", snowflakeErr.Number))\n}\n\nfunc setupOktaTest(t *testing.T) *Config {\n\tskipAuthTests(t, \"Skipping Okta tests\")\n\turlEnv, err := GetFromEnv(\"SNOWFLAKE_AUTH_TEST_OKTA_AUTH\", true)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get env: %v\", err))\n\n\tcfg, err := getAuthTestsConfig(t, AuthTypeOkta)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get config: %v\", err))\n\n\tcfg.OktaURL, err = url.Parse(urlEnv)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to parse: %v\", err))\n\n\treturn cfg\n}\n"
  },
  {
    "path": "auth_with_pat_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"database/sql\"\n\t\"fmt\"\n\t\"log\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n)\n\ntype PatToken struct {\n\tName  string\n\tValue string\n}\n\nfunc TestEndToEndPatSuccessful(t *testing.T) {\n\tcfg := setupEndToEndPatTest(t)\n\tpatToken := createEndToEndPatToken(t)\n\tdefer removeEndToEndPatToken(t, patToken.Name)\n\tcfg.Token = patToken.Value\n\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tassertNilE(t, err, fmt.Sprintf(\"failed to connect. err: %v\", err))\n}\n\nfunc TestEndToEndPatMismatchedUser(t *testing.T) {\n\tcfg := setupEndToEndPatTest(t)\n\tpatToken := createEndToEndPatToken(t)\n\tdefer removeEndToEndPatToken(t, patToken.Name)\n\tcfg.Token = patToken.Value\n\tcfg.User = \"invalidUser\"\n\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tvar snowflakeErr *SnowflakeError\n\tassertErrorsAsF(t, err, &snowflakeErr)\n\tassertEqualE(t, snowflakeErr.Number, 394400, fmt.Sprintf(\"Expected 394400, but got %v\", snowflakeErr.Number))\n}\n\nfunc TestEndToEndPatInvalidToken(t *testing.T) {\n\tcfg := setupEndToEndPatTest(t)\n\tcfg.Token = \"invalidToken\"\n\terr := verifyConnectionToSnowflakeAuthTests(t, cfg)\n\tvar snowflakeErr *SnowflakeError\n\tassertErrorsAsF(t, err, &snowflakeErr)\n\tassertEqualE(t, snowflakeErr.Number, 394400, fmt.Sprintf(\"Expected 394400, but got %v\", snowflakeErr.Number))\n}\n\nfunc setupEndToEndPatTest(t *testing.T) *Config {\n\tskipAuthTests(t, \"Skipping PAT tests\")\n\tcfg, err := getAuthTestsConfig(t, AuthTypePat)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to parse: %v\", err))\n\n\treturn cfg\n\n}\n\nfunc getEndToEndPatSetupCommandVariables() (*Config, error) {\n\treturn GetConfigFromEnv([]*ConfigParam{\n\t\t{Name: \"User\", EnvName: \"SNOWFLAKE_AUTH_TEST_SNOWFLAKE_USER\", FailOnMissing: true},\n\t\t{Name: \"Role\", EnvName: \"SNOWFLAKE_AUTH_TEST_INTERNAL_OAUTH_SNOWFLAKE_ROLE\", FailOnMissing: true},\n\t})\n}\n\nfunc createEndToEndPatToken(t *testing.T) *PatToken 
{\n\tcfg := setupOktaTest(t)\n\tpatTokenName := fmt.Sprintf(\"PAT_GOLANG_%s\", strings.ReplaceAll(time.Now().Format(\"20060102150405.000\"), \".\", \"\"))\n\tpatCommandVariables, err := getEndToEndPatSetupCommandVariables()\n\tassertNilE(t, err, \"failed to get PAT command variables\")\n\n\tquery := fmt.Sprintf(\n\t\t\"alter user %s add programmatic access token %s ROLE_RESTRICTION = '%s' DAYS_TO_EXPIRY=1;\",\n\t\tpatCommandVariables.User,\n\t\tpatTokenName,\n\t\tpatCommandVariables.Role,\n\t)\n\n\tpatToken, err := connectUsingOktaConnectionAndExecuteCustomCommand(t, cfg, query, true)\n\tassertNilE(t, err, \"failed to create PAT command variables\")\n\n\treturn patToken\n\n}\n\nfunc removeEndToEndPatToken(t *testing.T, patTokenName string) {\n\tcfg := setupOktaTest(t)\n\tcfg.Role = \"analyst\"\n\tpatCommandVariables, err := getEndToEndPatSetupCommandVariables()\n\tassertNilE(t, err, \"failed to get PAT command variables\")\n\n\tquery := fmt.Sprintf(\n\t\t\"alter user %s remove programmatic access token %s;\",\n\t\tpatCommandVariables.User,\n\t\tpatTokenName,\n\t)\n\n\t_, err = connectUsingOktaConnectionAndExecuteCustomCommand(t, cfg, query, false)\n\tassertNilE(t, err, \"failed to remove PAT command variables\")\n}\n\nfunc connectUsingOktaConnectionAndExecuteCustomCommand(t *testing.T, cfg *Config, query string, returnToken bool) (*PatToken, error) {\n\tdsn, err := DSN(cfg)\n\tassertNilE(t, err, \"failed to create DSN from Config\")\n\n\tdb, err := sql.Open(\"snowflake\", dsn)\n\tassertNilE(t, err, \"failed to open Snowflake DB connection\")\n\tdefer db.Close()\n\n\trows, err := db.Query(query)\n\tif err != nil {\n\t\tlog.Printf(\"failed to run a query: %v, err: %v\", query, err)\n\t\treturn nil, err\n\n\t}\n\n\tvar patTokenName, patTokenValue string\n\tif returnToken && rows.Next() {\n\t\tif err := rows.Scan(&patTokenName, &patTokenValue); err != nil {\n\t\t\tt.Fatalf(\"failed to scan token: %v\", err)\n\t\t}\n\n\t\treturn &PatToken{Name: patTokenName, Value: 
patTokenValue}, nil\n\t}\n\n\tif returnToken {\n\t\tt.Fatalf(\"no results found for query: %s\", query)\n\t}\n\n\treturn nil, err\n}\n"
  },
  {
    "path": "authexternalbrowser.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"encoding/base64\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\terrors2 \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"io\"\n\t\"log\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/pkg/browser\"\n)\n\nconst (\n\tsamlSuccessHTML = `<!DOCTYPE html><html><head><meta charset=\"UTF-8\"/>\n<title>SAML Response for Snowflake</title></head>\n<body>\nYour identity was confirmed and propagated to Snowflake %v.\nYou can close this window now and go back where you started from.\n</body></html>`\n\n\tbufSize = 8192\n)\n\n// Builds a response to show to the user after successfully\n// getting a response from Snowflake.\nfunc buildResponse(body string) (bytes.Buffer, error) {\n\tt := &http.Response{\n\t\tStatus:        \"200 OK\",\n\t\tStatusCode:    200,\n\t\tProto:         \"HTTP/1.1\",\n\t\tProtoMajor:    1,\n\t\tProtoMinor:    1,\n\t\tBody:          io.NopCloser(bytes.NewBufferString(body)),\n\t\tContentLength: int64(len(body)),\n\t\tRequest:       nil,\n\t\tHeader:        make(http.Header),\n\t}\n\tvar b bytes.Buffer\n\terr := t.Write(&b)\n\treturn b, err\n}\n\n// This opens a socket that listens on all available unicast\n// and any anycast IP addresses locally. By specifying \"0\", we are\n// able to bind to a free port.\nfunc createLocalTCPListener(port int) (*net.TCPListener, error) {\n\tlogger.Debugf(\"creating local TCP listener on port %v\", port)\n\tallAddressesListener, err := net.Listen(\"tcp\", fmt.Sprintf(\"0.0.0.0:%v\", port))\n\tif err != nil {\n\t\tlogger.Warnf(\"error while setting up 0.0.0.0 listener: %v\", err)\n\t\treturn nil, err\n\t}\n\tlogger.Debug(\"Closing 0.0.0.0 tcp listener\")\n\tif err := allAddressesListener.Close(); err != nil {\n\t\tlogger.Errorf(\"error while closing TCP listener. 
%v\", err)\n\t\treturn nil, err\n\t}\n\n\tl, err := net.Listen(\"tcp\", fmt.Sprintf(\"localhost:%v\", port))\n\tif err != nil {\n\t\tlogger.Warnf(\"error while setting up listener: %v\", err)\n\t\treturn nil, err\n\t}\n\n\ttcpListener, ok := l.(*net.TCPListener)\n\tif !ok {\n\t\treturn nil, fmt.Errorf(\"failed to assert type as *net.TCPListener\")\n\t}\n\n\treturn tcpListener, nil\n}\n\n// Opens a browser window (or new tab) with the configured login Url.\n// This can / will fail if running inside a shell with no display, ie\n// ssh'ing into a box attempting to authenticate via external browser.\nfunc openBrowser(browserURL string) error {\n\tparsedURL, err := url.ParseRequestURI(browserURL)\n\tif err != nil {\n\t\tlogger.Errorf(\"error parsing url %v, err: %v\", browserURL, err)\n\t\treturn err\n\t}\n\tif parsedURL.Scheme != \"http\" && parsedURL.Scheme != \"https\" {\n\t\treturn fmt.Errorf(\"invalid browser URL: %v\", browserURL)\n\t}\n\terr = browser.OpenURL(browserURL)\n\tif err != nil {\n\t\tlogger.Errorf(\"failed to open a browser. 
err: %v\", err)\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Gets the IDP Url and Proof Key from Snowflake.\n// Note: FuncPostAuthSaml will return a fully qualified error if\n// there is something wrong getting data from Snowflake.\nfunc getIdpURLProofKey(\n\tctx context.Context,\n\tsr *snowflakeRestful,\n\tauthenticator string,\n\tapplication string,\n\taccount string,\n\tuser string,\n\tcallbackPort int) (string, string, error) {\n\n\theaders := make(map[string]string)\n\theaders[httpHeaderContentType] = headerContentTypeApplicationJSON\n\theaders[httpHeaderAccept] = headerContentTypeApplicationJSON\n\theaders[httpHeaderUserAgent] = userAgent\n\n\tclientEnvironment := newAuthRequestClientEnvironment()\n\tclientEnvironment.Application = application\n\n\trequestMain := authRequestData{\n\t\tClientAppID:             clientType,\n\t\tClientAppVersion:        SnowflakeGoDriverVersion,\n\t\tAccountName:             account,\n\t\tLoginName:               user,\n\t\tClientEnvironment:       clientEnvironment,\n\t\tAuthenticator:           authenticator,\n\t\tBrowserModeRedirectPort: strconv.Itoa(callbackPort),\n\t}\n\n\tauthRequest := authRequest{\n\t\tData: requestMain,\n\t}\n\n\tjsonBody, err := json.Marshal(authRequest)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"failed to serialize json. 
err: %v\", err)\n\t\treturn \"\", \"\", err\n\t}\n\n\trespd, err := sr.FuncPostAuthSAML(ctx, sr, headers, jsonBody, sr.LoginTimeout)\n\tif err != nil {\n\t\treturn \"\", \"\", err\n\t}\n\tif !respd.Success {\n\t\tlogger.WithContext(ctx).Error(\"Authentication FAILED\")\n\t\tsr.TokenAccessor.SetTokens(\"\", \"\", -1)\n\t\tcode, err := strconv.Atoi(respd.Code)\n\t\tif err != nil {\n\t\t\treturn \"\", \"\", err\n\t\t}\n\t\treturn \"\", \"\", &SnowflakeError{\n\t\t\tNumber:   code,\n\t\t\tSQLState: SQLStateConnectionRejected,\n\t\t\tMessage:  respd.Message,\n\t\t}\n\t}\n\treturn respd.Data.SSOURL, respd.Data.ProofKey, nil\n}\n\n// Gets the login URL for multiple SAML\nfunc getLoginURL(sr *snowflakeRestful, user string, callbackPort int) (string, string, error) {\n\tproofKey := generateProofKey()\n\n\tparams := &url.Values{}\n\tparams.Add(\"login_name\", user)\n\tparams.Add(\"browser_mode_redirect_port\", strconv.Itoa(callbackPort))\n\tparams.Add(\"proof_key\", proofKey)\n\turl := sr.getFullURL(consoleLoginRequestPath, params)\n\n\treturn url.String(), proofKey, nil\n}\n\nfunc generateProofKey() string {\n\trandomness := getSecureRandom(32)\n\treturn base64.StdEncoding.WithPadding(base64.StdPadding).EncodeToString(randomness)\n}\n\n// The response returned from Snowflake looks like so:\n// GET /?token=encodedSamlToken\n// Host: localhost:54001\n// Connection: keep-alive\n// Upgrade-Insecure-Requests: 1\n// User-Agent: userAgentStr\n// Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8\n// Referer: https://myaccount.snowflakecomputing.com/fed/login\n// Accept-Encoding: gzip, deflate, br\n// Accept-Language: en-US,en;q=0.9\n// This extracts the token portion of the response.\nfunc getTokenFromResponse(response string) (string, error) {\n\tstart := \"GET /?token=\"\n\tarr := strings.Split(response, \"\\r\\n\")\n\tif !strings.HasPrefix(arr[0], start) {\n\t\tlogger.Errorf(\"response is malformed. 
\")\n\t\treturn \"\", &SnowflakeError{\n\t\t\tNumber:      ErrFailedToParseResponse,\n\t\t\tSQLState:    SQLStateConnectionRejected,\n\t\t\tMessage:     errors2.ErrMsgFailedToParseResponse,\n\t\t\tMessageArgs: []any{response},\n\t\t}\n\t}\n\ttoken := strings.TrimPrefix(arr[0], start)\n\ttoken = strings.Split(token, \" \")[0]\n\treturn token, nil\n}\n\ntype authenticateByExternalBrowserResult struct {\n\tescapedSamlResponse []byte\n\tproofKey            []byte\n\terr                 error\n}\n\nfunc authenticateByExternalBrowser(ctx context.Context, sr *snowflakeRestful, authenticator string, application string,\n\taccount string, user string, externalBrowserTimeout time.Duration, disableConsoleLogin ConfigBool) ([]byte, []byte, error) {\n\tresultChan := make(chan authenticateByExternalBrowserResult, 1)\n\tgo GoroutineWrapper(\n\t\tctx,\n\t\tfunc() {\n\t\t\tresultChan <- doAuthenticateByExternalBrowser(ctx, sr, authenticator, application, account, user, disableConsoleLogin)\n\t\t},\n\t)\n\tselect {\n\tcase <-time.After(externalBrowserTimeout):\n\t\treturn nil, nil, errors.New(\"authentication timed out\")\n\tcase result := <-resultChan:\n\t\treturn result.escapedSamlResponse, result.proofKey, result.err\n\t}\n}\n\n// Authentication by an external browser takes place via the following:\n//   - the golang snowflake driver communicates to Snowflake that the user wishes to\n//     authenticate via external browser\n//   - snowflake sends back the IDP Url configured at the Snowflake side for the\n//     provided account, or use the multiple SAML way via console login\n//   - the default browser is opened to that URL\n//   - user authenticates at the IDP, and is redirected to Snowflake\n//   - Snowflake directs the user back to the driver\n//   - authenticate is complete!\nfunc doAuthenticateByExternalBrowser(ctx context.Context, sr *snowflakeRestful, authenticator string, application string, account string, user string, disableConsoleLogin ConfigBool) 
authenticateByExternalBrowserResult {\n\tl, err := createLocalTCPListener(0)\n\tif err != nil {\n\t\treturn authenticateByExternalBrowserResult{nil, nil, err}\n\t}\n\tdefer func() {\n\t\tif err = l.Close(); err != nil {\n\t\t\tlogger.Errorf(\"error while closing TCP listener for external browser (%v). %v\", l.Addr().String(), err)\n\t\t}\n\t}()\n\n\tcallbackPort := l.Addr().(*net.TCPAddr).Port\n\n\tvar loginURL string\n\tvar proofKey string\n\tif disableConsoleLogin == ConfigBoolTrue {\n\t\t// Gets the IDP URL and Proof Key from Snowflake\n\t\tloginURL, proofKey, err = getIdpURLProofKey(ctx, sr, authenticator, application, account, user, callbackPort)\n\t} else {\n\t\t// Multiple SAML way to do authentication via console login\n\t\tloginURL, proofKey, err = getLoginURL(sr, user, callbackPort)\n\t}\n\n\tif err != nil {\n\t\treturn authenticateByExternalBrowserResult{nil, nil, err}\n\t}\n\n\tif err = defaultSamlResponseProvider().run(loginURL); err != nil {\n\t\treturn authenticateByExternalBrowserResult{nil, nil, err}\n\t}\n\n\tencodedSamlResponseChan := make(chan string)\n\terrChan := make(chan error)\n\n\tvar encodedSamlResponse string\n\tvar errFromGoroutine error\n\tconn, err := l.Accept()\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"unable to accept connection. err: %v\", err)\n\t\tlog.Fatal(err)\n\t}\n\tgo func(c net.Conn) {\n\t\tvar buf bytes.Buffer\n\t\ttotal := 0\n\t\tencodedSamlResponse := \"\"\n\t\tvar errAccept error\n\t\tfor {\n\t\t\tb := make([]byte, bufSize)\n\t\t\tn, err := c.Read(b)\n\t\t\tif err != nil {\n\t\t\t\tif err != io.EOF {\n\t\t\t\t\tlogger.WithContext(ctx).Infof(\"error reading from socket. 
err: %v\", err)\n\t\t\t\t\terrAccept = &SnowflakeError{\n\t\t\t\t\t\tNumber:      ErrFailedToGetExternalBrowserResponse,\n\t\t\t\t\t\tSQLState:    SQLStateConnectionRejected,\n\t\t\t\t\t\tMessage:     errors2.ErrMsgFailedToGetExternalBrowserResponse,\n\t\t\t\t\t\tMessageArgs: []any{err},\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\t}\n\t\t\ttotal += n\n\t\t\tbuf.Write(b)\n\t\t\tif n < bufSize {\n\t\t\t\t// We successfully read all data\n\t\t\t\ts := string(buf.Bytes()[:total])\n\t\t\t\tencodedSamlResponse, errAccept = getTokenFromResponse(s)\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tbuf.Grow(bufSize)\n\t\t}\n\t\tif encodedSamlResponse != \"\" {\n\t\t\tbody := fmt.Sprintf(samlSuccessHTML, application)\n\t\t\thttpResponse, err := buildResponse(body)\n\t\t\tif err != nil && errAccept == nil {\n\t\t\t\terrAccept = err\n\t\t\t}\n\t\t\tif _, err = c.Write(httpResponse.Bytes()); err != nil && errAccept == nil {\n\t\t\t\terrAccept = err\n\t\t\t}\n\t\t}\n\t\tif err := c.Close(); err != nil {\n\t\t\tlogger.Warnf(\"error while closing browser connection. %v\", err)\n\t\t}\n\t\tencodedSamlResponseChan <- encodedSamlResponse\n\t\terrChan <- errAccept\n\t}(conn)\n\n\tencodedSamlResponse = <-encodedSamlResponseChan\n\terrFromGoroutine = <-errChan\n\n\tif errFromGoroutine != nil {\n\t\treturn authenticateByExternalBrowserResult{nil, nil, errFromGoroutine}\n\t}\n\n\tescapedSamlResponse, err := url.QueryUnescape(encodedSamlResponse)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"unable to unescape saml response. 
err: %v\", err)\n\t\treturn authenticateByExternalBrowserResult{nil, nil, err}\n\t}\n\treturn authenticateByExternalBrowserResult{[]byte(escapedSamlResponse), []byte(proofKey), nil}\n}\n\ntype samlResponseProvider interface {\n\trun(url string) error\n}\n\ntype externalBrowserSamlResponseProvider struct {\n}\n\nfunc (e externalBrowserSamlResponseProvider) run(url string) error {\n\treturn openBrowser(url)\n}\n\nvar defaultSamlResponseProvider = func() samlResponseProvider {\n\treturn &externalBrowserSamlResponseProvider{}\n}\n"
  },
  {
    "path": "authexternalbrowser_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc TestGetTokenFromResponseFail(t *testing.T) {\n\tresponse := \"GET /?fakeToken=fakeEncodedSamlToken HTTP/1.1\\r\\n\" +\n\t\t\"Host: localhost:54001\\r\\n\" +\n\t\t\"Connection: keep-alive\\r\\n\" +\n\t\t\"Upgrade-Insecure-Requests: 1\\r\\n\" +\n\t\t\"User-Agent: userAgentStr\\r\\n\" +\n\t\t\"Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8\\r\\n\" +\n\t\t\"Referer: https://myaccount.snowflakecomputing.com/fed/login\\r\\n\" +\n\t\t\"Accept-Encoding: gzip, deflate, br\\r\\n\" +\n\t\t\"Accept-Language: en-US,en;q=0.9\\r\\n\\r\\n\"\n\n\t_, err := getTokenFromResponse(response)\n\tif err == nil {\n\t\tt.Errorf(\"Should have failed parsing the malformed response.\")\n\t}\n}\n\nfunc TestGetTokenFromResponse(t *testing.T) {\n\tresponse := \"GET /?token=GETtokenFromResponse HTTP/1.1\\r\\n\" +\n\t\t\"Host: localhost:54001\\r\\n\" +\n\t\t\"Connection: keep-alive\\r\\n\" +\n\t\t\"Upgrade-Insecure-Requests: 1\\r\\n\" +\n\t\t\"User-Agent: userAgentStr\\r\\n\" +\n\t\t\"Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8\\r\\n\" +\n\t\t\"Referer: https://myaccount.snowflakecomputing.com/fed/login\\r\\n\" +\n\t\t\"Accept-Encoding: gzip, deflate, br\\r\\n\" +\n\t\t\"Accept-Language: en-US,en;q=0.9\\r\\n\\r\\n\"\n\n\texpected := \"GETtokenFromResponse\"\n\n\ttoken, err := getTokenFromResponse(response)\n\tif err != nil {\n\t\tt.Errorf(\"Failed to get the token. 
Err: %#v\", err)\n\t}\n\tif token != expected {\n\t\tt.Errorf(\"Expected: %s, found: %s\", expected, token)\n\t}\n}\n\nfunc TestBuildResponse(t *testing.T) {\n\tresp, err := buildResponse(fmt.Sprintf(samlSuccessHTML, \"Go\"))\n\tassertNilF(t, err)\n\tbytes := resp.Bytes()\n\trespStr := string(bytes[:])\n\tif !strings.Contains(respStr, \"Your identity was confirmed and propagated to Snowflake Go.\\nYou can close this window now and go back where you started from.\") {\n\t\tt.Fatalf(\"failed to build response\")\n\t}\n}\n\nfunc postAuthExternalBrowserError(_ context.Context, _ *snowflakeRestful, _ map[string]string, _ []byte, _ time.Duration) (*authResponse, error) {\n\treturn &authResponse{}, errors.New(\"failed to get SAML response\")\n}\n\nfunc postAuthExternalBrowserErrorDelayed(_ context.Context, _ *snowflakeRestful, _ map[string]string, _ []byte, _ time.Duration) (*authResponse, error) {\n\ttime.Sleep(2 * time.Second)\n\treturn &authResponse{}, errors.New(\"failed to get SAML response\")\n}\n\nfunc postAuthExternalBrowserFail(_ context.Context, _ *snowflakeRestful, _ map[string]string, _ []byte, _ time.Duration) (*authResponse, error) {\n\treturn &authResponse{\n\t\tSuccess: false,\n\t\tMessage: \"external browser auth failed\",\n\t}, nil\n}\n\nfunc postAuthExternalBrowserFailWithCode(_ context.Context, _ *snowflakeRestful, _ map[string]string, _ []byte, _ time.Duration) (*authResponse, error) {\n\treturn &authResponse{\n\t\tSuccess: false,\n\t\tMessage: \"failed to connect to db\",\n\t\tCode:    \"260008\",\n\t}, nil\n}\n\nfunc TestUnitAuthenticateByExternalBrowser(t *testing.T) {\n\tauthenticator := \"externalbrowser\"\n\tapplication := \"testapp\"\n\taccount := \"testaccount\"\n\tuser := \"u\"\n\ttimeout := sfconfig.DefaultExternalBrowserTimeout\n\tsr := &snowflakeRestful{\n\t\tProtocol:         \"https\",\n\t\tHost:             \"abc.com\",\n\t\tPort:             443,\n\t\tFuncPostAuthSAML: postAuthExternalBrowserError,\n\t\tTokenAccessor:    
getSimpleTokenAccessor(),\n\t}\n\t_, _, err := authenticateByExternalBrowser(context.Background(), sr, authenticator, application, account, user, timeout, ConfigBoolTrue)\n\tassertNotNilF(t, err, \"should have failed\")\n\tsr.FuncPostAuthSAML = postAuthExternalBrowserFail\n\t_, _, err = authenticateByExternalBrowser(context.Background(), sr, authenticator, application, account, user, timeout, ConfigBoolTrue)\n\tassertNotNilF(t, err, \"should have failed\")\n\tsr.FuncPostAuthSAML = postAuthExternalBrowserFailWithCode\n\t_, _, err = authenticateByExternalBrowser(context.Background(), sr, authenticator, application, account, user, timeout, ConfigBoolTrue)\n\tassertNotNilF(t, err, \"should have failed\")\n\tdriverErr, ok := err.(*SnowflakeError)\n\tassertTrueF(t, ok, \"should be a SnowflakeError\")\n\tassertEqualF(t, driverErr.Number, ErrCodeFailedToConnect, \"unexpected error code\")\n}\n\nfunc TestAuthenticationTimeout(t *testing.T) {\n\tauthenticator := \"externalbrowser\"\n\tapplication := \"testapp\"\n\taccount := \"testaccount\"\n\tuser := \"u\"\n\ttimeout := 1 * time.Second\n\tsr := &snowflakeRestful{\n\t\tProtocol:         \"https\",\n\t\tHost:             \"abc.com\",\n\t\tPort:             443,\n\t\tFuncPostAuthSAML: postAuthExternalBrowserErrorDelayed,\n\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t}\n\t_, _, err := authenticateByExternalBrowser(context.Background(), sr, authenticator, application, account, user, timeout, ConfigBoolTrue)\n\tassertNotNilF(t, err, \"should have timed out\")\n\tassertEqualE(t, err.Error(), \"authentication timed out\")\n}\n\nfunc Test_createLocalTCPListener(t *testing.T) {\n\tlistener, err := createLocalTCPListener(0)\n\tassertNilF(t, err, \"createLocalTCPListener() failed\")\n\tassertNotNilF(t, listener, \"createLocalTCPListener() returned nil listener\")\n\n\t// Close the listener after the test.\n\tdefer listener.Close()\n}\n\nfunc TestUnitGetLoginURL(t *testing.T) {\n\texpectedScheme := \"https\"\n\texpectedHost := \"abc.com:443\"\n\tuser := \"u\"\n\tcallbackPort := 123\n\tsr := &snowflakeRestful{\n\t\tProtocol:      \"https\",\n\t\tHost:          \"abc.com\",\n\t\tPort:          443,\n\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t}\n\n\tloginURL, proofKey, err := getLoginURL(sr, user, callbackPort)\n\tassertNilF(t, err, \"failed to get login URL\")\n\tassertTrueF(t, len(proofKey) > 0, \"proofKey should be a non-empty string\")\n\n\turlPtr, err := url.Parse(loginURL)\n\tassertNilF(t, err, \"failed to parse the login URL\")\n\tassertEqualF(t, urlPtr.Scheme, expectedScheme)\n\tassertEqualF(t, urlPtr.Host, expectedHost)\n\tassertEqualF(t, urlPtr.Path, consoleLoginRequestPath)\n\tassertStringContainsF(t, urlPtr.RawQuery, \"login_name\")\n\tassertStringContainsF(t, urlPtr.RawQuery, \"browser_mode_redirect_port\")\n\tassertStringContainsF(t, urlPtr.RawQuery, \"proof_key\")\n}\n\ntype nonInteractiveSamlResponseProvider struct {\n\tt *testing.T\n}\n\nfunc (provider *nonInteractiveSamlResponseProvider) run(url string) error {\n\tgo func() {\n\t\t// Use non-fatal (E) assertions here: Fatal-style assertions must not be\n\t\t// called from a goroutine other than the one running the test.\n\t\tresp, err := http.Get(url)\n\t\tassertNilE(provider.t, err)\n\t\tif err != nil {\n\t\t\treturn\n\t\t}\n\t\tdefer resp.Body.Close()\n\t\tassertEqualE(provider.t, resp.StatusCode, http.StatusOK)\n\t}()\n\treturn nil\n}\n"
  },
  {
    "path": "authokta.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"html\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"strconv\"\n\t\"time\"\n)\n\ntype authOKTARequest struct {\n\tUsername string `json:\"username\"`\n\tPassword string `json:\"password\"`\n}\n\ntype authOKTAResponse struct {\n\tCookieToken  string `json:\"cookieToken\"`\n\tSessionToken string `json:\"sessionToken\"`\n}\n\n/*\nauthenticateBySAML authenticates a user by SAML\nSAML Authentication\n 1. query GS to obtain IDP token and SSO url\n 2. IMPORTANT Client side validation:\n    validate both token url and sso url contains same prefix\n    (protocol + host + port) as the given authenticator url.\n    Explanation:\n    This provides a way for the user to 'authenticate' the IDP it is\n    sending his/her credentials to.  Without such a check, the user could\n    be coerced to provide credentials to an IDP impersonator.\n 3. query IDP token url to authenticate and retrieve access token\n 4. given access token, query IDP URL snowflake app to get SAML response\n 5. IMPORTANT Client side validation:\n    validate the post back url come back with the SAML response\n    contains the same prefix as the Snowflake's server url, which is the\n    intended destination url to Snowflake.\n\nExplanation:\n\n\tThis emulates the behavior of IDP initiated login flow in the user\n\tbrowser where the IDP instructs the browser to POST the SAML\n\tassertion to the specific SP endpoint.  
This is critical in\n\tpreventing a SAML assertion issued to one SP from being sent to\n\tanother SP.\n*/\nfunc authenticateBySAML(\n\tctx context.Context,\n\tsr *snowflakeRestful,\n\toktaURL *url.URL,\n\tapplication string,\n\taccount string,\n\tuser string,\n\tpassword string,\n\tdisableSamlURLCheck ConfigBool,\n) (samlResponse []byte, err error) {\n\tlogger.WithContext(ctx).Info(\"step 1: query GS to obtain IDP token and SSO url\")\n\theaders := make(map[string]string)\n\theaders[httpHeaderContentType] = headerContentTypeApplicationJSON\n\theaders[httpHeaderAccept] = headerContentTypeApplicationJSON\n\theaders[httpHeaderUserAgent] = userAgent\n\n\tclientEnvironment := newAuthRequestClientEnvironment()\n\tclientEnvironment.Application = application\n\trequestMain := authRequestData{\n\t\tClientAppID:       clientType,\n\t\tClientAppVersion:  SnowflakeGoDriverVersion,\n\t\tAccountName:       account,\n\t\tClientEnvironment: clientEnvironment,\n\t\tAuthenticator:     oktaURL.String(),\n\t}\n\tauthRequest := authRequest{\n\t\tData: requestMain,\n\t}\n\tparams := &url.Values{}\n\tjsonBody, err := json.Marshal(authRequest)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tlogger.WithContext(ctx).Infof(\"PARAMS for Auth: %v, %v\", params, sr)\n\trespd, err := sr.FuncPostAuthSAML(ctx, sr, headers, jsonBody, sr.LoginTimeout)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif !respd.Success {\n\t\tlogger.WithContext(ctx).Error(\"Authentication FAILED\")\n\t\tsr.TokenAccessor.SetTokens(\"\", \"\", -1)\n\t\tcode, err := strconv.Atoi(respd.Code)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn nil, &SnowflakeError{\n\t\t\tNumber:   code,\n\t\t\tSQLState: SQLStateConnectionRejected,\n\t\t\tMessage:  respd.Message,\n\t\t}\n\t}\n\tlogger.WithContext(ctx).Info(\"step 2: validate Token and SSO URL has the same prefix as oktaURL\")\n\tvar tokenURL *url.URL\n\tvar ssoURL *url.URL\n\tif tokenURL, err = url.Parse(respd.Data.TokenURL); err != nil {\n\t\treturn nil, 
fmt.Errorf(\"failed to parse token URL. %v\", respd.Data.TokenURL)\n\t}\n\tif ssoURL, err = url.Parse(respd.Data.SSOURL); err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to parse SSO URL. %v\", respd.Data.SSOURL)\n\t}\n\tif !isPrefixEqual(oktaURL, ssoURL) || !isPrefixEqual(oktaURL, tokenURL) {\n\t\treturn nil, &SnowflakeError{\n\t\t\tNumber:      ErrCodeIdpConnectionError,\n\t\t\tSQLState:    SQLStateConnectionRejected,\n\t\t\tMessage:     errors.ErrMsgIdpConnectionError,\n\t\t\tMessageArgs: []any{oktaURL, respd.Data.TokenURL, respd.Data.SSOURL},\n\t\t}\n\t}\n\tlogger.WithContext(ctx).Info(\"step 3: query IDP token url to authenticate and retrieve access token\")\n\tjsonBody, err = json.Marshal(authOKTARequest{\n\t\tUsername: user,\n\t\tPassword: password,\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\trespa, err := sr.FuncPostAuthOKTA(ctx, sr, headers, jsonBody, respd.Data.TokenURL, sr.LoginTimeout)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tlogger.WithContext(ctx).Info(\"step 4: query IDP URL snowflake app to get SAML response\")\n\tparams = &url.Values{}\n\tparams.Add(\"RelayState\", \"/some/deep/link\")\n\tvar oneTimeToken string\n\tif respa.SessionToken != \"\" {\n\t\toneTimeToken = respa.SessionToken\n\t} else {\n\t\toneTimeToken = respa.CookieToken\n\t}\n\tparams.Add(\"onetimetoken\", oneTimeToken)\n\n\theaders = make(map[string]string)\n\theaders[httpHeaderAccept] = \"*/*\"\n\tbd, err := sr.FuncGetSSO(ctx, sr, params, headers, respd.Data.SSOURL, sr.LoginTimeout)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif disableSamlURLCheck == ConfigBoolFalse {\n\t\tlogger.WithContext(ctx).Info(\"step 5: validate post_back_url matches Snowflake URL\")\n\t\ttgtURL, err := postBackURL(bd)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tfullURL := sr.getURL()\n\t\tlogger.WithContext(ctx).Infof(\"tgtURL: %v, origURL: %v\", tgtURL, fullURL)\n\t\tif !isPrefixEqual(tgtURL, fullURL) {\n\t\t\treturn nil, &SnowflakeError{\n\t\t\t\tNumber:      
ErrCodeSSOURLNotMatch,\n\t\t\t\tSQLState:    SQLStateConnectionRejected,\n\t\t\t\tMessage:     errors.ErrMsgSSOURLNotMatch,\n\t\t\t\tMessageArgs: []any{tgtURL, fullURL},\n\t\t\t}\n\t\t}\n\t}\n\treturn bd, nil\n}\n\nfunc postBackURL(htmlData []byte) (url *url.URL, err error) {\n\tidx0 := bytes.Index(htmlData, []byte(\"<form\"))\n\tif idx0 < 0 {\n\t\treturn nil, fmt.Errorf(\"failed to find a form tag in HTML response: %v\", htmlData)\n\t}\n\tidx := bytes.Index(htmlData[idx0:], []byte(\"action=\\\"\"))\n\tif idx < 0 {\n\t\treturn nil, fmt.Errorf(\"failed to find action field in HTML response: %v\", htmlData[idx0:])\n\t}\n\tidx += idx0\n\tendIdx := bytes.Index(htmlData[idx+8:], []byte(\"\\\"\"))\n\tif endIdx < 0 {\n\t\treturn nil, fmt.Errorf(\"failed to find the end of action field: %v\", htmlData[idx+8:])\n\t}\n\tr := html.UnescapeString(string(htmlData[idx+8 : idx+8+endIdx]))\n\treturn url.Parse(r)\n}\n\nfunc isPrefixEqual(u1 *url.URL, u2 *url.URL) bool {\n\tp1 := u1.Port()\n\tif p1 == \"\" && u1.Scheme == \"https\" {\n\t\tp1 = \"443\"\n\t}\n\tp2 := u2.Port()\n\tif p2 == \"\" && u2.Scheme == \"https\" {\n\t\tp2 = \"443\"\n\t}\n\treturn u1.Hostname() == u2.Hostname() && p1 == p2 && u1.Scheme == u2.Scheme\n}\n\n// Makes a request to /session/authenticator-request to get SAML Information,\n// such as the IDP Url and Proof Key, depending on the authenticator\nfunc postAuthSAML(\n\tctx context.Context,\n\tsr *snowflakeRestful,\n\theaders map[string]string,\n\tbody []byte,\n\ttimeout time.Duration) (\n\tdata *authResponse, err error) {\n\n\tparams := &url.Values{}\n\tparams.Set(requestIDKey, getOrGenerateRequestIDFromContext(ctx).String())\n\tfullURL := sr.getFullURL(authenticatorRequestPath, params)\n\n\tlogger.WithContext(ctx).Infof(\"fullURL: %v\", fullURL)\n\tresp, err := sr.FuncPost(ctx, sr, fullURL, headers, body, timeout, defaultTimeProvider, nil)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer func() {\n\t\tif closeErr := resp.Body.Close(); closeErr != nil 
{\n\t\t\tlogger.WithContext(ctx).Errorf(\"failed to close response body for %v err: %v\", fullURL, closeErr)\n\t\t}\n\t}()\n\tif resp.StatusCode == http.StatusOK {\n\t\tvar respd authResponse\n\t\terr = json.NewDecoder(resp.Body).Decode(&respd)\n\t\tif err != nil {\n\t\t\tlogger.WithContext(ctx).Errorf(\"failed to decode JSON. err: %v\", err)\n\t\t\treturn nil, err\n\t\t}\n\t\treturn &respd, nil\n\t}\n\tswitch resp.StatusCode {\n\tcase http.StatusBadGateway, http.StatusServiceUnavailable, http.StatusGatewayTimeout:\n\t\t// service availability or connectivity issue. Most likely server side issue.\n\t\treturn nil, &SnowflakeError{\n\t\t\tNumber:      ErrCodeServiceUnavailable,\n\t\t\tSQLState:    SQLStateConnectionWasNotEstablished,\n\t\t\tMessage:     errors.ErrMsgServiceUnavailable,\n\t\t\tMessageArgs: []any{resp.StatusCode, fullURL},\n\t\t}\n\tcase http.StatusUnauthorized, http.StatusForbidden:\n\t\t// failed to connect to db. account name may be wrong\n\t\treturn nil, &SnowflakeError{\n\t\t\tNumber:      ErrCodeFailedToConnect,\n\t\t\tSQLState:    SQLStateConnectionRejected,\n\t\t\tMessage:     errors.ErrMsgFailedToConnect,\n\t\t\tMessageArgs: []any{resp.StatusCode, fullURL},\n\t\t}\n\t}\n\t_, err = io.ReadAll(resp.Body)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"failed to extract HTTP response body. 
err: %v\", err)\n\t\treturn nil, err\n\t}\n\treturn nil, &SnowflakeError{\n\t\tNumber:      ErrFailedToAuthSAML,\n\t\tSQLState:    SQLStateConnectionRejected,\n\t\tMessage:     errors.ErrMsgFailedToAuthSAML,\n\t\tMessageArgs: []any{resp.StatusCode, fullURL},\n\t}\n}\n\nfunc postAuthOKTA(\n\tctx context.Context,\n\tsr *snowflakeRestful,\n\theaders map[string]string,\n\tbody []byte,\n\tfullURL string,\n\ttimeout time.Duration) (\n\tdata *authOKTAResponse, err error) {\n\tlogger.WithContext(ctx).Infof(\"fullURL: %v\", fullURL)\n\ttargetURL, err := url.Parse(fullURL)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tresp, err := sr.FuncPost(ctx, sr, targetURL, headers, body, timeout, defaultTimeProvider, nil)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer func() {\n\t\tif closeErr := resp.Body.Close(); closeErr != nil {\n\t\t\tlogger.WithContext(ctx).Errorf(\"failed to close response body for %v err: %v\", targetURL, closeErr)\n\t\t}\n\t}()\n\tif resp.StatusCode == http.StatusOK {\n\t\tvar respd authOKTAResponse\n\t\terr = json.NewDecoder(resp.Body).Decode(&respd)\n\t\tif err != nil {\n\t\t\tlogger.WithContext(ctx).Errorf(\"failed to decode JSON. err: %v\", err)\n\t\t\treturn nil, err\n\t\t}\n\t\treturn &respd, nil\n\t}\n\t_, err = io.ReadAll(resp.Body)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"failed to extract HTTP response body. 
err: %v\", err)\n\t\treturn nil, err\n\t}\n\tlogger.WithContext(ctx).Infof(\"HTTP: %v, URL: %v\", resp.StatusCode, fullURL)\n\tlogger.WithContext(ctx).Infof(\"Header: %v\", resp.Header)\n\treturn nil, &SnowflakeError{\n\t\tNumber:      ErrFailedToAuthOKTA,\n\t\tSQLState:    SQLStateConnectionRejected,\n\t\tMessage:     errors.ErrMsgFailedToAuthOKTA,\n\t\tMessageArgs: []any{resp.StatusCode, fullURL},\n\t}\n}\n\nfunc getSSO(\n\tctx context.Context,\n\tsr *snowflakeRestful,\n\tparams *url.Values,\n\theaders map[string]string,\n\tssoURL string,\n\ttimeout time.Duration) (\n\tbd []byte, err error) {\n\tfullURL, err := url.Parse(ssoURL)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tfullURL.RawQuery = params.Encode()\n\tlogger.WithContext(ctx).Infof(\"fullURL: %v\", fullURL)\n\tresp, err := sr.FuncGet(ctx, sr, fullURL, headers, timeout)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer func() {\n\t\tif closeErr := resp.Body.Close(); closeErr != nil {\n\t\t\tlogger.WithContext(ctx).Errorf(\"failed to close response body for %v err: %v\", fullURL, closeErr)\n\t\t}\n\t}()\n\tb, err := io.ReadAll(resp.Body)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"failed to extract HTTP response body. err: %v\", err)\n\t\treturn nil, err\n\t}\n\tif resp.StatusCode == http.StatusOK {\n\t\treturn b, nil\n\t}\n\tlogger.WithContext(ctx).Infof(\"HTTP: %v, URL: %v \", resp.StatusCode, fullURL)\n\tlogger.WithContext(ctx).Infof(\"Header: %v\", resp.Header)\n\treturn nil, &SnowflakeError{\n\t\tNumber:      ErrFailedToGetSSO,\n\t\tSQLState:    SQLStateConnectionRejected,\n\t\tMessage:     errors.ErrMsgFailedToGetSSO,\n\t\tMessageArgs: []any{resp.StatusCode, fullURL},\n\t}\n}\n"
  },
  {
    "path": "authokta_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"strconv\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc TestUnitPostBackURL(t *testing.T) {\n\tc := `<html><form id=\"1\" action=\"https&#x3a;&#x2f;&#x2f;abc.com&#x2f;\"></form></html>`\n\tpbURL, err := postBackURL([]byte(c))\n\tif err != nil {\n\t\tt.Fatalf(\"failed to get URL. err: %v, %v\", err, c)\n\t}\n\tif pbURL.String() != \"https://abc.com/\" {\n\t\tt.Errorf(\"failed to get URL. got: %v, %v\", pbURL, c)\n\t}\n\tc = `<html></html>`\n\t_, err = postBackURL([]byte(c))\n\tif err == nil {\n\t\tt.Fatalf(\"should have failed\")\n\t}\n\tc = `<html><form id=\"1\"/></html>`\n\t_, err = postBackURL([]byte(c))\n\tif err == nil {\n\t\tt.Fatalf(\"should have failed\")\n\t}\n\tc = `<html><form id=\"1\" action=\"https&#x3a;&#x2f;&#x2f;abc.com&#x2f;/></html>`\n\t_, err = postBackURL([]byte(c))\n\tif err == nil {\n\t\tt.Fatalf(\"should have failed\")\n\t}\n}\n\nfunc TestUnitIsPrefixEqual(t *testing.T) {\n\tmustParse := func(raw string) *url.URL {\n\t\tu, err := url.Parse(raw)\n\t\tassertNilF(t, err, \"parsing URL: \"+raw)\n\t\treturn u\n\t}\n\ttests := []struct {\n\t\tname     string\n\t\tu1       string\n\t\tu2       string\n\t\texpected bool\n\t}{\n\t\t{\"same origin\", \"https://abc.com\", \"https://abc.com\", true},\n\t\t{\"same origin with path\", \"https://abc.com/foo\", \"https://abc.com/bar\", true},\n\t\t{\"explicit 443 vs implicit\", \"https://abc.com:443\", \"https://abc.com\", true},\n\t\t{\"implicit vs explicit 443\", \"https://abc.com\", \"https://abc.com:443\", true},\n\t\t{\"both explicit same port\", \"https://abc.com:8443\", \"https://abc.com:8443\", true},\n\t\t{\"different port on same host\", \"https://abc.com\", \"https://abc.com:9443\", false},\n\t\t{\"different port both explicit\", \"https://abc.com:8443\", \"https://abc.com:9443\", false},\n\t\t{\"different hostname\", \"https://abc.com\", \"https://xyz.com\", false},\n\t\t{\"different scheme\", 
\"https://abc.com\", \"http://abc.com\", false},\n\t\t{\"http port mismatch\", \"http://abc.com\", \"http://abc.com:9090\", false},\n\t}\n\tfor _, tc := range tests {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tgot := isPrefixEqual(mustParse(tc.u1), mustParse(tc.u2))\n\t\t\tassertEqualF(t, got, tc.expected, tc.name)\n\t\t})\n\t}\n}\n\nfunc getTestError(_ context.Context, _ *snowflakeRestful, _ *url.URL, _ map[string]string, _ time.Duration) (*http.Response, error) {\n\treturn &http.Response{\n\t\tStatusCode: http.StatusOK,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}},\n\t}, errors.New(\"failed to run post method\")\n}\n\nfunc getTestAppBadGatewayError(_ context.Context, _ *snowflakeRestful, _ *url.URL, _ map[string]string, _ time.Duration) (*http.Response, error) {\n\treturn &http.Response{\n\t\tStatusCode: http.StatusBadGateway,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}},\n\t}, nil\n}\n\nfunc getTestHTMLSuccess(_ context.Context, _ *snowflakeRestful, _ *url.URL, _ map[string]string, _ time.Duration) (*http.Response, error) {\n\treturn &http.Response{\n\t\tStatusCode: http.StatusOK,\n\t\tBody:       &fakeResponseBody{body: []byte(\"<htm></html>\")},\n\t}, nil\n}\n\nfunc TestUnitPostAuthSAML(t *testing.T) {\n\tsr := &snowflakeRestful{\n\t\tFuncPost:      postTestError,\n\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t}\n\tvar err error\n\t_, err = postAuthSAML(context.Background(), sr, make(map[string]string), []byte{}, 0)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed.\")\n\t}\n\tsr.FuncPost = postTestAppBadGatewayError\n\t_, err = postAuthSAML(context.Background(), sr, make(map[string]string), []byte{}, 0)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed.\")\n\t}\n\tsr.FuncPost = postTestSuccessButInvalidJSON\n\t_, err = postAuthSAML(context.Background(), sr, make(map[string]string), []byte{0x12, 0x34}, 0)\n\tif err == nil {\n\t\tt.Fatalf(\"should have failed to post\")\n\t}\n}\n\nfunc TestUnitPostAuthOKTA(t 
*testing.T) {\n\tsr := &snowflakeRestful{\n\t\tFuncPost:      postTestError,\n\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t}\n\tvar err error\n\t_, err = postAuthOKTA(context.Background(), sr, make(map[string]string), []byte{}, \"hahah\", 0)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed.\")\n\t}\n\tsr.FuncPost = postTestAppBadGatewayError\n\t_, err = postAuthOKTA(context.Background(), sr, make(map[string]string), []byte{}, \"hahah\", 0)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed.\")\n\t}\n\tsr.FuncPost = postTestSuccessButInvalidJSON\n\t_, err = postAuthOKTA(context.Background(), sr, make(map[string]string), []byte{0x12, 0x34}, \"haha\", 0)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed to run post request after the renewal\")\n\t}\n}\n\nfunc TestUnitGetSSO(t *testing.T) {\n\tsr := &snowflakeRestful{\n\t\tFuncGet:       getTestError,\n\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t}\n\tvar err error\n\t_, err = getSSO(context.Background(), sr, &url.Values{}, make(map[string]string), \"hahah\", 0)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed.\")\n\t}\n\tsr.FuncGet = getTestAppBadGatewayError\n\t_, err = getSSO(context.Background(), sr, &url.Values{}, make(map[string]string), \"hahah\", 0)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed.\")\n\t}\n\tsr.FuncGet = getTestHTMLSuccess\n\t_, err = getSSO(context.Background(), sr, &url.Values{}, make(map[string]string), \"hahah\", 0)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to get HTML content. 
err: %v\", err)\n\t}\n\t_, err = getSSO(context.Background(), sr, &url.Values{}, make(map[string]string), \"invalid!@url$%^\", 0)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed to parse URL.\")\n\t}\n}\n\nfunc postAuthSAMLError(_ context.Context, _ *snowflakeRestful, _ map[string]string, _ []byte, _ time.Duration) (*authResponse, error) {\n\treturn &authResponse{}, errors.New(\"failed to get SAML response\")\n}\n\nfunc postAuthSAMLAuthFail(_ context.Context, _ *snowflakeRestful, _ map[string]string, _ []byte, _ time.Duration) (*authResponse, error) {\n\treturn &authResponse{\n\t\tSuccess: false,\n\t\tMessage: \"SAML auth failed\",\n\t}, nil\n}\n\nfunc postAuthSAMLAuthFailWithCode(_ context.Context, _ *snowflakeRestful, _ map[string]string, _ []byte, _ time.Duration) (*authResponse, error) {\n\treturn &authResponse{\n\t\tSuccess: false,\n\t\tCode:    strconv.Itoa(ErrCodeIdpConnectionError),\n\t\tMessage: \"SAML auth failed\",\n\t}, nil\n}\n\nfunc postAuthSAMLAuthSuccessButInvalidURL(_ context.Context, _ *snowflakeRestful, _ map[string]string, _ []byte, _ time.Duration) (*authResponse, error) {\n\treturn &authResponse{\n\t\tSuccess: true,\n\t\tMessage: \"\",\n\t\tData: authResponseMain{\n\t\t\tTokenURL: \"https://1abc.com/token\",\n\t\t\tSSOURL:   \"https://2abc.com/sso\",\n\t\t},\n\t}, nil\n}\n\nfunc postAuthSAMLAuthSuccessButInvalidTokenURL(_ context.Context, _ *snowflakeRestful, _ map[string]string, _ []byte, _ time.Duration) (*authResponse, error) {\n\treturn &authResponse{\n\t\tSuccess: true,\n\t\tMessage: \"\",\n\t\tData: authResponseMain{\n\t\t\tTokenURL: \"invalid!@url$%^\",\n\t\t\tSSOURL:   \"https://abc.com/sso\",\n\t\t},\n\t}, nil\n}\n\nfunc postAuthSAMLAuthSuccessButInvalidSSOURL(_ context.Context, _ *snowflakeRestful, _ map[string]string, _ []byte, _ time.Duration) (*authResponse, error) {\n\treturn &authResponse{\n\t\tSuccess: true,\n\t\tMessage: \"\",\n\t\tData: authResponseMain{\n\t\t\tTokenURL: \"https://abc.com/token\",\n\t\t\tSSOURL:   
\"invalid!@url$%^\",\n\t\t},\n\t}, nil\n}\n\nfunc postAuthSAMLAuthSuccess(_ context.Context, _ *snowflakeRestful, _ map[string]string, _ []byte, _ time.Duration) (*authResponse, error) {\n\treturn &authResponse{\n\t\tSuccess: true,\n\t\tMessage: \"\",\n\t\tData: authResponseMain{\n\t\t\tTokenURL: \"https://abc.com/token\",\n\t\t\tSSOURL:   \"https://abc.com/sso\",\n\t\t},\n\t}, nil\n}\n\nfunc postAuthOKTAError(_ context.Context, _ *snowflakeRestful, _ map[string]string, _ []byte, _ string, _ time.Duration) (*authOKTAResponse, error) {\n\treturn &authOKTAResponse{}, errors.New(\"failed to get SAML response\")\n}\n\nfunc postAuthOKTASuccess(_ context.Context, _ *snowflakeRestful, _ map[string]string, _ []byte, _ string, _ time.Duration) (*authOKTAResponse, error) {\n\treturn &authOKTAResponse{}, nil\n}\n\nfunc getSSOError(_ context.Context, _ *snowflakeRestful, _ *url.Values, _ map[string]string, _ string, _ time.Duration) ([]byte, error) {\n\treturn []byte{}, errors.New(\"failed to get SSO html\")\n}\n\nfunc getSSOSuccessButInvalidURL(_ context.Context, _ *snowflakeRestful, _ *url.Values, _ map[string]string, _ string, _ time.Duration) ([]byte, error) {\n\treturn []byte(`<html><form id=\"1\"/></html>`), nil\n}\n\nfunc getSSOSuccess(_ context.Context, _ *snowflakeRestful, _ *url.Values, _ map[string]string, _ string, _ time.Duration) ([]byte, error) {\n\treturn []byte(`<html><form id=\"1\" action=\"https&#x3a;&#x2f;&#x2f;abc.com&#x2f;\"></form></html>`), nil\n}\n\nfunc getSSOSuccessButWrongPrefixURL(_ context.Context, _ *snowflakeRestful, _ *url.Values, _ map[string]string, _ string, _ time.Duration) ([]byte, error) {\n\treturn []byte(`<html><form id=\"1\" action=\"https&#x3a;&#x2f;&#x2f;1abc.com&#x2f;\"></form></html>`), nil\n}\n\nfunc TestUnitAuthenticateBySAML(t *testing.T) {\n\tauthenticator := &url.URL{\n\t\tScheme: \"https\",\n\t\tHost:   \"abc.com\",\n\t}\n\tapplication := \"testapp\"\n\taccount := \"testaccount\"\n\tuser := \"u\"\n\tpassword := \"p\"\n\tsr := 
&snowflakeRestful{\n\t\tProtocol:         \"https\",\n\t\tHost:             \"abc.com\",\n\t\tPort:             443,\n\t\tFuncPostAuthSAML: postAuthSAMLError,\n\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t}\n\tvar err error\n\t_, err = authenticateBySAML(context.Background(), sr, authenticator, application, account, user, password, ConfigBoolFalse)\n\tassertNotNilF(t, err, \"should have failed at FuncPostAuthSAML.\")\n\tassertEqualE(t, err.Error(), \"failed to get SAML response\")\n\n\tsr.FuncPostAuthSAML = postAuthSAMLAuthFail\n\t_, err = authenticateBySAML(context.Background(), sr, authenticator, application, account, user, password, ConfigBoolFalse)\n\tassertNotNilF(t, err, \"should have failed at FuncPostAuthSAML.\")\n\tassertEqualE(t, err.Error(), \"strconv.Atoi: parsing \\\"\\\": invalid syntax\")\n\n\tsr.FuncPostAuthSAML = postAuthSAMLAuthFailWithCode\n\t_, err = authenticateBySAML(context.Background(), sr, authenticator, application, account, user, password, ConfigBoolFalse)\n\tassertNotNilF(t, err, \"should have failed at FuncPostAuthSAML.\")\n\tdriverErr, ok := err.(*SnowflakeError)\n\tassertTrueF(t, ok, \"should be a SnowflakeError\")\n\tassertEqualE(t, driverErr.Number, ErrCodeIdpConnectionError)\n\n\tsr.FuncPostAuthSAML = postAuthSAMLAuthSuccessButInvalidURL\n\t_, err = authenticateBySAML(context.Background(), sr, authenticator, application, account, user, password, ConfigBoolFalse)\n\tassertNotNilF(t, err, \"should have failed at FuncPostAuthSAML.\")\n\tdriverErr, ok = err.(*SnowflakeError)\n\tassertTrueF(t, ok, \"should be a SnowflakeError\")\n\tassertEqualE(t, driverErr.Number, ErrCodeIdpConnectionError)\n\n\tsr.FuncPostAuthSAML = postAuthSAMLAuthSuccessButInvalidTokenURL\n\t_, err = authenticateBySAML(context.Background(), sr, authenticator, application, account, user, password, ConfigBoolFalse)\n\tassertNotNilF(t, err, \"should have failed at FuncPostAuthSAML.\")\n\tassertEqualE(t, err.Error(), \"failed to parse token URL. 
invalid!@url$%^\")\n\n\tsr.FuncPostAuthSAML = postAuthSAMLAuthSuccessButInvalidSSOURL\n\t_, err = authenticateBySAML(context.Background(), sr, authenticator, application, account, user, password, ConfigBoolFalse)\n\tassertNotNilF(t, err, \"should have failed at FuncPostAuthSAML.\")\n\tassertEqualE(t, err.Error(), \"failed to parse SSO URL. invalid!@url$%^\")\n\n\tsr.FuncPostAuthSAML = postAuthSAMLAuthSuccess\n\tsr.FuncPostAuthOKTA = postAuthOKTAError\n\t_, err = authenticateBySAML(context.Background(), sr, authenticator, application, account, user, password, ConfigBoolFalse)\n\tassertNotNilF(t, err, \"should have failed at FuncPostAuthOKTA.\")\n\tassertEqualE(t, err.Error(), \"failed to get SAML response\")\n\n\tsr.FuncPostAuthOKTA = postAuthOKTASuccess\n\tsr.FuncGetSSO = getSSOError\n\t_, err = authenticateBySAML(context.Background(), sr, authenticator, application, account, user, password, ConfigBoolFalse)\n\tassertNotNilF(t, err, \"should have failed at FuncGetSSO.\")\n\tassertEqualE(t, err.Error(), \"failed to get SSO html\")\n\n\tsr.FuncGetSSO = getSSOSuccessButInvalidURL\n\t_, err = authenticateBySAML(context.Background(), sr, authenticator, application, account, user, password, ConfigBoolFalse)\n\tassertNotNilF(t, err, \"should have failed at FuncGetSSO.\")\n\tassertHasPrefixE(t, err.Error(), \"failed to find action field in HTML response\")\n\n\tsr.FuncGetSSO = getSSOSuccess\n\t_, err = authenticateBySAML(context.Background(), sr, authenticator, application, account, user, password, ConfigBoolFalse)\n\tassertNilF(t, err, \"should have succeeded at FuncGetSSO.\")\n\n\tsr.FuncGetSSO = getSSOSuccessButWrongPrefixURL\n\t_, err = authenticateBySAML(context.Background(), sr, authenticator, application, account, user, password, ConfigBoolFalse)\n\tassertNotNilF(t, err, \"should have failed at FuncGetSSO.\")\n\tdriverErr, ok = err.(*SnowflakeError)\n\tassertTrueF(t, ok, \"should be a SnowflakeError\")\n\tassertEqualE(t, driverErr.Number, 
ErrCodeSSOURLNotMatch)\n}\n\nfunc TestDisableSamlURLCheck(t *testing.T) {\n\tauthenticator := &url.URL{\n\t\tScheme: \"https\",\n\t\tHost:   \"abc.com\",\n\t}\n\tapplication := \"testapp\"\n\taccount := \"testaccount\"\n\tuser := \"u\"\n\tpassword := \"p\"\n\tsr := &snowflakeRestful{\n\t\tProtocol:         \"https\",\n\t\tHost:             \"abc.com\",\n\t\tPort:             443,\n\t\tFuncPostAuthSAML: postAuthSAMLAuthSuccess,\n\t\tFuncPostAuthOKTA: postAuthOKTASuccess,\n\t\tFuncGetSSO:       getSSOSuccessButWrongPrefixURL,\n\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t}\n\tvar err error\n\t// Test for disabled SAML URL check\n\t_, err = authenticateBySAML(context.Background(), sr, authenticator, application, account, user, password, ConfigBoolTrue)\n\tassertNilF(t, err, \"SAML URL check should have been disabled.\")\n\n\t// Test for enabled SAML URL check\n\t_, err = authenticateBySAML(context.Background(), sr, authenticator, application, account, user, password, ConfigBoolFalse)\n\tassertNotNilF(t, err, \"should have failed at FuncGetSSO.\")\n\tdriverErr, ok := err.(*SnowflakeError)\n\tassertTrueF(t, ok, \"should be a SnowflakeError\")\n\tassertEqualE(t, driverErr.Number, ErrCodeSSOURLNotMatch)\n}\n"
  },
  {
    "path": "azure_storage_client.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"cmp\"\n\t\"context\"\n\t\"crypto/md5\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/Azure/azure-sdk-for-go/sdk/azcore\"\n\t\"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy\"\n\t\"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob\"\n\t\"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob\"\n\t\"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/bloberror\"\n\t\"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container\"\n)\n\ntype snowflakeAzureClient struct {\n\tcfg       *Config\n\ttelemetry *snowflakeTelemetry\n}\n\ntype azureLocation struct {\n\tcontainerName string\n\tpath          string\n}\n\ntype azureAPI interface {\n\tUploadStream(ctx context.Context, body io.Reader, o *azblob.UploadStreamOptions) (azblob.UploadStreamResponse, error)\n\tUploadFile(ctx context.Context, file *os.File, o *azblob.UploadFileOptions) (azblob.UploadFileResponse, error)\n\tDownloadFile(ctx context.Context, file *os.File, o *blob.DownloadFileOptions) (int64, error)\n\tDownloadStream(ctx context.Context, o *blob.DownloadStreamOptions) (azblob.DownloadStreamResponse, error)\n\tGetProperties(ctx context.Context, o *blob.GetPropertiesOptions) (blob.GetPropertiesResponse, error)\n}\n\nfunc (util *snowflakeAzureClient) createClient(info *execResponseStageInfo, _ bool, telemetry *snowflakeTelemetry) (cloudClient, error) {\n\tsasToken := info.Creds.AzureSasToken\n\tu, err := url.Parse(fmt.Sprintf(\"https://%s.%s/%s%s\", info.StorageAccount, info.EndPoint, info.Path, sasToken))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\ttransport, err := newTransportFactory(util.cfg, telemetry).createTransport(transportConfigFor(transportTypeCloudProvider))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tclient, err := azblob.NewClientWithNoCredential(u.String(), &azblob.ClientOptions{\n\t\tClientOptions: 
azcore.ClientOptions{\n\t\t\tRetry: policy.RetryOptions{\n\t\t\t\tMaxRetries: 60,\n\t\t\t\tRetryDelay: 2 * time.Second,\n\t\t\t},\n\t\t\tTransport: &http.Client{\n\t\t\t\tTransport: transport,\n\t\t\t},\n\t\t},\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn client, nil\n}\n\n// cloudUtil implementation\nfunc (util *snowflakeAzureClient) getFileHeader(ctx context.Context, meta *fileMetadata, filename string) (*fileHeader, error) {\n\tclient, ok := meta.client.(*azblob.Client)\n\tif !ok {\n\t\treturn nil, errors.New(\"failed to parse client to azblob.Client\")\n\t}\n\n\tazureLoc, err := util.extractContainerNameAndPath(meta.stageInfo.Location)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tpath := azureLoc.path + strings.TrimLeft(filename, \"/\")\n\tcontainerClient, err := createContainerClient(client.URL(), util.cfg, util.telemetry)\n\tif err != nil {\n\t\treturn nil, &SnowflakeError{\n\t\t\tMessage: \"failed to create container client\",\n\t\t}\n\t}\n\tvar blobClient azureAPI\n\tblobClient = containerClient.NewBlockBlobClient(path)\n\t// for testing only\n\tif meta.mockAzureClient != nil {\n\t\tblobClient = meta.mockAzureClient\n\t}\n\tresp, err := withCloudStorageTimeout(ctx, util.cfg, func(ctx context.Context) (blob.GetPropertiesResponse, error) {\n\t\treturn blobClient.GetProperties(ctx, &blob.GetPropertiesOptions{\n\t\t\tAccessConditions: &blob.AccessConditions{},\n\t\t\tCPKInfo:          &blob.CPKInfo{},\n\t\t})\n\t})\n\tif err != nil {\n\t\tvar se *azcore.ResponseError\n\t\tif errors.As(err, &se) {\n\t\t\tif se.ErrorCode == string(bloberror.BlobNotFound) {\n\t\t\t\tmeta.resStatus = notFoundFile\n\t\t\t\treturn nil, errors.New(\"could not find file\")\n\t\t\t} else if se.StatusCode == 403 {\n\t\t\t\tmeta.resStatus = renewToken\n\t\t\t\treturn nil, errors.New(\"received 403, attempting to renew\")\n\t\t\t}\n\t\t}\n\t\tmeta.resStatus = errStatus\n\t\tmeta.lastError = err\n\t\treturn nil, fmt.Errorf(\"unexpected error while retrieving file header 
from azure. %w\", err)\n\t}\n\n\tmeta.resStatus = uploaded\n\tmetadata := withLowerKeys(resp.Metadata)\n\tvar encData encryptionData\n\n\t_, ok = metadata[\"encryptiondata\"]\n\tif ok {\n\t\tif err = json.Unmarshal([]byte(*metadata[\"encryptiondata\"]), &encData); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tmatdesc, ok := metadata[\"matdesc\"]\n\tif !ok {\n\t\t// matdesc is not in response, use empty string\n\t\tmatdesc = new(string)\n\t}\n\tencryptionMetadata := encryptMetadata{\n\t\tencData.WrappedContentKey.EncryptionKey,\n\t\tencData.ContentEncryptionIV,\n\t\t*matdesc,\n\t}\n\n\tdigest, ok := metadata[\"sfcdigest\"]\n\tif !ok {\n\t\t// sfcdigest is not in response, use empty string\n\t\tdigest = new(string)\n\t}\n\t// report the blob's actual content length, not the number of metadata entries\n\tvar contentLength int64\n\tif resp.ContentLength != nil {\n\t\tcontentLength = *resp.ContentLength\n\t}\n\treturn &fileHeader{\n\t\t*digest,\n\t\tcontentLength,\n\t\t&encryptionMetadata,\n\t}, nil\n}\n\n// cloudUtil implementation\nfunc (util *snowflakeAzureClient) uploadFile(\n\tctx context.Context,\n\tdataFile string,\n\tmeta *fileMetadata,\n\tmaxConcurrency int,\n\tmultiPartThreshold int64) error {\n\tazureMeta := map[string]*string{\n\t\t\"sfcdigest\": &meta.sha256Digest,\n\t}\n\tif meta.encryptMeta != nil {\n\t\ted := &encryptionData{\n\t\t\tEncryptionMode: \"FullBlob\",\n\t\t\tWrappedContentKey: contentKey{\n\t\t\t\t\"symmKey1\",\n\t\t\t\tmeta.encryptMeta.key,\n\t\t\t\t\"AES_CBC_256\",\n\t\t\t},\n\t\t\tEncryptionAgent: encryptionAgent{\n\t\t\t\t\"1.0\",\n\t\t\t\t\"AES_CBC_128\",\n\t\t\t},\n\t\t\tContentEncryptionIV: meta.encryptMeta.iv,\n\t\t\tKeyWrappingMetadata: keyMetadata{\n\t\t\t\t\"Java 5.3.0\",\n\t\t\t},\n\t\t}\n\t\tmetadata, err := json.Marshal(ed)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tencryptionMetadata := string(metadata)\n\t\tazureMeta[\"encryptiondata\"] = &encryptionMetadata\n\t\tazureMeta[\"matdesc\"] = &meta.encryptMeta.matdesc\n\t}\n\n\tazureLoc, err := util.extractContainerNameAndPath(meta.stageInfo.Location)\n\tif err != nil {\n\t\treturn err\n\t}\n\tpath := azureLoc.path + 
strings.TrimLeft(meta.dstFileName, \"/\")\n\tclient, ok := meta.client.(*azblob.Client)\n\tif !ok {\n\t\treturn &SnowflakeError{\n\t\t\tMessage: \"failed to cast to azure client\",\n\t\t}\n\t}\n\tcontainerClient, err := createContainerClient(client.URL(), util.cfg, util.telemetry)\n\n\tif err != nil {\n\t\treturn &SnowflakeError{\n\t\t\tMessage: \"failed to create container client\",\n\t\t}\n\t}\n\tvar blobClient azureAPI\n\tblobClient = containerClient.NewBlockBlobClient(path)\n\t// for testing only\n\tif meta.mockAzureClient != nil {\n\t\tblobClient = meta.mockAzureClient\n\t}\n\tif meta.srcStream != nil {\n\t\tuploadSrc := cmp.Or(meta.realSrcStream, meta.srcStream)\n\t\tdata := uploadSrc.Bytes()\n\t\tcontentMD5 := md5.Sum(data)\n\t\t_, err = withCloudStorageTimeout(ctx, util.cfg, func(ctx context.Context) (azblob.UploadStreamResponse, error) {\n\t\t\treturn blobClient.UploadStream(ctx, bytes.NewReader(data), &azblob.UploadStreamOptions{\n\t\t\t\tBlockSize: int64(len(data)),\n\t\t\t\tMetadata:  azureMeta,\n\t\t\t\tHTTPHeaders: &blob.HTTPHeaders{\n\t\t\t\t\tBlobContentMD5: contentMD5[:],\n\t\t\t\t},\n\t\t\t})\n\t\t})\n\t} else {\n\t\tvar f *os.File\n\t\tf, err = os.Open(dataFile)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"failed to open file: %w\", err)\n\t\t}\n\t\tdefer func() {\n\t\t\tif err = f.Close(); err != nil {\n\t\t\t\tlogger.Warnf(\"Failed to close the %v file: %v\", dataFile, err)\n\t\t\t}\n\t\t}()\n\n\t\tvar contentMD5 []byte\n\t\tcontentMD5, err = computeMD5ForFile(f)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"failed to compute MD5: %w\", err)\n\t\t}\n\n\t\tcontentType := \"application/octet-stream\"\n\t\tcontentEncoding := \"utf-8\"\n\t\tblobOptions := &azblob.UploadFileOptions{\n\t\t\tHTTPHeaders: &blob.HTTPHeaders{\n\t\t\t\tBlobContentType:     &contentType,\n\t\t\t\tBlobContentEncoding: &contentEncoding,\n\t\t\t\tBlobContentMD5:      contentMD5,\n\t\t\t},\n\t\t\tMetadata:    azureMeta,\n\t\t\tConcurrency: 
uint16(maxConcurrency),\n\t\t}\n\t\tif meta.options.putAzureCallback != nil {\n\t\t\tblobOptions.Progress = meta.options.putAzureCallback.call\n\t\t}\n\t\t_, err = withCloudStorageTimeout(ctx, util.cfg, func(ctx context.Context) (azblob.UploadFileResponse, error) {\n\t\t\treturn blobClient.UploadFile(ctx, f, blobOptions)\n\t\t})\n\t}\n\tif err != nil {\n\t\tvar se *azcore.ResponseError\n\t\tif errors.As(err, &se) {\n\t\t\tif se.StatusCode == 403 && util.detectAzureTokenExpireError(se.RawResponse) {\n\t\t\t\tmeta.resStatus = renewToken\n\t\t\t} else {\n\t\t\t\tmeta.resStatus = needRetry\n\t\t\t\tmeta.lastError = err\n\t\t\t}\n\t\t\treturn err\n\t\t}\n\t\tmeta.resStatus = errStatus\n\t\treturn err\n\t}\n\n\tmeta.dstFileSize = meta.uploadSize\n\tmeta.resStatus = uploaded\n\treturn nil\n}\n\n// cloudUtil implementation\nfunc (util *snowflakeAzureClient) nativeDownloadFile(\n\tctx context.Context,\n\tmeta *fileMetadata,\n\tfullDstFileName string,\n\tmaxConcurrency int64,\n\tpartSize int64) error {\n\tazureLoc, err := util.extractContainerNameAndPath(meta.stageInfo.Location)\n\tif err != nil {\n\t\treturn err\n\t}\n\tpath := azureLoc.path + strings.TrimLeft(meta.srcFileName, \"/\")\n\tlogger.Debugf(\"AZURE CLIENT: Send Get Request to the bucket: %v, file: %v\", meta.stageInfo.Location, meta.srcFileName)\n\tclient, ok := meta.client.(*azblob.Client)\n\tif !ok {\n\t\treturn &SnowflakeError{\n\t\t\tMessage: \"failed to cast to azure client\",\n\t\t}\n\t}\n\tcontainerClient, err := createContainerClient(client.URL(), util.cfg, util.telemetry)\n\tif err != nil {\n\t\treturn &SnowflakeError{\n\t\t\tMessage: \"failed to create container client\",\n\t\t}\n\t}\n\tvar blobClient azureAPI\n\tblobClient = containerClient.NewBlockBlobClient(path)\n\t// for testing only\n\tif meta.mockAzureClient != nil {\n\t\tblobClient = meta.mockAzureClient\n\t}\n\tif isFileGetStream(ctx) {\n\t\tblobDownloadResponse, err := withCloudStorageTimeout(ctx, util.cfg, func(ctx context.Context) 
(azblob.DownloadStreamResponse, error) {\n\t\t\treturn blobClient.DownloadStream(ctx, &azblob.DownloadStreamOptions{})\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tretryReader := blobDownloadResponse.NewRetryReader(context.Background(), &azblob.RetryReaderOptions{})\n\t\tdefer func() {\n\t\t\tif err = retryReader.Close(); err != nil {\n\t\t\t\tlogger.Warnf(\"failed to close the Azure reader: %v\", err)\n\t\t\t}\n\t\t}()\n\t\t_, err = meta.dstStream.ReadFrom(retryReader)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t} else {\n\t\tf, err := os.OpenFile(fullDstFileName, os.O_CREATE|os.O_WRONLY, readWriteFileMode)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"failed to open file: %w\", err)\n\t\t}\n\t\tdefer func() {\n\t\t\tif err = f.Close(); err != nil {\n\t\t\t\tlogger.Warnf(\"failed to close the %v file: %v\", fullDstFileName, err)\n\t\t\t}\n\t\t}()\n\t\t_, err = withCloudStorageTimeout(ctx, util.cfg, func(ctx context.Context) (any, error) {\n\t\t\treturn blobClient.DownloadFile(\n\t\t\t\tctx, f, &azblob.DownloadFileOptions{\n\t\t\t\t\tConcurrency: uint16(maxConcurrency),\n\t\t\t\t\tBlockSize:   int64Max(partSize, blob.DefaultDownloadBlockSize),\n\t\t\t\t})\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\tmeta.resStatus = downloaded\n\treturn nil\n}\n\nfunc (util *snowflakeAzureClient) extractContainerNameAndPath(location string) (*azureLocation, error) {\n\tstageLocation, err := expandUser(location)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tcontainerName := stageLocation\n\tpath := \"\"\n\n\tif strings.Contains(stageLocation, \"/\") {\n\t\tcontainerName = stageLocation[:strings.Index(stageLocation, \"/\")]\n\t\tpath = stageLocation[strings.Index(stageLocation, \"/\")+1:]\n\t\tif path != \"\" && !strings.HasSuffix(path, \"/\") {\n\t\t\tpath += \"/\"\n\t\t}\n\t}\n\treturn &azureLocation{containerName, path}, nil\n}\n\nfunc (util *snowflakeAzureClient) detectAzureTokenExpireError(resp *http.Response) bool {\n\tif resp.StatusCode 
!= 403 {\n\t\treturn false\n\t}\n\tazureErr, err := io.ReadAll(resp.Body)\n\tif err != nil {\n\t\treturn false\n\t}\n\terrStr := string(azureErr)\n\treturn strings.Contains(errStr, \"Signature not valid in the specified time frame\") ||\n\t\tstrings.Contains(errStr, \"Server failed to authenticate the request\")\n}\n\n// computeMD5ForFile reads a file to compute its MD5 digest, then seeks back to\n// the start so the file can be read again for upload. Azure does not compute\n// Content-MD5 for multi-part (block blob) uploads, so we must provide it.\nfunc computeMD5ForFile(f *os.File) ([]byte, error) {\n\th := md5.New()\n\tif _, err := io.Copy(h, f); err != nil {\n\t\treturn nil, err\n\t}\n\tif _, err := f.Seek(0, io.SeekStart); err != nil {\n\t\treturn nil, err\n\t}\n\treturn h.Sum(nil), nil\n}\n\nfunc createContainerClient(clientURL string, cfg *Config, telemetry *snowflakeTelemetry) (*container.Client, error) {\n\ttransport, err := newTransportFactory(cfg, telemetry).createTransport(transportConfigFor(transportTypeCloudProvider))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn container.NewClientWithNoCredential(clientURL, &container.ClientOptions{ClientOptions: azcore.ClientOptions{\n\t\tTransport: &http.Client{\n\t\t\tTransport: transport,\n\t\t},\n\t}})\n}\n"
  },
  {
    "path": "azure_storage_client_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"crypto/md5\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"io\"\n\t\"net/http\"\n\t\"os\"\n\t\"path\"\n\t\"testing\"\n\n\t\"github.com/Azure/azure-sdk-for-go/sdk/azcore\"\n\t\"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob\"\n\t\"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob\"\n)\n\nfunc TestExtractContainerNameAndPath(t *testing.T) {\n\tazureUtil := new(snowflakeAzureClient)\n\ttestcases := []tcBucketPath{\n\t\t{\"sfc-eng-regression/test_sub_dir/\", \"sfc-eng-regression\", \"test_sub_dir/\"},\n\t\t{\"sfc-eng-regression/dir/test_stg/test_sub_dir/\", \"sfc-eng-regression\", \"dir/test_stg/test_sub_dir/\"},\n\t\t{\"sfc-eng-regression/\", \"sfc-eng-regression\", \"\"},\n\t\t{\"sfc-eng-regression//\", \"sfc-eng-regression\", \"/\"},\n\t\t{\"sfc-eng-regression///\", \"sfc-eng-regression\", \"//\"},\n\t}\n\tfor _, test := range testcases {\n\t\tt.Run(test.in, func(t *testing.T) {\n\t\t\tazureLoc, err := azureUtil.extractContainerNameAndPath(test.in)\n\t\t\tif err != nil {\n\t\t\t\tt.Error(err)\n\t\t\t}\n\t\t\tif azureLoc.containerName != test.bucket {\n\t\t\t\tt.Errorf(\"failed. in: %v, expected: %v, got: %v\", test.in, test.bucket, azureLoc.containerName)\n\t\t\t}\n\t\t\tif azureLoc.path != test.path {\n\t\t\t\tt.Errorf(\"failed. 
in: %v, expected: %v, got: %v\", test.in, test.path, azureLoc.path)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestUnitDetectAzureTokenExpireError(t *testing.T) {\n\tazureUtil := new(snowflakeAzureClient)\n\tdd := &execResponseData{}\n\tinvalidSig := &execResponse{\n\t\tData:    *dd,\n\t\tMessage: \"Signature not valid in the specified time frame\",\n\t\tCode:    \"403\",\n\t\tSuccess: true,\n\t}\n\tba, err := json.Marshal(invalidSig)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tresp := &http.Response{StatusCode: http.StatusForbidden, Body: &fakeResponseBody{body: ba}}\n\tif !azureUtil.detectAzureTokenExpireError(resp) {\n\t\tt.Fatal(\"expected token expired\")\n\t}\n\n\tinvalidAuth := &execResponse{\n\t\tData:    *dd,\n\t\tMessage: \"Server failed to authenticate the request\",\n\t\tCode:    \"403\",\n\t\tSuccess: true,\n\t}\n\tba, err = json.Marshal(invalidAuth)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tresp = &http.Response{StatusCode: http.StatusForbidden, Body: &fakeResponseBody{body: ba}}\n\tif !azureUtil.detectAzureTokenExpireError(resp) {\n\t\tt.Fatal(\"expected token expired\")\n\t}\n\n\tresp = &http.Response{\n\t\tStatusCode: http.StatusForbidden,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}},\n\t}\n\tif azureUtil.detectAzureTokenExpireError(resp) {\n\t\tt.Fatal(\"invalid body\")\n\t}\n\n\tinvalidMessage := &execResponse{\n\t\tData:    *dd,\n\t\tMessage: \"unauthorized\",\n\t\tCode:    \"403\",\n\t\tSuccess: true,\n\t}\n\tba, err = json.Marshal(invalidMessage)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tresp = &http.Response{StatusCode: http.StatusForbidden, Body: &fakeResponseBody{body: ba}}\n\tif azureUtil.detectAzureTokenExpireError(resp) {\n\t\tt.Fatal(\"incorrect message\")\n\t}\n\n\tresp = &http.Response{\n\t\tStatusCode: http.StatusOK,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}}}\n\n\tif azureUtil.detectAzureTokenExpireError(resp) {\n\t\tt.Fatal(\"status code is success. 
expected false.\")\n\t}\n}\n\ntype azureObjectAPIMock struct {\n\tUploadStreamFunc   func(ctx context.Context, body io.Reader, o *azblob.UploadStreamOptions) (azblob.UploadStreamResponse, error)\n\tUploadFileFunc     func(ctx context.Context, file *os.File, o *azblob.UploadFileOptions) (azblob.UploadFileResponse, error)\n\tDownloadFileFunc   func(ctx context.Context, file *os.File, o *blob.DownloadFileOptions) (int64, error)\n\tDownloadStreamFunc func(ctx context.Context, o *blob.DownloadStreamOptions) (azblob.DownloadStreamResponse, error)\n\tGetPropertiesFunc  func(ctx context.Context, o *blob.GetPropertiesOptions) (blob.GetPropertiesResponse, error)\n}\n\nfunc (c *azureObjectAPIMock) UploadStream(ctx context.Context, body io.Reader, o *azblob.UploadStreamOptions) (azblob.UploadStreamResponse, error) {\n\treturn c.UploadStreamFunc(ctx, body, o)\n}\n\nfunc (c *azureObjectAPIMock) UploadFile(ctx context.Context, file *os.File, o *azblob.UploadFileOptions) (azblob.UploadFileResponse, error) {\n\treturn c.UploadFileFunc(ctx, file, o)\n}\n\nfunc (c *azureObjectAPIMock) GetProperties(ctx context.Context, o *blob.GetPropertiesOptions) (blob.GetPropertiesResponse, error) {\n\treturn c.GetPropertiesFunc(ctx, o)\n}\n\nfunc (c *azureObjectAPIMock) DownloadFile(ctx context.Context, file *os.File, o *blob.DownloadFileOptions) (int64, error) {\n\treturn c.DownloadFileFunc(ctx, file, o)\n}\n\nfunc (c *azureObjectAPIMock) DownloadStream(ctx context.Context, o *blob.DownloadStreamOptions) (azblob.DownloadStreamResponse, error) {\n\treturn c.DownloadStreamFunc(ctx, o)\n}\n\nfunc TestUploadFileWithAzureUploadFailedError(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"azblob/storage/users/456/\",\n\t\tLocationType: \"AZURE\",\n\t}\n\tinitialParallel := int64(100)\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tencMat := snowflakeFileEncryption{\n\t\tQueryStageMasterKey: \"abCdEFO0upIT36dAxGsa0w==\",\n\t\tQueryID:             
\"01abc874-0406-1bf0-0000-53b10668e056\",\n\t\tSMKID:               92019681909886,\n\t}\n\n\tazureCli, err := new(snowflakeAzureClient).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta := fileMetadata{\n\t\tname:               \"data1.txt.gz\",\n\t\tstageLocationType:  \"AZURE\",\n\t\tnoSleepingTime:     true,\n\t\tparallel:           initialParallel,\n\t\tclient:             azureCli,\n\t\tsha256Digest:       \"123456789abcdef\",\n\t\tstageInfo:          &info,\n\t\tdstFileName:        \"data1.txt.gz\",\n\t\tsrcFileName:        path.Join(dir, \"/test_data/put_get_1.txt\"),\n\t\tencryptionMaterial: &encMat,\n\t\tencryptMeta:        testEncryptionMeta(),\n\t\toverwrite:          true,\n\t\tdstCompressionType: compressionTypes[\"GZIP\"],\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockAzureClient: &azureObjectAPIMock{\n\t\t\tUploadFileFunc: func(ctx context.Context, file *os.File, o *azblob.UploadFileOptions) (azblob.UploadFileResponse, error) {\n\t\t\t\treturn azblob.UploadFileResponse{}, errors.New(\"unexpected error uploading file\")\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tuploadMeta.realSrcFileName = uploadMeta.srcFileName\n\tfi, err := os.Stat(uploadMeta.srcFileName)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta.uploadSize = fi.Size()\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed\")\n\t}\n}\n\nfunc TestUploadStreamWithAzureUploadFailedError(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"azblob/storage/users/456/\",\n\t\tLocationType: \"AZURE\",\n\t}\n\tinitialParallel := int64(100)\n\tsrc := []byte{65, 66, 67}\n\tencMat := snowflakeFileEncryption{\n\t\tQueryStageMasterKey: \"abCdEFO0upIT36dAxGsa0w==\",\n\t\tQueryID:             
\"01abc874-0406-1bf0-0000-53b10668e056\",\n\t\tSMKID:               92019681909886,\n\t}\n\n\tazureCli, err := new(snowflakeAzureClient).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta := fileMetadata{\n\t\tname:               \"data1.txt.gz\",\n\t\tstageLocationType:  \"AZURE\",\n\t\tnoSleepingTime:     true,\n\t\tparallel:           initialParallel,\n\t\tclient:             azureCli,\n\t\tsha256Digest:       \"123456789abcdef\",\n\t\tstageInfo:          &info,\n\t\tdstFileName:        \"data1.txt.gz\",\n\t\tsrcStream:          bytes.NewBuffer(src),\n\t\tencryptionMaterial: &encMat,\n\t\tencryptMeta:        testEncryptionMeta(),\n\t\toverwrite:          true,\n\t\tdstCompressionType: compressionTypes[\"GZIP\"],\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockAzureClient: &azureObjectAPIMock{\n\t\t\tUploadStreamFunc: func(ctx context.Context, body io.Reader, o *azblob.UploadStreamOptions) (azblob.UploadStreamResponse, error) {\n\t\t\t\treturn azblob.UploadStreamResponse{}, errors.New(\"unexpected error uploading file\")\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tuploadMeta.realSrcStream = uploadMeta.srcStream\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed\")\n\t}\n}\n\nfunc TestUploadFileWithAzureUploadTokenExpired(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"azblob/storage/users/456/\",\n\t\tLocationType: \"AZURE\",\n\t}\n\tinitialParallel := int64(100)\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tdd := &execResponseData{}\n\tinvalidSig := &execResponse{\n\t\tData:    *dd,\n\t\tMessage: \"Signature not valid in the specified time frame\",\n\t\tCode:    \"403\",\n\t\tSuccess: true,\n\t}\n\tba, err := 
json.Marshal(invalidSig)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\tazureCli, err := new(snowflakeAzureClient).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta := fileMetadata{\n\t\tname:               \"data1.txt.gz\",\n\t\tstageLocationType:  \"AZURE\",\n\t\tnoSleepingTime:     true,\n\t\tparallel:           initialParallel,\n\t\tclient:             azureCli,\n\t\tsha256Digest:       \"123456789abcdef\",\n\t\tstageInfo:          &info,\n\t\tdstFileName:        \"data1.txt.gz\",\n\t\tsrcFileName:        path.Join(dir, \"/test_data/put_get_1.txt\"),\n\t\tencryptMeta:        testEncryptionMeta(),\n\t\toverwrite:          true,\n\t\tdstCompressionType: compressionTypes[\"GZIP\"],\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockAzureClient: &azureObjectAPIMock{\n\t\t\tUploadFileFunc: func(ctx context.Context, file *os.File, o *azblob.UploadFileOptions) (azblob.UploadFileResponse, error) {\n\t\t\t\treturn azblob.UploadFileResponse{}, &azcore.ResponseError{\n\t\t\t\t\tErrorCode:   \"12345\",\n\t\t\t\t\tStatusCode:  403,\n\t\t\t\t\tRawResponse: &http.Response{StatusCode: http.StatusForbidden, Body: &fakeResponseBody{body: ba}},\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tuploadMeta.realSrcFileName = uploadMeta.srcFileName\n\tfi, err := os.Stat(uploadMeta.srcFileName)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta.uploadSize = fi.Size()\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tif uploadMeta.resStatus != renewToken {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\trenewToken, uploadMeta.resStatus)\n\t}\n}\n\nfunc TestUploadFileWithAzureUploadNeedsRetry(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     
\"azblob/storage/users/456/\",\n\t\tLocationType: \"AZURE\",\n\t}\n\tinitialParallel := int64(100)\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tdd := &execResponseData{}\n\tinvalidSig := &execResponse{\n\t\tData:    *dd,\n\t\tMessage: \"Server Error\",\n\t\tCode:    \"500\",\n\t\tSuccess: true,\n\t}\n\tba, err := json.Marshal(invalidSig)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\tazureCli, err := new(snowflakeAzureClient).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta := fileMetadata{\n\t\tname:               \"data1.txt.gz\",\n\t\tstageLocationType:  \"AZURE\",\n\t\tnoSleepingTime:     false,\n\t\tparallel:           initialParallel,\n\t\tclient:             azureCli,\n\t\tsha256Digest:       \"123456789abcdef\",\n\t\tstageInfo:          &info,\n\t\tdstFileName:        \"data1.txt.gz\",\n\t\tsrcFileName:        path.Join(dir, \"/test_data/put_get_1.txt\"),\n\t\tencryptMeta:        testEncryptionMeta(),\n\t\toverwrite:          true,\n\t\tdstCompressionType: compressionTypes[\"GZIP\"],\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockAzureClient: &azureObjectAPIMock{\n\t\t\tUploadFileFunc: func(ctx context.Context, file *os.File, o *azblob.UploadFileOptions) (azblob.UploadFileResponse, error) {\n\t\t\t\treturn azblob.UploadFileResponse{}, &azcore.ResponseError{\n\t\t\t\t\tErrorCode:   \"12345\",\n\t\t\t\t\tStatusCode:  500,\n\t\t\t\t\tRawResponse: &http.Response{StatusCode: http.StatusInternalServerError, Body: &fakeResponseBody{body: ba}},\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tuploadMeta.realSrcFileName = uploadMeta.srcFileName\n\tfi, err := os.Stat(uploadMeta.srcFileName)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta.uploadSize = fi.Size()\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), 
&uploadMeta)\n\tif err == nil {\n\t\tt.Fatal(\"should have raised an error\")\n\t}\n\n\tif uploadMeta.resStatus != needRetry {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\tneedRetry, uploadMeta.resStatus)\n\t}\n}\n\nfunc TestDownloadOneFileToAzureFailed(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"azblob/rwyitestacco/users/1234/\",\n\t\tLocationType: \"AZURE\",\n\t}\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tazureCli, err := new(snowflakeAzureClient).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tdownloadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"AZURE\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            azureCli,\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\toverwrite:         true,\n\t\tsrcFileName:       \"data1.txt.gz\",\n\t\tlocalLocation:     dir,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockAzureClient: &azureObjectAPIMock{\n\t\t\tDownloadFileFunc: func(ctx context.Context, file *os.File, o *blob.DownloadFileOptions) (int64, error) {\n\t\t\t\treturn 0, errors.New(\"unexpected error downloading file\")\n\t\t\t},\n\t\t\tGetPropertiesFunc: func(ctx context.Context, o *blob.GetPropertiesOptions) (blob.GetPropertiesResponse, error) {\n\t\t\t\treturn blob.GetPropertiesResponse{}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\terr = new(remoteStorageUtil).downloadOneFile(context.Background(), &downloadMeta)\n\tif err == nil {\n\t\tt.Error(\"should have raised an error\")\n\t}\n}\n\nfunc TestGetFileHeaderErrorStatus(t *testing.T) {\n\tctx := context.Background()\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"azblob/teststage/users/34/\",\n\t\tLocationType: \"AZURE\",\n\t}\n\n\tazureCli, err := 
new(snowflakeAzureClient).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tmeta := fileMetadata{\n\t\tclient:    azureCli,\n\t\tstageInfo: &info,\n\t\tmockAzureClient: &azureObjectAPIMock{\n\t\t\tGetPropertiesFunc: func(ctx context.Context, o *blob.GetPropertiesOptions) (blob.GetPropertiesResponse, error) {\n\t\t\t\treturn blob.GetPropertiesResponse{}, errors.New(\"failed to retrieve headers\")\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tif header, err := (&snowflakeAzureClient{cfg: &Config{}}).getFileHeader(ctx, &meta, \"file.txt\"); header != nil || err == nil {\n\t\tt.Fatalf(\"expected null header, got: %v\", header)\n\t}\n\tif meta.resStatus != errStatus {\n\t\tt.Fatalf(\"expected %v result status, got: %v\", errStatus, meta.resStatus)\n\t}\n\n\tdd := &execResponseData{}\n\tinvalidSig := &execResponse{\n\t\tData:    *dd,\n\t\tMessage: \"Not Found\",\n\t\tCode:    \"404\",\n\t\tSuccess: true,\n\t}\n\tba, err := json.Marshal(invalidSig)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\tmeta = fileMetadata{\n\t\tclient:    azureCli,\n\t\tstageInfo: &info,\n\t\tmockAzureClient: &azureObjectAPIMock{\n\t\t\tGetPropertiesFunc: func(ctx context.Context, o *blob.GetPropertiesOptions) (blob.GetPropertiesResponse, error) {\n\t\t\t\treturn blob.GetPropertiesResponse{}, &azcore.ResponseError{\n\t\t\t\t\tErrorCode:   \"BlobNotFound\",\n\t\t\t\t\tStatusCode:  404,\n\t\t\t\t\tRawResponse: &http.Response{StatusCode: http.StatusNotFound, Body: &fakeResponseBody{body: ba}},\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tif header, err := (&snowflakeAzureClient{cfg: &Config{}}).getFileHeader(ctx, &meta, \"file.txt\"); header != nil || err == nil {\n\t\tt.Fatalf(\"expected null header, got: %v\", header)\n\t}\n\tif meta.resStatus != notFoundFile 
{\n\t\tt.Fatalf(\"expected %v result status, got: %v\", notFoundFile, meta.resStatus)\n\t}\n\n\tinvalidSig = &execResponse{\n\t\tData:    *dd,\n\t\tMessage: \"Unauthorized\",\n\t\tCode:    \"403\",\n\t\tSuccess: true,\n\t}\n\tba, err = json.Marshal(invalidSig)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tmeta.mockAzureClient = &azureObjectAPIMock{\n\t\tGetPropertiesFunc: func(ctx context.Context, o *blob.GetPropertiesOptions) (blob.GetPropertiesResponse, error) {\n\t\t\treturn blob.GetPropertiesResponse{}, &azcore.ResponseError{\n\t\t\t\tStatusCode:  403,\n\t\t\t\tRawResponse: &http.Response{StatusCode: http.StatusForbidden, Body: &fakeResponseBody{body: ba}},\n\t\t\t}\n\t\t},\n\t}\n\n\tif header, err := (&snowflakeAzureClient{cfg: &Config{}}).getFileHeader(ctx, &meta, \"file.txt\"); header != nil || err == nil {\n\t\tt.Fatalf(\"expected null header, got: %v\", header)\n\t}\n\tif meta.resStatus != renewToken {\n\t\tt.Fatalf(\"expected %v result status, got: %v\", renewToken, meta.resStatus)\n\t}\n}\n\nfunc TestUploadFileToAzureClientCastFail(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"azblob/rwyi-testacco/users/9220/\",\n\t\tLocationType: \"AZURE\",\n\t}\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\ts3Cli, err := new(snowflakeS3Client).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"AZURE\",\n\t\tnoSleepingTime:    false,\n\t\tclient:            s3Cli,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\tsrcFileName:       path.Join(dir, \"/test_data/put_get_1.txt\"),\n\t\tencryptMeta:       testEncryptionMeta(),\n\t\toverwrite:         true,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: 
&snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tuploadMeta.realSrcFileName = uploadMeta.srcFileName\n\tfi, err := os.Stat(uploadMeta.srcFileName)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta.uploadSize = fi.Size()\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed\")\n\t}\n}\n\nfunc TestUploadFileToAzureSetsBlobContentMD5(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"azblob/storage/users/456/\",\n\t\tLocationType: \"AZURE\",\n\t}\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tazureCli, err := new(snowflakeAzureClient).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tsrcFile := path.Join(dir, \"/test_data/put_get_1.txt\")\n\tsrcContent, err := os.ReadFile(srcFile)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\texpectedMD5 := md5.Sum(srcContent)\n\tvar capturedMD5 []byte\n\tuploadMeta := fileMetadata{\n\t\tname:               \"data1.txt.gz\",\n\t\tstageLocationType:  \"AZURE\",\n\t\tnoSleepingTime:     true,\n\t\tparallel:           1,\n\t\tclient:             azureCli,\n\t\tsha256Digest:       \"123456789abcdef\",\n\t\tstageInfo:          &info,\n\t\tdstFileName:        \"data1.txt.gz\",\n\t\tsrcFileName:        srcFile,\n\t\tencryptionMaterial: &snowflakeFileEncryption{QueryStageMasterKey: \"abCdEFO0upIT36dAxGsa0w==\", QueryID: \"01abc874-0406-1bf0-0000-53b10668e056\", SMKID: 92019681909886},\n\t\tencryptMeta:        testEncryptionMeta(),\n\t\toverwrite:          true,\n\t\tdstCompressionType: compressionTypes[\"GZIP\"],\n\t\toptions:            &SnowflakeFileTransferOptions{MultiPartThreshold: multiPartThreshold},\n\t\tmockAzureClient: &azureObjectAPIMock{\n\t\t\tUploadFileFunc: func(ctx context.Context, file *os.File, o *azblob.UploadFileOptions) (azblob.UploadFileResponse, error) {\n\t\t\t\tif o.HTTPHeaders != nil {\n\t\t\t\t\tcapturedMD5 = 
o.HTTPHeaders.BlobContentMD5\n\t\t\t\t}\n\t\t\t\treturn azblob.UploadFileResponse{}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{sc: &snowflakeConn{cfg: &Config{}}},\n\t}\n\n\tuploadMeta.realSrcFileName = uploadMeta.srcFileName\n\tfi, err := os.Stat(uploadMeta.srcFileName)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tuploadMeta.uploadSize = fi.Size()\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tif capturedMD5 == nil {\n\t\tt.Fatal(\"expected BlobContentMD5 to be set, got nil\")\n\t}\n\tif !bytes.Equal(capturedMD5, expectedMD5[:]) {\n\t\tt.Fatalf(\"BlobContentMD5 mismatch: got %x, want %x\", capturedMD5, expectedMD5[:])\n\t}\n}\n\nfunc TestUploadStreamToAzureSetsBlobContentMD5(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"azblob/storage/users/456/\",\n\t\tLocationType: \"AZURE\",\n\t}\n\n\tazureCli, err := new(snowflakeAzureClient).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tsrc := []byte{65, 66, 67}\n\texpectedMD5 := md5.Sum(src)\n\tvar capturedMD5 []byte\n\tuploadMeta := fileMetadata{\n\t\tname:               \"data1.txt.gz\",\n\t\tstageLocationType:  \"AZURE\",\n\t\tnoSleepingTime:     true,\n\t\tparallel:           1,\n\t\tclient:             azureCli,\n\t\tsha256Digest:       \"123456789abcdef\",\n\t\tstageInfo:          &info,\n\t\tdstFileName:        \"data1.txt.gz\",\n\t\tsrcStream:          bytes.NewBuffer(src),\n\t\tencryptionMaterial: &snowflakeFileEncryption{QueryStageMasterKey: \"abCdEFO0upIT36dAxGsa0w==\", QueryID: \"01abc874-0406-1bf0-0000-53b10668e056\", SMKID: 92019681909886},\n\t\tencryptMeta:        testEncryptionMeta(),\n\t\toverwrite:          true,\n\t\tdstCompressionType: compressionTypes[\"GZIP\"],\n\t\toptions:            &SnowflakeFileTransferOptions{MultiPartThreshold: multiPartThreshold},\n\t\tmockAzureClient: &azureObjectAPIMock{\n\t\t\tUploadStreamFunc: 
func(ctx context.Context, body io.Reader, o *azblob.UploadStreamOptions) (azblob.UploadStreamResponse, error) {\n\t\t\t\tif o.HTTPHeaders != nil {\n\t\t\t\t\tcapturedMD5 = o.HTTPHeaders.BlobContentMD5\n\t\t\t\t}\n\t\t\t\treturn azblob.UploadStreamResponse{}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{sc: &snowflakeConn{cfg: &Config{}}},\n\t}\n\n\tuploadMeta.realSrcStream = uploadMeta.srcStream\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tif capturedMD5 == nil {\n\t\tt.Fatal(\"expected BlobContentMD5 to be set, got nil\")\n\t}\n\tif !bytes.Equal(capturedMD5, expectedMD5[:]) {\n\t\tt.Fatalf(\"BlobContentMD5 mismatch: got %x, want %x\", capturedMD5, expectedMD5[:])\n\t}\n}\n\nfunc TestAzureGetHeaderClientCastFail(t *testing.T) {\n\tctx := context.Background()\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"azblob/rwyi-testacco/users/9220/\",\n\t\tLocationType: \"AZURE\",\n\t}\n\ts3Cli, err := new(snowflakeS3Client).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tmeta := fileMetadata{\n\t\tclient:    s3Cli,\n\t\tstageInfo: &execResponseStageInfo{Location: \"\"},\n\t\tmockAzureClient: &azureObjectAPIMock{\n\t\t\tGetPropertiesFunc: func(ctx context.Context, o *blob.GetPropertiesOptions) (blob.GetPropertiesResponse, error) {\n\t\t\t\treturn blob.GetPropertiesResponse{}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\t_, err = new(snowflakeAzureClient).getFileHeader(ctx, &meta, \"file.txt\")\n\tif err == nil {\n\t\tt.Fatal(\"should have failed\")\n\t}\n}\n"
  },
  {
    "path": "bind_uploader.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"fmt\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/query\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/types\"\n\t\"math/big\"\n\t\"reflect\"\n\t\"strconv\"\n\t\"strings\"\n)\n\nconst (\n\tbindStageName            = \"SYSTEM$BIND\"\n\tcreateTemporaryStageStmt = \"CREATE OR REPLACE TEMPORARY STAGE \" + bindStageName +\n\t\t\" file_format=\" + \"(type=csv field_optionally_enclosed_by='\\\"')\"\n\n\t// size (in bytes) of max input stream (10MB default) as per JDBC specs\n\tinputStreamBufferSize = 1024 * 1024 * 10\n)\n\ntype bindUploader struct {\n\tctx            context.Context\n\tsc             *snowflakeConn\n\tstagePath      string\n\tfileCount      int\n\tarrayBindStage string\n}\n\ntype bindingSchema struct {\n\tTyp      string                `json:\"type\"`\n\tNullable bool                  `json:\"nullable\"`\n\tFields   []query.FieldMetadata `json:\"fields\"`\n}\n\ntype bindingValue struct {\n\tvalue  *string\n\tformat string\n\tschema *bindingSchema\n}\n\nfunc (bu *bindUploader) upload(bindings []driver.NamedValue) (*execResponse, error) {\n\tbindingRows, err := bu.buildRowsAsBytes(bindings)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tstartIdx, numBytes, rowNum := 0, 0, 0\n\tbu.fileCount = 0\n\tvar data *execResponse\n\tfor rowNum < len(bindingRows) {\n\t\tfor numBytes < inputStreamBufferSize && rowNum < len(bindingRows) {\n\t\t\tnumBytes += len(bindingRows[rowNum])\n\t\t\trowNum++\n\t\t}\n\t\t// concatenate all byte arrays into 1 and put into input stream\n\t\tvar b bytes.Buffer\n\t\tb.Grow(numBytes)\n\t\tfor i := startIdx; i < rowNum; i++ {\n\t\t\tb.Write(bindingRows[i])\n\t\t}\n\n\t\tbu.fileCount++\n\t\tdata, err = bu.uploadStreamInternal(&b, bu.fileCount, true)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tstartIdx = rowNum\n\t\tnumBytes = 
0\n\t}\n\treturn data, nil\n}\n\nfunc (bu *bindUploader) uploadStreamInternal(\n\tinputStream *bytes.Buffer,\n\tdstFileName int,\n\tcompressData bool) (\n\t*execResponse, error) {\n\tif err := bu.createStageIfNeeded(); err != nil {\n\t\treturn nil, err\n\t}\n\tstageName := bu.stagePath\n\tif stageName == \"\" {\n\t\treturn nil, exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber:  ErrBindUpload,\n\t\t\tMessage: \"stage name is null\",\n\t\t}, bu.sc)\n\t}\n\n\t// use a placeholder for source file\n\tputCommand := fmt.Sprintf(\"put 'file:///tmp/placeholder/%v' '%v' overwrite=true\", dstFileName, stageName)\n\t// for Windows queries\n\tputCommand = strings.ReplaceAll(putCommand, \"\\\\\", \"\\\\\\\\\")\n\t// prepare context for PUT command\n\tctx := WithFilePutStream(bu.ctx, inputStream)\n\tctx = WithFileTransferOptions(ctx, &SnowflakeFileTransferOptions{\n\t\tcompressSourceFromStream: compressData})\n\treturn bu.sc.exec(ctx, putCommand, false, true, false, []driver.NamedValue{})\n}\n\nfunc (bu *bindUploader) createStageIfNeeded() error {\n\tif bu.arrayBindStage != \"\" {\n\t\treturn nil\n\t}\n\tdata, err := bu.sc.exec(bu.ctx, createTemporaryStageStmt, false, false, false, []driver.NamedValue{})\n\tif err != nil {\n\t\tnewThreshold := \"0\"\n\t\tbu.sc.syncParams.set(sessionArrayBindStageThreshold, &newThreshold)\n\t\treturn err\n\t}\n\tif !data.Success {\n\t\tcode, err := strconv.Atoi(data.Code)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber:   code,\n\t\t\tSQLState: data.Data.SQLState,\n\t\t\tMessage:  data.Message,\n\t\t\tQueryID:  data.Data.QueryID,\n\t\t}, bu.sc)\n\t}\n\tbu.arrayBindStage = bindStageName\n\treturn nil\n}\n\n// transpose the columns to rows and write them to a list of bytes\nfunc (bu *bindUploader) buildRowsAsBytes(columns []driver.NamedValue) ([][]byte, error) {\n\tnumColumns := len(columns)\n\tif columns[0].Value == nil {\n\t\treturn nil, exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber: 
 ErrBindSerialization,\n\t\t\tMessage: \"no binds found in the first column\",\n\t\t}, bu.sc)\n\t}\n\n\t_, column, err := snowflakeArrayToString(&columns[0], true)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tnumRows := len(column)\n\tcsvRows := make([][]byte, 0)\n\trows := make([][]any, 0)\n\tfor range numRows {\n\t\trows = append(rows, make([]any, numColumns))\n\t}\n\n\tfor rowIdx := range numRows {\n\t\tif column[rowIdx] == nil {\n\t\t\trows[rowIdx][0] = column[rowIdx]\n\t\t} else {\n\t\t\trows[rowIdx][0] = *column[rowIdx]\n\t\t}\n\t}\n\tfor colIdx := 1; colIdx < numColumns; colIdx++ {\n\t\t_, column, err = snowflakeArrayToString(&columns[colIdx], true)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tiNumRows := len(column)\n\t\tif iNumRows != numRows {\n\t\t\treturn nil, exceptionTelemetry(&SnowflakeError{\n\t\t\t\tNumber:      ErrBindSerialization,\n\t\t\t\tMessage:     errors.ErrMsgBindColumnMismatch,\n\t\t\t\tMessageArgs: []any{colIdx, iNumRows, numRows},\n\t\t\t}, bu.sc)\n\t\t}\n\t\tfor rowIdx := range numRows {\n\t\t\t// length of column = number of rows\n\t\t\tif column[rowIdx] == nil {\n\t\t\t\trows[rowIdx][colIdx] = column[rowIdx]\n\t\t\t} else {\n\t\t\t\trows[rowIdx][colIdx] = *column[rowIdx]\n\t\t\t}\n\t\t}\n\t}\n\tfor _, row := range rows {\n\t\tcsvRows = append(csvRows, bu.createCSVRecord(row))\n\t}\n\treturn csvRows, nil\n}\n\nfunc (bu *bindUploader) createCSVRecord(data []any) []byte {\n\tvar b strings.Builder\n\tb.Grow(1024)\n\tfor i := range data {\n\t\tif i > 0 {\n\t\t\tb.WriteString(\",\")\n\t\t}\n\t\tvalue, ok := data[i].(string)\n\t\tif ok {\n\t\t\tb.WriteString(escapeForCSV(value))\n\t\t} else if !reflect.ValueOf(data[i]).IsNil() {\n\t\t\tlogger.WithContext(bu.ctx).Debugf(\"Cannot convert value to string in createCSVRecord. 
value: %v\", data[i])\n\t\t}\n\t}\n\tb.WriteString(\"\\n\")\n\treturn []byte(b.String())\n}\n\nfunc (sc *snowflakeConn) processBindings(\n\tctx context.Context,\n\tbindings []driver.NamedValue,\n\tdescribeOnly bool,\n\trequestID UUID,\n\treq *execRequest) error {\n\tarrayBindThreshold := sc.getArrayBindStageThreshold()\n\tnumBinds, err := arrayBindValueCount(bindings)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif 0 < arrayBindThreshold && arrayBindThreshold <= numBinds && !describeOnly && isArrayBind(bindings) {\n\t\tuploader := bindUploader{\n\t\t\tsc:        sc,\n\t\t\tctx:       ctx,\n\t\t\tstagePath: \"@\" + bindStageName + \"/\" + requestID.String(),\n\t\t}\n\t\t_, err := uploader.upload(bindings)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treq.Bindings = nil\n\t\treq.BindStage = uploader.stagePath\n\t} else {\n\t\treq.Bindings, err = getBindValues(bindings, &sc.syncParams)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treq.BindStage = \"\"\n\t}\n\treturn nil\n}\n\nfunc getBindValues(bindings []driver.NamedValue, params *syncParams) (map[string]execBindParameter, error) {\n\ttsmode := types.TimestampNtzType\n\tidx := 1\n\tvar err error\n\tbindValues := make(map[string]execBindParameter, len(bindings))\n\tfor _, binding := range bindings {\n\t\tif tnt, ok := binding.Value.(TypedNullTime); ok {\n\t\t\ttsmode = convertTzTypeToSnowflakeType(tnt.TzType)\n\t\t\tbinding.Value = tnt.Time\n\t\t}\n\t\tt := goTypeToSnowflake(binding.Value, tsmode)\n\t\tif t == types.ChangeType {\n\t\t\ttsmode, err = dataTypeMode(binding.Value)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t} else {\n\t\t\tvar val any\n\t\t\tvar bv bindingValue\n\t\t\tif t == types.SliceType {\n\t\t\t\t// retrieve array binding data\n\t\t\t\tt, val, err = snowflakeArrayToString(&binding, false)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tbv, err = valueToString(binding.Value, tsmode, params)\n\t\t\t\tval = bv.value\n\t\t\t\tif err != 
nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t}\n\t\t\tswitch t {\n\t\t\tcase types.NullType, types.UnSupportedType:\n\t\t\t\tt = types.TextType\n\t\t\tcase types.NilObjectType, types.MapType, types.NilMapType:\n\t\t\t\tt = types.ObjectType\n\t\t\tcase types.NilArrayType:\n\t\t\t\tt = types.ArrayType\n\t\t\t}\n\t\t\tbindValues[bindingName(binding, idx)] = execBindParameter{\n\t\t\t\tType:   t.String(),\n\t\t\t\tValue:  val,\n\t\t\t\tFormat: bv.format,\n\t\t\t\tSchema: bv.schema,\n\t\t\t}\n\t\t\tidx++\n\t\t}\n\t}\n\treturn bindValues, nil\n}\n\nfunc bindingName(nv driver.NamedValue, idx int) string {\n\tif nv.Name != \"\" {\n\t\treturn nv.Name\n\t}\n\treturn strconv.Itoa(idx)\n}\n\nfunc arrayBindValueCount(bindValues []driver.NamedValue) (int, error) {\n\tif !isArrayBind(bindValues) {\n\t\treturn 0, nil\n\t}\n\t_, arr, err := snowflakeArrayToString(&bindValues[0], false)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\n\treturn len(bindValues) * len(arr), nil\n}\n\nfunc isArrayBind(bindings []driver.NamedValue) bool {\n\tif len(bindings) == 0 {\n\t\treturn false\n\t}\n\tfor _, binding := range bindings {\n\t\tif supported := supportedArrayBind(&binding); !supported {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n\nfunc supportedArrayBind(nv *driver.NamedValue) bool {\n\tswitch reflect.TypeOf(nv.Value) {\n\tcase reflect.TypeFor[*intArray](), reflect.TypeFor[*int32Array](),\n\t\treflect.TypeFor[*int64Array](), reflect.TypeFor[*float64Array](),\n\t\treflect.TypeFor[*float32Array](), reflect.TypeFor[*decfloatArray](),\n\t\treflect.TypeFor[*boolArray](), reflect.TypeFor[*stringArray](),\n\t\treflect.TypeFor[*byteArray](), reflect.TypeFor[*timestampNtzArray](),\n\t\treflect.TypeFor[*timestampLtzArray](), reflect.TypeFor[*timestampTzArray](),\n\t\treflect.TypeFor[*dateArray](), reflect.TypeFor[*timeArray]():\n\t\treturn true\n\tcase reflect.TypeFor[[]uint8]():\n\t\t// internal binding ts mode\n\t\tval, ok := nv.Value.([]uint8)\n\t\tif !ok {\n\t\t\treturn 
ok\n\t\t}\n\t\tif len(val) == 0 {\n\t\t\treturn true // for null binds\n\t\t}\n\t\tif types.FixedType <= types.SnowflakeType(val[0]) && types.SnowflakeType(val[0]) <= types.UnSupportedType {\n\t\t\treturn true\n\t\t}\n\t\treturn false\n\tdefault:\n\t\t// Support for bulk array binding insertion using []interface{}\n\t\tif isInterfaceArrayBinding(nv.Value) {\n\t\t\treturn true\n\t\t}\n\t\treturn false\n\t}\n}\n\nfunc supportedDecfloatBind(nv *driver.NamedValue) bool {\n\tif nv.Value == nil {\n\t\treturn false\n\t}\n\n\tval := reflect.Indirect(reflect.ValueOf(nv.Value))\n\n\tif !val.IsValid() {\n\t\treturn false\n\t}\n\n\treturn val.Type() == reflect.TypeFor[big.Float]()\n}\n\nfunc supportedNullBind(nv *driver.NamedValue) bool {\n\tswitch reflect.TypeOf(nv.Value) {\n\tcase reflect.TypeFor[sql.NullString](), reflect.TypeFor[sql.NullInt64](),\n\t\treflect.TypeFor[sql.NullBool](), reflect.TypeFor[sql.NullFloat64](), reflect.TypeFor[TypedNullTime]():\n\t\treturn true\n\t}\n\treturn false\n}\n\nfunc supportedStructuredObjectWriterBind(nv *driver.NamedValue) bool {\n\tif _, ok := nv.Value.(StructuredObjectWriter); ok {\n\t\treturn true\n\t}\n\t_, ok := nv.Value.(reflect.Type)\n\treturn ok\n}\n\nfunc supportedStructuredArrayBind(nv *driver.NamedValue) bool {\n\ttyp := reflect.TypeOf(nv.Value)\n\treturn typ != nil && (typ.Kind() == reflect.Array || typ.Kind() == reflect.Slice)\n}\n\nfunc supportedStructuredMapBind(nv *driver.NamedValue) bool {\n\ttyp := reflect.TypeOf(nv.Value)\n\treturn typ != nil && (typ.Kind() == reflect.Map || typ == reflect.TypeFor[NilMapTypes]())\n}\n"
  },
  {
    "path": "bindings_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"fmt\"\n\t\"log\"\n\t\"math\"\n\t\"math/big\"\n\t\"math/rand\"\n\t\"reflect\"\n\t\"slices\"\n\t\"strconv\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n)\n\nconst (\n\tcreateTableSQL = `create or replace table test_prep_statement(c1 INTEGER,\n\t\tc2 FLOAT, c3 BOOLEAN, c4 STRING, C5 BINARY, C6 TIMESTAMP_NTZ,\n\t\tC7 TIMESTAMP_LTZ, C8 TIMESTAMP_TZ, C9 DATE, C10 TIME)`\n\tdeleteTableSQL = \"drop table if exists TEST_PREP_STATEMENT\"\n\tinsertSQL      = \"insert into TEST_PREP_STATEMENT values(?, ?, ?, ?, ?, ?, ?, ?, ?, ?)\"\n\tselectAllSQL   = \"select * from TEST_PREP_STATEMENT ORDER BY 1\"\n\n\tcreateTableSQLBulkArray = `create or replace table test_bulk_array(c1 INTEGER,\n\t\tc2 FLOAT, c3 BOOLEAN, c4 STRING, C5 BINARY, C6 INTEGER)`\n\tdeleteTableSQLBulkArray = \"drop table if exists test_bulk_array\"\n\tinsertSQLBulkArray      = \"insert into test_bulk_array values(?, ?, ?, ?, ?, ?)\"\n\tselectAllSQLBulkArray   = \"select * from test_bulk_array ORDER BY 1\"\n\n\tcreateTableSQLBulkArrayDateTimeTimestamp = `create or replace table test_bulk_array_DateTimeTimestamp(\n\t\tC1 TIMESTAMP_NTZ, C2 TIMESTAMP_LTZ, C3 TIMESTAMP_TZ, C4 DATE, C5 TIME)`\n\tdeleteTableSQLBulkArrayDateTimeTimestamp = \"drop table if exists test_bulk_array_DateTimeTimestamp\"\n\tinsertSQLBulkArrayDateTimeTimestamp      = \"insert into test_bulk_array_DateTimeTimestamp values(?, ?, ?, ?, ?)\"\n\tselectAllSQLBulkArrayDateTimeTimestamp   = \"select * from test_bulk_array_DateTimeTimestamp ORDER BY 1\"\n\n\tenableFeatureMaxLOBSize      = \"ALTER SESSION SET FEATURE_INCREASED_MAX_LOB_SIZE_IN_MEMORY='ENABLED'\"\n\tunsetFeatureMaxLOBSize       = \"ALTER SESSION UNSET FEATURE_INCREASED_MAX_LOB_SIZE_IN_MEMORY\"\n\tenableLargeVarcharAndBinary  = \"ALTER SESSION SET ENABLE_LARGE_VARCHAR_AND_BINARY_IN_RESULT=TRUE\"\n\tdisableLargeVarcharAndBinary = \"ALTER SESSION SET 
ENABLE_LARGE_VARCHAR_AND_BINARY_IN_RESULT=FALSE\"\n\tunsetLargeVarcharAndBinary   = \"ALTER SESSION UNSET ENABLE_LARGE_VARCHAR_AND_BINARY_IN_RESULT\"\n\n\tsmallSize = 16 * 1024 * 1024 // 16 MB - right at LOB threshold\n\tlargeSize = 64 * 1024 * 1024 // 64 MB - well above LOB threshold\n\t// range to use for generating random numbers\n\tlobRandomRange = 100000\n)\n\nfunc TestBindingFloat64(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\ttypes := [2]string{\"FLOAT\", \"DOUBLE\"}\n\t\texpected := 42.23\n\t\tvar out float64\n\t\tvar rows *RowsExtended\n\t\tfor _, v := range types {\n\t\t\tt.Run(v, func(t *testing.T) {\n\t\t\t\tdbt.mustExec(fmt.Sprintf(\"CREATE OR REPLACE TABLE test (id int, value %v)\", v))\n\t\t\t\tdbt.mustExec(\"INSERT INTO test VALUES (1, ?)\", expected)\n\t\t\t\trows = dbt.mustQuery(\"SELECT value FROM test WHERE id = ?\", 1)\n\t\t\t\tdefer func() {\n\t\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t\t}()\n\t\t\t\tif rows.Next() {\n\t\t\t\t\tassertNilF(t, rows.Scan(&out))\n\t\t\t\t\tif expected != out {\n\t\t\t\t\t\tdbt.Errorf(\"%s: %g != %g\", v, expected, out)\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tdbt.Errorf(\"%s: no data\", v)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test\")\n\t})\n}\n\n// TestBindingUint64 tests uint64 binding. 
Should fail as uint64 is not a\n// supported binding value by Go's sql package.\nfunc TestBindingUint64(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\texpected := uint64(18446744073709551615)\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test (id int, value INTEGER)\")\n\t\tif _, err := dbt.exec(\"INSERT INTO test VALUES (1, ?)\", expected); err == nil {\n\t\t\tdbt.Fatal(\"should fail as uint64 values with high bit set are not supported.\")\n\t\t} else {\n\t\t\tlogger.Infof(\"expected err: %v\", err)\n\t\t}\n\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test\")\n\t})\n}\n\nfunc TestBindingDateTimeTimestamp(t *testing.T) {\n\tcreateDSN(PSTLocation)\n\trunDBTest(t, func(dbt *DBTest) {\n\t\texpected := time.Now()\n\t\tdbt.mustExec(\n\t\t\t\"CREATE OR REPLACE TABLE tztest (id int, ntz timestamp_ntz, ltz timestamp_ltz, dt date, tm time)\")\n\t\tstmt, err := dbt.prepare(\"INSERT INTO tztest(id,ntz,ltz,dt,tm) VALUES(1,?,?,?,?)\")\n\t\tif err != nil {\n\t\t\tdbt.Fatal(err.Error())\n\t\t}\n\t\tdefer stmt.Close()\n\t\tif _, err = stmt.Exec(\n\t\t\tDataTypeTimestampNtz, expected,\n\t\t\tDataTypeTimestampLtz, expected,\n\t\t\tDataTypeDate, expected,\n\t\t\tDataTypeTime, expected); err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\trows := dbt.mustQuery(\"SELECT ntz,ltz,dt,tm FROM tztest WHERE id=?\", 1)\n\t\tdefer rows.Close()\n\t\tvar ntz, vltz, dt, tm time.Time\n\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\tif err != nil {\n\t\t\tdbt.Errorf(\"column type error. 
err: %v\", err)\n\t\t}\n\t\tif columnTypes[0].Name() != \"NTZ\" {\n\t\t\tdbt.Errorf(\"expected column name: %v, got: %v\", \"NTZ\", columnTypes[0].Name())\n\t\t}\n\t\tcanNull := dbt.mustNullable(columnTypes[0])\n\t\tif !canNull {\n\t\t\tdbt.Errorf(\"expected nullable: %v, got: %v\", true, canNull)\n\t\t}\n\t\tif columnTypes[0].DatabaseTypeName() != \"TIMESTAMP_NTZ\" {\n\t\t\tdbt.Errorf(\"expected database type: %v, got: %v\", \"TIMESTAMP_NTZ\", columnTypes[0].DatabaseTypeName())\n\t\t}\n\t\tdbt.mustFailDecimalSize(columnTypes[0])\n\t\tdbt.mustFailLength(columnTypes[0])\n\t\tcols, err := rows.Columns()\n\t\tif err != nil {\n\t\t\tdbt.Errorf(\"failed to get columns. err: %v\", err)\n\t\t}\n\t\tif len(cols) != 4 || cols[0] != \"NTZ\" || cols[1] != \"LTZ\" || cols[2] != \"DT\" || cols[3] != \"TM\" {\n\t\t\tdbt.Errorf(\"failed to get columns. got: %v\", cols)\n\t\t}\n\t\tif rows.Next() {\n\t\t\tassertNilF(t, rows.Scan(&ntz, &vltz, &dt, &tm))\n\t\t\tif expected.UnixNano() != ntz.UnixNano() {\n\t\t\t\tdbt.Errorf(\"returned TIMESTAMP_NTZ value didn't match. expected: %v:%v, got: %v:%v\",\n\t\t\t\t\texpected.UnixNano(), expected, ntz.UnixNano(), ntz)\n\t\t\t}\n\t\t\tif expected.UnixNano() != vltz.UnixNano() {\n\t\t\t\tdbt.Errorf(\"returned TIMESTAMP_LTZ value didn't match. expected: %v:%v, got: %v:%v\",\n\t\t\t\t\texpected.UnixNano(), expected, vltz.UnixNano(), vltz)\n\t\t\t}\n\t\t\tif expected.Year() != dt.Year() || expected.Month() != dt.Month() || expected.Day() != dt.Day() {\n\t\t\t\tdbt.Errorf(\"returned DATE value didn't match. expected: %v:%v, got: %v:%v\",\n\t\t\t\t\texpected.Unix()*1000, expected, dt.Unix()*1000, dt)\n\t\t\t}\n\t\t\tif expected.Hour() != tm.Hour() || expected.Minute() != tm.Minute() || expected.Second() != tm.Second() || expected.Nanosecond() != tm.Nanosecond() {\n\t\t\t\tdbt.Errorf(\"returned TIME value didn't match. 
expected: %v:%v, got: %v:%v\",\n\t\t\t\t\texpected.UnixNano(), expected, tm.UnixNano(), tm)\n\t\t\t}\n\t\t} else {\n\t\t\tdbt.Error(\"no data\")\n\t\t}\n\t\tdbt.mustExec(\"DROP TABLE tztest\")\n\t})\n\n\tcreateDSN(\"UTC\")\n}\n\nfunc TestBindingBinary(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE bintest (id int, b binary)\")\n\t\tvar b = []byte{0x01, 0x02, 0x03}\n\t\tdbt.mustExec(\"INSERT INTO bintest(id,b) VALUES(1, ?)\", DataTypeBinary, b)\n\t\trows := dbt.mustQuery(\"SELECT b FROM bintest WHERE id=?\", 1)\n\t\tdefer rows.Close()\n\t\tif rows.Next() {\n\t\t\tvar rb []byte\n\t\t\tif err := rows.Scan(&rb); err != nil {\n\t\t\t\tdbt.Errorf(\"failed to scan data. err: %v\", err)\n\t\t\t}\n\t\t\tif !bytes.Equal(b, rb) {\n\t\t\t\tdbt.Errorf(\"failed to match data. expected: %v, got: %v\", b, rb)\n\t\t\t}\n\t\t} else {\n\t\t\tdbt.Errorf(\"no data\")\n\t\t}\n\t\tdbt.mustExec(\"DROP TABLE bintest\")\n\t})\n}\n\nfunc TestBindingTimestampTZ(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\texpected := time.Now()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE tztest (id int, tz timestamp_tz)\")\n\t\tstmt, err := dbt.prepare(\"INSERT INTO tztest(id,tz) VALUES(1, ?)\")\n\t\tif err != nil {\n\t\t\tdbt.Fatal(err.Error())\n\t\t}\n\t\tdefer func() {\n\t\t\tassertNilF(t, stmt.Close())\n\t\t}()\n\t\tif _, err = stmt.Exec(DataTypeTimestampTz, expected); err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\trows := dbt.mustQuery(\"SELECT tz FROM tztest WHERE id=?\", 1)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\t\tvar v time.Time\n\t\tif rows.Next() {\n\t\t\tassertNilF(t, rows.Scan(&v))\n\t\t\tif expected.UnixNano() != v.UnixNano() {\n\t\t\t\tdbt.Errorf(\"returned value didn't match. 
expected: %v:%v, got: %v:%v\",\n\t\t\t\t\texpected.UnixNano(), expected, v.UnixNano(), v)\n\t\t\t}\n\t\t} else {\n\t\t\tdbt.Error(\"no data\")\n\t\t}\n\t\tdbt.mustExec(\"DROP TABLE tztest\")\n\t})\n}\n\n// SNOW-755844: Test the use of a pointer *time.Time type in user-defined structures to perform updates/inserts\nfunc TestBindingTimePtrInStruct(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\ttype timePtrStruct struct {\n\t\t\tid      *int\n\t\t\ttimeVal *time.Time\n\t\t}\n\t\texpectedID := 1\n\t\texpectedTime := time.Now()\n\t\ttestStruct := timePtrStruct{id: &expectedID, timeVal: &expectedTime}\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE timeStructTest (id int, tz timestamp_tz)\")\n\n\t\trunInsertQuery := false\n\t\tfor range 2 {\n\t\t\tif !runInsertQuery {\n\t\t\t\t_, err := dbt.exec(\"INSERT INTO timeStructTest(id,tz) VALUES(?, ?)\", testStruct.id, testStruct.timeVal)\n\t\t\t\tif err != nil {\n\t\t\t\t\tdbt.Fatal(err.Error())\n\t\t\t\t}\n\t\t\t\trunInsertQuery = true\n\t\t\t} else {\n\t\t\t\t// Update row with a new time value\n\t\t\t\texpectedTime = time.Now().Add(1)\n\t\t\t\ttestStruct.timeVal = &expectedTime\n\t\t\t\t_, err := dbt.exec(\"UPDATE timeStructTest SET tz = ? where id = ?\", testStruct.timeVal, testStruct.id)\n\t\t\t\tif err != nil {\n\t\t\t\t\tdbt.Fatal(err.Error())\n\t\t\t\t}\n\t\t\t}\n\n\t\t\trows := dbt.mustQuery(\"SELECT tz FROM timeStructTest WHERE id=?\", &expectedID)\n\t\t\tdefer func() {\n\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t}()\n\t\t\tvar v time.Time\n\t\t\tif rows.Next() {\n\t\t\t\tassertNilF(t, rows.Scan(&v))\n\t\t\t\tif expectedTime.UnixNano() != v.UnixNano() {\n\t\t\t\t\tdbt.Errorf(\"returned value didn't match. 
expected: %v:%v, got: %v:%v\",\n\t\t\t\t\t\texpectedTime.UnixNano(), expectedTime, v.UnixNano(), v)\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tdbt.Error(\"no data\")\n\t\t\t}\n\t\t}\n\t\tdbt.mustExec(\"DROP TABLE timeStructTest\")\n\t})\n}\n\n// SNOW-755844: Test the use of a time.Time type in user-defined structures to perform updates/inserts\nfunc TestBindingTimeInStruct(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\ttype timeStruct struct {\n\t\t\tid      int\n\t\t\ttimeVal time.Time\n\t\t}\n\t\texpectedID := 1\n\t\texpectedTime := time.Now()\n\t\ttestStruct := timeStruct{id: expectedID, timeVal: expectedTime}\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE timeStructTest (id int, tz timestamp_tz)\")\n\n\t\trunInsertQuery := false\n\t\tfor range 2 {\n\t\t\tif !runInsertQuery {\n\t\t\t\t_, err := dbt.exec(\"INSERT INTO timeStructTest(id,tz) VALUES(?, ?)\", testStruct.id, testStruct.timeVal)\n\t\t\t\tif err != nil {\n\t\t\t\t\tdbt.Fatal(err.Error())\n\t\t\t\t}\n\t\t\t\trunInsertQuery = true\n\t\t\t} else {\n\t\t\t\t// Update row with a new time value\n\t\t\t\texpectedTime = time.Now().Add(1)\n\t\t\t\ttestStruct.timeVal = expectedTime\n\t\t\t\t_, err := dbt.exec(\"UPDATE timeStructTest SET tz = ? where id = ?\", testStruct.timeVal, testStruct.id)\n\t\t\t\tif err != nil {\n\t\t\t\t\tdbt.Fatal(err.Error())\n\t\t\t\t}\n\t\t\t}\n\n\t\t\trows := dbt.mustQuery(\"SELECT tz FROM timeStructTest WHERE id=?\", &expectedID)\n\t\t\tdefer func() {\n\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t}()\n\t\t\tvar v time.Time\n\t\t\tif rows.Next() {\n\t\t\t\tassertNilF(t, rows.Scan(&v))\n\t\t\t\tif expectedTime.UnixNano() != v.UnixNano() {\n\t\t\t\t\tdbt.Errorf(\"returned value didn't match. 
expected: %v:%v, got: %v:%v\",\n\t\t\t\t\t\texpectedTime.UnixNano(), expectedTime, v.UnixNano(), v)\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tdbt.Error(\"no data\")\n\t\t\t}\n\t\t}\n\t\tdbt.mustExec(\"DROP TABLE timeStructTest\")\n\t})\n}\n\nfunc TestBindingInterface(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trows := dbt.mustQueryContext(\n\t\t\tWithHigherPrecision(context.Background()), selectVariousTypes)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\t\tif !rows.Next() {\n\t\t\tdbt.Error(\"failed to query\")\n\t\t}\n\t\tvar v1, v2, v2a, v3, v4, v5, v6 any\n\t\tif err := rows.Scan(&v1, &v2, &v2a, &v3, &v4, &v5, &v6); err != nil {\n\t\t\tdbt.Errorf(\"failed to scan: %#v\", err)\n\t\t}\n\t\tif s1, ok := v1.(*big.Float); !ok || s1.Cmp(big.NewFloat(1.0)) != 0 {\n\t\t\tdbt.Fatalf(\"failed to fetch. ok: %v, value: %v\", ok, v1)\n\t\t}\n\t\tif s2, ok := v2.(int64); !ok || s2 != 2 {\n\t\t\tdbt.Fatalf(\"failed to fetch. ok: %v, value: %v\", ok, v2)\n\t\t}\n\t\tif s2a, ok := v2a.(*big.Int); !ok || big.NewInt(22).Cmp(s2a) != 0 {\n\t\t\tdbt.Fatalf(\"failed to fetch. ok: %v, value: %v\", ok, v2a)\n\t\t}\n\t\tif s3, ok := v3.(string); !ok || s3 != \"t3\" {\n\t\t\tdbt.Fatalf(\"failed to fetch. ok: %v, value: %v\", ok, v3)\n\t\t}\n\t\tif s4, ok := v4.(float64); !ok || s4 != 4.2 {\n\t\t\tdbt.Fatalf(\"failed to fetch. 
ok: %v, value: %v\", ok, v4)\n\t\t}\n\t})\n}\n\nfunc TestBindingInterfaceString(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trows := dbt.mustQuery(selectVariousTypes)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\t\tif !rows.Next() {\n\t\t\tdbt.Error(\"failed to query\")\n\t\t}\n\t\tvar v1, v2, v2a, v3, v4, v5, v6 any\n\t\tif err := rows.Scan(&v1, &v2, &v2a, &v3, &v4, &v5, &v6); err != nil {\n\t\t\tdbt.Errorf(\"failed to scan: %#v\", err)\n\t\t}\n\t\tif s, ok := v1.(string); !ok {\n\t\t\tdbt.Error(\"failed to convert to string\")\n\t\t} else if d, err := strconv.ParseFloat(s, 64); err != nil {\n\t\t\tdbt.Errorf(\"failed to convert to float. value: %v, err: %v\", v1, err)\n\t\t} else if d != 1.00 {\n\t\t\tdbt.Errorf(\"failed to fetch. expected: 1.00, value: %v\", v1)\n\t\t}\n\t\tif s, ok := v2.(string); !ok || s != \"2\" {\n\t\t\tdbt.Fatalf(\"failed to fetch. ok: %v, value: %v\", ok, v2)\n\t\t}\n\t\tif s, ok := v2a.(string); !ok || s != \"22\" {\n\t\t\tdbt.Fatalf(\"failed to fetch. ok: %v, value: %v\", ok, v2a)\n\t\t}\n\t\tif s, ok := v3.(string); !ok || s != \"t3\" {\n\t\t\tdbt.Fatalf(\"failed to fetch. ok: %v, value: %v\", ok, v3)\n\t\t}\n\t})\n}\n\nfunc TestBulkArrayBindingUUID(t *testing.T) {\n\tmax := math.Pow10(5) // 100K because my power is maximum\n\texpectedUuids := make([]any, int(max))\n\n\tcreateTable := \"CREATE OR REPLACE TABLE TEST_PREP_STATEMENT (uuid VARCHAR)\"\n\tinsert := \"INSERT INTO TEST_PREP_STATEMENT (uuid) VALUES (?)\"\n\n\tfor i := range expectedUuids {\n\t\texpectedUuids[i] = newTestUUID()\n\t}\n\n\tslices.SortStableFunc(expectedUuids, func(i, j any) int {\n\t\treturn strings.Compare(i.(testUUID).String(), j.(testUUID).String())\n\t})\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tvar rows *RowsExtended\n\t\tt.Cleanup(func() {\n\t\t\tif rows != nil {\n\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t}\n\n\t\t\t_, err := dbt.exec(deleteTableSQL)\n\t\t\tif err != nil {\n\t\t\t\tt.Logf(\"failed to drop table. 
err: %s\", err)\n\t\t\t}\n\t\t})\n\n\t\tdbt.mustExec(createTable)\n\n\t\tarray, err := Array(&expectedUuids)\n\t\tassertNilF(t, err)\n\t\tres := dbt.mustExec(insert, array)\n\n\t\taffected, err := res.RowsAffected()\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"failed to get affected rows. err: %s\", err)\n\t\t} else if affected != int64(max) {\n\t\t\tt.Fatalf(\"failed to insert all rows. expected: %.0f, got: %v\", max, affected)\n\t\t}\n\n\t\trows = dbt.mustQuery(\"SELECT * FROM TEST_PREP_STATEMENT ORDER BY uuid\")\n\t\tif rows == nil {\n\t\t\tt.Fatal(\"failed to query\")\n\t\t}\n\n\t\tif rows.Err() != nil {\n\t\t\tt.Fatalf(\"failed to query. err: %s\", rows.Err())\n\t\t}\n\n\t\tactual := make([]testUUID, len(expectedUuids))\n\n\t\tfor i := 0; rows.Next(); i++ {\n\t\t\tvar out testUUID\n\t\t\tif err := rows.Scan(&out); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\tactual[i] = out\n\t\t}\n\n\t\tfor i := range expectedUuids {\n\t\t\tassertEqualE(t, actual[i], expectedUuids[i])\n\t\t}\n\t})\n}\n\nfunc TestBulkArrayBindingInterfaceNil(t *testing.T) {\n\tnilArray := make([]any, 1)\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(createTableSQL)\n\t\tdefer dbt.mustExec(deleteTableSQL)\n\n\t\tdbt.mustExec(insertSQL, mustArray(&nilArray), mustArray(&nilArray),\n\t\t\tmustArray(&nilArray), mustArray(&nilArray), mustArray(&nilArray),\n\t\t\tmustArray(&nilArray, TimestampNTZType), mustArray(&nilArray, TimestampLTZType),\n\t\t\tmustArray(&nilArray, TimestampTZType), mustArray(&nilArray, DateType),\n\t\t\tmustArray(&nilArray, TimeType))\n\t\trows := dbt.mustQuery(selectAllSQL)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\n\t\tvar v0 sql.NullInt32\n\t\tvar v1 sql.NullFloat64\n\t\tvar v2 sql.NullBool\n\t\tvar v3 sql.NullString\n\t\tvar v4 []byte\n\t\tvar v5, v6, v7, v8, v9 sql.NullTime\n\n\t\tcnt := 0\n\t\tfor i := 0; rows.Next(); i++ {\n\t\t\tif err := rows.Scan(&v0, &v1, &v2, &v3, &v4, &v5, &v6, &v7, &v8, &v9); err != nil 
{\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tif v0.Valid {\n\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullInt32 column v0. expected %v, got: %v\", nilArray[i], v0)\n\t\t\t}\n\t\t\tif v1.Valid {\n\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullFloat64 column v1. expected %v, got: %v\", nilArray[i], v1)\n\t\t\t}\n\t\t\tif v2.Valid {\n\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullBool column v2. expected %v, got: %v\", nilArray[i], v2)\n\t\t\t}\n\t\t\tif v3.Valid {\n\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullString column v3. expected %v, got: %v\", nilArray[i], v3)\n\t\t\t}\n\t\t\tif v4 != nil {\n\t\t\t\tt.Fatalf(\"failed to fetch the []byte column v4. expected %v, got: %v\", nilArray[i], v4)\n\t\t\t}\n\t\t\tif v5.Valid {\n\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullTime column v5. expected %v, got: %v\", nilArray[i], v5)\n\t\t\t}\n\t\t\tif v6.Valid {\n\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullTime column v6. expected %v, got: %v\", nilArray[i], v6)\n\t\t\t}\n\t\t\tif v7.Valid {\n\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullTime column v7. expected %v, got: %v\", nilArray[i], v7)\n\t\t\t}\n\t\t\tif v8.Valid {\n\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullTime column v8. expected %v, got: %v\", nilArray[i], v8)\n\t\t\t}\n\t\t\tif v9.Valid {\n\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullTime column v9. 
expected %v, got: %v\", nilArray[i], v9)\n\t\t\t}\n\t\t\tcnt++\n\t\t}\n\t\tif cnt != len(nilArray) {\n\t\t\tt.Fatal(\"failed to query\")\n\t\t}\n\t})\n}\n\nfunc TestBulkArrayBindingInterface(t *testing.T) {\n\tintArray := make([]any, 3)\n\tintArray[0] = int32(100)\n\tintArray[1] = int32(200)\n\n\tfltArray := make([]any, 3)\n\tfltArray[0] = float64(0.1)\n\tfltArray[2] = float64(5.678)\n\n\tboolArray := make([]any, 3)\n\tboolArray[1] = false\n\tboolArray[2] = true\n\n\tstrArray := make([]any, 3)\n\tstrArray[2] = \"test3\"\n\n\tbyteArray := make([]any, 3)\n\tbyteArray[0] = []byte{0x01, 0x02, 0x03}\n\tbyteArray[2] = []byte{0x07, 0x08, 0x09}\n\n\tint64Array := make([]any, 3)\n\tint64Array[0] = int64(100)\n\tint64Array[1] = int64(200)\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(createTableSQLBulkArray)\n\t\tdefer dbt.mustExec(deleteTableSQLBulkArray)\n\n\t\tdbt.mustExec(insertSQLBulkArray, mustArray(&intArray), mustArray(&fltArray),\n\t\t\tmustArray(&boolArray), mustArray(&strArray), mustArray(&byteArray), mustArray(&int64Array))\n\t\trows := dbt.mustQuery(selectAllSQLBulkArray)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\n\t\tvar v0 sql.NullInt32\n\t\tvar v1 sql.NullFloat64\n\t\tvar v2 sql.NullBool\n\t\tvar v3 sql.NullString\n\t\tvar v4 []byte\n\t\tvar v5 sql.NullInt64\n\n\t\tcnt := 0\n\t\tfor i := 0; rows.Next(); i++ {\n\t\t\tif err := rows.Scan(&v0, &v1, &v2, &v3, &v4, &v5); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tif v0.Valid {\n\t\t\t\tif v0.Int32 != intArray[i] {\n\t\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullInt32 column v0. expected %v, got: %v\", intArray[i], v0.Int32)\n\t\t\t\t}\n\t\t\t} else if intArray[i] != nil {\n\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullInt32 column v0. expected %v, got: %v\", intArray[i], v0)\n\t\t\t}\n\t\t\tif v1.Valid {\n\t\t\t\tif v1.Float64 != fltArray[i] {\n\t\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullFloat64 column v1. 
expected %v, got: %v\", fltArray[i], v1.Float64)\n\t\t\t\t}\n\t\t\t} else if fltArray[i] != nil {\n\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullFloat64 column v1. expected %v, got: %v\", fltArray[i], v1)\n\t\t\t}\n\t\t\tif v2.Valid {\n\t\t\t\tif v2.Bool != boolArray[i] {\n\t\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullBool column v2. expected %v, got: %v\", boolArray[i], v2.Bool)\n\t\t\t\t}\n\t\t\t} else if boolArray[i] != nil {\n\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullBool column v2. expected %v, got: %v\", boolArray[i], v2)\n\t\t\t}\n\t\t\tif v3.Valid {\n\t\t\t\tif v3.String != strArray[i] {\n\t\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullString column v3. expected %v, got: %v\", strArray[i], v3.String)\n\t\t\t\t}\n\t\t\t} else if strArray[i] != nil {\n\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullString column v3. expected %v, got: %v\", strArray[i], v3)\n\t\t\t}\n\t\t\tif byteArray[i] != nil {\n\t\t\t\tif !bytes.Equal(v4, byteArray[i].([]byte)) {\n\t\t\t\t\tt.Fatalf(\"failed to fetch the []byte column v4. expected %v, got: %v\", byteArray[i], v4)\n\t\t\t\t}\n\t\t\t} else if v4 != nil {\n\t\t\t\tt.Fatalf(\"failed to fetch the []byte column v4. expected %v, got: %v\", byteArray[i], v4)\n\t\t\t}\n\t\t\tif v5.Valid {\n\t\t\t\tif v5.Int64 != int64Array[i] {\n\t\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullInt64 column v5. expected %v, got: %v\", int64Array[i], v5.Int64)\n\t\t\t\t}\n\t\t\t} else if int64Array[i] != nil {\n\t\t\t\tt.Fatalf(\"failed to fetch the sql.NullInt64 column v5. 
expected %v, got: %v\", int64Array[i], v5)\n\t\t\t}\n\t\t\tcnt++\n\t\t}\n\t\tif cnt != len(intArray) {\n\t\t\tt.Fatal(\"failed to query\")\n\t\t}\n\t})\n}\n\nfunc TestBulkArrayBindingInterfaceDateTimeTimestamp(t *testing.T) {\n\ttz := time.Now()\n\tcreateDSN(PSTLocation)\n\n\tnow := time.Now()\n\tloc, err := time.LoadLocation(PSTLocation)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tntzArray := make([]any, 3)\n\tntzArray[0] = now\n\tntzArray[1] = now.Add(1)\n\n\tltzArray := make([]any, 3)\n\tltzArray[1] = now.Add(2).In(loc)\n\tltzArray[2] = now.Add(3).In(loc)\n\n\ttzArray := make([]any, 3)\n\ttzArray[0] = tz.Add(4).In(loc)\n\ttzArray[2] = tz.Add(5).In(loc)\n\n\tdtArray := make([]any, 3)\n\tdtArray[0] = tz.Add(6).In(loc)\n\tdtArray[1] = now.Add(7).In(loc)\n\n\ttmArray := make([]any, 3)\n\ttmArray[1] = now.Add(8).In(loc)\n\ttmArray[2] = now.Add(9).In(loc)\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(createTableSQLBulkArrayDateTimeTimestamp)\n\t\tdefer dbt.mustExec(deleteTableSQLBulkArrayDateTimeTimestamp)\n\n\t\tdbt.mustExec(insertSQLBulkArrayDateTimeTimestamp,\n\t\t\tmustArray(&ntzArray, TimestampNTZType), mustArray(&ltzArray, TimestampLTZType),\n\t\t\tmustArray(&tzArray, TimestampTZType), mustArray(&dtArray, DateType),\n\t\t\tmustArray(&tmArray, TimeType))\n\n\t\trows := dbt.mustQuery(selectAllSQLBulkArrayDateTimeTimestamp)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\n\t\tvar v0, v1, v2, v3, v4 sql.NullTime\n\n\t\tcnt := 0\n\t\tfor i := 0; rows.Next(); i++ {\n\t\t\tif err := rows.Scan(&v0, &v1, &v2, &v3, &v4); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tif v0.Valid {\n\t\t\t\tif v0.Time.UnixNano() != ntzArray[i].(time.Time).UnixNano() {\n\t\t\t\t\tt.Fatalf(\"failed to fetch the column v0. expected %v, got: %v\", ntzArray[i], v0)\n\t\t\t\t}\n\t\t\t} else if ntzArray[i] != nil {\n\t\t\t\tt.Fatalf(\"failed to fetch the column v0. 
expected %v, got: %v\", ntzArray[i], v0)\n\t\t\t}\n\t\t\tif v1.Valid {\n\t\t\t\tif v1.Time.UnixNano() != ltzArray[i].(time.Time).UnixNano() {\n\t\t\t\t\tt.Fatalf(\"failed to fetch the column v1. expected %v, got: %v\", ltzArray[i], v1)\n\t\t\t\t}\n\t\t\t} else if ltzArray[i] != nil {\n\t\t\t\tt.Fatalf(\"failed to fetch the column v1. expected %v, got: %v\", ltzArray[i], v1)\n\t\t\t}\n\t\t\tif v2.Valid {\n\t\t\t\tif v2.Time.UnixNano() != tzArray[i].(time.Time).UnixNano() {\n\t\t\t\t\tt.Fatalf(\"failed to fetch the column v2. expected %v, got: %v\", tzArray[i], v2)\n\t\t\t\t}\n\t\t\t} else if tzArray[i] != nil {\n\t\t\t\tt.Fatalf(\"failed to fetch the column v2. expected %v, got: %v\", tzArray[i], v2)\n\t\t\t}\n\t\t\tif v3.Valid {\n\t\t\t\tif v3.Time.Year() != dtArray[i].(time.Time).Year() ||\n\t\t\t\t\tv3.Time.Month() != dtArray[i].(time.Time).Month() ||\n\t\t\t\t\tv3.Time.Day() != dtArray[i].(time.Time).Day() {\n\t\t\t\t\tt.Fatalf(\"failed to fetch the column v3. expected %v, got: %v\", dtArray[i], v3)\n\t\t\t\t}\n\t\t\t} else if dtArray[i] != nil {\n\t\t\t\tt.Fatalf(\"failed to fetch the column v3. expected %v, got: %v\", dtArray[i], v3)\n\t\t\t}\n\t\t\tif v4.Valid {\n\t\t\t\tif v4.Time.Hour() != tmArray[i].(time.Time).Hour() ||\n\t\t\t\t\tv4.Time.Minute() != tmArray[i].(time.Time).Minute() ||\n\t\t\t\t\tv4.Time.Second() != tmArray[i].(time.Time).Second() {\n\t\t\t\t\tt.Fatalf(\"failed to fetch the column v4. expected %v, got: %v\", tmArray[i], v4)\n\t\t\t\t}\n\t\t\t} else if tmArray[i] != nil {\n\t\t\t\tt.Fatalf(\"failed to fetch the column v4. 
expected %v, got: %v\", tmArray[i], v4)\n\t\t\t}\n\t\t\tcnt++\n\t\t}\n\t\tif cnt != len(ntzArray) {\n\t\t\tt.Fatal(\"failed to query\")\n\t\t}\n\t})\n\tcreateDSN(\"UTC\")\n}\n\n// TestBindingArray tests basic array binding via the usage of the Array\n// function that converts the passed Golang slice to a Snowflake array type\nfunc TestBindingArray(t *testing.T) {\n\ttestBindingArray(t, false)\n}\n\n// TestBindingBulkArray tests bulk array binding via the usage of the Array\n// function that converts the passed Golang slice to a Snowflake array type\nfunc TestBindingBulkArray(t *testing.T) {\n\tif runningOnGithubAction() {\n\t\tt.Skip(\"client_stage_array_binding_threshold value is internal\")\n\t}\n\ttestBindingArray(t, true)\n}\n\nfunc testBindingArray(t *testing.T, bulk bool) {\n\ttz := time.Now()\n\tcreateDSN(PSTLocation)\n\tintArray := []int{1, 2, 3}\n\tfltArray := []float64{0.1, 2.34, 5.678}\n\tboolArray := []bool{true, false, true}\n\tstrArray := []string{\"test1\", \"test2\", \"test3\"}\n\tbyteArray := [][]byte{{0x01, 0x02, 0x03}, {0x04, 0x05, 0x06}, {0x07, 0x08, 0x09}}\n\n\tnow := time.Now()\n\tloc, err := time.LoadLocation(PSTLocation)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tntzArray := []time.Time{now, now.Add(1), now.Add(2)}\n\tltzArray := []time.Time{now.Add(3).In(loc), now.Add(4).In(loc), now.Add(5).In(loc)}\n\ttzArray := []time.Time{tz.Add(6).In(loc), tz.Add(7).In(loc), tz.Add(8).In(loc)}\n\tdtArray := []time.Time{now.Add(9), now.Add(10), now.Add(11)}\n\ttmArray := []time.Time{now.Add(12), now.Add(13), now.Add(14)}\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(createTableSQL)\n\t\tdefer dbt.mustExec(deleteTableSQL)\n\t\tif bulk {\n\t\t\tif _, err := dbt.exec(\"ALTER SESSION SET CLIENT_STAGE_ARRAY_BINDING_THRESHOLD = 1\"); err != nil {\n\t\t\t\tt.Error(err)\n\t\t\t}\n\t\t}\n\n\t\tdbt.mustExec(insertSQL, mustArray(&intArray), mustArray(&fltArray),\n\t\t\tmustArray(&boolArray), mustArray(&strArray), 
mustArray(&byteArray),\n\t\t\tmustArray(&ntzArray, TimestampNTZType), mustArray(&ltzArray, TimestampLTZType),\n\t\t\tmustArray(&tzArray, TimestampTZType), mustArray(&dtArray, DateType),\n\t\t\tmustArray(&tmArray, TimeType))\n\t\trows := dbt.mustQuery(selectAllSQL)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\n\t\tvar v0 int\n\t\tvar v1 float64\n\t\tvar v2 bool\n\t\tvar v3 string\n\t\tvar v4 []byte\n\t\tvar v5, v6, v7, v8, v9 time.Time\n\t\tcnt := 0\n\t\tfor i := 0; rows.Next(); i++ {\n\t\t\tif err := rows.Scan(&v0, &v1, &v2, &v3, &v4, &v5, &v6, &v7, &v8, &v9); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tif v0 != intArray[i] {\n\t\t\t\tt.Fatalf(\"failed to fetch. expected %v, got: %v\", intArray[i], v0)\n\t\t\t}\n\t\t\tif v1 != fltArray[i] {\n\t\t\t\tt.Fatalf(\"failed to fetch. expected %v, got: %v\", fltArray[i], v1)\n\t\t\t}\n\t\t\tif v2 != boolArray[i] {\n\t\t\t\tt.Fatalf(\"failed to fetch. expected %v, got: %v\", boolArray[i], v2)\n\t\t\t}\n\t\t\tif v3 != strArray[i] {\n\t\t\t\tt.Fatalf(\"failed to fetch. expected %v, got: %v\", strArray[i], v3)\n\t\t\t}\n\t\t\tif !bytes.Equal(v4, byteArray[i]) {\n\t\t\t\tt.Fatalf(\"failed to fetch. expected %v, got: %v\", byteArray[i], v4)\n\t\t\t}\n\t\t\tif v5.UnixNano() != ntzArray[i].UnixNano() {\n\t\t\t\tt.Fatalf(\"failed to fetch. expected %v, got: %v\", ntzArray[i], v5)\n\t\t\t}\n\t\t\tif v6.UnixNano() != ltzArray[i].UnixNano() {\n\t\t\t\tt.Fatalf(\"failed to fetch. expected %v, got: %v\", ltzArray[i], v6)\n\t\t\t}\n\t\t\tif v7.UnixNano() != tzArray[i].UnixNano() {\n\t\t\t\tt.Fatalf(\"failed to fetch. expected %v, got: %v\", tzArray[i], v7)\n\t\t\t}\n\t\t\tif v8.Year() != dtArray[i].Year() || v8.Month() != dtArray[i].Month() || v8.Day() != dtArray[i].Day() {\n\t\t\t\tt.Fatalf(\"failed to fetch. 
expected %v, got: %v\", dtArray[i], v8)\n\t\t\t}\n\t\t\tif v9.Hour() != tmArray[i].Hour() || v9.Minute() != tmArray[i].Minute() || v9.Second() != tmArray[i].Second() {\n\t\t\t\tt.Fatalf(\"failed to fetch. expected %v, got: %v\", tmArray[i], v9)\n\t\t\t}\n\t\t\tcnt++\n\t\t}\n\t\tif cnt != len(intArray) {\n\t\t\tt.Fatal(\"failed to query\")\n\t\t}\n\t})\n\tcreateDSN(\"UTC\")\n}\n\nfunc TestBulkArrayBinding(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(fmt.Sprintf(\"create or replace table %v (c1 integer, c2 string, c3 timestamp_ltz, c4 timestamp_tz, c5 timestamp_ntz, c6 date, c7 time, c8 binary)\", dbname))\n\t\tnow := time.Now()\n\t\tsomeTime := time.Date(1, time.January, 1, 12, 34, 56, 123456789, time.UTC)\n\t\tsomeDate := time.Date(2024, time.March, 18, 0, 0, 0, 0, time.UTC)\n\t\tsomeBinary := []byte{0x01, 0x02, 0x03}\n\t\tnumRows := 100000\n\t\tintArr := make([]int, numRows)\n\t\tstrArr := make([]string, numRows)\n\t\tltzArr := make([]time.Time, numRows)\n\t\ttzArr := make([]time.Time, numRows)\n\t\tntzArr := make([]time.Time, numRows)\n\t\tdateArr := make([]time.Time, numRows)\n\t\ttimeArr := make([]time.Time, numRows)\n\t\tbinArr := make([][]byte, numRows)\n\t\tfor i := range numRows {\n\t\t\tintArr[i] = i\n\t\t\tstrArr[i] = \"test\" + strconv.Itoa(i)\n\t\t\tltzArr[i] = now\n\t\t\ttzArr[i] = now.Add(time.Hour).UTC()\n\t\t\tntzArr[i] = now.Add(2 * time.Hour)\n\t\t\tdateArr[i] = someDate\n\t\t\ttimeArr[i] = someTime\n\t\t\tbinArr[i] = someBinary\n\t\t}\n\t\tdbt.mustExec(fmt.Sprintf(\"insert into %v values (?, ?, ?, ?, ?, ?, ?, ?)\", dbname), mustArray(&intArr), mustArray(&strArr), mustArray(&ltzArr, TimestampLTZType), mustArray(&tzArr, TimestampTZType), mustArray(&ntzArr, TimestampNTZType), mustArray(&dateArr, DateType), mustArray(&timeArr, TimeType), mustArray(&binArr))\n\t\trows := dbt.mustQuery(\"select * from \" + dbname + \" order by c1\")\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\t\tcnt := 0\n\t\tvar i 
int\n\t\tvar s string\n\t\tvar ltz, tz, ntz, date, tt time.Time\n\t\tvar b []byte\n\t\tfor rows.Next() {\n\t\t\tif err := rows.Scan(&i, &s, &ltz, &tz, &ntz, &date, &tt, &b); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tassertEqualE(t, i, cnt)\n\t\t\tassertEqualE(t, \"test\"+strconv.Itoa(cnt), s)\n\t\t\tassertEqualE(t, ltz.UTC(), now.UTC())\n\t\t\tassertEqualE(t, tz.UTC(), now.Add(time.Hour).UTC())\n\t\t\tassertEqualE(t, ntz.UTC(), now.Add(2*time.Hour).UTC())\n\t\t\tassertEqualE(t, date, someDate)\n\t\t\tassertEqualE(t, tt, someTime)\n\t\t\tassertBytesEqualE(t, b, someBinary)\n\t\t\tcnt++\n\t\t}\n\t\tif cnt != numRows {\n\t\t\tt.Fatalf(\"expected %v rows, got %v\", numRows, cnt)\n\t\t}\n\t})\n}\n\nfunc TestSupportedDecfloatBind(t *testing.T) {\n\tt.Run(\"don't panic on nil UUID\", func(t *testing.T) {\n\t\tdefer func() {\n\t\t\tif r := recover(); r != nil {\n\t\t\t\tt.Errorf(\"expected not to panic, but did panic\")\n\t\t\t}\n\t\t}()\n\t\tvar nilUUID *UUID\n\t\tnv := driver.NamedValue{Value: nilUUID}\n\t\tshouldBind := supportedDecfloatBind(&nv) // should not panic and return false\n\t\tassertFalseE(t, shouldBind, \"expected not to support binding nil *UUID\")\n\t})\n\n\tt.Run(\"don't panic on nil pointer array\", func(t *testing.T) {\n\t\tdefer func() {\n\t\t\tif r := recover(); r != nil {\n\t\t\t\tt.Errorf(\"expected not to panic, but did panic\")\n\t\t\t}\n\t\t}()\n\t\tvar nilArray *[]string\n\t\tnv := driver.NamedValue{Value: nilArray}\n\t\tshouldBind := supportedDecfloatBind(&nv) // should not panic and return false\n\t\tassertFalseE(t, shouldBind, \"expected not to support binding nil *[]string\")\n\t})\n\n\tt.Run(\"don't panic on nil pointer\", func(t *testing.T) {\n\t\tdefer func() {\n\t\t\tif r := recover(); r != nil {\n\t\t\t\tt.Errorf(\"expected not to panic, but did panic\")\n\t\t\t}\n\t\t}()\n\t\tvar nilTime *time.Time\n\t\tnv := driver.NamedValue{Value: nilTime}\n\t\tshouldBind := supportedDecfloatBind(&nv) // should not panic and return 
false\n\t\tassertFalseE(t, shouldBind, \"expected not to support binding nil *time.Time\")\n\t})\n\n\tt.Run(\"don't panic on nil *big.Float\", func(t *testing.T) {\n\t\tdefer func() {\n\t\t\tif r := recover(); r != nil {\n\t\t\t\tt.Errorf(\"expected not to panic, but did panic\")\n\t\t\t}\n\t\t}()\n\t\tvar nilBigFloat *big.Float\n\t\tnv := driver.NamedValue{Value: nilBigFloat}\n\t\tshouldBind := supportedDecfloatBind(&nv) // should not panic and return false\n\t\tassertFalseE(t, shouldBind, \"expected not to support binding nil *big.Float\")\n\t})\n\n\tt.Run(\"Is Valid for big.Float\", func(t *testing.T) {\n\t\tval := big.NewFloat(123.456)\n\t\tnv := driver.NamedValue{Value: val}\n\t\tshouldBind := supportedDecfloatBind(&nv)\n\t\tassertTrueE(t, shouldBind, \"expected to support binding big.Float\")\n\t})\n\n\tt.Run(\"Is Not Valid for other types\", func(t *testing.T) {\n\t\tval := 123.456 // float64\n\t\tnv := driver.NamedValue{Value: val}\n\t\tshouldBind := supportedDecfloatBind(&nv)\n\t\tassertFalseE(t, shouldBind, \"expected not to support binding float64\")\n\t})\n}\n\nfunc TestBindingsWithSameValue(t *testing.T) {\n\tarrayInsertTable := \"test_array_binding_insert\"\n\tstageBindingTable := \"test_stage_binding_insert\"\n\tinterfaceArrayTable := \"test_interface_binding_insert\"\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(fmt.Sprintf(\"create or replace table %v (c1 integer, c2 string, c3 timestamp_ltz, c4 timestamp_tz, c5 timestamp_ntz, c6 date, c7 time, c9 boolean, c10 double)\", arrayInsertTable))\n\t\tdbt.mustExec(fmt.Sprintf(\"create or replace table %v (c1 integer, c2 string, c3 timestamp_ltz, c4 timestamp_tz, c5 timestamp_ntz, c6 date, c7 time, c9 boolean, c10 double)\", stageBindingTable))\n\t\tdbt.mustExec(fmt.Sprintf(\"create or replace table %v (c1 integer, c2 string, c3 timestamp_ltz, c4 timestamp_tz, c5 timestamp_ntz, c6 date, c7 time, c9 boolean, c10 double)\", interfaceArrayTable))\n\n\t\tdefer func() 
{\n\t\t\tdbt.mustExec(fmt.Sprintf(\"drop table if exists %v\", arrayInsertTable))\n\t\t\tdbt.mustExec(fmt.Sprintf(\"drop table if exists %v\", stageBindingTable))\n\t\t\tdbt.mustExec(fmt.Sprintf(\"drop table if exists %v\", interfaceArrayTable))\n\t\t}()\n\n\t\tnumRows := 5\n\n\t\tintArr := make([]int, numRows)\n\t\tstrArr := make([]string, numRows)\n\t\ttimeArr := make([]time.Time, numRows)\n\t\tboolArr := make([]bool, numRows)\n\t\tdoubleArr := make([]float64, numRows)\n\n\t\tintAnyArr := make([]any, numRows)\n\t\tstrAnyArr := make([]any, numRows)\n\t\ttimeAnyArr := make([]any, numRows)\n\t\tboolAnyArr := make([]any, numRows)\n\t\tdoubleAnyArr := make([]any, numRows)\n\n\t\tfor i := range numRows {\n\t\t\tintArr[i] = i\n\t\t\tintAnyArr[i] = i\n\n\t\t\tdouble := rand.Float64()\n\t\t\tdoubleArr[i] = double\n\t\t\tdoubleAnyArr[i] = double\n\n\t\t\tstrArr[i] = \"test\" + strconv.Itoa(i)\n\t\t\tstrAnyArr[i] = \"test\" + strconv.Itoa(i)\n\n\t\t\tb := getRandomBool()\n\t\t\tboolArr[i] = b\n\t\t\tboolAnyArr[i] = b\n\n\t\t\tdate := getRandomDate()\n\t\t\ttimeArr[i] = date\n\t\t\ttimeAnyArr[i] = date\n\t\t}\n\n\t\tdbt.mustExec(fmt.Sprintf(\"insert into %v values (?, ?, ?, ?, ?, ?, ?, ?, ?)\", interfaceArrayTable), mustArray(&intAnyArr), mustArray(&strAnyArr), mustArray(&timeAnyArr, TimestampLTZType), mustArray(&timeAnyArr, TimestampTZType), mustArray(&timeAnyArr, TimestampNTZType), mustArray(&timeAnyArr, DateType), mustArray(&timeAnyArr, TimeType), mustArray(&boolAnyArr), mustArray(&doubleAnyArr))\n\t\tdbt.mustExec(fmt.Sprintf(\"insert into %v values (?, ?, ?, ?, ?, ?, ?, ?, ?)\", arrayInsertTable), mustArray(&intArr), mustArray(&strArr), mustArray(&timeArr, TimestampLTZType), mustArray(&timeArr, TimestampTZType), mustArray(&timeArr, TimestampNTZType), mustArray(&timeArr, DateType), mustArray(&timeArr, TimeType), mustArray(&boolArr), mustArray(&doubleArr))\n\t\tdbt.mustExec(\"ALTER SESSION SET CLIENT_STAGE_ARRAY_BINDING_THRESHOLD = 
1\")\n\t\tdbt.mustExec(fmt.Sprintf(\"insert into %v values (?, ?, ?, ?, ?, ?, ?, ?, ?)\", stageBindingTable), mustArray(&intArr), mustArray(&strArr), mustArray(&timeArr, TimestampLTZType), mustArray(&timeArr, TimestampTZType), mustArray(&timeArr, TimestampNTZType), mustArray(&timeArr, DateType), mustArray(&timeArr, TimeType), mustArray(&boolArr), mustArray(&doubleArr))\n\n\t\tinsertRows := dbt.mustQuery(\"select * from \" + arrayInsertTable + \" order by c1\")\n\t\tbindingRows := dbt.mustQuery(\"select * from \" + stageBindingTable + \" order by c1\")\n\t\tinterfaceRows := dbt.mustQuery(\"select * from \" + interfaceArrayTable + \" order by c1\")\n\n\t\tdefer func() {\n\t\t\tassertNilF(t, insertRows.Close())\n\t\t\tassertNilF(t, bindingRows.Close())\n\t\t\tassertNilF(t, interfaceRows.Close())\n\t\t}()\n\t\tvar i, bi, ii int\n\t\tvar s, bs, is string\n\t\tvar ltz, bltz, iltz, itz, btz, tz, intz, ntz, bntz, iDate, date, bDate, itt, tt, btt time.Time\n\t\tvar b, bb, ib bool\n\t\tvar d, bd, id float64\n\n\t\ttimeFormat := \"15:04:05\"\n\t\tfor k := range numRows {\n\t\t\tassertTrueF(t, insertRows.Next())\n\t\t\tassertNilF(t, insertRows.Scan(&i, &s, &ltz, &tz, &ntz, &date, &tt, &b, &d))\n\n\t\t\tassertTrueF(t, bindingRows.Next())\n\t\t\tassertNilF(t, bindingRows.Scan(&bi, &bs, &bltz, &btz, &bntz, &bDate, &btt, &bb, &bd))\n\n\t\t\tassertTrueF(t, interfaceRows.Next())\n\t\t\tassertNilF(t, interfaceRows.Scan(&ii, &is, &iltz, &itz, &intz, &iDate, &itt, &ib, &id))\n\n\t\t\tassertEqualE(t, k, i)\n\t\t\tassertEqualE(t, k, bi)\n\t\t\tassertEqualE(t, k, ii)\n\n\t\t\tassertEqualE(t, \"test\"+strconv.Itoa(k), s)\n\t\t\tassertEqualE(t, \"test\"+strconv.Itoa(k), bs)\n\t\t\tassertEqualE(t, \"test\"+strconv.Itoa(k), is)\n\n\t\t\tutcTime := timeArr[k].UTC()\n\t\t\tassertEqualE(t, ltz.UTC(), utcTime)\n\t\t\tassertEqualE(t, bltz.UTC(), utcTime)\n\t\t\tassertEqualE(t, iltz.UTC(), utcTime)\n\n\t\t\tassertEqualE(t, tz.UTC(), utcTime)\n\t\t\tassertEqualE(t, btz.UTC(), 
utcTime)\n\t\t\tassertEqualE(t, itz.UTC(), utcTime)\n\n\t\t\tassertEqualE(t, ntz.UTC(), utcTime)\n\t\t\tassertEqualE(t, bntz.UTC(), utcTime)\n\t\t\tassertEqualE(t, intz.UTC(), utcTime)\n\n\t\t\ttestingDate := timeArr[k].Truncate(24 * time.Hour)\n\t\t\tassertEqualE(t, date, testingDate)\n\t\t\tassertEqualE(t, bDate, testingDate)\n\t\t\tassertEqualE(t, iDate, testingDate)\n\n\t\t\ttestingTime := timeArr[k].Format(timeFormat)\n\t\t\tassertEqualE(t, tt.Format(timeFormat), testingTime)\n\t\t\tassertEqualE(t, btt.Format(timeFormat), testingTime)\n\t\t\tassertEqualE(t, itt.Format(timeFormat), testingTime)\n\n\t\t\tassertEqualE(t, b, boolArr[k])\n\t\t\tassertEqualE(t, bb, boolArr[k])\n\t\t\tassertEqualE(t, ib, boolArr[k])\n\n\t\t\tassertEqualE(t, d, doubleArr[k])\n\t\t\tassertEqualE(t, bd, doubleArr[k])\n\t\t\tassertEqualE(t, id, doubleArr[k])\n\n\t\t}\n\t})\n}\n\nfunc TestBulkArrayBindingTimeWithPrecision(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(fmt.Sprintf(\"create or replace table %v (s time(0), ms time(3), us time(6), ns time(9))\", dbname))\n\t\tsomeTimeWithSeconds := time.Date(1, time.January, 1, 1, 1, 1, 0, time.UTC)\n\t\tsomeTimeWithMilliseconds := time.Date(1, time.January, 1, 2, 2, 2, 123000000, time.UTC)\n\t\tsomeTimeWithMicroseconds := time.Date(1, time.January, 1, 3, 3, 3, 123456000, time.UTC)\n\t\tsomeTimeWithNanoseconds := time.Date(1, time.January, 1, 4, 4, 4, 123456789, time.UTC)\n\t\tnumRows := 100000\n\t\tsecondsArr := make([]time.Time, numRows)\n\t\tmillisecondsArr := make([]time.Time, numRows)\n\t\tmicrosecondsArr := make([]time.Time, numRows)\n\t\tnanosecondsArr := make([]time.Time, numRows)\n\t\tfor i := range numRows {\n\t\t\tsecondsArr[i] = someTimeWithSeconds\n\t\t\tmillisecondsArr[i] = someTimeWithMilliseconds\n\t\t\tmicrosecondsArr[i] = someTimeWithMicroseconds\n\t\t\tnanosecondsArr[i] = someTimeWithNanoseconds\n\t\t}\n\t\tdbt.mustExec(fmt.Sprintf(\"insert into %v values (?, ?, ?, ?)\", dbname), 
mustArray(&secondsArr, TimeType), mustArray(&millisecondsArr, TimeType), mustArray(&microsecondsArr, TimeType), mustArray(&nanosecondsArr, TimeType))\n\t\trows := dbt.mustQuery(\"select * from \" + dbname)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\t\tcnt := 0\n\t\tvar s, ms, us, ns time.Time\n\t\tfor rows.Next() {\n\t\t\tif err := rows.Scan(&s, &ms, &us, &ns); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tassertEqualE(t, s, someTimeWithSeconds)\n\t\t\tassertEqualE(t, ms, someTimeWithMilliseconds)\n\t\t\tassertEqualE(t, us, someTimeWithMicroseconds)\n\t\t\tassertEqualE(t, ns, someTimeWithNanoseconds)\n\t\t\tcnt++\n\t\t}\n\t\tif cnt != numRows {\n\t\t\tt.Fatalf(\"expected %v rows, got %v\", numRows, cnt)\n\t\t}\n\t})\n}\n\nfunc TestBulkArrayMultiPartBinding(t *testing.T) {\n\trowCount := 1000000 // large enough to be partitioned into multiple files\n\trandomIter := rand.Intn(3) + 2\n\trandomStrings := make([]string, rowCount)\n\tstr := randomString(30)\n\tfor i := range rowCount {\n\t\trandomStrings[i] = str\n\t}\n\ttempTableName := fmt.Sprintf(\"test_table_%v\", randomString(5))\n\tctx := context.Background()\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(fmt.Sprintf(\"CREATE TABLE %s (C VARCHAR(64) NOT NULL)\", tempTableName))\n\t\tdefer dbt.mustExec(\"drop table \" + tempTableName)\n\n\t\tfor range randomIter {\n\t\t\tdbt.mustExecContext(ctx,\n\t\t\t\tfmt.Sprintf(\"INSERT INTO %s VALUES (?)\", tempTableName),\n\t\t\t\tmustArray(&randomStrings))\n\t\t\trows := dbt.mustQuery(\"select count(*) from \" + tempTableName)\n\t\t\tdefer func() {\n\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t}()\n\t\t\tif rows.Next() {\n\t\t\t\tvar count int\n\t\t\t\tif err := rows.Scan(&count); err != nil {\n\t\t\t\t\tt.Error(err)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\trows := dbt.mustQuery(\"select count(*) from \" + tempTableName)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\t\tif rows.Next() {\n\t\t\tvar count int\n\t\t\tif err := 
rows.Scan(&count); err != nil {\n\t\t\t\tt.Error(err)\n\t\t\t}\n\t\t\tif count != randomIter*rowCount {\n\t\t\t\tt.Errorf(\"expected %v rows, got %v rows instead\", randomIter*rowCount, count)\n\t\t\t}\n\t\t}\n\t})\n}\n\nfunc TestBulkArrayMultiPartBindingInt(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"create or replace table binding_test (c1 integer)\")\n\t\tstartNum := 1000000\n\t\tendNum := 3000000\n\t\tnumRows := endNum - startNum\n\t\tintArr := make([]int, numRows)\n\t\tfor i := startNum; i < endNum; i++ {\n\t\t\tintArr[i-startNum] = i\n\t\t}\n\t\t_, err := dbt.exec(\"insert into binding_test values (?)\", mustArray(&intArr))\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Should have succeeded to insert. err: %v\", err)\n\t\t}\n\n\t\trows := dbt.mustQuery(\"select * from binding_test order by c1\")\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\t\tcnt := startNum\n\t\tvar i int\n\t\tfor rows.Next() {\n\t\t\tif err := rows.Scan(&i); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tif i != cnt {\n\t\t\t\tt.Errorf(\"expected: %v, got: %v\", cnt, i)\n\t\t\t}\n\t\t\tcnt++\n\t\t}\n\t\tif cnt != endNum {\n\t\t\tt.Fatalf(\"expected %v rows, got %v\", numRows, cnt-startNum)\n\t\t}\n\t\tdbt.mustExec(\"DROP TABLE binding_test\")\n\t})\n}\n\nfunc TestBulkArrayMultiPartBindingWithNull(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"create or replace table binding_test (c1 integer, c2 string)\")\n\t\tstartNum := 1000000\n\t\tendNum := 2000000\n\t\tnumRows := endNum - startNum\n\n\t\t// Define the integer and string arrays\n\t\tintArr := make([]any, numRows)\n\t\tstringArr := make([]any, numRows)\n\t\tfor i := startNum; i < endNum; i++ {\n\t\t\tintArr[i-startNum] = i\n\t\t\tstringArr[i-startNum] = fmt.Sprint(i)\n\t\t}\n\n\t\t// Set some of the rows to NULL\n\t\tintArr[numRows-1] = nil\n\t\tintArr[numRows-2] = nil\n\t\tintArr[numRows-3] = nil\n\t\tstringArr[1] = nil\n\t\tstringArr[2] = nil\n\t\tstringArr[3] = 
nil\n\n\t\t_, err := dbt.exec(\"insert into binding_test values (?, ?)\", mustArray(&intArr), mustArray(&stringArr))\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Should have succeeded to insert. err: %v\", err)\n\t\t}\n\n\t\trows := dbt.mustQuery(\"select * from binding_test order by c1,c2\")\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\t\tcnt := startNum\n\t\tvar i sql.NullInt32\n\t\tvar s sql.NullString\n\t\tfor rows.Next() {\n\t\t\tif err := rows.Scan(&i, &s); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\t// Verify integer column c1\n\t\t\tif i.Valid {\n\t\t\t\tif int(i.Int32) != intArr[cnt-startNum] {\n\t\t\t\t\tt.Fatalf(\"expected: %v, got: %v\", cnt, int(i.Int32))\n\t\t\t\t}\n\t\t\t} else if !(cnt == startNum+numRows-1 || cnt == startNum+numRows-2 || cnt == startNum+numRows-3) {\n\t\t\t\tt.Fatalf(\"unexpected NULL in column c1 at index: %v\", cnt-startNum)\n\t\t\t}\n\t\t\t// Verify string column c2\n\t\t\tif s.Valid {\n\t\t\t\tif s.String != stringArr[cnt-startNum] {\n\t\t\t\t\tt.Fatalf(\"expected: %v, got: %v\", cnt, s.String)\n\t\t\t\t}\n\t\t\t} else if !(cnt == startNum+1 || cnt == startNum+2 || cnt == startNum+3) {\n\t\t\t\tt.Fatalf(\"unexpected NULL in column c2 at index: %v\", cnt-startNum)\n\t\t\t}\n\t\t\tcnt++\n\t\t}\n\t\tif cnt != endNum {\n\t\t\tt.Fatalf(\"expected %v rows, got %v\", numRows, cnt-startNum)\n\t\t}\n\t\tdbt.mustExec(\"DROP TABLE binding_test\")\n\t})\n}\n\nfunc TestFunctionParameters(t *testing.T) {\n\ttestcases := []struct {\n\t\ttestDesc   string\n\t\tparamType  string\n\t\tinput      any\n\t\tnullResult bool\n\t}{\n\t\t{\"textAndNullStringResultInNull\", \"text\", sql.NullString{}, true},\n\t\t{\"numberAndNullInt64ResultInNull\", \"number\", sql.NullInt64{}, true},\n\t\t{\"floatAndNullFloat64ResultInNull\", \"float\", sql.NullFloat64{}, true},\n\t\t{\"booleanAndNullBoolResultInNull\", \"boolean\", sql.NullBool{}, true},\n\t\t{\"dateAndTypedNullTimeResultInNull\", \"date\", TypedNullTime{sql.NullTime{}, DateType}, 
true},\n\t\t{\"datetimeAndTypedNullTimeResultInNull\", \"datetime\", TypedNullTime{sql.NullTime{}, TimestampNTZType}, true},\n\t\t{\"timeAndTypedNullTimeResultInNull\", \"time\", TypedNullTime{sql.NullTime{}, TimeType}, true},\n\t\t{\"timestampAndTypedNullTimeResultInNull\", \"timestamp\", TypedNullTime{sql.NullTime{}, TimestampNTZType}, true},\n\t\t{\"timestamp_ntzAndTypedNullTimeResultInNull\", \"timestamp_ntz\", TypedNullTime{sql.NullTime{}, TimestampNTZType}, true},\n\t\t{\"timestamp_ltzAndTypedNullTimeResultInNull\", \"timestamp_ltz\", TypedNullTime{sql.NullTime{}, TimestampLTZType}, true},\n\t\t{\"timestamp_tzAndTypedNullTimeResultInNull\", \"timestamp_tz\", TypedNullTime{sql.NullTime{}, TimestampTZType}, true},\n\t\t{\"textAndStringResultInNotNull\", \"text\", \"string\", false},\n\t\t{\"numberAndIntegerResultInNotNull\", \"number\", 123, false},\n\t\t{\"floatAndFloatResultInNotNull\", \"float\", 123.01, false},\n\t\t{\"booleanAndBooleanResultInNotNull\", \"boolean\", true, false},\n\t\t{\"dateAndTimeResultInNotNull\", \"date\", time.Now(), false},\n\t\t{\"datetimeAndTimeResultInNotNull\", \"datetime\", time.Now(), false},\n\t\t{\"timeAndTimeResultInNotNull\", \"time\", time.Now(), false},\n\t\t{\"timestampAndTimeResultInNotNull\", \"timestamp\", time.Now(), false},\n\t\t{\"timestamp_ntzAndTimeResultInNotNull\", \"timestamp_ntz\", time.Now(), false},\n\t\t{\"timestamp_ltzAndTimeResultInNotNull\", \"timestamp_ltz\", time.Now(), false},\n\t\t{\"timestamp_tzAndTimeResultInNotNull\", \"timestamp_tz\", time.Now(), false},\n\t}\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\t_, err := dbt.exec(\"ALTER SESSION SET BIND_NULL_VALUE_USE_NULL_DATATYPE=false\")\n\t\tif err != nil {\n\t\t\tlog.Println(err)\n\t\t}\n\t\tfor _, tc := range testcases {\n\t\t\tt.Run(tc.testDesc, func(t *testing.T) {\n\t\t\t\tquery := fmt.Sprintf(`\n\t\t\t\tCREATE OR REPLACE FUNCTION NULLPARAMFUNCTION(\"param1\" %v)\n\t\t\t\tRETURNS TABLE(\"r1\" %v)\n\t\t\t\tLANGUAGE SQL\n\t\t\t\tAS 'select 
param1';`, tc.paramType, tc.paramType)\n\t\t\t\tdbt.mustExec(query)\n\t\t\t\trows, err := dbt.query(\"select * from table(NULLPARAMFUNCTION(?))\", tc.input)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tdefer func() {\n\t\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t\t}()\n\t\t\t\tassertNilF(t, rows.Err())\n\t\t\t\tassertTrueF(t, rows.Next(), \"no rows fetched\")\n\t\t\t\tvar r1 any\n\t\t\t\tassertNilF(t, rows.Scan(&r1))\n\t\t\t\tif tc.nullResult && r1 != nil {\n\t\t\t\t\tt.Fatalf(\"the result for %v is of type %v but should be null\", tc.paramType, reflect.TypeOf(r1))\n\t\t\t\t}\n\t\t\t\tif !tc.nullResult && r1 == nil {\n\t\t\t\t\tt.Fatalf(\"the result for %v should not be null\", tc.paramType)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n}\n\n// TestVariousBindingModes tests 24 parameter types × 3 binding modes.\n// Subtests share a hardcoded table name (BINDING_MODES) via CREATE OR REPLACE,\n// so they CANNOT run in parallel — concurrent subtests would overwrite each\n// other's tables. 
Making this parallel-safe would require unique table names\n// per subtest.\nfunc TestVariousBindingModes(t *testing.T) {\n\ttestcases := []struct {\n\t\ttestDesc  string\n\t\tparamType string\n\t\tinput     any\n\t\tisNil     bool\n\t}{\n\t\t{\"textAndString\", \"text\", \"string\", false},\n\t\t{\"numberAndInteger\", \"number\", 123, false},\n\t\t{\"floatAndFloat\", \"float\", 123.01, false},\n\t\t{\"booleanAndBoolean\", \"boolean\", true, false},\n\t\t{\"dateAndTime\", \"date\", time.Now().Truncate(24 * time.Hour), false},\n\t\t{\"datetimeAndTime\", \"datetime\", time.Now(), false},\n\t\t{\"timeAndTime\", \"time\", \"12:34:56\", false},\n\t\t{\"timestampAndTime\", \"timestamp\", time.Now(), false},\n\t\t{\"timestamp_ntzAndTime\", \"timestamp_ntz\", time.Now(), false},\n\t\t{\"timestamp_ltzAndTime\", \"timestamp_ltz\", time.Now(), false},\n\t\t{\"timestamp_tzAndTime\", \"timestamp_tz\", time.Now(), false},\n\t\t{\"textAndNullString\", \"text\", sql.NullString{}, true},\n\t\t{\"numberAndNullInt64\", \"number\", sql.NullInt64{}, true},\n\t\t{\"floatAndNullFloat64\", \"float\", sql.NullFloat64{}, true},\n\t\t{\"booleanAndAndNullBool\", \"boolean\", sql.NullBool{}, true},\n\t\t{\"dateAndTypedNullTime\", \"date\", TypedNullTime{sql.NullTime{}, DateType}, true},\n\t\t{\"datetimeAndTypedNullTime\", \"datetime\", TypedNullTime{sql.NullTime{}, TimestampNTZType}, true},\n\t\t{\"timeAndTypedNullTime\", \"time\", TypedNullTime{sql.NullTime{}, TimeType}, true},\n\t\t{\"timestampAndTypedNullTime\", \"timestamp\", TypedNullTime{sql.NullTime{}, TimestampNTZType}, true},\n\t\t{\"timestamp_ntzAndTypedNullTime\", \"timestamp_ntz\", TypedNullTime{sql.NullTime{}, TimestampNTZType}, true},\n\t\t{\"timestamp_ltzAndTypedNullTime\", \"timestamp_ltz\", TypedNullTime{sql.NullTime{}, TimestampLTZType}, true},\n\t\t{\"timestamp_tzAndTypedNullTime\", \"timestamp_tz\", TypedNullTime{sql.NullTime{}, TimestampTZType}, true},\n\t\t{\"LOBSmallSize\", fmt.Sprintf(\"varchar(%v)\", smallSize), 
fastStringGeneration(smallSize), false},\n\t\t{\"LOBLargeSize\", fmt.Sprintf(\"varchar(%v)\", largeSize), fastStringGeneration(largeSize), false},\n\t}\n\n\tbindingModes := []struct {\n\t\tparam     string\n\t\tquery     string\n\t\ttransform func(any) any\n\t}{\n\t\t{\n\t\t\tparam:     \"?\",\n\t\t\ttransform: func(v any) any { return v },\n\t\t},\n\t\t{\n\t\t\tparam:     \":1\",\n\t\t\ttransform: func(v any) any { return v },\n\t\t},\n\t\t{\n\t\t\tparam:     \":param\",\n\t\t\ttransform: func(v any) any { return sql.Named(\"param\", v) },\n\t\t},\n\t}\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tfor _, tc := range testcases {\n\t\t\t// TODO SNOW-1264687\n\t\t\tif strings.Contains(tc.testDesc, \"LOB\") {\n\t\t\t\tskipOnJenkins(t, \"skipped until SNOW-1264687 is fixed\")\n\t\t\t}\n\t\t\tfor _, bindingMode := range bindingModes {\n\t\t\t\tt.Run(tc.testDesc+\" \"+bindingMode.param, func(t *testing.T) {\n\t\t\t\t\tquery := fmt.Sprintf(`CREATE OR REPLACE TABLE BINDING_MODES(param1 %v)`, tc.paramType)\n\t\t\t\t\tdbt.mustExec(query)\n\t\t\t\t\t_, err := dbt.exec(fmt.Sprintf(\"INSERT INTO BINDING_MODES VALUES (%v)\", bindingMode.param), bindingMode.transform(tc.input))\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tif tc.isNil {\n\t\t\t\t\t\tquery = \"SELECT * FROM BINDING_MODES WHERE param1 IS NULL\"\n\t\t\t\t\t} else {\n\t\t\t\t\t\tquery = fmt.Sprintf(\"SELECT * FROM BINDING_MODES WHERE param1 = %v\", bindingMode.param)\n\t\t\t\t\t}\n\t\t\t\t\trows, err := dbt.query(query, bindingMode.transform(tc.input))\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tdefer func() {\n\t\t\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t\t\t}()\n\t\t\t\t\tassertTrueF(t, rows.Next(), \"expected to return a row\")\n\t\t\t\t})\n\t\t\t}\n\t\t}\n\t})\n}\n\nfunc skipMaxLobSizeTestOnGithubActions(t *testing.T) {\n\tif runningOnGithubAction() {\n\t\tt.Skip(\"Max Lob Size parameters are not available on GH 
Actions\")\n\t}\n}\n\nfunc TestLOBRetrievalWithArrow(t *testing.T) {\n\ttestLOBRetrieval(t, true)\n}\n\nfunc TestLOBRetrievalWithJSON(t *testing.T) {\n\ttestLOBRetrieval(t, false)\n}\n\nfunc testLOBRetrieval(t *testing.T, useArrowFormat bool) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tif useArrowFormat {\n\t\t\tdbt.mustExec(forceARROW)\n\t\t} else {\n\t\t\tdbt.mustExec(forceJSON)\n\t\t}\n\n\t\tvar res string\n\t\ttestSizes := [2]int{smallSize, largeSize}\n\t\tfor _, testSize := range testSizes {\n\t\t\tt.Run(fmt.Sprintf(\"testLOB_%v_useArrowFormat=%v\", strconv.Itoa(testSize), strconv.FormatBool(useArrowFormat)), func(t *testing.T) {\n\t\t\t\trows, err := dbt.query(fmt.Sprintf(\"SELECT randstr(%v, 124)\", testSize))\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tdefer func() {\n\t\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t\t}()\n\t\t\t\tassertTrueF(t, rows.Next(), fmt.Sprintf(\"no rows returned for the LOB size %v\", testSize))\n\n\t\t\t\t// retrieve the result\n\t\t\t\terr = rows.Scan(&res)\n\t\t\t\tassertNilF(t, err)\n\n\t\t\t\t// verify the length of the result\n\t\t\t\tassertEqualF(t, len(res), testSize)\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestMaxLobSize(t *testing.T) {\n\tskipMaxLobSizeTestOnGithubActions(t)\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(enableFeatureMaxLOBSize)\n\t\tdefer dbt.mustExec(unsetLargeVarcharAndBinary)\n\t\tt.Run(\"Max Lob Size disabled\", func(t *testing.T) {\n\t\t\tdbt.mustExec(disableLargeVarcharAndBinary)\n\t\t\t_, err := dbt.query(\"select randstr(20000000, random())\")\n\t\t\tassertNotNilF(t, err)\n\t\t\tassertStringContainsF(t, err.Error(), \"Actual length 20000000 exceeds supported length\")\n\t\t})\n\n\t\tt.Run(\"Max Lob Size enabled\", func(t *testing.T) {\n\t\t\tdbt.mustExec(enableLargeVarcharAndBinary)\n\t\t\trows, err := dbt.query(\"select randstr(20000000, random())\")\n\t\t\tassertNilF(t, err)\n\t\t\tdefer func() {\n\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t}()\n\t\t})\n\t})\n}\n\nfunc TestInsertLobDataWithLiteralArrow(t 
*testing.T) {\n\t// TODO SNOW-1264687\n\tskipOnJenkins(t, \"skipped until SNOW-1264687 is fixed\")\n\ttestInsertLOBData(t, true, true)\n}\n\nfunc TestInsertLobDataWithLiteralJSON(t *testing.T) {\n\t// TODO SNOW-1264687\n\tskipOnJenkins(t, \"skipped until SNOW-1264687 is fixed\")\n\ttestInsertLOBData(t, false, true)\n}\n\nfunc TestInsertLobDataWithBindingsArrow(t *testing.T) {\n\t// TODO SNOW-1264687\n\tskipOnJenkins(t, \"skipped until SNOW-1264687 is fixed\")\n\ttestInsertLOBData(t, true, false)\n}\n\nfunc TestInsertLobDataWithBindingsJSON(t *testing.T) {\n\t// TODO SNOW-1264687\n\tskipOnJenkins(t, \"skipped until SNOW-1264687 is fixed\")\n\ttestInsertLOBData(t, false, false)\n}\n\nfunc testInsertLOBData(t *testing.T, useArrowFormat bool, isLiteral bool) {\n\texpectedNumCols := 3\n\tcolumnMeta := []struct {\n\t\tcolumnName string\n\t\tcolumnType reflect.Type\n\t}{\n\t\t{\"C1\", reflect.TypeFor[string]()},\n\t\t{\"C2\", reflect.TypeFor[string]()},\n\t\t{\"C3\", reflect.TypeFor[string]()},\n\t}\n\ttestCases := []struct {\n\t\ttestDesc string\n\t\tc1Size   int\n\t\tc2Size   int\n\t\tc3Size   int\n\t}{\n\t\t{\"testLOBInsertSmallSize\", smallSize, smallSize, lobRandomRange},\n\t\t{\"testLOBInsertLargeSize\", largeSize, smallSize, lobRandomRange},\n\t}\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tvar c1 string\n\t\tvar c2 string\n\t\tvar c3 int\n\n\t\tdbt.mustExec(enableFeatureMaxLOBSize)\n\t\tif useArrowFormat {\n\t\t\tdbt.mustExec(forceARROW)\n\t\t} else {\n\t\t\tdbt.mustExec(forceJSON)\n\t\t}\n\n\t\tfor _, tc := range testCases {\n\t\t\tt.Run(tc.testDesc, func(t *testing.T) {\n\t\t\t\tc1Data := fastStringGeneration(tc.c1Size)\n\t\t\t\tc2Data := fastStringGeneration(tc.c2Size)\n\t\t\t\tc3Data := rand.Intn(tc.c3Size)\n\n\t\t\t\tdbt.mustExec(fmt.Sprintf(\"CREATE OR REPLACE TABLE lob_test_table (c1 varchar(%v), c2 varchar(%v), c3 int)\", tc.c1Size, tc.c2Size))\n\t\t\t\tif isLiteral {\n\t\t\t\t\tdbt.mustExec(fmt.Sprintf(\"INSERT INTO lob_test_table VALUES ('%s', '%s', %v)\", 
c1Data, c2Data, c3Data))\n\t\t\t\t} else {\n\t\t\t\t\tdbt.mustExec(\"INSERT INTO lob_test_table VALUES (?, ?, ?)\", c1Data, c2Data, c3Data)\n\t\t\t\t}\n\t\t\t\trows, err := dbt.query(\"SELECT * FROM lob_test_table\")\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tdefer func() {\n\t\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t\t}()\n\t\t\t\tassertTrueF(t, rows.Next(), fmt.Sprintf(\"%s: no rows returned\", tc.testDesc))\n\n\t\t\t\terr = rows.Scan(&c1, &c2, &c3)\n\t\t\t\tassertNilF(t, err)\n\n\t\t\t\t// check the number of columns\n\t\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertEqualF(t, len(columnTypes), expectedNumCols)\n\n\t\t\t\t// verify the column metadata: name, type and length\n\t\t\t\tfor colIdx := range expectedNumCols {\n\t\t\t\t\tcolName := columnTypes[colIdx].Name()\n\t\t\t\t\tassertEqualF(t, colName, columnMeta[colIdx].columnName)\n\n\t\t\t\t\tcolType := columnTypes[colIdx].ScanType()\n\t\t\t\t\tassertEqualF(t, colType, columnMeta[colIdx].columnType)\n\n\t\t\t\t\tcolLength, ok := columnTypes[colIdx].Length()\n\n\t\t\t\t\tswitch colIdx {\n\t\t\t\t\tcase 0:\n\t\t\t\t\t\tassertTrueF(t, ok)\n\t\t\t\t\t\tassertEqualF(t, colLength, int64(tc.c1Size))\n\t\t\t\t\t\t// verify the data\n\t\t\t\t\t\tassertEqualF(t, c1, c1Data)\n\t\t\t\t\tcase 1:\n\t\t\t\t\t\tassertTrueF(t, ok)\n\t\t\t\t\t\tassertEqualF(t, colLength, int64(tc.c2Size))\n\t\t\t\t\t\t// verify the data\n\t\t\t\t\t\tassertEqualF(t, c2, c2Data)\n\t\t\t\t\tcase 2:\n\t\t\t\t\t\tassertFalseF(t, ok)\n\t\t\t\t\t\t// verify the data\n\t\t\t\t\t\tassertEqualF(t, c3, c3Data)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t})\n\t\t\tdbt.mustExec(\"DROP TABLE IF EXISTS lob_test_table\")\n\t\t}\n\t\tdbt.mustExec(unsetFeatureMaxLOBSize)\n\t})\n}\n\nfunc fastStringGeneration(size int) string {\n\tif size <= 0 {\n\t\treturn \"\"\n\t}\n\n\tpattern := \"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\"\n\tpatternLen := len(pattern)\n\n\tif size <= patternLen {\n\t\treturn 
pattern[:size]\n\t}\n\n\tfullRepeats := size / patternLen\n\tremainder := size % patternLen\n\n\tvar result strings.Builder\n\tresult.Grow(size)\n\n\tfullPattern := strings.Repeat(pattern, fullRepeats)\n\tresult.WriteString(fullPattern)\n\n\tif remainder > 0 {\n\t\tresult.WriteString(pattern[:remainder])\n\t}\n\n\treturn result.String()\n}\n\nfunc getRandomDate() time.Time {\n\treturn time.Date(rand.Intn(1582)+1, time.January, rand.Intn(40), rand.Intn(40), rand.Intn(40), rand.Intn(40), rand.Intn(40), time.UTC)\n}\n\nfunc getRandomBool() bool {\n\treturn rand.Int63n(time.Now().Unix())%2 == 0\n}\n"
  },
  {
    "path": "chunk.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\n\t\"unicode\"\n\t\"unicode/utf16\"\n\t\"unicode/utf8\"\n)\n\nconst (\n\tdefaultChunkBufferSize  int64 = 8 << 10 // 8k\n\tdefaultStringBufferSize int64 = 512\n)\n\ntype largeChunkDecoder struct {\n\tr io.Reader\n\n\trows  int // hint for number of rows\n\tcells int // hint for number of cells/row\n\n\trem int // bytes remaining in rbuf\n\tptr int // position in rbuf\n\n\trbuf []byte\n\tsbuf *bytes.Buffer // buffer for decodeString\n\n\tioError error\n}\n\nfunc decodeLargeChunk(r io.Reader, rowCount int, cellCount int) ([][]*string, error) {\n\tlogger.Info(\"custom JSON Decoder\")\n\tlcd := largeChunkDecoder{\n\t\tr, rowCount, cellCount,\n\t\t0, 0,\n\t\tmake([]byte, defaultChunkBufferSize),\n\t\tbytes.NewBuffer(make([]byte, defaultStringBufferSize)),\n\t\tnil,\n\t}\n\n\trows, err := lcd.decode()\n\tif lcd.ioError != nil && lcd.ioError != io.EOF {\n\t\treturn nil, lcd.ioError\n\t} else if err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn rows, nil\n}\n\nfunc (lcd *largeChunkDecoder) mkError(s string) error {\n\treturn fmt.Errorf(\"corrupt chunk: %s\", s)\n}\n\nfunc (lcd *largeChunkDecoder) decode() ([][]*string, error) {\n\tif lcd.nextByteNonWhitespace() != '[' {\n\t\treturn nil, lcd.mkError(\"expected chunk to begin with '['\")\n\t}\n\n\trows := make([][]*string, 0, lcd.rows)\n\tif lcd.nextByteNonWhitespace() == ']' {\n\t\treturn rows, nil // special case of an empty chunk\n\t}\n\tlcd.rewind(1)\n\nOuterLoop:\n\tfor {\n\t\trow, err := lcd.decodeRow()\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\trows = append(rows, row)\n\n\t\tswitch lcd.nextByteNonWhitespace() {\n\t\tcase ',':\n\t\t\tcontinue // more elements in the array\n\t\tcase ']':\n\t\t\treturn rows, nil // we've scanned the whole chunk\n\t\tdefault:\n\t\t\tbreak OuterLoop\n\t\t}\n\t}\n\treturn nil, lcd.mkError(\"invalid row boundary\")\n}\n\nfunc (lcd *largeChunkDecoder) decodeRow() ([]*string, error) {\n\tif 
lcd.nextByteNonWhitespace() != '[' {\n\t\treturn nil, lcd.mkError(\"expected row to begin with '['\")\n\t}\n\n\trow := make([]*string, 0, lcd.cells)\n\tif lcd.nextByteNonWhitespace() == ']' {\n\t\treturn row, nil // special case of an empty row\n\t}\n\tlcd.rewind(1)\n\nOuterLoop:\n\tfor {\n\t\tcell, err := lcd.decodeCell()\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\trow = append(row, cell)\n\n\t\tswitch lcd.nextByteNonWhitespace() {\n\t\tcase ',':\n\t\t\tcontinue // more elements in the array\n\t\tcase ']':\n\t\t\treturn row, nil // we've scanned the whole row\n\t\tdefault:\n\t\t\tbreak OuterLoop\n\t\t}\n\t}\n\treturn nil, lcd.mkError(\"invalid cell boundary\")\n}\n\nfunc (lcd *largeChunkDecoder) decodeCell() (*string, error) {\n\tc := lcd.nextByteNonWhitespace()\n\tswitch c {\n\tcase '\"':\n\t\ts, err := lcd.decodeString()\n\t\treturn &s, err\n\tcase 'n':\n\t\tif lcd.nextByte() == 'u' &&\n\t\t\tlcd.nextByte() == 'l' &&\n\t\t\tlcd.nextByte() == 'l' {\n\t\t\treturn nil, nil\n\t\t}\n\t}\n\treturn nil, lcd.mkError(\"cell begins with unexpected byte\")\n}\n\n// TODO we can optimize this further by optimistically searching\n// the read buffer for the next string. 
If it's short enough and\n// doesn't contain any escaped characters, we can construct the\n// return string directly without writing to the sbuf\nfunc (lcd *largeChunkDecoder) decodeString() (string, error) {\n\tlcd.sbuf.Reset()\n\tfor {\n\t\t// NOTE if you make changes here, ensure this\n\t\t// variable does not escape to the heap\n\t\tc := lcd.nextByte()\n\t\tif c == '\"' {\n\t\t\tbreak\n\t\t} else if c == '\\\\' {\n\t\t\tif err := lcd.decodeEscaped(); err != nil {\n\t\t\t\treturn \"\", err\n\t\t\t}\n\t\t} else if c < ' ' {\n\t\t\treturn \"\", lcd.mkError(\"unexpected control character\")\n\t\t} else if c < utf8.RuneSelf {\n\t\t\tlcd.sbuf.WriteByte(c)\n\t\t} else {\n\t\t\tlcd.rewind(1)\n\t\t\tlcd.sbuf.WriteRune(lcd.readRune())\n\t\t}\n\t}\n\treturn lcd.sbuf.String(), nil\n}\n\nfunc (lcd *largeChunkDecoder) decodeEscaped() error {\n\t// NOTE if you make changes here, ensure this\n\t// variable does not escape to the heap\n\tc := lcd.nextByte()\n\n\tswitch c {\n\tcase '\"', '\\\\', '/', '\\'':\n\t\tlcd.sbuf.WriteByte(c)\n\tcase 'b':\n\t\tlcd.sbuf.WriteByte('\\b')\n\tcase 'f':\n\t\tlcd.sbuf.WriteByte('\\f')\n\tcase 'n':\n\t\tlcd.sbuf.WriteByte('\\n')\n\tcase 'r':\n\t\tlcd.sbuf.WriteByte('\\r')\n\tcase 't':\n\t\tlcd.sbuf.WriteByte('\\t')\n\tcase 'u':\n\t\trr := lcd.getu4()\n\t\tif rr < 0 {\n\t\t\treturn lcd.mkError(\"invalid escape sequence\")\n\t\t}\n\t\tif utf16.IsSurrogate(rr) {\n\t\t\trr1, size := lcd.getu4WithPrefix()\n\t\t\tif dec := utf16.DecodeRune(rr, rr1); dec != unicode.ReplacementChar {\n\t\t\t\t// A valid pair; consume.\n\t\t\t\tlcd.sbuf.WriteRune(dec)\n\t\t\t\tbreak\n\t\t\t}\n\t\t\t// Invalid surrogate; fall back to replacement rune.\n\t\t\tlcd.rewind(size)\n\t\t\trr = unicode.ReplacementChar\n\t\t}\n\t\tlcd.sbuf.WriteRune(rr)\n\tdefault:\n\t\treturn lcd.mkError(\"invalid escape sequence: \" + string(c))\n\t}\n\treturn nil\n}\n\nfunc (lcd *largeChunkDecoder) readRune() rune {\n\tlcd.ensureBytes(4)\n\tr, size := 
utf8.DecodeRune(lcd.rbuf[lcd.ptr:])\n\tlcd.ptr += size\n\tlcd.rem -= size\n\treturn r\n}\n\nfunc (lcd *largeChunkDecoder) getu4WithPrefix() (rune, int) {\n\tlcd.ensureBytes(6)\n\n\t// NOTE take a snapshot of the cursor state. If this\n\t// is not a valid rune, then we need to roll back to\n\t// where we were before we began consuming bytes\n\tptr := lcd.ptr\n\n\tif lcd.nextByte() != '\\\\' {\n\t\treturn -1, lcd.ptr - ptr\n\t}\n\tif lcd.nextByte() != 'u' {\n\t\treturn -1, lcd.ptr - ptr\n\t}\n\tr := lcd.getu4()\n\treturn r, lcd.ptr - ptr\n}\n\nfunc (lcd *largeChunkDecoder) getu4() rune {\n\tvar r rune\n\tfor range 4 {\n\t\tc := lcd.nextByte()\n\t\tswitch {\n\t\tcase '0' <= c && c <= '9':\n\t\t\tc = c - '0'\n\t\tcase 'a' <= c && c <= 'f':\n\t\t\tc = c - 'a' + 10\n\t\tcase 'A' <= c && c <= 'F':\n\t\t\tc = c - 'A' + 10\n\t\tdefault:\n\t\t\treturn -1\n\t\t}\n\t\tr = r*16 + rune(c)\n\t}\n\treturn r\n}\n\nfunc (lcd *largeChunkDecoder) nextByteNonWhitespace() byte {\n\tfor {\n\t\tc := lcd.nextByte()\n\t\tswitch c {\n\t\tcase ' ', '\\t', '\\n', '\\r':\n\t\t\tcontinue\n\t\tdefault:\n\t\t\treturn c\n\t\t}\n\t}\n}\n\nfunc (lcd *largeChunkDecoder) rewind(n int) {\n\tlcd.ptr -= n\n\tlcd.rem += n\n}\n\nfunc (lcd *largeChunkDecoder) nextByte() byte {\n\tif lcd.rem == 0 {\n\t\tif lcd.ioError != nil {\n\t\t\treturn 0\n\t\t}\n\n\t\tlcd.ptr = 0\n\t\tlcd.rem = lcd.fillBuffer(lcd.rbuf)\n\t\tif lcd.rem == 0 {\n\t\t\treturn 0\n\t\t}\n\t}\n\n\tb := lcd.rbuf[lcd.ptr]\n\tlcd.ptr++\n\n\tlcd.rem--\n\treturn b\n}\n\nfunc (lcd *largeChunkDecoder) ensureBytes(n int) {\n\tif lcd.rem <= n {\n\t\trbuf := make([]byte, defaultChunkBufferSize)\n\t\t// NOTE when the buffer reads from the stream, there's no\n\t\t// guarantee that it will actually be filled. 
As such we\n\t\t// must use (ptr+rem) to compute the end of the slice.\n\t\toff := copy(rbuf, lcd.rbuf[lcd.ptr:lcd.ptr+lcd.rem])\n\t\tadd := lcd.fillBuffer(rbuf[off:])\n\n\t\tlcd.ptr = 0\n\t\tlcd.rem += add\n\t\tlcd.rbuf = rbuf\n\t}\n}\n\nfunc (lcd *largeChunkDecoder) fillBuffer(b []byte) int {\n\tn, err := lcd.r.Read(b)\n\tif err != nil && err != io.EOF {\n\t\tlcd.ioError = err\n\t\treturn 0\n\t} else if n <= 0 {\n\t\tlcd.ioError = io.EOF\n\t\treturn 0\n\t}\n\treturn n\n}\n"
  },
  {
    "path": "chunk_downloader.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bufio\"\n\t\"compress/gzip\"\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\terrors2 \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/query\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"strconv\"\n\t\"sync\"\n\t\"time\"\n\n\tia \"github.com/snowflakedb/gosnowflake/v2/internal/arrow\"\n\n\t\"github.com/apache/arrow-go/v18/arrow\"\n\t\"github.com/apache/arrow-go/v18/arrow/ipc\"\n\t\"github.com/apache/arrow-go/v18/arrow/memory\"\n)\n\nvar (\n\terrNoConnection = errors.New(\"failed to retrieve connection\")\n)\n\ntype chunkDownloader interface {\n\ttotalUncompressedSize() (acc int64)\n\tstart() error\n\tnext() (chunkRowType, error)\n\treset()\n\tgetChunkMetas() []query.ExecResponseChunk\n\tgetQueryResultFormat() resultFormat\n\tgetRowType() []query.ExecResponseRowType\n\tsetNextChunkDownloader(downloader chunkDownloader)\n\tgetNextChunkDownloader() chunkDownloader\n\tgetRawArrowBatches() []*rawArrowBatchData\n}\n\ntype snowflakeChunkDownloader struct {\n\tsc                 *snowflakeConn\n\tctx                context.Context\n\tpool               memory.Allocator\n\tTotal              int64\n\tTotalRowIndex      int64\n\tCellCount          int\n\tCurrentChunk       []chunkRowType\n\tCurrentChunkIndex  int\n\tCurrentChunkSize   int\n\tCurrentIndex       int\n\tChunkHeader        map[string]string\n\tChunkMetas         []query.ExecResponseChunk\n\tChunks             map[int][]chunkRowType\n\tChunksChan         chan int\n\tChunksError        chan *chunkError\n\tChunksErrorCounter int\n\tChunksFinalErrors  []*chunkError\n\tChunksMutex        *sync.Mutex\n\tDoneDownloadCond   *sync.Cond\n\tfirstBatchRaw      *rawArrowBatchData\n\tNextDownloader     chunkDownloader\n\tQrmk               string\n\tQueryResultFormat  string\n\trawBatches         []*rawArrowBatchData\n\tRowSet             rowSetType\n\tFuncDownload       func(context.Context, 
*snowflakeChunkDownloader, int)\n\tFuncDownloadHelper func(context.Context, *snowflakeChunkDownloader, int) error\n\tFuncGet            func(context.Context, *snowflakeConn, string, map[string]string, time.Duration) (*http.Response, error)\n}\n\nfunc (scd *snowflakeChunkDownloader) totalUncompressedSize() (acc int64) {\n\tfor _, c := range scd.ChunkMetas {\n\t\tacc += c.UncompressedSize\n\t}\n\treturn\n}\n\nfunc (scd *snowflakeChunkDownloader) start() error {\n\tif usesArrowBatches(scd.ctx) && scd.getQueryResultFormat() == arrowFormat {\n\t\treturn scd.startArrowBatches()\n\t}\n\tscd.CurrentChunkSize = len(scd.RowSet.JSON) // cache the size\n\tscd.CurrentIndex = -1                       // initial chunks idx\n\tscd.CurrentChunkIndex = -1                  // initial chunk\n\n\tscd.CurrentChunk = make([]chunkRowType, scd.CurrentChunkSize)\n\tpopulateJSONRowSet(scd.CurrentChunk, scd.RowSet.JSON)\n\n\tif scd.getQueryResultFormat() == arrowFormat && scd.RowSet.RowSetBase64 != \"\" {\n\t\tparams, err := scd.getConfigParams()\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"getting config params: %w\", err)\n\t\t}\n\t\t// if the rowsetbase64 retrieved from the server is empty, move on to downloading chunks\n\t\tloc := getCurrentLocation(params)\n\t\tfirstArrowChunk, err := buildFirstArrowChunk(scd.RowSet.RowSetBase64, loc, scd.pool)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"building first arrow chunk: %w\", err)\n\t\t}\n\t\thigherPrecision := higherPrecisionEnabled(scd.ctx)\n\t\tscd.CurrentChunk, err = firstArrowChunk.decodeArrowChunk(scd.ctx, scd.RowSet.RowType, higherPrecision, params)\n\t\tscd.CurrentChunkSize = firstArrowChunk.rowCount\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"decoding arrow chunk: %w\", err)\n\t\t}\n\t}\n\n\t// start downloading chunks if exists\n\tchunkMetaLen := len(scd.ChunkMetas)\n\tif chunkMetaLen > 0 {\n\t\tchunkDownloadWorkers := defaultMaxChunkDownloadWorkers\n\t\tchunkDownloadWorkersStr, ok := 
scd.sc.syncParams.get(clientPrefetchThreadsKey)\n\t\tif ok {\n\t\t\tvar err error\n\t\t\tchunkDownloadWorkers, err = strconv.Atoi(*chunkDownloadWorkersStr)\n\t\t\tif err != nil {\n\t\t\t\tlogger.Warnf(\"invalid value for CLIENT_PREFETCH_THREADS: %v\", *chunkDownloadWorkersStr)\n\t\t\t\tchunkDownloadWorkers = defaultMaxChunkDownloadWorkers\n\t\t\t}\n\t\t}\n\t\tif chunkDownloadWorkers <= 0 {\n\t\t\tlogger.Warnf(\"invalid value for CLIENT_PREFETCH_THREADS: %v. It should be a positive integer. Defaulting to %v\", chunkDownloadWorkers, defaultMaxChunkDownloadWorkers)\n\t\t\tchunkDownloadWorkers = defaultMaxChunkDownloadWorkers\n\t\t}\n\n\t\tlogger.WithContext(scd.ctx).Debugf(\"chunkDownloadWorkers: %v\", chunkDownloadWorkers)\n\t\tlogger.WithContext(scd.ctx).Debugf(\"chunks: %v, total bytes: %d\", chunkMetaLen, scd.totalUncompressedSize())\n\t\tscd.ChunksMutex = &sync.Mutex{}\n\t\tscd.DoneDownloadCond = sync.NewCond(scd.ChunksMutex)\n\t\tscd.Chunks = make(map[int][]chunkRowType)\n\t\tscd.ChunksChan = make(chan int, chunkMetaLen)\n\t\tscd.ChunksError = make(chan *chunkError, chunkDownloadWorkers)\n\t\tfor i := range chunkMetaLen {\n\t\t\tchunk := scd.ChunkMetas[i]\n\t\t\tlogger.WithContext(scd.ctx).Debugf(\"Result Format: %v, add chunk to channel ChunksChan: %v, URL: %v, RowCount: %v, UncompressedSize: %v, ChunkResultFormat: %v\",\n\t\t\t\tscd.getQueryResultFormat(), i+1, chunk.URL, chunk.RowCount, chunk.UncompressedSize, scd.QueryResultFormat)\n\t\t\tscd.ChunksChan <- i\n\t\t}\n\t\tfor i := 0; i < intMin(chunkDownloadWorkers, chunkMetaLen); i++ {\n\t\t\tscd.schedule()\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (scd *snowflakeChunkDownloader) schedule() {\n\ttimer := time.Now()\n\tselect {\n\tcase nextIdx := <-scd.ChunksChan:\n\t\tlogger.WithContext(scd.ctx).Infof(\"schedule chunk: %v\", nextIdx+1)\n\t\tgo GoroutineWrapper(\n\t\t\tscd.ctx,\n\t\t\tfunc() {\n\t\t\t\tscd.FuncDownload(scd.ctx, scd, nextIdx)\n\t\t\t},\n\t\t)\n\tdefault:\n\t\t// no more download\n\t\tchunkCount := 
len(scd.ChunkMetas)\n\t\tavgTime := 0.0\n\t\tif chunkCount > 0 {\n\t\t\tavgTime = float64(time.Since(timer).Microseconds()) / 1000.0 / float64(chunkCount)\n\t\t}\n\t\tlogger.WithContext(scd.ctx).Infof(\"Processed %v chunks. It took %v, average chunk processing time: %v ms\", chunkCount, time.Since(timer).String(), avgTime)\n\t}\n}\n\nfunc (scd *snowflakeChunkDownloader) checkErrorRetry() error {\n\tselect {\n\tcase errc := <-scd.ChunksError:\n\t\tif scd.ChunksErrorCounter >= maxChunkDownloaderErrorCounter ||\n\t\t\terrors.Is(errc.Error, context.Canceled) ||\n\t\t\terrors.Is(errc.Error, context.DeadlineExceeded) {\n\n\t\t\tscd.ChunksFinalErrors = append(scd.ChunksFinalErrors, errc)\n\t\t\tlogger.WithContext(scd.ctx).Warnf(\"chunk idx: %v, err: %v. no further retry\", errc.Index, errc.Error)\n\t\t\treturn errc.Error\n\t\t}\n\n\t\t// retry the download by scheduling the failed chunk again.\n\t\tgo GoroutineWrapper(\n\t\t\tscd.ctx,\n\t\t\tfunc() {\n\t\t\t\tscd.FuncDownload(scd.ctx, scd, errc.Index)\n\t\t\t},\n\t\t)\n\t\tscd.ChunksErrorCounter++\n\t\tlogger.WithContext(scd.ctx).Warnf(\"chunk idx: %v, err: %v. 
retrying (%v/%v)...\",\n\t\t\terrc.Index, errc.Error, scd.ChunksErrorCounter, maxChunkDownloaderErrorCounter)\n\t\treturn nil\n\tdefault:\n\t\tlogger.WithContext(scd.ctx).Info(\"no error is detected.\")\n\t\treturn nil\n\t}\n}\n\nfunc (scd *snowflakeChunkDownloader) next() (chunkRowType, error) {\n\tfor {\n\t\tscd.CurrentIndex++\n\t\tif scd.CurrentIndex < scd.CurrentChunkSize {\n\t\t\treturn scd.CurrentChunk[scd.CurrentIndex], nil\n\t\t}\n\t\tscd.CurrentChunkIndex++ // next chunk\n\t\tscd.CurrentIndex = -1   // reset\n\t\tif scd.CurrentChunkIndex >= len(scd.ChunkMetas) {\n\t\t\tbreak\n\t\t}\n\n\t\tscd.ChunksMutex.Lock()\n\t\tif scd.CurrentChunkIndex > 0 {\n\t\t\tscd.Chunks[scd.CurrentChunkIndex-1] = nil // detach the previously used chunk\n\t\t}\n\n\t\tfor scd.Chunks[scd.CurrentChunkIndex] == nil {\n\t\t\tlogger.WithContext(scd.ctx).Debugf(\"waiting for chunk idx: %v/%v\",\n\t\t\t\tscd.CurrentChunkIndex+1, len(scd.ChunkMetas))\n\n\t\t\tif err := scd.checkErrorRetry(); err != nil {\n\t\t\t\tscd.ChunksMutex.Unlock()\n\t\t\t\treturn chunkRowType{}, fmt.Errorf(\"checking for error: %w\", err)\n\t\t\t}\n\n\t\t\t// wait for chunk downloader goroutine to broadcast the event,\n\t\t\t// 1) one chunk download finishes or 2) an error occurs.\n\t\t\tscd.DoneDownloadCond.Wait()\n\t\t}\n\t\tlogger.WithContext(scd.ctx).Debugf(\"ready: chunk %v\", scd.CurrentChunkIndex+1)\n\t\tscd.CurrentChunk = scd.Chunks[scd.CurrentChunkIndex]\n\t\tscd.ChunksMutex.Unlock()\n\t\tscd.CurrentChunkSize = len(scd.CurrentChunk)\n\n\t\t// kick off the next download\n\t\tscd.schedule()\n\t}\n\n\tlogger.WithContext(scd.ctx).Debugf(\"no more data\")\n\tif len(scd.ChunkMetas) > 0 {\n\t\tclose(scd.ChunksError)\n\t\tclose(scd.ChunksChan)\n\t}\n\treturn chunkRowType{}, io.EOF\n}\n\nfunc (scd *snowflakeChunkDownloader) reset() {\n\tscd.Chunks = nil // detach all chunks. 
No way to go backward without reinitializing it.\n}\n\nfunc (scd *snowflakeChunkDownloader) getChunkMetas() []query.ExecResponseChunk {\n\treturn scd.ChunkMetas\n}\n\nfunc (scd *snowflakeChunkDownloader) getQueryResultFormat() resultFormat {\n\treturn resultFormat(scd.QueryResultFormat)\n}\n\nfunc (scd *snowflakeChunkDownloader) setNextChunkDownloader(nextDownloader chunkDownloader) {\n\tscd.NextDownloader = nextDownloader\n}\n\nfunc (scd *snowflakeChunkDownloader) getNextChunkDownloader() chunkDownloader {\n\treturn scd.NextDownloader\n}\n\nfunc (scd *snowflakeChunkDownloader) getRowType() []query.ExecResponseRowType {\n\treturn scd.RowSet.RowType\n}\n\n// rawArrowBatchData holds raw (untransformed) arrow records for a single batch.\ntype rawArrowBatchData struct {\n\trecords  *[]arrow.Record\n\trowCount int\n\tloc      *time.Location\n}\n\nfunc (scd *snowflakeChunkDownloader) getRawArrowBatches() []*rawArrowBatchData {\n\tif scd.firstBatchRaw == nil || scd.firstBatchRaw.records == nil {\n\t\treturn scd.rawBatches\n\t}\n\treturn append([]*rawArrowBatchData{scd.firstBatchRaw}, scd.rawBatches...)\n}\n\n// releaseRawArrowBatches releases any raw arrow records still owned by the\n// chunk downloader. 
Records whose ownership was transferred to BatchRaw\n// (via GetArrowBatches) will already have been nilled out and are skipped.\nfunc (scd *snowflakeChunkDownloader) releaseRawArrowBatches() {\n\treleaseRecords := func(raw *rawArrowBatchData) {\n\t\tif raw == nil || raw.records == nil {\n\t\t\treturn\n\t\t}\n\t\tfor _, rec := range *raw.records {\n\t\t\trec.Release()\n\t\t}\n\t\traw.records = nil\n\t}\n\treleaseRecords(scd.firstBatchRaw)\n\tfor _, raw := range scd.rawBatches {\n\t\treleaseRecords(raw)\n\t}\n}\n\nfunc (scd *snowflakeChunkDownloader) getConfigParams() (*syncParams, error) {\n\tif scd.sc == nil || scd.sc.cfg == nil {\n\t\treturn nil, errNoConnection\n\t}\n\treturn &scd.sc.syncParams, nil\n}\n\nfunc getChunk(\n\tctx context.Context,\n\tsc *snowflakeConn,\n\tfullURL string,\n\theaders map[string]string,\n\ttimeout time.Duration) (\n\t*http.Response, error,\n) {\n\tu, err := url.Parse(fullURL)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to parse URL: %w\", err)\n\t}\n\treturn newRetryHTTP(ctx, sc.rest.Client, http.NewRequest, u, headers, timeout, sc.rest.MaxRetryCount, sc.currentTimeProvider, sc.cfg).execute()\n}\n\nfunc (scd *snowflakeChunkDownloader) startArrowBatches() error {\n\tvar loc *time.Location\n\tparams, err := scd.getConfigParams()\n\tif err != nil {\n\t\treturn fmt.Errorf(\"getting config params: %w\", err)\n\t}\n\tloc = getCurrentLocation(params)\n\tif scd.RowSet.RowSetBase64 != \"\" {\n\t\tfirstArrowChunk, err := buildFirstArrowChunk(scd.RowSet.RowSetBase64, loc, scd.pool)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"building first arrow chunk: %w\", err)\n\t\t}\n\t\tscd.firstBatchRaw = &rawArrowBatchData{\n\t\t\tloc: loc,\n\t\t}\n\t\tif firstArrowChunk.allocator != nil {\n\t\t\tscd.firstBatchRaw.records, err = firstArrowChunk.decodeArrowBatchRaw()\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"decoding arrow batch: %w\", err)\n\t\t\t}\n\t\t\tscd.firstBatchRaw.rowCount = 
countRawArrowBatchRows(scd.firstBatchRaw.records)\n\t\t}\n\t}\n\tchunkMetaLen := len(scd.ChunkMetas)\n\tscd.rawBatches = make([]*rawArrowBatchData, chunkMetaLen)\n\tfor i := range scd.rawBatches {\n\t\tscd.rawBatches[i] = &rawArrowBatchData{\n\t\t\tloc:      loc,\n\t\t\trowCount: scd.ChunkMetas[i].RowCount,\n\t\t}\n\t\tscd.CurrentChunkIndex++\n\t}\n\treturn nil\n}\n\n/* largeResultSetReader is a reader that wraps the large result set with leading and trailing brackets. */\ntype largeResultSetReader struct {\n\tstatus int\n\tbody   io.Reader\n}\n\nfunc (r *largeResultSetReader) Read(p []byte) (n int, err error) {\n\tif r.status == 0 {\n\t\tp[0] = 0x5b // initial 0x5b ([)\n\t\tr.status = 1\n\t\treturn 1, nil\n\t}\n\tif r.status == 1 {\n\t\tn, err = r.body.Read(p)\n\t\tif err == io.EOF {\n\t\t\tr.status = 2\n\t\t\treturn n, nil\n\t\t}\n\t\tif err != nil {\n\t\t\treturn 0, fmt.Errorf(\"reading body: %w\", err)\n\t\t}\n\t\treturn n, nil\n\t}\n\tif r.status == 2 {\n\t\tp[0] = 0x5d // tail 0x5d (])\n\t\tr.status = 3\n\t\treturn 1, nil\n\t}\n\t// ensure no data and EOF\n\treturn 0, io.EOF\n}\n\nfunc downloadChunk(ctx context.Context, scd *snowflakeChunkDownloader, idx int) {\n\tlogger.WithContext(ctx).Infof(\"download start chunk: %v\", idx+1)\n\tdefer scd.DoneDownloadCond.Broadcast()\n\n\ttimer := time.Now()\n\tif err := scd.FuncDownloadHelper(ctx, scd, idx); err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\n\t\t\t\"failed to extract HTTP response body. URL: %v, err: %v\", scd.ChunkMetas[idx].URL, err)\n\t\tscd.ChunksError <- &chunkError{Index: idx, Error: err}\n\t} else if errors.Is(scd.ctx.Err(), context.Canceled) || errors.Is(scd.ctx.Err(), context.DeadlineExceeded) {\n\t\tscd.ChunksError <- &chunkError{Index: idx, Error: scd.ctx.Err()}\n\t}\n\telapsedTime := time.Since(timer).String()\n\tlogger.Debugf(\"Processed %v chunk %v out of %v. It took %v. 
Chunk size: %v, rows: %v\", scd.getQueryResultFormat(), idx+1, len(scd.ChunkMetas), elapsedTime, scd.ChunkMetas[idx].UncompressedSize, scd.ChunkMetas[idx].RowCount)\n}\n\nfunc downloadChunkHelper(ctx context.Context, scd *snowflakeChunkDownloader, idx int) error {\n\theaders := make(map[string]string)\n\tif len(scd.ChunkHeader) > 0 {\n\t\tlogger.WithContext(ctx).Debug(\"chunk header is provided.\")\n\t\tfor k, v := range scd.ChunkHeader {\n\t\t\tlogger.WithContext(ctx).Debugf(\"adding header: %v, value: %v\", k, v)\n\n\t\t\theaders[k] = v\n\t\t}\n\t} else {\n\t\theaders[headerSseCAlgorithm] = headerSseCAes\n\t\theaders[headerSseCKey] = scd.Qrmk\n\t}\n\n\tresp, err := scd.FuncGet(ctx, scd.sc, scd.ChunkMetas[idx].URL, headers, scd.sc.rest.RequestTimeout)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"getting chunk: %w\", err)\n\t}\n\tdefer func() {\n\t\tif err = resp.Body.Close(); err != nil {\n\t\t\tlogger.Warnf(\"downloadChunkHelper: closing response body %v: %v\", scd.ChunkMetas[idx].URL, err)\n\t\t}\n\t}()\n\tlogger.WithContext(ctx).Debugf(\"response returned chunk: %v for URL: %v\", idx+1, scd.ChunkMetas[idx].URL)\n\tif resp.StatusCode != http.StatusOK {\n\t\tb, err := io.ReadAll(resp.Body)\n\t\tif err != nil {\n\t\t\tlogger.WithContext(ctx).Errorf(\"reading response body: %v\", err)\n\t\t}\n\t\tlogger.WithContext(ctx).Debugf(\"HTTP: %v, URL: %v, Header: %v, Body: %v\", resp.StatusCode, scd.ChunkMetas[idx].URL, resp.Header, b)\n\t\treturn &SnowflakeError{\n\t\t\tNumber:      ErrFailedToGetChunk,\n\t\t\tSQLState:    SQLStateConnectionFailure,\n\t\t\tMessage:     errors2.ErrMsgFailedToGetChunk,\n\t\t\tMessageArgs: []any{idx},\n\t\t}\n\t}\n\n\tbufStream := bufio.NewReader(resp.Body)\n\treturn decodeChunk(ctx, scd, idx, bufStream)\n}\n\nfunc decodeChunk(ctx context.Context, scd *snowflakeChunkDownloader, idx int, bufStream *bufio.Reader) error {\n\tgzipMagic, err := bufStream.Peek(2)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"peeking for gzip magic bytes: %w\", 
err)\n\t}\n\tstart := time.Now()\n\tvar source io.Reader\n\tif gzipMagic[0] == 0x1f && gzipMagic[1] == 0x8b {\n\t\t// detects and uncompresses Gzip format data\n\t\tbufStream0, err := gzip.NewReader(bufStream)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"creating gzip reader: %w\", err)\n\t\t}\n\t\tdefer func() {\n\t\t\tif err = bufStream0.Close(); err != nil {\n\t\t\t\tlogger.Warnf(\"decodeChunk: closing gzip reader: %v\", err)\n\t\t\t}\n\t\t}()\n\t\tsource = bufStream0\n\t} else {\n\t\tsource = bufStream\n\t}\n\tst := &largeResultSetReader{\n\t\tstatus: 0,\n\t\tbody:   source,\n\t}\n\tvar respd []chunkRowType\n\tif scd.getQueryResultFormat() != arrowFormat {\n\t\tvar decRespd [][]*string\n\t\tif !customJSONDecoderEnabled {\n\t\t\tdec := json.NewDecoder(st)\n\t\t\tfor {\n\t\t\t\tif err := dec.Decode(&decRespd); err == io.EOF {\n\t\t\t\t\tbreak\n\t\t\t\t} else if err != nil {\n\t\t\t\t\treturn fmt.Errorf(\"decoding json: %w\", err)\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tdecRespd, err = decodeLargeChunk(st, scd.ChunkMetas[idx].RowCount, scd.CellCount)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"decoding large chunk: %w\", err)\n\t\t\t}\n\t\t}\n\t\trespd = make([]chunkRowType, len(decRespd))\n\t\tpopulateJSONRowSet(respd, decRespd)\n\t} else {\n\t\tipcReader, err := ipc.NewReader(source, ipc.WithAllocator(scd.pool))\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"creating ipc reader: %w\", err)\n\t\t}\n\t\tvar loc *time.Location\n\t\tparams, err := scd.getConfigParams()\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"getting config params: %w\", err)\n\t\t}\n\t\tloc = getCurrentLocation(params)\n\t\tarc := arrowResultChunk{\n\t\t\tipcReader,\n\t\t\t0,\n\t\t\tloc,\n\t\t\tscd.pool,\n\t\t}\n\t\tif usesArrowBatches(scd.ctx) {\n\t\t\tvar err error\n\t\t\tscd.rawBatches[idx].records, err = arc.decodeArrowBatchRaw()\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"decoding Arrow batch: %w\", err)\n\t\t\t}\n\t\t\tscd.rawBatches[idx].rowCount = 
countRawArrowBatchRows(scd.rawBatches[idx].records)\n\t\t\treturn nil\n\t\t}\n\t\thighPrec := higherPrecisionEnabled(scd.ctx)\n\t\trespd, err = arc.decodeArrowChunk(ctx, scd.RowSet.RowType, highPrec, params)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"decoding arrow chunk: %w\", err)\n\t\t}\n\t}\n\tlogger.WithContext(scd.ctx).Debugf(\n\t\t\"decoded %d rows w/ %d bytes in %s (chunk %v)\",\n\t\tscd.ChunkMetas[idx].RowCount,\n\t\tscd.ChunkMetas[idx].UncompressedSize,\n\t\ttime.Since(start), idx+1,\n\t)\n\n\tscd.ChunksMutex.Lock()\n\tdefer scd.ChunksMutex.Unlock()\n\tscd.Chunks[idx] = respd\n\treturn nil\n}\n\nfunc populateJSONRowSet(dst []chunkRowType, src [][]*string) {\n\t// populate string rowset from src to dst's chunkRowType struct's RowSet field\n\tfor i, row := range src {\n\t\tdst[i].RowSet = row\n\t}\n}\n\nfunc countRawArrowBatchRows(recs *[]arrow.Record) (cnt int) {\n\tif recs == nil {\n\t\treturn 0\n\t}\n\tfor _, r := range *recs {\n\t\tcnt += int(r.NumRows())\n\t}\n\treturn\n}\n\nfunc getAllocator(ctx context.Context) memory.Allocator {\n\tpool, ok := ctx.Value(arrowAlloc).(memory.Allocator)\n\tif !ok {\n\t\treturn memory.DefaultAllocator\n\t}\n\treturn pool\n}\n\nfunc usesArrowBatches(ctx context.Context) bool {\n\treturn ia.BatchesEnabled(ctx)\n}\n"
  },
  {
    "path": "chunk_downloader_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql/driver\"\n\t\"testing\"\n\n\tia \"github.com/snowflakedb/gosnowflake/v2/internal/arrow\"\n)\n\nfunc TestChunkDownloaderDoesNotStartWhenArrowParsingCausesError(t *testing.T) {\n\ttcs := []string{\n\t\t\"invalid base64\",\n\t\t\"aW52YWxpZCBhcnJvdw==\", // valid base64, but invalid arrow\n\t}\n\tfor _, tc := range tcs {\n\t\tt.Run(tc, func(t *testing.T) {\n\t\t\tscd := snowflakeChunkDownloader{\n\t\t\t\tctx:               context.Background(),\n\t\t\t\tQueryResultFormat: \"arrow\",\n\t\t\t\tRowSet: rowSetType{\n\t\t\t\t\tRowSetBase64: tc,\n\t\t\t\t},\n\t\t\t}\n\n\t\t\terr := scd.start()\n\n\t\t\tassertNotNilF(t, err)\n\t\t})\n\t}\n}\n\nfunc TestWithArrowBatchesWhenQueryReturnsNoRowsWhenUsingNativeGoSQLInterface(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tvar rows driver.Rows\n\t\tvar err error\n\t\terr = dbt.conn.Raw(func(x any) error {\n\t\t\trows, err = x.(driver.QueryerContext).QueryContext(ia.EnableArrowBatches(context.Background()), \"SELECT 1 WHERE 0 = 1\", nil)\n\t\t\treturn err\n\t\t})\n\t\tassertNilF(t, err)\n\t\trows.Close()\n\t})\n}\n\nfunc TestWithArrowBatchesWhenQueryReturnsRowsAndReadingRows(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trows := dbt.mustQueryContext(ia.EnableArrowBatches(context.Background()), \"SELECT 1\")\n\t\tdefer rows.Close()\n\t\tassertFalseF(t, rows.Next())\n\t})\n}\n\nfunc TestWithArrowBatchesWhenQueryReturnsNoRowsAndReadingRows(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trows := dbt.mustQueryContext(ia.EnableArrowBatches(context.Background()), \"SELECT 1 WHERE 1 = 0\")\n\t\tdefer rows.Close()\n\t\tassertFalseF(t, rows.Next())\n\t})\n}\n\nfunc TestWithArrowBatchesWhenQueryReturnsNoRowsAndReadingArrowBatchData(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tvar rows driver.Rows\n\t\tvar err error\n\t\terr = dbt.conn.Raw(func(x any) error {\n\t\t\trows, err = 
x.(driver.QueryerContext).QueryContext(ia.EnableArrowBatches(context.Background()), \"SELECT 1 WHERE 1 = 0\", nil)\n\t\t\treturn err\n\t\t})\n\t\tassertNilF(t, err)\n\t\tdefer rows.Close()\n\t\tprovider := rows.(SnowflakeRows).(ia.BatchDataProvider)\n\t\tinfo, err := provider.GetArrowBatches()\n\t\tassertNilF(t, err)\n\t\tassertEmptyE(t, info.Batches)\n\t})\n}\n"
  },
  {
    "path": "chunk_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"database/sql/driver\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\terrors2 \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"io\"\n\t\"strings\"\n\t\"sync\"\n\t\"testing\"\n\n\t\"github.com/apache/arrow-go/v18/arrow\"\n\t\"github.com/apache/arrow-go/v18/arrow/array\"\n\t\"github.com/apache/arrow-go/v18/arrow/memory\"\n\tia \"github.com/snowflakedb/gosnowflake/v2/internal/arrow\"\n)\n\nfunc TestBadChunkData(t *testing.T) {\n\ttestDecodeErr(t, \"\")\n\ttestDecodeErr(t, \"null\")\n\ttestDecodeErr(t, \"42\")\n\ttestDecodeErr(t, \"\\\"null\\\"\")\n\ttestDecodeErr(t, \"{}\")\n\n\ttestDecodeErr(t, \"[[]\")\n\ttestDecodeErr(t, \"[null]\")\n\ttestDecodeErr(t, `[[hello world]]`)\n\n\ttestDecodeErr(t, `[[\"\"hello world\"\"]]`)\n\ttestDecodeErr(t, `[[\"\\\"hello world\"\"]]`)\n\ttestDecodeErr(t, `[[\"\"hello world\\\"\"]]`)\n\ttestDecodeErr(t, `[[\"hello world`)\n\ttestDecodeErr(t, `[[\"hello world\"`)\n\ttestDecodeErr(t, `[[\"hello world\"]`)\n\n\ttestDecodeErr(t, `[[\"\\uQQQQ\"]]`)\n\n\tfor b := range byte(' ') {\n\t\ttestDecodeErr(t, string([]byte{\n\t\t\t'[', '[', '\"', b, '\"', ']', ']',\n\t\t}))\n\t}\n}\n\nfunc TestValidChunkData(t *testing.T) {\n\ttestDecodeOk(t, \"[]\")\n\ttestDecodeOk(t, \"[  ]\")\n\ttestDecodeOk(t, \"[[]]\")\n\ttestDecodeOk(t, \"[ [  ]   ]\")\n\ttestDecodeOk(t, \"[[],[],[],[]]\")\n\ttestDecodeOk(t, \"[[] , []  , [], []  ]\")\n\n\ttestDecodeOk(t, \"[[null]]\")\n\ttestDecodeOk(t, \"[[\\n\\t\\r null]]\")\n\ttestDecodeOk(t, \"[[null,null]]\")\n\ttestDecodeOk(t, \"[[ null , null ]]\")\n\ttestDecodeOk(t, \"[[null],[null],[null]]\")\n\ttestDecodeOk(t, \"[[null],[ null  ] ,  [null]]\")\n\n\ttestDecodeOk(t, `[[\"\"]]`)\n\ttestDecodeOk(t, `[[\"false\"]]`)\n\ttestDecodeOk(t, `[[\"true\"]]`)\n\ttestDecodeOk(t, `[[\"42\"]]`)\n\n\ttestDecodeOk(t, `[[\"\"]]`)\n\ttestDecodeOk(t, `[[\"hello\"]]`)\n\ttestDecodeOk(t, `[[\"hello world\"]]`)\n\n\ttestDecodeOk(t, `[[\"/ ' 
\\\\ \\b \\t \\n \\f \\r \\\"\"]]`)\n\ttestDecodeOk(t, `[[\"❄\"]]`)\n\ttestDecodeOk(t, `[[\"\\u2744\"]]`)\n\ttestDecodeOk(t, `[[\"\\uFfFc\"]]`)       // consume replacement chars\n\ttestDecodeOk(t, `[[\"\\ufffd\"]]`)       // consume replacement chars\n\ttestDecodeOk(t, `[[\"\\u0000\"]]`)       // yes, this is valid\n\ttestDecodeOk(t, `[[\"\\uD834\\uDD1E\"]]`) // surrogate pair\n\ttestDecodeOk(t, `[[\"\\uD834\\u0000\"]]`) // corrupt surrogate pair\n\n\ttestDecodeOk(t, `[[\"$\"]]`)      // \"$\"\n\ttestDecodeOk(t, `[[\"\\u0024\"]]`) // \"$\"\n\n\ttestDecodeOk(t, `[[\"\\uC2A2\"]]`) // \"¢\"\n\ttestDecodeOk(t, `[[\"¢\"]]`)      // \"¢\"\n\n\ttestDecodeOk(t, `[[\"\\u00E2\\u82AC\"]]`) // \"€\"\n\ttestDecodeOk(t, `[[\"€\"]]`)            // \"€\"\n\n\ttestDecodeOk(t, `[[\"\\uF090\\u8D88\"]]`) // \"𐍈\"\n\ttestDecodeOk(t, `[[\"𐍈\"]]`)            // \"𐍈\"\n}\n\nfunc TestSmallBufferChunkData(t *testing.T) {\n\tr := strings.NewReader(`[\n\t  [null,\"hello world\"],\n\t  [\"foo bar\", null],\n\t  [null, null] ,\n\t  [\"foo bar\",   \"hello world\" ]\n\t]`)\n\n\tlcd := largeChunkDecoder{\n\t\tr, 0, 0,\n\t\t0, 0,\n\t\tmake([]byte, 1),\n\t\tbytes.NewBuffer(make([]byte, defaultStringBufferSize)),\n\t\tnil,\n\t}\n\n\tif _, err := lcd.decode(); err != nil {\n\t\tt.Fatalf(\"failed with small buffer: %s\", err)\n\t}\n}\n\nfunc TestEnsureBytes(t *testing.T) {\n\t// the content here doesn't matter\n\tr := strings.NewReader(\"0123456789\")\n\n\tlcd := largeChunkDecoder{\n\t\tr, 0, 0,\n\t\t3, 8189,\n\t\tmake([]byte, 8192),\n\t\tbytes.NewBuffer(make([]byte, defaultStringBufferSize)),\n\t\tnil,\n\t}\n\n\tlcd.ensureBytes(4)\n\n\t// we expect the new remainder to be 3 + 10 (length of r)\n\tif lcd.rem != 13 {\n\t\tt.Fatalf(\"buffer was not refilled correctly\")\n\t}\n}\n\nfunc testDecodeOk(t *testing.T, s string) {\n\tvar rows [][]*string\n\tif err := json.Unmarshal([]byte(s), &rows); err != nil {\n\t\tt.Fatalf(\"test case is not valid json / [][]*string: %s\", s)\n\t}\n\n\t// NOTE we parse and 
stringify the expected result to\n\t// remove superficial differences, like whitespace\n\texpect, err := json.Marshal(rows)\n\tif err != nil {\n\t\tt.Fatalf(\"unreachable: %s\", err)\n\t}\n\n\trows, err = decodeLargeChunk(strings.NewReader(s), 0, 0)\n\tif err != nil {\n\t\tt.Fatalf(\"expected decode to succeed: %s\", err)\n\t}\n\n\tactual, err := json.Marshal(rows)\n\tif err != nil {\n\t\tt.Fatalf(\"json marshal failed: %s\", err)\n\t}\n\tif string(actual) != string(expect) {\n\t\tt.Fatalf(`\n\t\tresult did not match expected result\n\t\t  expect=%s\n\t\t   bytes=(%v)\n\n\t\t  actual=%s\n\t\t   bytes=(%v)`,\n\t\t\tstring(expect), expect,\n\t\t\tstring(actual), actual,\n\t\t)\n\t}\n}\n\nfunc testDecodeErr(t *testing.T, s string) {\n\tif _, err := decodeLargeChunk(strings.NewReader(s), 0, 0); err == nil {\n\t\tt.Fatalf(\"expected decode to fail for input: %s\", s)\n\t}\n}\n\nfunc TestEnableArrowBatches(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tctx := ia.EnableArrowBatches(sct.sc.ctx)\n\t\tnumrows := 3000 // approximately 6 ArrowBatch objects\n\n\t\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\t\tdefer pool.AssertSize(t, 0)\n\t\tctx = WithArrowAllocator(ctx, pool)\n\n\t\tquery := fmt.Sprintf(selectRandomGenerator, numrows)\n\t\trows := sct.mustQueryContext(ctx, query, []driver.NamedValue{})\n\t\tdefer rows.Close()\n\n\t\t// getting result batches via raw bridge\n\t\tinfo, err := rows.(*snowflakeRows).GetArrowBatches()\n\t\tif err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t\tbatches := info.Batches\n\t\tnumBatches := len(batches)\n\t\tmaxWorkers := 10 // enough for 3000 rows\n\t\ttype count struct {\n\t\t\tm       sync.Mutex\n\t\t\trecVal  int\n\t\t\tmetaVal int\n\t\t}\n\t\tcnt := count{recVal: 0}\n\t\tvar wg sync.WaitGroup\n\t\tchunks := make(chan int, numBatches)\n\n\t\tfor w := 1; w <= maxWorkers; w++ {\n\t\t\twg.Add(1)\n\t\t\tgo func(wg *sync.WaitGroup, chunks <-chan int) {\n\t\t\t\tdefer wg.Done()\n\n\t\t\t\tfor i := range 
chunks {\n\t\t\t\t\tbatch := batches[i]\n\t\t\t\t\tvar recs *[]arrow.Record\n\t\t\t\t\tif batch.Records != nil {\n\t\t\t\t\t\trecs = batch.Records\n\t\t\t\t\t} else if batch.Download != nil {\n\t\t\t\t\t\tvar downloadErr error\n\t\t\t\t\t\trecs, _, downloadErr = batch.Download(context.Background())\n\t\t\t\t\t\tif downloadErr != nil {\n\t\t\t\t\t\t\tt.Error(downloadErr)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif recs != nil {\n\t\t\t\t\t\tfor _, r := range *recs {\n\t\t\t\t\t\t\tcnt.m.Lock()\n\t\t\t\t\t\t\tcnt.recVal += int(r.NumRows())\n\t\t\t\t\t\t\tcnt.m.Unlock()\n\t\t\t\t\t\t\tr.Release()\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tcnt.m.Lock()\n\t\t\t\t\tcnt.metaVal += batch.RowCount\n\t\t\t\t\tcnt.m.Unlock()\n\t\t\t\t}\n\t\t\t}(&wg, chunks)\n\t\t}\n\t\tfor j := range numBatches {\n\t\t\tchunks <- j\n\t\t}\n\t\tclose(chunks)\n\n\t\twg.Wait()\n\t\tif cnt.recVal != numrows {\n\t\t\tt.Errorf(\"number of rows from records didn't match. expected: %v, got: %v\", numrows, cnt.recVal)\n\t\t}\n\t\tif cnt.metaVal != numrows {\n\t\t\tt.Errorf(\"number of rows from arrow batch metadata didn't match. 
expected: %v, got: %v\", numrows, cnt.metaVal)\n\t\t}\n\t})\n}\n\nfunc TestWithArrowBatchesAsync(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tctx := WithAsyncMode(sct.sc.ctx)\n\t\tctx = ia.EnableArrowBatches(ctx)\n\t\tnumrows := 50000\n\n\t\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\t\tdefer pool.AssertSize(t, 0)\n\t\tctx = WithArrowAllocator(ctx, pool)\n\n\t\tquery := fmt.Sprintf(selectRandomGenerator, numrows)\n\t\trows := sct.mustQueryContext(ctx, query, []driver.NamedValue{})\n\t\tdefer rows.Close()\n\n\t\tinfo, err := rows.(*snowflakeRows).GetArrowBatches()\n\t\tif err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t\tbatches := info.Batches\n\t\tnumBatches := len(batches)\n\t\tmaxWorkers := 10\n\t\ttype count struct {\n\t\t\tm       sync.Mutex\n\t\t\trecVal  int\n\t\t\tmetaVal int\n\t\t}\n\t\tcnt := count{recVal: 0}\n\t\tvar wg sync.WaitGroup\n\t\tchunks := make(chan int, numBatches)\n\n\t\tfor w := 1; w <= maxWorkers; w++ {\n\t\t\twg.Add(1)\n\t\t\tgo func(wg *sync.WaitGroup, chunks <-chan int) {\n\t\t\t\tdefer wg.Done()\n\n\t\t\t\tfor i := range chunks {\n\t\t\t\t\tbatch := batches[i]\n\t\t\t\t\tvar recs *[]arrow.Record\n\t\t\t\t\tif batch.Records != nil {\n\t\t\t\t\t\trecs = batch.Records\n\t\t\t\t\t} else if batch.Download != nil {\n\t\t\t\t\t\tvar downloadErr error\n\t\t\t\t\t\trecs, _, downloadErr = batch.Download(context.Background())\n\t\t\t\t\t\tif downloadErr != nil {\n\t\t\t\t\t\t\tt.Error(downloadErr)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif recs != nil {\n\t\t\t\t\t\tfor _, r := range *recs {\n\t\t\t\t\t\t\tcnt.m.Lock()\n\t\t\t\t\t\t\tcnt.recVal += int(r.NumRows())\n\t\t\t\t\t\t\tcnt.m.Unlock()\n\t\t\t\t\t\t\tr.Release()\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tcnt.m.Lock()\n\t\t\t\t\tcnt.metaVal += batch.RowCount\n\t\t\t\t\tcnt.m.Unlock()\n\t\t\t\t}\n\t\t\t}(&wg, chunks)\n\t\t}\n\t\tfor j := range numBatches {\n\t\t\tchunks <- j\n\t\t}\n\t\tclose(chunks)\n\n\t\twg.Wait()\n\t\tif cnt.recVal != numrows 
{\n\t\t\tt.Errorf(\"number of rows from records didn't match. expected: %v, got: %v\", numrows, cnt.recVal)\n\t\t}\n\t\tif cnt.metaVal != numrows {\n\t\t\tt.Errorf(\"number of rows from arrow batch metadata didn't match. expected: %v, got: %v\", numrows, cnt.metaVal)\n\t\t}\n\t})\n}\n\nfunc TestWithArrowBatchesButReturningJSON(t *testing.T) {\n\ttestWithArrowBatchesButReturningJSON(t, false)\n}\n\nfunc TestWithArrowBatchesButReturningJSONAsync(t *testing.T) {\n\ttestWithArrowBatchesButReturningJSON(t, true)\n}\n\nfunc testWithArrowBatchesButReturningJSON(t *testing.T, async bool) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\trequestID := NewUUID()\n\t\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\t\tdefer pool.AssertSize(t, 0)\n\t\tctx := WithArrowAllocator(context.Background(), pool)\n\t\tctx = ia.EnableArrowBatches(ctx)\n\t\tctx = WithRequestID(ctx, requestID)\n\t\tif async {\n\t\t\tctx = WithAsyncMode(ctx)\n\t\t}\n\n\t\tsct.mustExec(forceJSON, nil)\n\t\trows := sct.mustQueryContext(ctx, \"SELECT 'hello'\", nil)\n\t\tdefer rows.Close()\n\t\t_, err := rows.(ia.BatchDataProvider).GetArrowBatches()\n\t\tassertNotNilF(t, err)\n\t\tvar se *SnowflakeError\n\t\tassertTrueE(t, errors.As(err, &se))\n\t\tassertEqualE(t, se.Message, errors2.ErrMsgNonArrowResponseInArrowBatches)\n\t\tassertEqualE(t, se.Number, ErrNonArrowResponseInArrowBatches)\n\n\t\tv := make([]driver.Value, 1)\n\t\tassertNilE(t, rows.Next(v))\n\t\tassertEqualE(t, v[0], \"hello\")\n\t})\n}\n\nfunc TestWithArrowBatchesMultistatement(t *testing.T) {\n\ttestWithArrowBatchesMultistatement(t, false)\n}\n\nfunc TestWithArrowBatchesMultistatementAsync(t *testing.T) {\n\ttestWithArrowBatchesMultistatement(t, true)\n}\n\nfunc testWithArrowBatchesMultistatement(t *testing.T, async bool) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsct.mustExec(\"ALTER SESSION SET ENABLE_FIX_1758055_ADD_ARROW_SUPPORT_FOR_MULTI_STMTS = true\", nil)\n\t\tpool := 
memory.NewCheckedAllocator(memory.DefaultAllocator)\n\t\tdefer pool.AssertSize(t, 0)\n\t\tctx := WithMultiStatement(ia.EnableArrowBatches(WithArrowAllocator(context.Background(), pool)), 2)\n\t\tif async {\n\t\t\tctx = WithAsyncMode(ctx)\n\t\t}\n\t\tdriverRows := sct.mustQueryContext(ctx, \"SELECT 'abc' UNION SELECT 'def' ORDER BY 1; SELECT 'ghi' UNION SELECT 'jkl' ORDER BY 1\", nil)\n\t\tdefer driverRows.Close()\n\t\tsfRows := driverRows.(SnowflakeRows)\n\t\texpectedResults := [][]string{{\"abc\", \"def\"}, {\"ghi\", \"jkl\"}}\n\t\tresultSetIdx := 0\n\t\tfor hasNextResultSet := true; hasNextResultSet; hasNextResultSet = sfRows.NextResultSet() != io.EOF {\n\t\t\tinfo, err := driverRows.(ia.BatchDataProvider).GetArrowBatches()\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualF(t, len(info.Batches), 1)\n\t\t\tbatch := info.Batches[0]\n\t\t\tassertNotNilF(t, batch.Records)\n\t\t\trecords := *batch.Records\n\t\t\tassertEqualF(t, len(records), 1)\n\t\t\trecord := records[0]\n\t\t\tdefer record.Release()\n\t\t\tassertEqualF(t, record.Column(0).(*array.String).Value(0), expectedResults[resultSetIdx][0])\n\t\t\tassertEqualF(t, record.Column(0).(*array.String).Value(1), expectedResults[resultSetIdx][1])\n\t\t\tresultSetIdx++\n\t\t}\n\t\tassertEqualF(t, resultSetIdx, len(expectedResults))\n\t\terr := sfRows.NextResultSet()\n\t\tassertErrIsE(t, err, io.EOF)\n\t})\n}\n\nfunc TestWithArrowBatchesMultistatementWithJSONResponse(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsct.mustExec(forceJSON, nil)\n\t\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\t\tdefer pool.AssertSize(t, 0)\n\t\tctx := WithMultiStatement(ia.EnableArrowBatches(WithArrowAllocator(context.Background(), pool)), 2)\n\t\tdriverRows := sct.mustQueryContext(ctx, \"SELECT 'abc' UNION SELECT 'def' ORDER BY 1; SELECT 'ghi' UNION SELECT 'jkl' ORDER BY 1\", nil)\n\t\tdefer driverRows.Close()\n\t\tsfRows := driverRows.(SnowflakeRows)\n\t\tresultSetIdx := 0\n\t\tfor hasNextResultSet := 
true; hasNextResultSet; hasNextResultSet = sfRows.NextResultSet() != io.EOF {\n\t\t\t_, err := driverRows.(ia.BatchDataProvider).GetArrowBatches()\n\t\t\tassertNotNilF(t, err)\n\t\t\tvar se *SnowflakeError\n\t\t\tassertTrueF(t, errors.As(err, &se))\n\t\t\tassertEqualE(t, se.Number, ErrNonArrowResponseInArrowBatches)\n\t\t\tassertEqualE(t, se.Message, errors2.ErrMsgNonArrowResponseInArrowBatches)\n\t\t\tresultSetIdx++\n\t\t}\n\t\tassertEqualF(t, resultSetIdx, 2)\n\t\terr := sfRows.NextResultSet()\n\t\tassertErrIsE(t, err, io.EOF)\n\t})\n}\n\nfunc TestWithArrowBatchesMultistatementWithLargeResultSet(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsct.mustExec(\"ALTER SESSION SET ENABLE_FIX_1758055_ADD_ARROW_SUPPORT_FOR_MULTI_STMTS = true\", nil)\n\t\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\t\tdefer pool.AssertSize(t, 0)\n\t\tctx := WithMultiStatement(ia.EnableArrowBatches(WithArrowAllocator(context.Background(), pool)), 2)\n\t\tdriverRows := sct.mustQueryContext(ctx, \"SELECT 'abc' FROM TABLE(GENERATOR(ROWCOUNT => 1000000)); SELECT 'abc' FROM TABLE(GENERATOR(ROWCOUNT => 1000000))\", nil)\n\t\tdefer driverRows.Close()\n\t\tsfRows := driverRows.(SnowflakeRows)\n\t\trowCount := 0\n\t\tfor hasNextResultSet := true; hasNextResultSet; hasNextResultSet = sfRows.NextResultSet() != io.EOF {\n\t\t\tinfo, err := driverRows.(ia.BatchDataProvider).GetArrowBatches()\n\t\t\tassertNilF(t, err)\n\t\t\tassertTrueF(t, len(info.Batches) > 1)\n\t\t\tfor _, batch := range info.Batches {\n\t\t\t\tvar recs *[]arrow.Record\n\t\t\t\tif batch.Records != nil {\n\t\t\t\t\trecs = batch.Records\n\t\t\t\t} else if batch.Download != nil {\n\t\t\t\t\trecs, _, err = batch.Download(context.Background())\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t}\n\t\t\t\tif recs != nil {\n\t\t\t\t\tfor _, record := range *recs {\n\t\t\t\t\t\tdefer record.Release()\n\t\t\t\t\t\tfor i := 0; i < int(record.NumRows()); i++ {\n\t\t\t\t\t\t\tassertEqualF(t, 
record.Column(0).(*array.String).Value(i), \"abc\")\n\t\t\t\t\t\t\trowCount++\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\terr := sfRows.NextResultSet()\n\t\tassertErrIsE(t, err, io.EOF)\n\t})\n}\n\nfunc TestQueryArrowStream(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tnumrows := 50000\n\n\t\tquery := fmt.Sprintf(selectRandomGenerator, numrows)\n\t\tloader, err := sct.sc.QueryArrowStream(sct.sc.ctx, query)\n\t\tassertNilF(t, err)\n\n\t\tif loader.TotalRows() != int64(numrows) {\n\t\t\tt.Errorf(\"total numrows did not match expected, wanted %v, got %v\", numrows, loader.TotalRows())\n\t\t}\n\n\t\tbatches, err := loader.GetBatches()\n\t\tassertNilF(t, err)\n\t\tassertTrueF(t, len(batches) > 0, \"should have at least one batch\")\n\t\tassertTrueF(t, len(loader.RowTypes()) > 0, \"should have row types\")\n\t})\n}\n\nfunc TestQueryArrowStreamDescribeOnly(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tnumrows := 50000\n\n\t\tquery := fmt.Sprintf(selectRandomGenerator, numrows)\n\t\tloader, err := sct.sc.QueryArrowStream(WithDescribeOnly(sct.sc.ctx), query)\n\t\tassertNilF(t, err, \"failed to run query\")\n\n\t\tif loader.TotalRows() != 0 {\n\t\t\tt.Errorf(\"total numrows did not match expected, wanted 0, got %v\", loader.TotalRows())\n\t\t}\n\n\t\tif len(loader.RowTypes()) != 2 {\n\t\t\tt.Errorf(\"rowTypes length did not match expected, wanted 2, got %v\", len(loader.RowTypes()))\n\t\t}\n\t})\n}\n\nfunc TestRetainChunkWOHighPrecision(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tvar rows driver.Rows\n\t\tvar err error\n\n\t\terr = dbt.conn.Raw(func(connection any) error {\n\t\t\trows, err = connection.(driver.QueryerContext).QueryContext(ia.EnableArrowBatches(context.Background()), \"select 0\", nil)\n\t\t\treturn err\n\t\t})\n\t\tassertNilF(t, err, \"error running select 0 query\")\n\n\t\tinfo, err := rows.(ia.BatchDataProvider).GetArrowBatches()\n\t\tassertNilF(t, err, \"error getting arrow batch 
data\")\n\t\tassertEqualF(t, len(info.Batches), 1, \"should have one batch\")\n\n\t\trecords := info.Batches[0].Records\n\t\tassertNotNilF(t, records, \"records should not be nil\")\n\n\t\tnumRecords := len(*records)\n\t\tassertEqualF(t, numRecords, 1, \"should have exactly one record\")\n\n\t\trecord := (*records)[0]\n\t\tassertEqualF(t, len(record.Columns()), 1, \"should have exactly one column\")\n\n\t\tcolumn := record.Column(0).(*array.Int8)\n\t\trow := column.Len()\n\t\tassertEqualF(t, row, 1, \"should have exactly one row\")\n\n\t\tint8Val := column.Value(0)\n\t\tassertEqualF(t, int8Val, int8(0), \"value of cell should be 0\")\n\t})\n}\n\nfunc TestQueryArrowStreamMultiStatement(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsct.mustExec(\"ALTER SESSION SET ENABLE_FIX_1758055_ADD_ARROW_SUPPORT_FOR_MULTI_STMTS = true\", nil)\n\t\tctx := WithMultiStatement(ia.EnableArrowBatches(sct.sc.ctx), 2)\n\t\tloader, err := sct.sc.QueryArrowStream(ctx, \"SELECT 'abc'; SELECT 'abc' UNION SELECT 'def' ORDER BY 1\")\n\t\tassertNilF(t, err)\n\t\tassertTrueF(t, len(loader.RowTypes()) > 0, \"should have row types\")\n\t\tassertTrueF(t, loader.TotalRows() > 0, \"should have total rows\")\n\t})\n}\n\nfunc TestQueryArrowStreamMultiStatementForJSONData(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tctx := WithMultiStatement(ia.EnableArrowBatches(sct.sc.ctx), 2)\n\t\tloader, err := sct.sc.QueryArrowStream(ctx, \"SELECT 'abc'; SELECT 'abc'\")\n\t\tassertNilF(t, err)\n\t\tassertTrueF(t, loader.TotalRows() > 0, \"should return data\")\n\t})\n}\n"
  },
  {
    "path": "ci/_init.sh",
    "content": "#!/usr/bin/env -e\n\nexport PLATFORM=$(echo $(uname) | tr '[:upper:]' '[:lower:]')\n# Use the internal Docker Registry\nexport INTERNAL_REPO=artifactory.ci1.us-west-2.aws-dev.app.snowflake.com/internal-production-docker-snowflake-virtual\nexport DOCKER_REGISTRY_NAME=$INTERNAL_REPO/docker\nexport WORKSPACE=${WORKSPACE:-/tmp}\n\nexport DRIVER_NAME=go\n\nTEST_IMAGE_VERSION=1\ndeclare -A TEST_IMAGE_NAMES=(\n    [$DRIVER_NAME-chainguard-go1_24]=$DOCKER_REGISTRY_NAME/client-$DRIVER_NAME-chainguard-go1.24-test:$TEST_IMAGE_VERSION\n)\nexport TEST_IMAGE_NAMES\n"
  },
  {
    "path": "ci/build.bat",
    "content": "REM Format and Lint Golang driver\n\n@echo off\nsetlocal EnableDelayedExpansion\n\necho [INFO] Download tools\nwhere golint\nIF !ERRORLEVEL! NEQ 0 go install golang.org/x/lint/golint@latest\nwhere make2help\nIF !ERRORLEVEL! NEQ 0 go install github.com/Songmu/make2help/cmd/make2help@latest\n\necho [INFO] Go mod\ngo mod tidy\ngo mod vendor\n\nFOR /F \"tokens=1\" %%a IN ('go list ./...') DO (\n    echo [INFO] Verifying %%a\n    go vet %%a\n    golint -set_exit_status %%a\n)\n\n"
  },
  {
    "path": "ci/build.sh",
    "content": "#!/bin/bash\n#\n# Format, lint and WhiteSource scan Golang driver\n#\nset -e\nset -o pipefail\n\nCI_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\ncd $CI_DIR/..\nmake fmt lint\n"
  },
  {
    "path": "ci/container/test_authentication.sh",
    "content": "#!/bin/bash -e\n\nset -o pipefail\n\nexport AUTH_PARAMETER_FILE=./.github/workflows/parameters_aws_auth_tests.json\neval $(jq -r '.authtestparams | to_entries | map(\"export \\(.key)=\\(.value|tostring)\")|.[]' $AUTH_PARAMETER_FILE)\n\nexport SNOWFLAKE_AUTH_TEST_PRIVATE_KEY_PATH=./.github/workflows/rsa_keys/rsa_key.p8\nexport SNOWFLAKE_AUTH_TEST_INVALID_PRIVATE_KEY_PATH=./.github/workflows/rsa_keys/rsa_key_invalid.p8\nexport RUN_AUTH_TESTS=true\n\nexport AUTHENTICATION_TESTS_ENV=\"docker\"\n\nexport RUN_AUTH_TESTS=true\nexport AUTHENTICATION_TESTS_ENV=\"docker\"\n\ngo test -v -run TestExternalBrowser*\ngo test -v -run TestClientStoreCredentials\ngo test -v -run TestOkta*\ngo test -v -run TestOauth*\ngo test -v -run TestKeypair*\ngo test -v -run TestEndToEndPat*\ngo test -v -run TestMfaSuccessful"
  },
  {
    "path": "ci/container/test_component.sh",
    "content": "#!/bin/bash\n\nset -e\nset -o pipefail\n\nCI_SCRIPTS_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\nTOPDIR=$(cd $CI_SCRIPTS_DIR/../.. && pwd)\n\ncd $TOPDIR\ncp parameters.json.local parameters.json\nmake test\n"
  },
  {
    "path": "ci/docker/rockylinux9/Dockerfile",
    "content": "ARG BASE_IMAGE=rockylinux:9\nFROM $BASE_IMAGE\n\nARG TARGETARCH\n\n# Update all packages first (including glibc) to get latest versions\nRUN dnf update -y && dnf clean all\n\n# Install glibc-devel - it should match the updated glibc version\n# If there's still a mismatch, try installing an older compatible version\nRUN dnf install -y --allowerasing --nobest glibc-devel || \\\n    (echo \"Direct install failed, checking available versions...\" && \\\n     dnf list available glibc-devel | head -5 && \\\n     CURRENT_GLIBC=$(rpm -q glibc --qf '%{VERSION}-%{RELEASE}\\n') && \\\n     echo \"Current glibc: $CURRENT_GLIBC\" && \\\n     dnf install -y --allowerasing --nobest glibc-devel || true) && \\\n    dnf clean all\n\n# Install minimal required packages + gcc for CGO (race detection)\nRUN dnf install -y --allowerasing --nobest \\\n    gcc \\\n    java-11-openjdk \\\n    python3 \\\n    curl \\\n    wget \\\n    jq \\\n    tar \\\n    gzip \\\n    procps-ng \\\n    && dnf clean all\n\n# Set Java 11 as the default using environment variables\nENV JAVA_HOME=/usr/lib/jvm/java-11-openjdk\nENV PATH=\"${JAVA_HOME}/bin:${PATH}\"\n\n# Accept full Go version as build argument (e.g., GO_VERSION=1.24.2)\nARG GO_VERSION\n\n# Download and install Go version\nRUN GOARCH=${TARGETARCH} && \\\n    GO_VERSION_SHORT=$(echo ${GO_VERSION} | cut -d. 
-f1,2) && \\\n    echo \"Installing Go ${GO_VERSION} for ${GOARCH}...\" && \\\n    wget -q https://golang.org/dl/go${GO_VERSION}.linux-${GOARCH}.tar.gz -O /tmp/go.tar.gz && \\\n    mkdir -p /usr/local/go${GO_VERSION_SHORT} && \\\n    tar -C /usr/local/go${GO_VERSION_SHORT} --strip-components=1 -xzf /tmp/go.tar.gz && \\\n    rm /tmp/go.tar.gz && \\\n    # Create wrapper script for short version (e.g., go1.24) \\\n    echo \"#!/bin/bash\" > /usr/local/bin/go${GO_VERSION_SHORT} && \\\n    echo \"export GOROOT=/usr/local/go${GO_VERSION_SHORT}\" >> /usr/local/bin/go${GO_VERSION_SHORT} && \\\n    echo 'exec $GOROOT/bin/go \"$@\"' >> /usr/local/bin/go${GO_VERSION_SHORT} && \\\n    chmod +x /usr/local/bin/go${GO_VERSION_SHORT}\n\n# Ensure /usr/local/bin is in PATH (should be by default, but making sure)\nENV PATH=\"/usr/local/bin:${PATH}\"\n\n# Accept user ID as build argument to match host permissions\nARG USER_ID=1001\nARG GROUP_ID=1001\n\n# Create user for proper permission testing\n# Always create \"user\" user - use requested IDs if available, otherwise auto-assign\nRUN if ! getent group user >/dev/null 2>&1; then \\\n        (groupadd -g ${GROUP_ID} user 2>/dev/null || groupadd user); \\\n    fi && \\\n    if ! getent passwd user >/dev/null 2>&1; then \\\n        (useradd -u ${USER_ID} -g user -m -s /bin/bash user 2>/dev/null || useradd -g user -m -s /bin/bash user); \\\n    fi && \\\n    mkdir -p /home/user/go && \\\n    chown -R user:user /home/user\n\nUSER user\nWORKDIR /home/user/gosnowflake\n"
  },
  {
    "path": "ci/gofix.sh",
    "content": "#!/usr/bin/env bash\nset -euo pipefail\n\nCI_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\ncd \"$CI_DIR/..\"\n\nGOOS_LIST=(linux darwin windows)\nGOARCH_LIST=(amd64 arm64)\n\n# Standard GOOS/GOARCH values — handled by the matrix, not via -tags.\n# Version tags (go1.X) and toolchain tags (gc, gccgo, ignore) are also excluded.\nSTANDARD_TAGS=(\n  linux darwin windows freebsd openbsd netbsd plan9 solaris aix js wasip1 android ios\n  amd64 arm64 386 arm mips mips64 mipsle mips64le ppc64 ppc64le riscv64 s390x wasm\n  cgo gc gccgo ignore\n)\n\nensure_clean_worktree() {\n  if ! git diff --quiet --ignore-submodules -- || \\\n     ! git diff --cached --quiet --ignore-submodules --; then\n    echo \"ERROR: working tree is dirty before go fix runs.\"\n    echo \"Run this check from a clean checkout so failures only reflect go fix changes.\"\n    exit 1\n  fi\n}\n\n# Automatically discover custom build tags from //go:build lines.\n# Strips boolean operators and negations, deduplicates, then removes\n# standard tags and go1.X version constraints.\ndiscover_custom_tags() {\n  while IFS= read -r tag; do\n    # Skip go1.X version tags\n    [[ \"$tag\" =~ ^go[0-9] ]] && continue\n    # Skip standard GOOS/GOARCH/toolchain tags\n    skip=false\n    for std in \"${STANDARD_TAGS[@]}\"; do\n      [[ \"$tag\" == \"$std\" ]] && skip=true && break\n    done\n    $skip || echo \"$tag\"\n  done < <(\n    git grep -h '//go:build' -- '*.go' \\\n      | sed 's|//go:build||g' \\\n      | tr '!&|() \\t' '\\n' \\\n      | grep -v '^$' \\\n      | sort -u\n  )\n}\n\nensure_clean_worktree\n\nCUSTOM_TAGS=()\nwhile IFS= read -r tag; do\n  CUSTOM_TAGS+=(\"$tag\")\ndone < <(discover_custom_tags)\nTAGS_LIST=(\"\" \"${CUSTOM_TAGS[@]}\")\n\nTOTAL=$(( ${#GOOS_LIST[@]} * ${#GOARCH_LIST[@]} * ${#TAGS_LIST[@]} + ${#TAGS_LIST[@]} ))\nRUN=0\n\necho \"Discovered custom build tags: ${CUSTOM_TAGS[*]:-none}\"\necho \"Running go fix across all OS/arch/tag combinations 
(CGO_ENABLED=0)...\"\n\nfor os in \"${GOOS_LIST[@]}\"; do\n  for arch in \"${GOARCH_LIST[@]}\"; do\n    for tags in \"${TAGS_LIST[@]}\"; do\n      RUN=$(( RUN + 1 ))\n      tag_flag=\"\"\n      tag_label=\"(no tags)\"\n      if [[ -n \"$tags\" ]]; then\n        tag_flag=\"-tags=$tags\"\n        tag_label=\"tags=$tags\"\n      fi\n      echo \"  [$RUN/$TOTAL] CGO_ENABLED=0 GOOS=$os GOARCH=$arch $tag_label\"\n      # \"no cgo types\" is a harmless warning from go/packages when it cannot\n      # invoke the cgo preprocessor (cross-compilation, no C toolchain, etc.).\n      # No go fix fixer depends on cgo type information, so suppress the noise.\n      CGO_ENABLED=0 GOOS=\"$os\" GOARCH=\"$arch\" go fix $tag_flag ./... \\\n        2> >(grep -v \"^go fix: warning: no cgo types:\" >&2)\n    done\n  done\ndone\n\n# Run cgo-enabled passes on the native target so that files with\n# `import \"C\"` (excluded when CGO_ENABLED=0) are also checked.\n# Cross-GOOS/GOARCH is not needed here because cgo requires a\n# C cross-compiler that is not generally available.\necho \"Running go fix with CGO_ENABLED=1 (native target)...\"\nfor tags in \"${TAGS_LIST[@]}\"; do\n  RUN=$(( RUN + 1 ))\n  tag_flag=\"\"\n  tag_label=\"(no tags)\"\n  if [[ -n \"$tags\" ]]; then\n    tag_flag=\"-tags=$tags\"\n    tag_label=\"tags=$tags\"\n  fi\n  echo \"  [$RUN/$TOTAL] CGO_ENABLED=1 (native) $tag_label\"\n  CGO_ENABLED=1 go fix $tag_flag ./...\ndone\n\necho \"Checking for uncommitted changes...\"\nif ! git diff --exit-code; then\n  echo \"\"\n  echo \"ERROR: go fix produced changes.\"\n  echo \"Run 'ci/gofix.sh' locally and commit the result.\"\n  exit 1\nfi\n\necho \"All files are up to date.\"\n"
  },
  {
    "path": "ci/image/Dockerfile",
    "content": "FROM artifactory.int.snowflakecomputing.com/development-chainguard-virtual/snowflake.com/go:1.24.0-dev\n\nUSER root\n\nRUN apk update && apk add python3 python3-dev jq aws-cli gosu py3-pip\nRUN python3 -m ensurepip\nRUN pip install -U snowflake-connector-python\n\n# workspace\nRUN mkdir -p /home/user && \\\n    chmod 777 /home/user\nWORKDIR /mnt/host\n\n# entry point\nCOPY scripts/entrypoint.sh /usr/local/bin/entrypoint.sh\nRUN chmod +x /usr/local/bin/entrypoint.sh\nENTRYPOINT [\"/usr/local/bin/entrypoint.sh\"]\n"
  },
  {
    "path": "ci/image/build.sh",
    "content": "#!/usr/bin/env bash -e\n#\n# Build Docker images\n#\nset -o pipefail\nTHIS_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\nsource $THIS_DIR/../_init.sh\n\nfor name in \"${!TEST_IMAGE_NAMES[@]}\"; do\n    docker build \\\n        --platform linux/amd64 \\\n        --file $THIS_DIR/Dockerfile \\\n        --label snowflake \\\n        --label $DRIVER_NAME \\\n        --tag ${TEST_IMAGE_NAMES[$name]} .\ndone\n"
  },
  {
    "path": "ci/image/scripts/entrypoint.sh",
    "content": "#!/bin/bash -ex\n# Add local user\n# Either use the LOCAL_USER_ID if passed in at runtime or\n# fallback\n\nUSER_ID=${LOCAL_USER_ID:-9001}\n\necho \"Starting with UID : $USER_ID\"\nadduser -s /bin/bash -u $USER_ID -h /home/user -D user\nexport HOME=/home/user\nmkdir -p /home/user/.cache\nchown user:user /home/user/.cache\n\nexec gosu user \"$@\"\n\n"
  },
  {
    "path": "ci/image/update.sh",
    "content": "#!/usr/bin/env bash -e\n#\n# Build Docker images\n#\nset -o pipefail\nTHIS_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\nsource $THIS_DIR/../_init.sh\n\nfor image in $(docker images --format \"{{.ID}},{{.Repository}}:{{.Tag}}\" | grep \"artifactory.ci1.us-west-2.aws-dev.app.snowflake.com\" | grep \"client-$DRIVER_NAME\"); do\n    target_id=$(echo $image | awk -F, '{print $1}')\n    target_name=$(echo $image | awk -F, '{print $2}')\n    for name in \"${!TEST_IMAGE_NAMES[@]}\"; do\n        if [[ \"$target_name\" == \"${TEST_IMAGE_NAMES[$name]}\" ]]; then\n            echo $name\n            docker_hub_image_name=$(echo ${TEST_IMAGE_NAMES[$name]/$DOCKER_REGISTRY_NAME/snowflakedb})\n            set -x\n            docker tag $target_id $docker_hub_image_name\n            set +x\n            docker push \"${TEST_IMAGE_NAMES[$name]}\"\n        fi\n    done\ndone\n"
  },
  {
    "path": "ci/scripts/.gitignore",
    "content": "wiremock-standalone-*.jar"
  },
  {
    "path": "ci/scripts/README.md",
    "content": "# Refreshing wiremock test cert\n\nPassword for CA is `password`.\n\n```bash\nopenssl x509 -req -in wiremock.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out wiremock.crt -days 365 -sha256 -extfile wiremock.v3.ext\nopenssl pkcs12 -export -out wiremock.p12 -inkey wiremock.key -in wiremock.crt\n```\n\n# Refreshing ECDSA cert\n\n```bash\nopenssl x509 -req -in wiremock-ecdsa.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out wiremock-ecdsa.crt -days 365 -sha256 -extfile wiremock.v3.ext\nopenssl pkcs12 -export -inkey wiremock-ecdsa.key -in wiremock-ecdsa.crt -out wiremock-ecdsa.p12\n```"
  },
  {
    "path": "ci/scripts/ca.crt",
    "content": "-----BEGIN CERTIFICATE-----\nMIIF1zCCA7+gAwIBAgIUXh8f8hI5mKqCrUJaDn0zF6qGmw0wDQYJKoZIhvcNAQEL\nBQAwezELMAkGA1UEBhMCUEwxFDASBgNVBAgMC01hem93aWVja2llMQ8wDQYDVQQH\nDAZXYXJzYXcxEjAQBgNVBAoMCVNub3dmbGFrZTEQMA4GA1UECwwHRHJpdmVyczEf\nMB0GA1UEAwwWU25vd2ZsYWtlIHRlc3QgUm9vdCBDQTAeFw0yNTAzMDUwOTQ0MTha\nFw0zNTAzMDMwOTQ0MThaMHsxCzAJBgNVBAYTAlBMMRQwEgYDVQQIDAtNYXpvd2ll\nY2tpZTEPMA0GA1UEBwwGV2Fyc2F3MRIwEAYDVQQKDAlTbm93Zmxha2UxEDAOBgNV\nBAsMB0RyaXZlcnMxHzAdBgNVBAMMFlNub3dmbGFrZSB0ZXN0IFJvb3QgQ0EwggIi\nMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCW0bhevdDp+6S3eIqEAWvFJ66M\nST3WcvYUwdEILGRHyjYT34R2dM2HsmJ8NUA17NFpnWIRbv+f8oKFec90dDfKOdzQ\nvZmiHHun0zYLOf/QE0wj6rtB9zcn8Skwio7f9BQAed9Krovb6/f5tfRMzhDqsk6u\nUt+ra2INrA4apAEaw1hZVMN8htkH+M7GSha4hLIM+HOSmBt8pulxlwVFaqpvwZR6\n8ettpR9lX3PXFP2s09rY3Pq2PfB6JNF9qmMZzqlgr4qI0HKu5VTTSL3eWmJiZmVb\nmplISSzL7kKjPoBXLeNJTRtkfO1XKBvDXrNfnfexIlv8lJ9eCVaHaHLw+qgJNq3v\nTR/BbmrfroLfdpzW2DlF9PDNEookrri2oZyky2DwGklyH5DsUU5T5xTk+eOHsSvB\nJQEBrl9JCEhWNgVCgzPcQ9Ma7PaIaKw9SQAXWDFd5DLzAZ7Q5dHXy82k942Cp6kZ\nO6/s9SnhHPQQZg4H4ruqGuy1CdsOvd9ZpCRYUKXZoZYcEidqLRAb+rYCsf8dWiMn\nQvru0/V18upRsK9BCgRAQcP0R//HXBH199nqGuCnPCGgRIiRfwawyp/C5rXCb0BN\neYfBhdvdnd144CgvHq5tsAHjdw7yhP87zF6Wa+bKThfihfK/LKpIwVLRnN/e6Nea\nuWSu1Ns+6aywd5MBNwIDAQABo1MwUTAdBgNVHQ4EFgQU0GVyoh2s3w5Ka8ynllvA\npHtFh6AwHwYDVR0jBBgwFoAU0GVyoh2s3w5Ka8ynllvApHtFh6AwDwYDVR0TAQH/\nBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAgEANPpuw7bno0cTkaY0CA+0YsHf5r8T\nlSSNUtvREGudH09gPUmVFnU7MMNe+q6gOFkPIl+Mdbj/loaN8eNeZ3OO84VjbVvR\n2MtuQti7OcxhptUG9YkS6BeW/ZIp4QGYthDByg5Kc0Wf8mkNqCWuXYnQK7zyTqIM\n37TmPZMfD0+ck5Nc5r3S1n2xH0sTTwKjhw54OUpDxxfXARkdCg0u7wJlm/kxiUiA\nrhw9fXVVkeLh1J8sRIyXsLdJBDjDhVOoz/lBCgEUYJ0R/icUxl7jGt7XEXqUY4ER\nxYb8oVdEmUPYRR5m7Q5076HKCLXNY/Jn5BvtfaPCs288jXWSidY9B71baaBzeN6C\nY+1Yh9m/+SVz+g+5/PAm0kdzvWytewi53GDnG6P1peJi3TZOMhL+WU1gv3JSNiZ5\n+JbmQIM3jM22QJeElMA+tavB+Hm1PDIqgfVsvOOmpd/npKUc8AlNDA9/sNA9h0V7\n0ldbQoPXVh81+7O+uDMrN3x8naCOAdsAaz4mHEBlhSn55snvbeXSkEw2oVtzt9fB\nqscc02cN/9gf0UdIXsyDpL0ZL/rkjbmauE5QC45WK
Rc87cZYH8OhnROg+A2Dr3bk\n0LIZdOSbsZmVoyKWDO5P2p3l3z4x3D1P+KBWIxx/fCtdIvHg1EFHmn0SHuyoxzsO\ngVB+n3ggLTYRR0s=\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "ci/scripts/ca.key",
    "content": "-----BEGIN ENCRYPTED PRIVATE KEY-----\nMIIJtTBfBgkqhkiG9w0BBQ0wUjAxBgkqhkiG9w0BBQwwJAQQR+n/YtOhd0h7AmwV\nGU9glAICCAAwDAYIKoZIhvcNAgkFADAdBglghkgBZQMEASoEED/jsTIXZ/aJZt1B\n0sr61w4EgglQKHQFRJpHyd4I84WNu87VANMHLwxApMnsag9ccKEIDCiMOpESkiE8\nvh7gE+MkeZUCjXTsFslz00u0ZTeSGsE6BlasS0FnITzkM+3y4HiW4ezC8EU15hd9\nAcs4n7cNMPPPFvnUKtE4gye9DqdxUnEYEcT+fasHhIhmzpn/WaBaBmdv7pZPz2MX\nAblwJ860qO1W33+nn6bGcCokvNC1GIePbh1DdsaSJvLy3zljOeqO2jyp74n1DRnH\n2XWR9e+IYa68kpuHNrosHNSkOmkxb+zTQeL4rFgeQi6gdnJdMzKSyrGKg2/feVUF\nK9QlqtJuest2SDKwmECO/nTKdMTicv3CnMuwXURaggceFLHE0ea7AdoZc2gSx2Zr\nePjqKlKMF0lYirA6ZTpL1FLptFju4IS2rxI6uKf21eMSM8sQ7ui97IELZhKdykwo\nPEmj7d0aO5J7OaatGtNreVpSarYdSO4rfZW/iGbRda74NJnH3Wiy958UHcMob+45\nMEQtww3NoLZbSbdfvn4+xoLZIzqm6uu4avsb952imq4UxwgEBcVaDGjeGJF34yuC\nuYXQqQRTjSjD9Cru579gW6wZXzW3G9hsuC66f686CvaE3nJK2+OkRtSYogSfk2lq\nO8G9UFQ7tGtUrsXWIt6+iUWRv1PA7OFIwXjxumoMFsMK2xxI9UNXuIUeC7qWAeOB\ntlXCygdrYBoZekfjM5yeWRCC4KZSEnD4DDXR+f40GJU9cHIjSTiBbWFHDIgLm49y\n8JdtzRZKMnxUt2jetEPoTMCIzsbHYK4D5+SkQQ2S4ti9qdmqFTW+E9vDDOHMrmfZ\ncvbMTOCBrr2AP5itXcNs2m0tyXYl4cWR/3c8owFvivZljav+TARxhYzZRUXX7Ozv\nHt20/tJNtofWp4vd9QyrWYo06krgSl+P1EWpHQlpc9zb8AMjuCN8k80/eK5uF2Dd\nuTQa3+6PIeL/jf0vstDSbhAu5C2cFOF1REifaBtgsXDgnAUaemMgNBcA211frzcT\nFbp7p1qoQ7jwcYyq1khdk3W2qLpNTJILgdQaeLEGFUzDGmKBlbloBiW+43bCbTII\nmm7SuY7rLcQQc1REfcLEkZo+KFRfZkLt8gd1bUMTZ2XdGw22P2BfQFFTvSCm9hrJ\nGMmUnT7W9fb6vPl1QoGlqrG+6o+LAGaPx/wlrd6Ut19YqRaZmYY8n/kqEGllo2eH\n5wA4sO9OjXcIK3BHoeZDvdvEueqq4ynEWohW21M9w8HptxaeguiSIWaXpxeNSKOx\n+H0dfGG+s1MkQVMxpFT/WzmQXWM15ESy7SLYbj4qKj5M1cnfSTk5e67rrZYaXDoL\nqKx1Ta3ol8KtqJmHs2wPSlrg5hi7iwl+mz1Q+er1NmUitm3+9nDHfBCqKPIA3Nsn\nffGaaRvRp/nkidgDewjCh5QxbzeHeqYqgwn6MA+ybKbVmLeceS/8djVoRCBlUH4u\ns94lcruWkEfhx0dflOjbNqctfGIkDDX5OBwab+eaPswFgg99ijJ2TcuvAxNTSTrs\nefd3KXyD0wWvLvJfBRxfenLzrEt3zbN2tNah+guR48D6dM3T/g+U1W9MzmvToo5L\npPtOjL7xvb6lrkzfemmI4yVex8/otNcpLfVMlY16twAjaybaBR2Aoq9rEr8j5Kqa\nTP+6H3krV36Vbed+6aVFfF4CsraxVzUUHXyGaV9B9pwpubvaxHjqMuUdHm1LfsNL\nVcDXow4HMzOdnOXQ7CA/5d5VNG0bxqn
hjPor3sL1mvdBz/JdLmlxnn56q9v+09d1\nCnSQoAPyj2ZFMLbTJgiBY23ovfoV2PU7fQwtZOKG4xuJDgRIabrJchsRqwjw7Niu\nucKCEFYPIc+MZCAQg1CxZ7/JofEgbiBAE6xwDwbycSbyLhRnEafEo76KPwp87Uck\nrzxrgeDEhPviXSmguidsrxMjJnkOeTS1ZoskbOdfQ0npdqTIscS80u705RuVzc7P\nM6OPLsuLuxII/lciKlDo3DuoqvRSrlTPkF1Kmp7lwN0AyqSkUcgXdNRQBeE5fGh3\nm+Jdj2WMX5Rj0TVMos66uImvB3/b0MrOtZivmJ6Ed9oNQZg5msYCpxhzrd2A+AOQ\nsE2alhC3HtPPHjiXVev2i7CcGyvlBTApFT5qfOg605zT3h3ObT1fXR10a2SqwiHC\nKWfQAQPe+fs6OMSJNHgi8DjEa4YtJ498zW93vLvHu+X7I2mnQLbf+eJ2DBiB39eo\n2oWj4R2SBK5JD6cc+Uq2pmdhTxLj/9KQ2MmWA6HIYv15qBPwYUh9bIjZ0/H3gDHH\n+BfLmfe2MSDaWKx3z+KhTH05fLI14QFY5uSogTvlUIWIR24FMU1SV6J00lQ8dujG\ncE/ayVRVLGvZN7VUynZ0mcmB7eowZBjblJQMwmxeUdmbjc/g5otAvBx5V+Xlio+4\nz8uPUc/8D9A8+ja5NzXNeZhiPcvzU81L1LOva2hvB24w/E+2qt8TLs9Bc4FO/dVP\nBClriniw9CTbHFki4OFUVdvJkvXEnOWJGJzk/l2IuTs3Nm7ghyU9Z4ZV64q8um/A\ntn/aAvIt+v++IJPaT6/aHLVyJyLK45xP6mTdKNQkn3P2c2CsxsdXz8dT9nhoGBin\nc/WlbSQCrRADKYYJgpc8irZfZoy6gKldT441enzz+C8jUb4btaDh6dZftb90CHsl\nBplPKvHeu6kld4mVaQadaEZrfmX21SS7RJpbaNIZ5+HgRMwTSU51uie0iUj1mmZ0\nTyk7YI+PHzuwGEFRHPw3StwNy79ihmamq2ef2UKK4QjDzW/4SCbRDy3WI0AzYQOR\noLgbcSB9dRYPdW8sHay8EQ+8jrnklvc6iWsu4zE1+ptZnCMgSvv2iqCKW5MELx3x\nZ9NlfcmbQIaZCN6LKZaim/L8rK/bocB5yM4teApYKvPOiTXh/9csmrrZccSXg7Ct\nsIiZnqA0VW3fWN8EtFhhZUGv/q7VEyi/Iz+j9RrFaZDZ/pM1uQvvGWqR2AdTppDj\nRuUPEma0xt5SpGnttERsH5MUV8YGgRVuoiLg0P15yJR7mNy/VpPJhxWWKG2x8R+u\n75QzlRR12rg0HfbqA+d1ADNbKWTJEAY1hks2tA+DOWPK/4/cEOF7bJIxZY2MJgAz\n8RhXbAQaxpX+cbPbHdMdvYKWpFi+GBNYXCIOoj3l79ATkCDguynmjrk=\n-----END ENCRYPTED PRIVATE KEY-----\n"
  },
  {
    "path": "ci/scripts/ca.srl",
    "content": "54587BDD05D4BE6A6D8852CA7FDB421189EA1C6D\n"
  },
  {
    "path": "ci/scripts/execute_tests.sh",
    "content": "#!/bin/bash\n#\n# Build and Test Golang driver\n#\nset -e\nset -o pipefail\nCI_SCRIPTS_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\nTOPDIR=$(cd $CI_SCRIPTS_DIR/../.. && pwd)\neval $(jq -r '.testconnection | to_entries | map(\"export \\(.key)=\\(.value|tostring)\")|.[]' $TOPDIR/parameters.json)\nenv | grep SNOWFLAKE | grep -v PASS | grep -v SECRET | sort\ncd $TOPDIR\ngo install github.com/jstemmer/go-junit-report/v2@latest\n\nif [[ \"$HOME_EMPTY\" == \"yes\" ]] ; then\n  export GOCACHE=$HOME/go-build\n  export GOMODCACHE=$HOME/go-modules\n  export HOME=\nfi\n\nCOVPKGS=$(go list ./... | grep -v '/cmd/' | tr '\\n' ',' | sed 's/,$//')\n\nif [[ \"$SEQUENTIAL_TESTS\" == \"true\" ]] ; then\n  # Test each package separately to avoid buffering (slower but real-time output)\n  PACKAGES=$(go list ./...)\n\n  if [[ -n \"$JENKINS_HOME\" ]]; then\n    export WORKSPACE=${WORKSPACE:-/mnt/workspace}\n    (\n      for pkg in $PACKAGES; do\n        # Convert full package path to relative path\n        pkg_path=$(echo $pkg | sed \"s|^github.com/snowflakedb/gosnowflake/v2||\" | sed \"s|^/||\")\n        if [[ -z \"$pkg_path\" ]]; then\n          pkg_path=\".\"\n        else\n          pkg_path=\"./$pkg_path\"\n        fi\n        echo \"=== Testing package: $pkg_path ===\" >&2\n        GODEBUG=$TEST_GO_DEBUG go test $GO_TEST_PARAMS -timeout 90m -race -v \"$pkg_path\"\n      done\n    ) | /home/user/go/bin/go-junit-report -iocopy -out $WORKSPACE/junit-go.xml\n  else\n    set +e\n    FAILED=0\n    (\n      for pkg in $PACKAGES; do\n        pkg_path=$(echo $pkg | sed \"s|^github.com/snowflakedb/gosnowflake/v2||\" | sed \"s|^/||\")\n        if [[ -z \"$pkg_path\" ]]; then\n          pkg_path=\".\"\n        else\n          pkg_path=\"./$pkg_path\"\n        fi\n        echo \"=== Testing package: $pkg_path ===\" >&2\n        # Note: -coverprofile only works with single package, use -coverpkg for multiple\n        GODEBUG=$TEST_GO_DEBUG go test $GO_TEST_PARAMS 
-timeout 90m -race -coverpkg=\"$COVPKGS\" -coverprofile=\"${pkg_path//\\//_}_coverage.txt\" -covermode=atomic -v \"$pkg_path\"\n        if [[ $? -ne 0 ]]; then\n          FAILED=1\n          echo \"[ERROR] Package $pkg_path tests failed\" >&2\n        fi\n      done\n      # Merge coverage files\n      go install github.com/wadey/gocovmerge@latest\n      gocovmerge *_coverage.txt > coverage.txt\n      rm -f *_coverage.txt\n      exit $FAILED\n    ) | tee test-output.txt\n    TEST_EXIT_CODE=${PIPESTATUS[0]}\n    cat test-output.txt | go-junit-report > test-report.junit.xml\n    exit $TEST_EXIT_CODE\n  fi\nelse\n  # Test all packages with ./... (parallel, faster, but buffered per package)\n  if [[ -n \"$JENKINS_HOME\" ]]; then\n    export WORKSPACE=${WORKSPACE:-/mnt/workspace}\n    GODEBUG=$TEST_GO_DEBUG go test $GO_TEST_PARAMS -timeout 90m -race -v ./... | /home/user/go/bin/go-junit-report -iocopy -out $WORKSPACE/junit-go.xml\n  else\n    set +e\n    GODEBUG=$TEST_GO_DEBUG go test $GO_TEST_PARAMS -timeout 90m -race -coverpkg=\"$COVPKGS\" -coverprofile=coverage.txt -covermode=atomic -v ./... | tee test-output.txt\n    TEST_EXIT_CODE=${PIPESTATUS[0]}\n    cat test-output.txt | go-junit-report > test-report.junit.xml\n    exit $TEST_EXIT_CODE\n  fi\nfi"
  },
  {
    "path": "ci/scripts/hang_webserver.py",
    "content": "#!/usr/bin/env python3\nimport sys\nfrom http.server import BaseHTTPRequestHandler,HTTPServer\nfrom socketserver import ThreadingMixIn\nimport threading\nimport time\nimport json\n\nclass HTTPRequestHandler(BaseHTTPRequestHandler):\n    invocations = 0\n\n    def do_POST(self):\n        if self.path.startswith('/reset'):\n            print(\"Resetting HTTP mocks\")\n            HTTPRequestHandler.invocations = 0\n            self.__respond(200)\n        elif self.path.startswith('/invocations'):\n            self.__respond(200, body=str(HTTPRequestHandler.invocations))\n        elif self.path.startswith('/ocsp'):\n            print(\"ocsp\")\n            self.ocspMocks()\n        elif self.path.startswith('/session/v1/login-request'):\n            self.authMocks()\n\n    def ocspMocks(self):\n        if self.path.startswith('/ocsp/403'):\n            self.send_response(403)\n            self.send_header('Content-Type', 'text/plain')\n            self.end_headers()\n        elif self.path.startswith('/ocsp/404'):\n            self.send_response(404)\n            self.send_header('Content-Type', 'text/plain')\n            self.end_headers()\n        elif self.path.startswith('/ocsp/hang'):\n            print(\"Hanging\")\n            time.sleep(300)\n            self.send_response(200, 'OK')\n            self.send_header('Content-Type', 'text/plain')\n            self.end_headers()\n        else:\n            self.send_response(200, 'OK')\n            self.send_header('Content-Type', 'text/plain')\n            self.end_headers()\n\n    def authMocks(self):\n        content_length = int(self.headers.get('content-length', 0))\n        body = self.rfile.read(content_length)\n        jsonBody = json.loads(body)\n        if jsonBody['data']['ACCOUNT_NAME'] == \"jwtAuthTokenTimeout\":\n            HTTPRequestHandler.invocations += 1\n            if HTTPRequestHandler.invocations >= 3:\n                self.__respond(200, body='''{\n                    
\"data\": {\n                        \"token\": \"someToken\"\n                    },\n                    \"success\": true\n                }''')\n            else:\n                time.sleep(2000)\n                self.send_response(200)\n        else:\n            print(\"Unknown auth request\")\n            self.send_response(500)\n\n    def __respond(self, http_code, content_type='application/json', body=None):\n        print(\"responding:\", body)\n        self.send_response(http_code)\n        self.send_header('Content-Type', content_type)\n        self.end_headers()\n        if body != None:\n            responseBody = bytes(body, \"utf-8\")\n            self.wfile.write(responseBody)\n\n    do_GET = do_POST\n\nclass ThreadedHTTPServer(ThreadingMixIn, HTTPServer):\n  allow_reuse_address = True\n\n  def shutdown(self):\n    self.socket.close()\n    HTTPServer.shutdown(self)\n\nclass SimpleHttpServer():\n  def __init__(self, ip, port):\n    self.server = ThreadedHTTPServer((ip,port), HTTPRequestHandler)\n\n  def start(self):\n    self.server_thread = threading.Thread(target=self.server.serve_forever)\n    self.server_thread.daemon = True\n    self.server_thread.start()\n\n  def waitForThread(self):\n    self.server_thread.join()\n\n  def stop(self):\n    self.server.shutdown()\n    self.waitForThread()\n\nif __name__=='__main__':\n    if len(sys.argv) != 2:\n        print(\"Usage: python3 {} PORT\".format(sys.argv[0]))\n        sys.exit(2)\n\n    PORT = int(sys.argv[1])\n\n    server = SimpleHttpServer('localhost', PORT)\n    print('HTTP Server Running on PORT {}..........'.format(PORT))\n    server.start()\n    server.waitForThread()\n\n"
  },
  {
    "path": "ci/scripts/login_internal_docker.sh",
    "content": "#!/bin/bash -e\n#\n# Login the Internal Docker Registry\n#\nif [[ -z \"$GITHUB_ACTIONS\" ]]; then\n    echo \"[INFO] Login the internal Docker Registry\"\n    if ! docker login $INTERNAL_REPO; then\n        echo \"[ERROR] Failed to connect to the Artifactory server. Ensure 'sf artifact oci auth' has been run.\"\n        exit 1\n    fi\nelse\n    echo \"[INFO] No login the internal Docker Registry\"\nfi\n"
  },
  {
    "path": "ci/scripts/run_wiremock.sh",
    "content": "#!/usr/bin/env bash\n\nSCRIPT_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\n\ncd $SCRIPT_DIR\n\nif [[ \"$1\" == \"--ecdsa\" || \"$WIREMOCK_ENABLE_ECDSA\" == \"true\" ]] ; then\n  echo \"Using ecliptic curves\"\n  pfxFile=\"$SCRIPT_DIR/wiremock-ecdsa.p12\"\nelse\n  echo \"Using RSA\"\n  pfxFile=\"$SCRIPT_DIR/wiremock.p12\"\nfi\n\nif [ ! -f \"$SCRIPT_DIR/wiremock-standalone-3.11.0.jar\" ]; then\n  curl -O https://repo1.maven.org/maven2/org/wiremock/wiremock-standalone/3.11.0/wiremock-standalone-3.11.0.jar\nfi\n\njava -jar \"$SCRIPT_DIR/wiremock-standalone-3.11.0.jar\" --port ${WIREMOCK_PORT:=14355} --https-port ${WIREMOCK_HTTPS_PORT:=13567} --https-keystore \"$pfxFile\" --keystore-type PKCS12 --keystore-password password\n"
  },
  {
    "path": "ci/scripts/setup_connection_parameters.sh",
    "content": "#!/bin/bash -e\n#\n# Set connection parameters\n#\nCI_SCRIPTS_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\nif [[ \"$CLOUD_PROVIDER\" == \"AZURE\" ]]; then\n    PARAMETER_FILE=parameters_azure_golang.json.gpg\n    PRIVATE_KEY=rsa_key_golang_azure.p8.gpg\nelif [[ \"$CLOUD_PROVIDER\" == \"GCP\" ]]; then\n    PARAMETER_FILE=parameters_gcp_golang.json.gpg\n    PRIVATE_KEY=rsa_key_golang_gcp.p8.gpg\nelse\n    PARAMETER_FILE=parameters_aws_golang.json.gpg\n    PRIVATE_KEY=rsa_key_golang_aws.p8.gpg\nfi\ngpg --quiet --batch --yes --decrypt --passphrase=\"$PARAMETERS_SECRET\" --output $CI_SCRIPTS_DIR/../../parameters.json $CI_SCRIPTS_DIR/../../.github/workflows/$PARAMETER_FILE\ngpg --quiet --batch --yes --decrypt --passphrase=\"$PARAMETERS_SECRET\" --output $CI_SCRIPTS_DIR/../../rsa-2048-private-key.p8 $CI_SCRIPTS_DIR/../../.github/workflows/rsa-2048-private-key.p8.gpg\ngpg --quiet --batch --yes --decrypt --passphrase=\"$GOLANG_PRIVATE_KEY_SECRET\" --output $CI_SCRIPTS_DIR/../../.github/workflows/parameters/public/rsa_key_golang.p8 $CI_SCRIPTS_DIR/../../.github/workflows/parameters/public/$PRIVATE_KEY\n"
  },
  {
    "path": "ci/scripts/setup_gpg.sh",
    "content": "#!/bin/bash\n\n# GPG setup script for creating unique GPG home directory\n\nsetup_gpg_home() {\n  # Create unique GPG home directory\n  export GNUPGHOME=\"${THIS_DIR}/.gnupg_$$_$(date +%s%N)_${BUILD_NUMBER:-}\"\n  mkdir -p \"$GNUPGHOME\"\n  chmod 700 \"$GNUPGHOME\"\n  \n  cleanup_gpg() {\n    if [[ -n \"$GNUPGHOME\" && -d \"$GNUPGHOME\" ]]; then\n      rm -rf \"$GNUPGHOME\"\n    fi\n  }\n  \n  trap cleanup_gpg EXIT\n}\n\nsetup_gpg_home\n\n"
  },
  {
    "path": "ci/scripts/wiremock-ecdsa-pub.key",
    "content": "-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEX3j37DbAKoO6Cwn0TsoMcsVXEF52\nlDa2tEHX2kMoxLExE4cgBipPyHgwNEblfAbaA1eC03fytJZw0wd08GvA+Q==\n-----END PUBLIC KEY-----\n"
  },
  {
    "path": "ci/scripts/wiremock-ecdsa.crt",
    "content": "-----BEGIN CERTIFICATE-----\nMIID/jCCAeagAwIBAgIUVFh73QXUvmptiFLKf9tCEYnqHG0wDQYJKoZIhvcNAQEL\nBQAwezELMAkGA1UEBhMCUEwxFDASBgNVBAgMC01hem93aWVja2llMQ8wDQYDVQQH\nDAZXYXJzYXcxEjAQBgNVBAoMCVNub3dmbGFrZTEQMA4GA1UECwwHRHJpdmVyczEf\nMB0GA1UEAwwWU25vd2ZsYWtlIHRlc3QgUm9vdCBDQTAeFw0yNjAzMDYxODQ4MjJa\nFw0yNzAzMDYxODQ4MjJaMHkxCzAJBgNVBAYTAlBMMRQwEgYDVQQIDAtNYXpvd2ll\nY2tpZTEPMA0GA1UEBwwGV2Fyc2F3MRIwEAYDVQQKDAlTbm93Zmxha2UxGzAZBgNV\nBAsMEkRldmVsb3BlciBwbGF0Zm9ybTESMBAGA1UEAwwJbG9jYWxob3N0MCowBQYD\nK2VwAyEAGLQr+l2G3bxeA8oXH6epvuZ1ZLY381WEwehREgaYpTyjdjB0MB8GA1Ud\nIwQYMBaAFNBlcqIdrN8OSmvMp5ZbwKR7RYegMAkGA1UdEwQCMAAwCwYDVR0PBAQD\nAgTwMBoGA1UdEQQTMBGHBH8AAAGCCWxvY2FsaG9zdDAdBgNVHQ4EFgQU/9pFFL7e\n4Fr4IzzELxg3Y3nWns4wDQYJKoZIhvcNAQELBQADggIBAIE6g+wbA5JIWaU+atNL\nQr62D+a1IlB4kE+Ysaz5iMCDNKIfbNe5/Mrgzbuc8iiRCz2QicPHEtS5OC39jeKM\ntX1JQGfA9G8P+IEX6POPgSYbBjO2uj9qdATFF3bjHtB9KPe/lF34rWD5v8ajMoOY\noosRM+wOMT/H08AOmPRe3T1qVVCk9G87qGRw2cvpyoOh46dzcsaJ/4QNAMzp7PY1\nyn8h8VRJoqkSHf/du1ACoqcmsfF26fMmVRjGmiMoIteIr/8CAFzc9yMXXTq4/F2P\nDT1XoWeQopdmWTkxS2DCiStxYWEYAVURzg4C1zeq3/KC48oZrNhNylkaHpsHx5x6\nMxC8RoVN2zA8GZEsIVdRXi/gl8DjAwLieTwIErtczaMgNwmX1qU+qBoXAzZ4bEJT\nUuwfO/LcUywX6TZ91bO/tVsLOH2vNWjeQI/ewqUjpnPxqx9WG1QLaQ2wu2oqKBQQ\nYPZzpezG10tThgTkNyPlFyV0pT2YjfruDovC7EBGkaO1/ZheNvSbsZbuXJKDecr6\nLhrAPh95V8mVUYCjI8bQK+K+u5feBN3pXtY9hfltcJ2611Xfv7Tm8R7JLgGQSlim\n7D9i4/XWLKVfRtCbQLabgGzc46Kk8W5Ae8Ie1UrdhetehJPAMO/v8rOKnWqR3HxR\ni+s79C6kuYGYmRblr9LJ82pn\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "ci/scripts/wiremock-ecdsa.csr",
    "content": "-----BEGIN CERTIFICATE REQUEST-----\nMIH5MIGsAgEAMHkxCzAJBgNVBAYTAlBMMRQwEgYDVQQIDAtNYXpvd2llY2tpZTEP\nMA0GA1UEBwwGV2Fyc2F3MRIwEAYDVQQKDAlTbm93Zmxha2UxGzAZBgNVBAsMEkRl\ndmVsb3BlciBwbGF0Zm9ybTESMBAGA1UEAwwJbG9jYWxob3N0MCowBQYDK2VwAyEA\nGLQr+l2G3bxeA8oXH6epvuZ1ZLY381WEwehREgaYpTygADAFBgMrZXADQQAQX4XJ\nI6PxjoC2RofZayHk+ud2oyXdLE1M9NarUY6+2lKntFIIhn/s1F+4UK0cnDB40vJp\nMXV6quLOTF06azUM\n-----END CERTIFICATE REQUEST-----\n"
  },
  {
    "path": "ci/scripts/wiremock-ecdsa.key",
    "content": "-----BEGIN PRIVATE KEY-----\nMC4CAQAwBQYDK2VwBCIEICQI1T3B7DZ45py/Oa4fEjhdz3kMDlRFXvY8vv9DA5Io\n-----END PRIVATE KEY-----\n"
  },
  {
    "path": "ci/scripts/wiremock.crt",
    "content": "-----BEGIN CERTIFICATE-----\nMIIF7TCCA9WgAwIBAgIUVFh73QXUvmptiFLKf9tCEYnqHGwwDQYJKoZIhvcNAQEL\nBQAwezELMAkGA1UEBhMCUEwxFDASBgNVBAgMC01hem93aWVja2llMQ8wDQYDVQQH\nDAZXYXJzYXcxEjAQBgNVBAoMCVNub3dmbGFrZTEQMA4GA1UECwwHRHJpdmVyczEf\nMB0GA1UEAwwWU25vd2ZsYWtlIHRlc3QgUm9vdCBDQTAeFw0yNjAzMDYxMzE0MDZa\nFw0yNzAzMDYxMzE0MDZaMG4xCzAJBgNVBAYTAlBMMRQwEgYDVQQIDAtNYXpvd2ll\nY2tpZTEPMA0GA1UEBwwGV2Fyc2F3MRIwEAYDVQQKDAlTbm93Zmxha2UxEDAOBgNV\nBAsMB0RyaXZlcnMxEjAQBgNVBAMMCWxvY2FsaG9zdDCCAiIwDQYJKoZIhvcNAQEB\nBQADggIPADCCAgoCggIBAMMpVsRRrW7/UFzfb/WfkjF5tKIJBNze/90qC2xheSsq\nh3yQPPgfQXnSPLTCR0Z0ZEhV5NbiZPlSS5Nl9zD/JwSryFuFAtTrYhOcqBpnzz46\nn3bZUHNfC/sD6qNVL43LsyvfKWWBVyxlSpCMmEdgyqvPTRHJ3l3EW8uCBUxHQM35\nFxUNpTdc/tFCXVDZgRGUwQ23yRmwGx2HbXN1PEsmJ/yZ/mZg9oIWNUqTWGj6DY8R\n8gmf5oXgkjPlu2G6xxb6lo6cAToAWhjBuCVzo7ciCXpaGVxXv4IyksB+xJxjYFll\n1CBeYKXw5+UdCjzA04MA8Q+E0TNRRiv74sHYq2egS80+6NByjmHolzd/6nOUo5ed\ne96Mj5rfOojGn0Omwf8r1B/+aYZcYtOHyN44ZskZnDMv1NGlyn5o0lcn+RJyMi4D\n+MgwgOEYvDcByp9YG5y6MxAUo3Gexl8cifCGbBRZaL2PNWKhHVB0IKZwvY5WLPMD\n0d8pDl5+LrMq/1ra5ObhPhiOdgjpaPuH5lnyTkx0YG9adNsaczPFzzXARHIj3Il7\nWuEqBbf5a/iZcKlPOTNhlxhWIYUJ+1qunKXt3mhZx3IVX1pqionSGJkYwNTkWtJl\ntCzJquaPWmdMBfdtDNoavH5pRnbCtI/DB37gJ3u4VHfqZU2R7hXBkwW22IOiSKjv\nAgMBAAGjdjB0MB8GA1UdIwQYMBaAFNBlcqIdrN8OSmvMp5ZbwKR7RYegMAkGA1Ud\nEwQCMAAwCwYDVR0PBAQDAgTwMBoGA1UdEQQTMBGHBH8AAAGCCWxvY2FsaG9zdDAd\nBgNVHQ4EFgQUn/a/Lb80EZf1PHprSyO+qvRv3y4wDQYJKoZIhvcNAQELBQADggIB\nAFxnpTGBeUmdeef8N04X0LoUNiDTrhgPnJy5DYhFwfK27wsFHH4uWTf4Fg61VblG\nQJOhVYkZshvltdVRDr/Y1iAfCvwRlweA43QrXtMnDy+326ig277E2Z1C7K3f7lHS\nt/vUFR3fmSRdOAzFoJQISgzwL4tFw0wS36lwh6bOYHp/pm7BG4g+Z3ftWw8eUjmv\nudpupYXG36SflfZWasy4I1fl0mDWIS6eKkR76DqqugBMH1QMprTwr0OjXaWiku6r\nz3IsMPVnVXeejNNoP/67AfGzEb3FeFGVMl+qg7lL155blga1ph8upWo4k6qsZF7S\n4ZlscEaYSZj20ZR5ZN/n8F8d43uqzL0RUbaNyvYS12nnun5XnkfVFa2QJdq/EOV7\ndEyp9/GCIazqMf3cNUnQWUaQ/ow6zzL6+2bc5GnjRYps8z2+zyFFUgfINxrcg3K1\nT3C2ZNV3lSOwuzlyMD236HgM+Kt7mq2nmiDTlcp7JqrsLr6qzidL8jfnqjG9Jyg4\ny6cJzWPKTfVmqsJtfx1YBnIkddh4NYtpUgBGjYkYI
RIonZ7eu9fapKKiRguckD4T\nP1BTd3BzwYqTmNXlxVV2uVhh7mPZo+jghK2HtuUcjsZPbWm2ju8kPmRo83fpBvk7\n6OYjoXKwQZxnQSqJ9rPf1fqGepn4kQR6qvM6phVSBs5x\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "ci/scripts/wiremock.csr",
    "content": "-----BEGIN CERTIFICATE REQUEST-----\nMIIEszCCApsCAQAwbjELMAkGA1UEBhMCUEwxFDASBgNVBAgMC01hem93aWVja2ll\nMQ8wDQYDVQQHDAZXYXJzYXcxEjAQBgNVBAoMCVNub3dmbGFrZTEQMA4GA1UECwwH\nRHJpdmVyczESMBAGA1UEAwwJbG9jYWxob3N0MIICIjANBgkqhkiG9w0BAQEFAAOC\nAg8AMIICCgKCAgEAwylWxFGtbv9QXN9v9Z+SMXm0ogkE3N7/3SoLbGF5KyqHfJA8\n+B9BedI8tMJHRnRkSFXk1uJk+VJLk2X3MP8nBKvIW4UC1OtiE5yoGmfPPjqfdtlQ\nc18L+wPqo1UvjcuzK98pZYFXLGVKkIyYR2DKq89NEcneXcRby4IFTEdAzfkXFQ2l\nN1z+0UJdUNmBEZTBDbfJGbAbHYdtc3U8SyYn/Jn+ZmD2ghY1SpNYaPoNjxHyCZ/m\nheCSM+W7YbrHFvqWjpwBOgBaGMG4JXOjtyIJeloZXFe/gjKSwH7EnGNgWWXUIF5g\npfDn5R0KPMDTgwDxD4TRM1FGK/viwdirZ6BLzT7o0HKOYeiXN3/qc5Sjl5173oyP\nmt86iMafQ6bB/yvUH/5phlxi04fI3jhmyRmcMy/U0aXKfmjSVyf5EnIyLgP4yDCA\n4Ri8NwHKn1gbnLozEBSjcZ7GXxyJ8IZsFFlovY81YqEdUHQgpnC9jlYs8wPR3ykO\nXn4usyr/Wtrk5uE+GI52COlo+4fmWfJOTHRgb1p02xpzM8XPNcBEciPciXta4SoF\nt/lr+JlwqU85M2GXGFYhhQn7Wq6cpe3eaFnHchVfWmqKidIYmRjA1ORa0mW0LMmq\n5o9aZ0wF920M2hq8fmlGdsK0j8MHfuAne7hUd+plTZHuFcGTBbbYg6JIqO8CAwEA\nAaAAMA0GCSqGSIb3DQEBCwUAA4ICAQBHoiHRzxkLHkWfgq1wbFrVnsHrnALSY+Nl\n994fFykF4fDA5eLvfIWmuU5YZwyz+9Bw0SGoefb9RfFxZbQByBglhFbHPEvID1Sw\n3ByJPMLccep7lkLd/BfIgyZ7vSyIK3mKY4wSnGqf3eiQeMU57ViP3AL6Q0Uos3Jm\njmUWIeEHrSE2HfHREK8ar0xGKTimQymW6P+ecRKQKs7I7aEJL5t3/zp2w+EyxIGC\nezP+rtH8QdfDJN3nui+2ljgonvbwrYMJTBJYZ/oOx/msKUF4EO2FT/VJKQsOZnyL\ns0HXMEEJ9AKlFo9gagZ6ZqxnVYCPoeW8Nfb56YwZ9im2wbo2yaNAFTMaKoH1/2g0\nLHZd1vq1sU6xT3V3R+5Iiw4k7u8mx6ietSbwuyOkHkQ+RZf5hZKvdHSymKTuN/e4\n40XzGBhcTqs57KHbsiWFBnRFiIZgFq5kbC0G+c927g8XRB9j3xiMjBBwUR0Kp78q\nbTvAzod0ZhYeltFw63TkNe/yH4RZefseub0eice6Fjmpv0BgjYNP2guCnd3u7KaG\nH0zYSFHzN00jtDNNs1Jx1drsHZcr6fAOeeUmI9ExsDkt8vyMmpshd+w3LEh/ZVL2\npvvtcut0s24OszF5HCRScxSXv3SSUDX1asRyUHY5STLdK74o+dfqXT+ja+MRJEEh\nIiE2ITiP8Q==\n-----END CERTIFICATE REQUEST-----\n"
  },
  {
    "path": "ci/scripts/wiremock.key",
    "content": "-----BEGIN PRIVATE KEY-----\nMIIJQQIBADANBgkqhkiG9w0BAQEFAASCCSswggknAgEAAoICAQDDKVbEUa1u/1Bc\n32/1n5IxebSiCQTc3v/dKgtsYXkrKod8kDz4H0F50jy0wkdGdGRIVeTW4mT5UkuT\nZfcw/ycEq8hbhQLU62ITnKgaZ88+Op922VBzXwv7A+qjVS+Ny7Mr3yllgVcsZUqQ\njJhHYMqrz00Ryd5dxFvLggVMR0DN+RcVDaU3XP7RQl1Q2YERlMENt8kZsBsdh21z\ndTxLJif8mf5mYPaCFjVKk1ho+g2PEfIJn+aF4JIz5bthuscW+paOnAE6AFoYwbgl\nc6O3Igl6WhlcV7+CMpLAfsScY2BZZdQgXmCl8OflHQo8wNODAPEPhNEzUUYr++LB\n2KtnoEvNPujQco5h6Jc3f+pzlKOXnXvejI+a3zqIxp9DpsH/K9Qf/mmGXGLTh8je\nOGbJGZwzL9TRpcp+aNJXJ/kScjIuA/jIMIDhGLw3AcqfWBucujMQFKNxnsZfHInw\nhmwUWWi9jzVioR1QdCCmcL2OVizzA9HfKQ5efi6zKv9a2uTm4T4YjnYI6Wj7h+ZZ\n8k5MdGBvWnTbGnMzxc81wERyI9yJe1rhKgW3+Wv4mXCpTzkzYZcYViGFCftarpyl\n7d5oWcdyFV9aaoqJ0hiZGMDU5FrSZbQsyarmj1pnTAX3bQzaGrx+aUZ2wrSPwwd+\n4Cd7uFR36mVNke4VwZMFttiDokio7wIDAQABAoICAAgrmeCm1A5FOAsQpkeagkH5\n/hBD37qTchNt6C6Ft3nm0jyVGUhV8/rH92yl2YVfPWIzM7JfUKozbMs4m0Gnh5hQ\nIheFblnq73SHZsORkavhmRLJBETgN3MvIHVCuAvv+Ynzp3BYGtsr877bc/XrsnBr\nlvwQqcjefe1Q0yyfVbI0eb09kKt3BDVPLvLsjX+77N0d0u3Ktp06MeCB3vVScp1w\n9k/jl/kC5FZBQZPw1qfPsNoATLlRboLSXPw5bTj5YrDeYnAYMFgVpsJCoMRQ83lL\nflZPAiB5l4qMLr+mqr5ItLm/hGejZJdDQPjMJc634l+rnXUliOeHKGDEfmCHOxpu\nN2C8iXJysQJhDGfHvLmNeKdaXgJt+T37W8M8t02oHDECpMwMSOHMlVpxut8DBhpa\nhz9olGxwp7c2fSemJGiWNUXCfMtkhUl4VLRAqZ7pD91VtmQAi8gAIg15MHIjlGAh\nEVQZZE1qd0SUxy4nCNYt9L3AhU2I/I8k7cQMKBX0vOrQQvaZmBo5FI3uSejMeNgn\nMQWQvzR1XIzBeMCv8c5kgRr6C6RPGYzycxO3fP93TfpwY/vehuBwAh+38qYY6Azn\nzVYqjn5hTnxhH3pCG3ugoqiLSnfrptw/TUVR9GOwMPNwD3QR6Hv57EljLyaaDQho\nbyLkPdKXEQUmFHEoLTWRAoIBAQDmh/yS0gmoWBDDA8/xIzB2a8NM9VfrutoI2HNM\ncnrQXWDdgjcLM/AAuV3ESyP0+1PFFv5gxCg35fPX+uj24dydsyxCAbFBxCBPvBUC\n3Mc2PskEDmFyuYDwxbLItxDgjMZX1kWhCGONV2LOHfxy1itkZ6aWhP4p77/+9oaU\n26Uq5mcWMMUV0wWX6IS7ttpK6xmXY3LauEzqmwgQITfrLBMdpDyJZjGYYpYLOWvg\nhGIkkEH+ACyrU1SZOYl3tCYmteXSfJeuwLP4g2vcLaj0j+z7fvJ1YAVByeuOHKV/\nJgHv1XE3tRZH1ZZ1QoeHHlaizjzjCCic/ld93SHzYwgFDyN5AoIBAQDYuQGKIEbS\nKlZpaZAvyU9XYEXDSRLGnkKLOo0A54IsM/2YueYPgJ3ovMyVU5coMcXC3AACo0Zy\nOREHXdmNmKe+PcZArbn/BvTMihXChLKeGc/MFyCBq
bniDM6/LSkqT1mU2jL8AEKz\nxwU9kHX4NZrq4CfYoqA3b8x/dVCgV/8L0o5+mubHm7NUyFYlOHPWQ0u+auEKEdAB\ndVtv3VuPUwgkmE4OgsDv3q165jQ0Yr5cxXwlNUHd0yJlo9QklN8ua6rxCLU+ylbB\nRgU+tALD7pBPF2pa+m5G3efOUOTFhwWFsQ/mABZscz9emiQXNHVuwj7feLOOE/Yq\nPkhecmmsPm2nAoIBAEG4BKXqYLxwFp8xqAcLTBaGVA/NZXobM2sQIZZqkF50MFgV\ndhGohcP/FB8QeLivKUtnaa82XGzLDj/FFMLE0rrWSEis6NZhzgBNEwRU4imxrmaM\nnvUwsvRwt64GmjYZi7WgrQriNFcn0VAHNl+adJZUAiao1TgpU+egae9nymc3da3a\ny2SUWuTacXR+BS8UZKBGxohZv/ulpJ/MiH9veieaGXPmAT9642FhxkIkG0JnKZj6\nfcF9qQFhaLIKVlH0ywa9ZBR6dRPki0wibCcEHL/5ia8yZ21A3fkOa5OaxzSS+Yqz\nAh4KYrEc/Tvkxzf0aWEjg0h2LYUBFFupILEohqkCggEAUhLGHXwZte+op+T9YMt5\nC5r+8HTU8njutHFpAsWpy3mo+VS1ZnuL0Z7mT0rHvMYUobXVHyqcPBeWdla5U+FS\n7T3RvZ7NCGKnBGrq0K6WQj9+LUk420HejlfRWB8PLuG8CB4WHs8uc4zUVDtIIcaT\nM43OKUF1MWlaZY6VCRQqF10W76VT7pXtdRclYJUfcS4tGiC5tqmGP3clOJj42q9U\nLx+qt94WmQCYbCmP7aLTeqijWifwGMSjiyBe77edSaQmqX9lvDC+aBVPWS6suWy4\nI+u3MFsUtivFZKHH8XIvyjCC19SCqXF/tyDiuBL6wgY370NzpEO0/sx1dacYk81U\nkwKCAQB6g2V31JRn6CjkCTDG9Lf71AQwW1ZaLB71rhKyepAVV3vnYBRjuApGx3cN\nWFVIU9Cc010xDjeBmlbkqsfDujZRKTdU8aq9U8N26UWNkiwQjD1kCQR7KrvatZaU\nwglJ04BXZhVW/qT5Q2j/bgBmEjjbes83ZNWwWbx9x+h/YUVcCJ+n6OQCmDRBEvz6\n1XkRpWt1HR9yEpH8kIuwWBqe/+afmASaLCK19jQcQ80QDvEcn8cy8A0UHM3FToWf\nR3OBlkcHYlUMZbj0VpiDktEUxl/ycPVWesH7WOhsB4HxSqtLpjebJBffzU/e+k+u\nQ39oXb8n1ljeCNi/Ksj8e/KstwzI\n-----END PRIVATE KEY-----\n"
  },
  {
    "path": "ci/scripts/wiremock.v3.ext",
    "content": "authorityKeyIdentifier=keyid,issuer\nbasicConstraints=CA:FALSE\nkeyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment\nsubjectAltName = @alt_names\n[alt_names]\nIP.1 = 127.0.0.1\nDNS.1 = localhost\n"
  },
  {
    "path": "ci/test.bat",
    "content": "REM Test Golang driver\n\nsetlocal EnableDelayedExpansion\n\nstart /b python ci\\scripts\\hang_webserver.py 12345\n\ncurl -O https://repo1.maven.org/maven2/org/wiremock/wiremock-standalone/3.11.0/wiremock-standalone-3.11.0.jar\nSTART /B java -jar wiremock-standalone-3.11.0.jar --port %WIREMOCK_PORT% -https-port %WIREMOCK_HTTPS_PORT% --https-keystore ci/scripts/wiremock.p12 --keystore-type PKCS12 --keystore-password password\n\nif \"%CLOUD_PROVIDER%\"==\"AWS\" (\n    set PARAMETER_FILENAME=parameters_aws_golang.json.gpg\n    set PRIVATE_KEY=rsa_key_golang_aws.p8.gpg\n) else if \"%CLOUD_PROVIDER%\"==\"AZURE\" (\n    set PARAMETER_FILENAME=parameters_azure_golang.json.gpg\n    set PRIVATE_KEY=rsa_key_golang_azure.p8.gpg\n) else if \"%CLOUD_PROVIDER%\"==\"GCP\" (\n    set PARAMETER_FILENAME=parameters_gcp_golang.json.gpg\n    set PRIVATE_KEY=rsa_key_golang_gcp.p8.gpg\n)\n\nif not defined PARAMETER_FILENAME (\n    echo [ERROR] failed to detect CLOUD_PROVIDER: %CLOUD_PROVIDER%\n    exit /b 1\n)\n\ngpg --quiet --batch --yes --decrypt --passphrase=\"%PARAMETERS_SECRET%\" --output parameters.json .github/workflows/%PARAMETER_FILENAME%\nif %ERRORLEVEL% NEQ 0 (\n    echo [ERROR] failed to decrypt the test parameters \n    exit /b 1\n)\n\ngpg --quiet --batch --yes --decrypt --passphrase=\"%PARAMETERS_SECRET%\" --output rsa-2048-private-key.p8 .github/workflows/rsa-2048-private-key.p8.gpg\nif %ERRORLEVEL% NEQ 0 (\n    echo [ERROR] failed to decrypt the rsa-2048 private key\n    exit /b 1\n)\n\nREM Create directory structure for golang private key\nif not exist \".github\\workflows\\parameters\\public\" mkdir \".github\\workflows\\parameters\\public\"\n\ngpg --quiet --batch --yes --decrypt --passphrase=\"%GOLANG_PRIVATE_KEY_SECRET%\" --output .github\\workflows\\parameters\\public\\rsa_key_golang.p8 .github\\workflows\\parameters\\public\\%PRIVATE_KEY%\nif %ERRORLEVEL% NEQ 0 (\n    echo [ERROR] failed to decrypt the golang private key\n    exit /b 1\n)\n\necho 
@echo off>parameters.bat\njq -r \".testconnection | to_entries | map(\\\"set \\(.key)=\\(.value)\\\") | .[]\" parameters.json >> parameters.bat\ncall parameters.bat\nif %ERRORLEVEL% NEQ 0 (\n    echo [ERROR] failed to set the test parameters\n    exit /b 1\n)\n\necho [INFO] Account:   %SNOWFLAKE_TEST_ACCOUNT%\necho [INFO] User   :   %SNOWFLAKE_TEST_USER%\necho [INFO] Database:  %SNOWFLAKE_TEST_DATABASE%\necho [INFO] Warehouse: %SNOWFLAKE_TEST_WAREHOUSE%\necho [INFO] Role:      %SNOWFLAKE_TEST_ROLE%\n\ngo install github.com/jstemmer/go-junit-report/v2@latest\n\nREM Build coverpkg list excluding cmd/ packages\nset COVPKGS=\nfor /f \"usebackq delims=\" %%p in (`go list ./...`) do (\n    echo %%p | findstr /C:\"/cmd/\" >nul\n    if !ERRORLEVEL! NEQ 0 (\n        if \"!COVPKGS!\"==\"\" (\n            set COVPKGS=%%p\n        ) else (\n            set COVPKGS=!COVPKGS!,%%p\n        )\n    )\n)\n\nREM Test based on SEQUENTIAL_TESTS setting\nif \"%SEQUENTIAL_TESTS%\"==\"true\" (\n    REM Test each package separately to avoid buffering - real-time output but slower\n    echo [INFO] Running tests sequentially for real-time output\n\n    REM Clear any existing output file\n    if exist test-output.txt del test-output.txt\n\n    REM Track if any test failed\n    set TEST_FAILED=0\n\n    REM Loop through each package and test separately\n    for /f \"usebackq delims=\" %%p in (`go list ./...`) do (\n        set PKG=%%p\n        REM Convert full package path to relative path\n        set PKG_PATH=!PKG:github.com/snowflakedb/gosnowflake/v2=!\n        if \"!PKG_PATH!\"==\"\" (\n            set PKG_PATH=.\n        ) else (\n            set PKG_PATH=.!PKG_PATH!\n        )\n\n        echo === Testing package: !PKG_PATH! ===\n        echo === Testing package: !PKG_PATH! 
=== >> test-output.txt\n\n        REM Test package and append to output (no -race on Windows ARM)\n        REM Replace / with _ for coverage filename\n        set COV_FILE=!PKG_PATH:/=_!_coverage.txt\n        go test %GO_TEST_PARAMS% --timeout 90m -coverpkg=!COVPKGS! -coverprofile=!COV_FILE! -covermode=atomic -v !PKG_PATH! >> test-output.txt 2>&1\n\n        REM Track failure but continue testing other packages\n        if !ERRORLEVEL! NEQ 0 (\n            echo [ERROR] Package !PKG_PATH! tests failed\n            set TEST_FAILED=1\n        )\n    )\n\n    REM Merge coverage files\n    go install github.com/wadey/gocovmerge@latest\n    gocovmerge *_coverage.txt > coverage.txt\n    del *_coverage.txt\n\n    REM Set exit code based on whether any test failed\n    set TEST_EXIT=!TEST_FAILED!\n) else (\n    REM Test all packages with ./... - parallel, faster, but buffered\n    echo [INFO] Running tests in parallel\n    go test %GO_TEST_PARAMS% --timeout 90m -coverpkg=!COVPKGS! -coverprofile=coverage.txt -covermode=atomic -v ./... > test-output.txt 2>&1\n    set TEST_EXIT=!ERRORLEVEL!\n)\n\nREM Display the test output\ntype test-output.txt\n\nREM Generate JUnit report from the saved output\ntype test-output.txt | go-junit-report > test-report.junit.xml\n\nREM End local scope and exit with the test exit code\nendlocal & exit /b %TEST_EXIT%\n"
  },
  {
    "path": "ci/test.sh",
    "content": "#!/bin/bash\n#\n# Test Golang driver\n#\nset -e\nset -o pipefail\n\nCI_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\n\n$CI_DIR/scripts/run_wiremock.sh &\n\nif [[ -n \"$JENKINS_HOME\" ]]; then\n  ROOT_DIR=\"$(cd \"${CI_DIR}/..\" && pwd)\"\n  export WORKSPACE=${WORKSPACE:-/tmp}\n\n  source $CI_DIR/_init.sh\n\n  declare -A TARGET_TEST_IMAGES\n  if [[ -n \"$TARGET_DOCKER_TEST_IMAGE\" ]]; then\n      echo \"[INFO] TARGET_DOCKER_TEST_IMAGE: $TARGET_DOCKER_TEST_IMAGE\"\n      IMAGE_NAME=${TEST_IMAGE_NAMES[$TARGET_DOCKER_TEST_IMAGE]}\n      if [[ -z \"$IMAGE_NAME\" ]]; then\n          echo \"[ERROR] The target platform $TARGET_DOCKER_TEST_IMAGE doesn't exist. Check $CI_DIR/_init.sh\"\n          exit 1\n      fi\n      TARGET_TEST_IMAGES=([$TARGET_DOCKER_TEST_IMAGE]=$IMAGE_NAME)\n  else\n      echo \"[ERROR] Set TARGET_DOCKER_TEST_IMAGE to the docker image name to run the test\"\n      for name in \"${!TEST_IMAGE_NAMES[@]}\"; do\n          echo \"  \" $name\n      done\n      exit 2\n  fi\n\n  for name in \"${!TARGET_TEST_IMAGES[@]}\"; do\n      echo \"[INFO] Testing $DRIVER_NAME on $name\"\n      docker container run \\\n          --rm \\\n          --network=host \\\n          -v $ROOT_DIR:/mnt/host \\\n          -v $WORKSPACE:/mnt/workspace \\\n          -e LOCAL_USER_ID=$(id -u ${USER}) \\\n          -e GIT_COMMIT \\\n          -e GIT_BRANCH \\\n          -e GIT_URL \\\n          -e AWS_ACCESS_KEY_ID \\\n          -e AWS_SECRET_ACCESS_KEY \\\n          -e GITHUB_ACTIONS \\\n          -e GITHUB_SHA \\\n          -e GITHUB_REF \\\n          -e RUNNER_TRACKING_ID \\\n          -e JOB_NAME \\\n          -e BUILD_NUMBER \\\n          -e JENKINS_HOME \\\n          ${TEST_IMAGE_NAMES[$name]} \\\n          /mnt/host/ci/container/test_component.sh\n          echo \"[INFO] Test Results: $WORKSPACE/junit.xml\"\n  done\nelse\n  source $CI_DIR/scripts/setup_connection_parameters.sh\n  cd $CI_DIR/..\n  make test\nfi\n"
  },
  {
    "path": "ci/test_authentication.sh",
    "content": "#!/bin/bash -e\n\nset -o pipefail\n\nexport THIS_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\nsource \"$THIS_DIR/scripts/setup_gpg.sh\"\nexport WORKSPACE=${WORKSPACE:-/tmp}\n\nCI_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\nif [[ -n \"$JENKINS_HOME\" ]]; then\n  ROOT_DIR=\"$(cd \"${CI_DIR}/..\" && pwd)\"\n  export WORKSPACE=${WORKSPACE:-/tmp}\n\n  source $CI_DIR/_init.sh\n\n  echo \"Use /sbin/ip\"\n  IP_ADDR=$(/sbin/ip -4 addr show scope global dev eth0 | grep inet | awk '{print $2}' | cut -d / -f 1)\n\nfi\n\ngpg --quiet --batch --yes --decrypt --passphrase=\"$PARAMETERS_SECRET\" --output $THIS_DIR/../.github/workflows/parameters_aws_auth_tests.json \"$THIS_DIR/../.github/workflows/parameters_aws_auth_tests.json.gpg\"\ngpg --quiet --batch --yes --decrypt --passphrase=\"$PARAMETERS_SECRET\" --output $THIS_DIR/../.github/workflows/rsa_keys/rsa_key.p8 \"$THIS_DIR/../.github/workflows/rsa_keys/rsa_key.p8.gpg\"\ngpg --quiet --batch --yes --decrypt --passphrase=\"$PARAMETERS_SECRET\" --output $THIS_DIR/../.github/workflows/rsa_keys/rsa_key_invalid.p8 \"$THIS_DIR/../.github/workflows/rsa_keys/rsa_key_invalid.p8.gpg\"\n\ndocker run \\\n  -v $(cd $THIS_DIR/.. && pwd):/mnt/host \\\n  -v $WORKSPACE:/mnt/workspace \\\n  --rm \\\n  artifactory.ci1.us-west-2.aws-dev.app.snowflake.com/internal-production-docker-snowflake-virtual/docker/snowdrivers-test-external-browser-golang:8 \\\n  \"/mnt/host/ci/container/test_authentication.sh\"\n"
  },
  {
    "path": "ci/test_revocation.sh",
    "content": "#!/bin/bash\n#\n# Test certificate revocation validation using the revocation-validation framework.\n#\n\nset -o pipefail\n\nTHIS_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\nDRIVER_DIR=\"$( dirname \"${THIS_DIR}\")\"\nWORKSPACE=${WORKSPACE:-${DRIVER_DIR}}\n\necho \"[Info] Starting revocation validation tests\"\necho \"[Info] Go driver path: $DRIVER_DIR\"\n\nset -e\n\n# Clone revocation-validation framework\nREVOCATION_DIR=\"/tmp/revocation-validation\"\nREVOCATION_BRANCH=\"${REVOCATION_BRANCH:-main}\"\n\nrm -rf \"$REVOCATION_DIR\"\nif [ -n \"$GITHUB_USER\" ] && [ -n \"$GITHUB_TOKEN\" ]; then\n    git clone --depth 1 --branch \"$REVOCATION_BRANCH\" \"https://${GITHUB_USER}:${GITHUB_TOKEN}@github.com/snowflake-eng/revocation-validation.git\" \"$REVOCATION_DIR\"\nelse\n    git clone --depth 1 --branch \"$REVOCATION_BRANCH\" \"https://github.com/snowflake-eng/revocation-validation.git\" \"$REVOCATION_DIR\"\nfi\n\ncd \"$REVOCATION_DIR\"\n\n# Point the framework at the local Go driver checkout\ngo mod edit -replace \"github.com/snowflakedb/gosnowflake/v2=${DRIVER_DIR}\"\ngo mod tidy\necho \"[Info] Replaced gosnowflake module with local checkout: $DRIVER_DIR\"\n\necho \"[Info] Running tests with Go $(go version | grep -oE 'go[0-9]+\\.[0-9]+')...\"\n\ngo run . \\\n    --client snowflake \\\n    --output \"${WORKSPACE}/revocation-results.json\" \\\n    --output-html \"${WORKSPACE}/revocation-report.html\" \\\n    --log-level debug\n\nEXIT_CODE=$?\n\nif [ -f \"${WORKSPACE}/revocation-results.json\" ]; then\n    echo \"[Info] Results: ${WORKSPACE}/revocation-results.json\"\nfi\nif [ -f \"${WORKSPACE}/revocation-report.html\" ]; then\n    echo \"[Info] Report: ${WORKSPACE}/revocation-report.html\"\nfi\n\nexit $EXIT_CODE\n"
  },
  {
    "path": "ci/test_rockylinux9.sh",
    "content": "#!/bin/bash -e\n#\n# Test GoSnowflake driver in Rocky Linux 9\n# NOTES:\n#   - Go version MUST be passed in as the first argument, e.g: \"1.24.2\"\n#   - This is the script that test_rockylinux9_docker.sh runs inside of the docker container\n\nif [[ -z \"${1}\" ]]; then\n    echo \"[ERROR] Go version is required as first argument (e.g., '1.24.2')\"\n    echo \"Usage: $0 <go_version>\"\n    exit 1\nfi\n\nGO_VERSION=\"${1}\"\nTHIS_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\nCONNECTOR_DIR=\"$( dirname \"${THIS_DIR}\")\"\n\n# Validate prerequisites\nif [[ ! -f \"${CONNECTOR_DIR}/parameters.json\" ]]; then\n    echo \"[ERROR] parameters.json not found - connection parameters must be decrypted first\"\n    exit 1\nfi\n\nif [[ ! -f \"${CONNECTOR_DIR}/.github/workflows/parameters/public/rsa_key_golang.p8\" ]]; then\n    echo \"[ERROR] Private key not found - must be decrypted first\"  \n    exit 1\nfi\n\n# Setup Go environment\necho \"[Info] Using Go ${GO_VERSION}\"\n\n# Extract short version for wrapper script\nGO_VERSION_SHORT=$(echo ${GO_VERSION} | cut -d. -f1,2)\n\nif ! 
command -v go${GO_VERSION_SHORT} &> /dev/null; then\n    echo \"[ERROR] Go ${GO_VERSION_SHORT} not found!\"\n    exit 1\nfi\n\n# Set GOROOT to short version directory (e.g., /usr/local/go1.24)  \nexport GOROOT=\"/usr/local/go${GO_VERSION_SHORT}\"\nexport PATH=\"${GOROOT}/bin:$PATH\"\nexport GOPATH=\"/home/user/go\"\nexport PATH=\"$GOPATH/bin:$PATH\"\n\necho \"[Info] Go ${GO_VERSION} version: $(go version)\"\n\ncd $CONNECTOR_DIR\n\necho \"[Info] Downloading Go modules\"\ngo mod download\n\n# Load connection parameters\neval $(jq -r '.testconnection | to_entries | map(\"export \\(.key)=\\(.value|tostring)\")|.[]' ${CONNECTOR_DIR}/parameters.json)\nexport SNOWFLAKE_TEST_PRIVATE_KEY=\"${CONNECTOR_DIR}/.github/workflows/parameters/public/rsa_key_golang.p8\"\n\n# Start WireMock  \n${CONNECTOR_DIR}/ci/scripts/run_wiremock.sh &\n\n# Run tests using make test\ncd ${CONNECTOR_DIR}\nmake test\n"
  },
  {
    "path": "ci/test_rockylinux9_docker.sh",
    "content": "#!/bin/bash -e\n# Test GoSnowflake driver in Rocky Linux 9 Docker\n# NOTES:\n#   - Go version MUST be specified as first argument\n#   - Usage: ./test_rockylinux9_docker.sh \"1.24.2\"\n\nset -o pipefail\n\nif [[ -z \"${1}\" ]]; then\n    echo \"[ERROR] Go version is required as first argument (e.g., '1.24.2')\"\n    echo \"Usage: $0 <go_version>\"\n    exit 1\nfi\n\nGO_ENV=${1}\n\n# Set constants\nTHIS_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\nCONNECTOR_DIR=\"$( dirname \"${THIS_DIR}\")\"\nWORKSPACE=${WORKSPACE:-${CONNECTOR_DIR}}\n\n# TODO: Uncomment when set_base_image.sh is created for Go\n# source $THIS_DIR/set_base_image.sh\n\ncd $THIS_DIR/docker/rockylinux9\n\nCONTAINER_NAME=test_gosnowflake_rockylinux9\n\necho \"[Info] Building docker image for Rocky Linux 9 with Go ${GO_ENV}\"\n\n# Get current user/group IDs to match host permissions\nUSER_ID=$(id -u)\nGROUP_ID=$(id -g)\n\ndocker build --pull -t ${CONTAINER_NAME}:1.0 \\\n    --build-arg BASE_IMAGE=rockylinux:9 \\\n    --build-arg GO_VERSION=$GO_ENV \\\n    --build-arg USER_ID=$USER_ID \\\n    --build-arg GROUP_ID=$GROUP_ID \\\n    . -f Dockerfile\n\n# Use setup_connection_parameters.sh like native jobs (outside container)\nif [[ \"$GITHUB_ACTIONS\" == \"true\" ]]; then\n    source ${CONNECTOR_DIR}/ci/scripts/setup_connection_parameters.sh\nfi\n\ndocker run --network=host \\\n    -e TERM=vt102 \\\n    -e JENKINS_HOME \\\n    -e GITHUB_ACTIONS \\\n    -e CLOUD_PROVIDER \\\n    -e GO_TEST_PARAMS \\\n    -e WIREMOCK_PORT \\\n    -e WIREMOCK_HTTPS_PORT \\\n    --mount type=bind,source=\"${CONNECTOR_DIR}\",target=/home/user/gosnowflake \\\n    ${CONTAINER_NAME}:1.0 \\\n    ci/test_rockylinux9.sh ${GO_ENV}\n"
  },
  {
    "path": "ci/test_wif.sh",
    "content": "#!/bin/bash -e\n\nset -o pipefail\n\nexport THIS_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\nexport RSA_KEY_PATH_AWS_AZURE=\"$THIS_DIR/wif/parameters/rsa_wif_aws_azure\"\nexport RSA_KEY_PATH_GCP=\"$THIS_DIR/wif/parameters/rsa_wif_gcp\"\nexport PARAMETERS_FILE_PATH=\"$THIS_DIR/wif/parameters/parameters_wif.json\"\n\nrun_tests_and_set_result() {\n  local provider=\"$1\"\n  local host=\"$2\"\n  local snowflake_host=\"$3\"\n  local rsa_key_path=\"$4\"\n  local snowflake_user=\"$5\"\n  local impersonation_path=\"$6\"\n  local snowflake_user_for_impersonation=\"$7\"\n\n  # NOTE: /home/user is the only dir we can write to (SNOW-2231498 to improve WORKDIR)\n  ssh -i \"$rsa_key_path\" -o IdentitiesOnly=yes -p 443 \"$host\" env BRANCH=\"$BRANCH\" SNOWFLAKE_TEST_WIF_HOST=\"$snowflake_host\" SNOWFLAKE_TEST_WIF_PROVIDER=\"$provider\" SNOWFLAKE_TEST_WIF_ACCOUNT=\"$SNOWFLAKE_TEST_WIF_ACCOUNT SNOWFLAKE_TEST_WIF_USERNAME=\"$snowflake_user\" SNOWFLAKE_TEST_WIF_IMPERSONATION_PATH=\"$impersonation_path\" SNOWFLAKE_TEST_WIF_USERNAME_IMPERSONATION=\"$snowflake_user_for_impersonation\"\" bash << EOF\n      set -e\n      set -o pipefail\n      docker run \\\n        --rm \\\n        --cpus=1 \\\n        -m 2g \\\n        -e BRANCH \\\n        -e SNOWFLAKE_TEST_WIF_PROVIDER \\\n        -e SNOWFLAKE_TEST_WIF_HOST \\\n        -e SNOWFLAKE_TEST_WIF_ACCOUNT \\\n        -e SNOWFLAKE_TEST_WIF_USERNAME \\\n        -e SNOWFLAKE_TEST_WIF_IMPERSONATION_PATH \\\n        -e SNOWFLAKE_TEST_WIF_USERNAME_IMPERSONATION \\\n        snowflakedb/client-go-chainguard-go1.24-test:1 \\\n          bash -c \"\n            cd /home/user\n            echo 'Running tests on branch: \\$BRANCH, provider: \\$SNOWFLAKE_TEST_WIF_PROVIDER'\n            if [[ \\\"\\$BRANCH\\\" =~ ^PR-[0-9]+\\$ ]]; then\n              wget -O - https://github.com/snowflakedb/gosnowflake/archive/refs/pull/\\$(echo \\$BRANCH | cut -d- -f2)/head.tar.gz | tar -xz\n            else\n              wget -O - 
https://github.com/snowflakedb/gosnowflake/archive/refs/heads/$BRANCH.tar.gz | tar -xz\n            fi\n            mv gosnowflake-* gosnowflake\n            cd gosnowflake\n            SKIP_SETUP=true go test -v -run TestWorkloadIdentityAuthOnCloudVM\n          \"\nEOF\n  local status=$?\n\n  if [[ $status -ne 0 ]]; then\n    echo \"$provider tests failed with exit status: $status\"\n    EXIT_STATUS=1\n  else\n    echo \"$provider tests passed\"\n  fi\n}\n\nget_branch() {\n  local branch\n  if [[ -n \"${GIT_BRANCH}\" ]]; then\n    # Jenkins\n    branch=\"${GIT_BRANCH}\"\n  else\n    # Local\n    branch=$(git rev-parse --abbrev-ref HEAD)\n  fi\n  echo \"${branch}\"\n}\n\nsetup_parameters() {\n  source \"$THIS_DIR/scripts/setup_gpg.sh\"\n  gpg --quiet --batch --yes --decrypt --passphrase=\"$PARAMETERS_SECRET\" --output \"$RSA_KEY_PATH_AWS_AZURE\" \"${RSA_KEY_PATH_AWS_AZURE}.gpg\"\n  gpg --quiet --batch --yes --decrypt --passphrase=\"$PARAMETERS_SECRET\" --output \"$RSA_KEY_PATH_GCP\" \"${RSA_KEY_PATH_GCP}.gpg\"\n  chmod 600 \"$RSA_KEY_PATH_AWS_AZURE\"\n  chmod 600 \"$RSA_KEY_PATH_GCP\"\n  gpg --quiet --batch --yes --decrypt --passphrase=\"$PARAMETERS_SECRET\" --output \"$PARAMETERS_FILE_PATH\" \"${PARAMETERS_FILE_PATH}.gpg\"\n  eval $(jq -r '.wif | to_entries | map(\"export \\(.key)=\\(.value|tostring)\")|.[]' $PARAMETERS_FILE_PATH)\n}\n\nBRANCH=$(get_branch)\nexport BRANCH\nsetup_parameters\n\n# Run tests for all cloud providers\nEXIT_STATUS=0\nset +e  # Don't exit on first failure\nrun_tests_and_set_result \"AZURE\" \"$HOST_AZURE\" \"$SNOWFLAKE_TEST_WIF_HOST_AZURE\" \"$RSA_KEY_PATH_AWS_AZURE\" \"$SNOWFLAKE_TEST_WIF_USERNAME_AZURE\"\nrun_tests_and_set_result \"AWS\" \"$HOST_AWS\" \"$SNOWFLAKE_TEST_WIF_HOST_AWS\" \"$RSA_KEY_PATH_AWS_AZURE\" \"$SNOWFLAKE_TEST_WIF_USERNAME_AWS\" \"$SNOWFLAKE_TEST_WIF_IMPERSONATION_PATH_AWS\" \"$SNOWFLAKE_TEST_WIF_USERNAME_AWS_IMPERSONATION\"\nrun_tests_and_set_result \"GCP\" \"$HOST_GCP\" \"$SNOWFLAKE_TEST_WIF_HOST_GCP\" 
\"$RSA_KEY_PATH_GCP\" \"$SNOWFLAKE_TEST_WIF_USERNAME_GCP\" \"$SNOWFLAKE_TEST_WIF_IMPERSONATION_PATH_GCP\" \"$SNOWFLAKE_TEST_WIF_USERNAME_GCP_IMPERSONATION\"\nrun_tests_and_set_result \"GCP+OIDC\" \"$HOST_GCP\" \"$SNOWFLAKE_TEST_WIF_HOST_GCP\" \"$RSA_KEY_PATH_GCP\" \"$SNOWFLAKE_TEST_WIF_USERNAME_GCP_OIDC\"\nset -e  # Re-enable exit on error\necho \"Exit status: $EXIT_STATUS\"\nexit $EXIT_STATUS\n"
  },
  {
    "path": "ci/wif/parameters/parameters_wif.json.gpg",
    "content": "\r\u0004\t\u0003\b\u0017'QW\u0005-q\b\u0001d\rYêőTkv5F2яyD`\u0016mw\u0016GL\u0016Wݽd_\\'q6T*'9\u0001_֮t\u000b%?wļHbZvfwӘ].\u0016\u0016\u0017h\\Θ_&uzT[&1G\u00140=)}V\u0016;\u0005j ==X;E\u0003\u0012(\u000f,k7I&\u0014\u0019@ŕZק\u0018З\u0010\u001b$>-\u0006@Ʈ9y\u00070IF-;U']x)A5'D\u001a+$\u000f\u000f>3ܒ\u0019A\u0016Ư\u0011~9?):}*m}]7^,e@!\\Cl\u000f6ą\u001aUi\u0004\u001f-@9!k\u0007\u001c&V\u001fgN\u0005 G{\u001a\u0016h'\u0013bw3/\u0012\n>QX\u0007ZjZ\u0001ub\r'D\u0014.\u001e# \u0018\u0017{Dj'̪Tŋ,%QH5\u0013"
  },
  {
    "path": "ci/wif/parameters/rsa_wif_aws_azure.gpg",
    "content": "\r\u0004\t\u0003\b髃6K\u00015%܇ټ飐\u000f|eRk]n\u000fc-TloB,\u0011ܐ͒R7B]<ER-|\u00044u'K2<\u0013:BC&fԳeX%i9\u00060@LG\u0011$a\u0004hOnTejB@\u001a)~Nto&\u0003Ȍ\u0012\u0013ru/i\u001cuq%0\u0012!]ݮ2-lw\u0018`!ʚgF߬\u0019i\u0010DDĘX\u0013..d\"^\ruA,ۅA\u000es\u001a\\ؕa\u000ebf\u0007\u0016^qF\u001c\nʷ5S\u0010_|{<镉!Y?\\9']|)\u0015RZF+ZN\u001eZc9~Sd+5\u001d1ޱX\u001e\u0018\n{TgG_t5Fw\u0011sJ9\u0011f`+!M"
  },
  {
    "path": "ci/wif/parameters/rsa_wif_gcp.gpg",
    "content": "\r\u0004\t\u0003\b髃6K\u00015%܇ټ飐\u000f|eRk]n\u000fc-TloB,\u0011ܐ͒R7B]<ER-|\u00044u'K2<\u0013:BC&fԳeX%i9\u00060@LG\u0011$a\u0004hOnTejB@\u001a)~Nto&\u0003Ȍ\u0012\u0013ru/i\u001cuq%0\u0012!]ݮ2-lw\u0018`!ʚgF߬\u0019i\u0010DDĘX\u0013..d\"^\ruA,ۅA\u000es\u001a\\ؕa\u000ebf\u0007\u0016^qF\u001c\nʷ5S\u0010_|{<镉!Y?\\9']|)\u0015RZF+ZN\u001eZc9~Sd+5\u001d1ޱX\u001e\u0018\n{TgG_t5Fw\u0011sJ9\u0011f`+!M"
  },
  {
    "path": "client.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"time\"\n)\n\n// InternalClient is implemented by HTTPClient\n// Deprecated: this will be removed in a future release.\ntype InternalClient interface {\n\tGet(context.Context, *url.URL, map[string]string, time.Duration) (*http.Response, error)\n\tPost(context.Context, *url.URL, map[string]string, []byte, time.Duration, currentTimeProvider) (*http.Response, error)\n}\n\ntype httpClient struct {\n\tsr *snowflakeRestful\n}\n\nfunc (cli *httpClient) Get(\n\tctx context.Context,\n\turl *url.URL,\n\theaders map[string]string,\n\ttimeout time.Duration) (*http.Response, error) {\n\treturn cli.sr.FuncGet(ctx, cli.sr, url, headers, timeout)\n}\n\nfunc (cli *httpClient) Post(\n\tctx context.Context,\n\turl *url.URL,\n\theaders map[string]string,\n\tbody []byte,\n\ttimeout time.Duration,\n\tcurrentTimeProvider currentTimeProvider) (*http.Response, error) {\n\treturn cli.sr.FuncPost(ctx, cli.sr, url, headers, body, timeout, currentTimeProvider, nil)\n}\n"
  },
  {
    "path": "client_configuration.go",
    "content": "package gosnowflake\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"strings\"\n)\n\n// log levels for easy logging\nconst (\n\tlevelOff   string = \"OFF\"   // log level for logging switched off\n\tlevelError string = \"ERROR\" // error log level\n\tlevelWarn  string = \"WARN\"  // warn log level\n\tlevelInfo  string = \"INFO\"  // info log level\n\tlevelDebug string = \"DEBUG\" // debug log level\n\tlevelTrace string = \"TRACE\" // trace log level\n)\n\nconst (\n\tdefaultConfigName = \"sf_client_config.json\"\n\tclientConfEnvName = \"SF_CLIENT_CONFIG_FILE\"\n)\n\nfunc getClientConfig(filePathFromConnectionString string) (*ClientConfig, string, error) {\n\tconfigPredefinedDirPaths := clientConfigPredefinedDirs()\n\tfilePath, err := findClientConfigFilePath(filePathFromConnectionString, configPredefinedDirPaths)\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\tif filePath == \"\" { // we did not find a config file\n\t\treturn nil, \"\", nil\n\t}\n\tconfig, err := parseClientConfiguration(filePath)\n\treturn config, filePath, err\n}\n\nfunc findClientConfigFilePath(filePathFromConnectionString string, configPredefinedDirs []string) (string, error) {\n\tif filePathFromConnectionString != \"\" {\n\t\tlogger.Infof(\"Using client configuration path from a connection string: %s\", filePathFromConnectionString)\n\t\treturn filePathFromConnectionString, nil\n\t}\n\tenvConfigFilePath := os.Getenv(clientConfEnvName)\n\tif envConfigFilePath != \"\" {\n\t\tlogger.Infof(\"Using client configuration path from an environment variable: %s\", envConfigFilePath)\n\t\treturn envConfigFilePath, nil\n\t}\n\treturn searchForConfigFile(configPredefinedDirs)\n}\n\nfunc searchForConfigFile(directories []string) (string, error) {\n\tfor _, dir := range directories {\n\t\tfilePath := path.Join(dir, defaultConfigName)\n\t\texists, err := existsFile(filePath)\n\t\tif err != nil {\n\t\t\treturn \"\", 
fmt.Errorf(\"error while searching for client config in directory: %s, err: %w\", dir, err)\n\t\t}\n\t\tif exists {\n\t\t\tlogger.Infof(\"Using client configuration from a default directory: %s\", filePath)\n\t\t\treturn filePath, nil\n\t\t}\n\t\tlogger.Debugf(\"No client config found in directory: %s\", dir)\n\t}\n\tlogger.Info(\"No client config file found in default directories\")\n\treturn \"\", nil\n}\n\nfunc existsFile(filePath string) (bool, error) {\n\t_, err := os.Stat(filePath)\n\tif err == nil {\n\t\treturn true, nil\n\t}\n\tif errors.Is(err, os.ErrNotExist) {\n\t\treturn false, nil\n\t}\n\treturn false, err\n}\n\nfunc clientConfigPredefinedDirs() []string {\n\tvar predefinedDirs []string\n\texeFile, err := os.Executable()\n\tif err != nil {\n\t\tlogger.Warnf(\"Unable to access the application directory for client configuration search, err: %v\", err)\n\t} else {\n\t\tpredefinedDirs = append(predefinedDirs, filepath.Dir(exeFile))\n\t}\n\thomeDir, err := os.UserHomeDir()\n\tif err != nil {\n\t\tlogger.Warnf(\"Unable to access Home directory for client configuration search, err: %v\", err)\n\t} else {\n\t\tpredefinedDirs = append(predefinedDirs, homeDir)\n\t}\n\tif predefinedDirs == nil {\n\t\treturn []string{}\n\t}\n\treturn predefinedDirs\n}\n\n// ClientConfig config root\ntype ClientConfig struct {\n\tCommon *ClientConfigCommonProps `json:\"common\"`\n}\n\n// ClientConfigCommonProps properties from \"common\" section\ntype ClientConfigCommonProps struct {\n\tLogLevel string `json:\"log_level,omitempty\"`\n\tLogPath  string `json:\"log_path,omitempty\"`\n}\n\nfunc parseClientConfiguration(filePath string) (*ClientConfig, error) {\n\tif filePath == \"\" {\n\t\treturn nil, nil\n\t}\n\t// Check if group (5th LSB) or others (2nd LSB) have a write permission to the file\n\texpectedPerm := os.FileMode(1<<4 | 1<<1)\n\tfileContents, err := getFileContents(filePath, expectedPerm)\n\tif err != nil {\n\t\treturn nil, parsingClientConfigError(err)\n\t}\n\tvar 
clientConfig ClientConfig\n\terr = json.Unmarshal(fileContents, &clientConfig)\n\tif err != nil {\n\t\treturn nil, parsingClientConfigError(err)\n\t}\n\tunknownValues := getUnknownValues(fileContents)\n\tif len(unknownValues) > 0 {\n\t\tfor val := range unknownValues {\n\t\t\tlogger.Warnf(\"Unknown configuration entry: %s with value: %s\", val, unknownValues[val])\n\t\t}\n\t}\n\terr = validateClientConfiguration(&clientConfig)\n\tif err != nil {\n\t\treturn nil, parsingClientConfigError(err)\n\t}\n\treturn &clientConfig, nil\n}\n\nfunc getUnknownValues(fileContents []byte) map[string]any {\n\tvar values map[string]any\n\terr := json.Unmarshal(fileContents, &values)\n\tif err != nil {\n\t\treturn nil\n\t}\n\tif values[\"common\"] == nil {\n\t\treturn nil\n\t}\n\tcommonValues := values[\"common\"].(map[string]any)\n\tlowercaseCommonValues := make(map[string]any, len(commonValues))\n\tfor k, v := range commonValues {\n\t\tlowercaseCommonValues[strings.ToLower(k)] = v\n\t}\n\tdelete(lowercaseCommonValues, \"log_level\")\n\tdelete(lowercaseCommonValues, \"log_path\")\n\treturn lowercaseCommonValues\n}\n\nfunc parsingClientConfigError(err error) error {\n\treturn fmt.Errorf(\"parsing client config failed: %w\", err)\n}\n\nfunc validateClientConfiguration(clientConfig *ClientConfig) error {\n\tif clientConfig == nil {\n\t\treturn errors.New(\"client config not found\")\n\t}\n\tif clientConfig.Common == nil {\n\t\treturn errors.New(\"common section in client config not found\")\n\t}\n\treturn validateLogLevel(*clientConfig)\n}\n\nfunc validateLogLevel(clientConfig ClientConfig) error {\n\tvar logLevel = clientConfig.Common.LogLevel\n\tif logLevel != \"\" {\n\t\t_, err := toLogLevel(logLevel)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc toLogLevel(logLevelString string) (string, error) {\n\tvar logLevel = strings.ToUpper(logLevelString)\n\tswitch logLevel {\n\tcase levelOff, levelError, levelWarn, levelInfo, levelDebug, levelTrace:\n\t\treturn 
logLevel, nil\n\tdefault:\n\t\treturn \"\", errors.New(\"unknown log level: \" + logLevelString)\n\t}\n}\n"
  },
  {
    "path": "client_configuration_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestFindConfigFileFromConnectionParameters(t *testing.T) {\n\tdirs := createTestDirectories(t)\n\tconnParameterConfigPath := createFile(t, \"conn_parameters_config.json\", \"random content\", dirs.dir)\n\tenvConfigPath := createFile(t, \"env_var_config.json\", \"random content\", dirs.dir)\n\tt.Setenv(clientConfEnvName, envConfigPath)\n\tcreateFile(t, defaultConfigName, \"random content\", dirs.predefinedDir1)\n\tcreateFile(t, defaultConfigName, \"random content\", dirs.predefinedDir2)\n\n\tclientConfigFilePath, err := findClientConfigFilePath(connParameterConfigPath, predefinedTestDirs(dirs))\n\n\tassertEqualE(t, err, nil)\n\tassertEqualE(t, clientConfigFilePath, connParameterConfigPath, \"config file path\")\n}\n\nfunc TestFindConfigFileFromEnvVariable(t *testing.T) {\n\tdirs := createTestDirectories(t)\n\tenvConfigPath := createFile(t, \"env_var_config.json\", \"random content\", dirs.dir)\n\tt.Setenv(clientConfEnvName, envConfigPath)\n\tcreateFile(t, defaultConfigName, \"random content\", dirs.predefinedDir1)\n\tcreateFile(t, defaultConfigName, \"random content\", dirs.predefinedDir2)\n\n\tclientConfigFilePath, err := findClientConfigFilePath(\"\", predefinedTestDirs(dirs))\n\n\tassertEqualE(t, err, nil)\n\tassertEqualE(t, clientConfigFilePath, envConfigPath, \"config file path\")\n}\n\nfunc TestFindConfigFileFromFirstPredefinedDir(t *testing.T) {\n\tdirs := createTestDirectories(t)\n\tconfigPath := createFile(t, defaultConfigName, \"random content\", dirs.predefinedDir1)\n\tcreateFile(t, defaultConfigName, \"random content\", dirs.predefinedDir2)\n\n\tclientConfigFilePath, err := findClientConfigFilePath(\"\", predefinedTestDirs(dirs))\n\n\tassertEqualE(t, err, nil)\n\tassertEqualE(t, clientConfigFilePath, configPath, \"config file path\")\n}\n\nfunc TestFindConfigFileFromSubsequentDirectoryIfNotFoundInPreviousOne(t 
*testing.T) {\n\tdirs := createTestDirectories(t)\n\tcreateFile(t, \"wrong_file_name.json\", \"random content\", dirs.predefinedDir1)\n\tconfigPath := createFile(t, defaultConfigName, \"random content\", dirs.predefinedDir2)\n\n\tclientConfigFilePath, err := findClientConfigFilePath(\"\", predefinedTestDirs(dirs))\n\n\tassertEqualE(t, err, nil)\n\tassertEqualE(t, clientConfigFilePath, configPath, \"config file path\")\n}\n\nfunc TestNotFindConfigFileWhenNotDefined(t *testing.T) {\n\tdirs := createTestDirectories(t)\n\tcreateFile(t, \"wrong_file_name.json\", \"random content\", dirs.predefinedDir1)\n\tcreateFile(t, \"wrong_file_name.json\", \"random content\", dirs.predefinedDir2)\n\n\tclientConfigFilePath, err := findClientConfigFilePath(\"\", predefinedTestDirs(dirs))\n\n\tassertEqualE(t, err, nil)\n\tassertEqualE(t, clientConfigFilePath, \"\", \"config file path\")\n}\n\nfunc TestCreatePredefinedDirs(t *testing.T) {\n\tskipOnMissingHome(t)\n\texeDir, _ := os.Executable()\n\tappDir := filepath.Dir(exeDir)\n\thomeDir, err := os.UserHomeDir()\n\tassertNilF(t, err, \"get home dir error\")\n\n\tlocations := clientConfigPredefinedDirs()\n\n\tassertEqualF(t, len(locations), 2, \"size\")\n\tassertEqualE(t, locations[0], appDir, \"driver directory\")\n\tassertEqualE(t, locations[1], homeDir, \"home directory\")\n}\n\nfunc TestGetClientConfig(t *testing.T) {\n\tdir := t.TempDir()\n\tfileName := \"config.json\"\n\tconfigContents := createClientConfigContent(\"INFO\", \"/some-path/some-directory\")\n\tcreateFile(t, fileName, configContents, dir)\n\tfilePath := path.Join(dir, fileName)\n\n\tclientConfigFilePath, _, err := getClientConfig(filePath)\n\n\tassertNilF(t, err)\n\tassertNotNilF(t, clientConfigFilePath)\n\tassertEqualE(t, clientConfigFilePath.Common.LogLevel, \"INFO\", \"log level\")\n\tassertEqualE(t, clientConfigFilePath.Common.LogPath, \"/some-path/some-directory\", \"log path\")\n}\n\nfunc TestNoResultForGetClientConfigWhenNoFileFound(t *testing.T) 
{\n\tclientConfigFilePath, _, err := getClientConfig(\"\")\n\n\tassertNilF(t, err)\n\tassertNilF(t, clientConfigFilePath)\n}\n\nfunc TestParseConfiguration(t *testing.T) {\n\tdir := t.TempDir()\n\ttestCases := []struct {\n\t\ttestName         string\n\t\tfileName         string\n\t\tfileContents     string\n\t\texpectedLogLevel string\n\t\texpectedLogPath  string\n\t}{\n\t\t{\n\t\t\ttestName:         \"TestWithLogLevelUpperCase\",\n\t\t\tfileName:         \"config_1.json\",\n\t\t\tfileContents:     createClientConfigContent(\"INFO\", \"/some-path/some-directory\"),\n\t\t\texpectedLogLevel: \"INFO\",\n\t\t\texpectedLogPath:  \"/some-path/some-directory\",\n\t\t},\n\t\t{\n\t\t\ttestName:         \"TestWithLogLevelLowerCase\",\n\t\t\tfileName:         \"config_2.json\",\n\t\t\tfileContents:     createClientConfigContent(\"info\", \"/some-path/some-directory\"),\n\t\t\texpectedLogLevel: \"info\",\n\t\t\texpectedLogPath:  \"/some-path/some-directory\",\n\t\t},\n\t\t{\n\t\t\ttestName: \"TestWithMissingValues\",\n\t\t\tfileName: \"config_3.json\",\n\t\t\tfileContents: `{\n\t\t\t\t\"common\": {}\n\t\t\t}`,\n\t\t\texpectedLogLevel: \"\",\n\t\t\texpectedLogPath:  \"\",\n\t\t},\n\t}\n\tfor _, tc := range testCases {\n\t\tt.Run(tc.testName, func(t *testing.T) {\n\t\t\tfileName := createFile(t, tc.fileName, tc.fileContents, dir)\n\n\t\t\tconfig, err := parseClientConfiguration(fileName)\n\n\t\t\tassertNilF(t, err, \"parse client configuration error\")\n\t\t\tassertEqualE(t, config.Common.LogLevel, tc.expectedLogLevel, \"log level\")\n\t\t\tassertEqualE(t, config.Common.LogPath, tc.expectedLogPath, \"log path\")\n\t\t})\n\t}\n}\n\nfunc TestParseAllLogLevels(t *testing.T) {\n\tdir := t.TempDir()\n\tfor _, logLevel := range []string{\"OFF\", \"ERROR\", \"WARN\", \"INFO\", \"DEBUG\", \"TRACE\"} {\n\t\tt.Run(logLevel, func(t *testing.T) {\n\t\t\tfileContents := fmt.Sprintf(`{\n\t\t\t\t\"common\": {\n\t\t\t\t\t\"log_level\" : \"%s\",\n\t\t\t\t\t\"log_path\" : 
\"/some-path/some-directory\"\n\t\t\t\t}\n\t\t\t}`, logLevel)\n\t\t\tfileName := createFile(t, fmt.Sprintf(\"config_%s.json\", logLevel), fileContents, dir)\n\n\t\t\tconfig, err := parseClientConfiguration(fileName)\n\n\t\t\tassertNilF(t, err, \"parse client config error\")\n\t\t\tassertEqualE(t, config.Common.LogLevel, logLevel, \"log level\")\n\t\t})\n\t}\n}\n\nfunc TestParseConfigurationFails(t *testing.T) {\n\tdir := t.TempDir()\n\ttestCases := []struct {\n\t\ttestName                      string\n\t\tfileName                      string\n\t\tFileContents                  string\n\t\texpectedErrorMessageToContain string\n\t}{\n\t\t{\n\t\t\ttestName:                      \"TestWithWrongLogLevel\",\n\t\t\tfileName:                      \"config_1.json\",\n\t\t\tFileContents:                  createClientConfigContent(\"something weird\", \"/some-path/some-directory\"),\n\t\t\texpectedErrorMessageToContain: \"unknown log level\",\n\t\t},\n\t\t{\n\t\t\ttestName: \"TestWithWrongTypeOfLogLevel\",\n\t\t\tfileName: \"config_2.json\",\n\t\t\tFileContents: `{\n\t\t\t\t\"common\": {\n\t\t\t\t\t\"log_level\" : 15,\n\t\t\t\t\t\"log_path\" : \"/some-path/some-directory\"\n\t\t\t\t}\n\t\t\t}`,\n\t\t\texpectedErrorMessageToContain: \"ClientConfigCommonProps.common.log_level\",\n\t\t},\n\t\t{\n\t\t\ttestName: \"TestWithWrongTypeOfLogPath\",\n\t\t\tfileName: \"config_3.json\",\n\t\t\tFileContents: `{\n\t\t\t\t\"common\": {\n\t\t\t\t\t\"log_level\" : \"INFO\",\n\t\t\t\t\t\"log_path\" : true\n\t\t\t\t}\n\t\t\t}`,\n\t\t\texpectedErrorMessageToContain: \"ClientConfigCommonProps.common.log_path\",\n\t\t},\n\t\t{\n\t\t\ttestName:                      \"TestWithoutCommon\",\n\t\t\tfileName:                      \"config_4.json\",\n\t\t\tFileContents:                  \"{}\",\n\t\t\texpectedErrorMessageToContain: \"common section in client config not found\",\n\t\t},\n\t}\n\tfor _, tc := range testCases {\n\t\tt.Run(tc.testName, func(t *testing.T) {\n\t\t\tfileName := createFile(t, 
tc.fileName, tc.FileContents, dir)\n\n\t\t\t_, err := parseClientConfiguration(fileName)\n\n\t\t\tassertNotNilF(t, err, \"parse client configuration error\")\n\t\t\terrMessage := fmt.Sprint(err)\n\t\t\texpectedPrefix := \"parsing client config failed\"\n\t\t\tassertHasPrefixE(t, errMessage, expectedPrefix, \"error message\")\n\t\t\tassertStringContainsE(t, errMessage, tc.expectedErrorMessageToContain, \"error message\")\n\t\t})\n\t}\n}\n\nfunc TestUnknownValues(t *testing.T) {\n\ttestCases := []struct {\n\t\ttestName       string\n\t\tinputString    string\n\t\texpectedOutput map[string]string\n\t}{\n\t\t{\n\t\t\ttestName: \"EmptyCommon\",\n\t\t\tinputString: `{\n\t\t\t\t\"common\": {}\n\t\t\t}`,\n\t\t\texpectedOutput: map[string]string{},\n\t\t},\n\t\t{\n\t\t\ttestName: \"CommonMissing\",\n\t\t\tinputString: `{\n\t\t\t}`,\n\t\t\texpectedOutput: map[string]string{},\n\t\t},\n\t\t{\n\t\t\ttestName: \"UnknownProperty\",\n\t\t\tinputString: `{\n\t\t\t\t\"common\": {\n\t\t\t\t\t\"unknown_key\": \"unknown_value\"\n\t\t\t\t}\n\t\t\t}`,\n\t\t\texpectedOutput: map[string]string{\n\t\t\t\t\"unknown_key\": \"unknown_value\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttestName: \"KnownAndUnknownProperty\",\n\t\t\tinputString: `{\n\t\t\t\t\"common\": {\n\t\t\t\t\t\"lOg_level\": \"level\",\n\t\t\t\t\t\"log_PATH\": \"path\",\n\t\t\t\t\t\"unknown_key\": \"unknown_value\"\n\t\t\t\t}\n\t\t\t}`,\n\t\t\texpectedOutput: map[string]string{\n\t\t\t\t\"unknown_key\": \"unknown_value\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttestName: \"KnownProperties\",\n\t\t\tinputString: `{\n\t\t\t\t\"common\": {\n\t\t\t\t\t\"log_level\": \"level\",\n\t\t\t\t\t\"log_path\": \"path\"\n\t\t\t\t}\n\t\t\t}`,\n\t\t\texpectedOutput: map[string]string{},\n\t\t},\n\n\t\t{\n\t\t\ttestName:       \"EmptyInput\",\n\t\t\tinputString:    \"\",\n\t\t\texpectedOutput: map[string]string{},\n\t\t},\n\t}\n\n\tfor _, tc := range testCases {\n\t\tt.Run(tc.testName, func(t *testing.T) {\n\t\t\tinputBytes := 
[]byte(tc.inputString)\n\t\t\tresult := getUnknownValues(inputBytes)\n\t\t\tassertEqualE(t, fmt.Sprint(result), fmt.Sprint(tc.expectedOutput))\n\t\t})\n\t}\n}\n\nfunc TestConfigFileOpenSymlinkFail(t *testing.T) {\n\tskipOnWindows(t, \"file permission is different\")\n\tdir := t.TempDir()\n\tconfigFilePath := createFile(t, defaultConfigName, \"random content\", dir)\n\tsymlinkFile := path.Join(dir, \"test_symlink\")\n\texpectedErrMsg := \"too many levels of symbolic links\"\n\n\terr := os.Symlink(configFilePath, symlinkFile)\n\tassertNilF(t, err, \"failed to create symlink\")\n\n\t_, err = getFileContents(symlinkFile, os.FileMode(1<<4|1<<1))\n\tassertNotNilF(t, err, \"should have blocked opening symlink\")\n\tassertTrueF(t, strings.Contains(err.Error(), expectedErrMsg))\n}\n\nfunc createFile(t *testing.T, fileName string, fileContents string, directory string) string {\n\tfullFileName := path.Join(directory, fileName)\n\terr := os.WriteFile(fullFileName, []byte(fileContents), 0644)\n\tassertNilF(t, err, \"create file error\")\n\treturn fullFileName\n}\n\nfunc createTestDirectories(t *testing.T) struct {\n\tdir            string\n\tpredefinedDir1 string\n\tpredefinedDir2 string\n} {\n\tdir := t.TempDir()\n\tpredefinedDir1 := path.Join(dir, \"dir1\")\n\terr := os.Mkdir(predefinedDir1, 0700)\n\tassertNilF(t, err, \"predefined dir1 error\")\n\tpredefinedDir2 := path.Join(dir, \"dir2\")\n\terr = os.Mkdir(predefinedDir2, 0700)\n\tassertNilF(t, err, \"predefined dir2 error\")\n\treturn struct {\n\t\tdir            string\n\t\tpredefinedDir1 string\n\t\tpredefinedDir2 string\n\t}{\n\t\tdir:            dir,\n\t\tpredefinedDir1: predefinedDir1,\n\t\tpredefinedDir2: predefinedDir2,\n\t}\n}\n\nfunc predefinedTestDirs(dirs struct {\n\tdir            string\n\tpredefinedDir1 string\n\tpredefinedDir2 string\n}) []string {\n\treturn []string{dirs.predefinedDir1, dirs.predefinedDir2}\n}\n\nfunc createClientConfigContent(logLevel string, logPath string) string {\n\treturn 
fmt.Sprintf(`{\n\t\t\t\"common\": {\n\t\t\t\t\"log_level\" : \"%s\",\n\t\t\t\t\"log_path\" : \"%s\"\n\t\t\t}\n\t\t}`,\n\t\tlogLevel,\n\t\tstrings.ReplaceAll(logPath, \"\\\\\", \"\\\\\\\\\"),\n\t)\n}\n"
  },
  {
    "path": "client_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"testing\"\n)\n\ntype DummyTransport struct {\n\tpostRequests int\n\tgetRequests  int\n}\n\nfunc (t *DummyTransport) RoundTrip(r *http.Request) (*http.Response, error) {\n\tif r.URL.Path == \"\" {\n\t\tswitch r.Method {\n\t\tcase http.MethodGet:\n\t\t\tt.getRequests++\n\t\tcase http.MethodPost:\n\t\t\tt.postRequests++\n\t\t}\n\t\treturn &http.Response{StatusCode: 200}, nil\n\t}\n\treturn createTestNoRevocationTransport().RoundTrip(r)\n}\n\nfunc TestInternalClient(t *testing.T) {\n\tconfig, err := ParseDSN(dsn)\n\tassertNilF(t, err, \"failed to parse dsn\")\n\ttransport := DummyTransport{}\n\tconfig.Transporter = &transport\n\tdriver := SnowflakeDriver{}\n\tdb, err := driver.OpenWithConfig(context.Background(), *config)\n\tassertNilF(t, err, \"failed to open with config\")\n\n\tinternalClient := (db.(*snowflakeConn)).internal\n\tresp, err := internalClient.Get(context.Background(), &url.URL{}, make(map[string]string), 0)\n\tassertNilF(t, err, \"GET request should succeed\")\n\tassertEqualF(t, resp.StatusCode, 200, \"GET response status code should be 200\")\n\tassertEqualF(t, transport.getRequests, 1, \"Expected exactly one GET request\")\n\n\tresp, err = internalClient.Post(context.Background(), &url.URL{}, make(map[string]string), make([]byte, 0), 0, defaultTimeProvider)\n\tassertNilF(t, err, \"POST request should succeed\")\n\tassertEqualF(t, resp.StatusCode, 200, \"POST response status code should be 200\")\n\tassertEqualF(t, transport.postRequests, 1, \"Expected exactly one POST request\")\n\n\tdb.Close()\n}\n"
  },
  {
    "path": "cmd/arrow/.gitignore",
    "content": "arrow_batches\ntransform_batches_to_rows/transform_batches_to_rows\n"
  },
  {
    "path": "cmd/arrow/Makefile",
    "content": "SUBDIRS := batches transform_batches_to_rows\nTARGETS := all install run lint fmt\n\n$(TARGETS): subdirs\n\nsubdirs: $(SUBDIRS)\n\n$(SUBDIRS):\n\t@$(MAKE) -C $@ $(filter $(TARGETS),$(MAKECMDGOALS))\n\n.PHONY: subdirs $(TARGETS) $(SUBDIRS)\n"
  },
  {
    "path": "cmd/arrow/transform_batches_to_rows/Makefile",
    "content": "include ../../../gosnowflake.mak\nCMD_TARGET=transform_batches_to_rows\n\n## Install\ninstall: cinstall\n\n## Run\nrun: crun\n\n## Lint\nlint: clint\n\n## Format source codes\nfmt: cfmt\n\n.PHONY: install run lint fmt\n"
  },
  {
    "path": "cmd/arrow/transform_batches_to_rows/transform_batches_to_rows.go",
    "content": "package main\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"errors\"\n\t\"flag\"\n\t\"io\"\n\t\"log\"\n\n\tsf \"github.com/snowflakedb/gosnowflake/v2\"\n\t\"github.com/snowflakedb/gosnowflake/v2/arrowbatches\"\n)\n\nfunc main() {\n\tif !flag.Parsed() {\n\t\tflag.Parse()\n\t}\n\n\tcfg, err := sf.GetConfigFromEnv([]*sf.ConfigParam{\n\t\t{Name: \"Account\", EnvName: \"SNOWFLAKE_TEST_ACCOUNT\", FailOnMissing: true},\n\t\t{Name: \"User\", EnvName: \"SNOWFLAKE_TEST_USER\", FailOnMissing: true},\n\t\t{Name: \"Password\", EnvName: \"SNOWFLAKE_TEST_PASSWORD\", FailOnMissing: true},\n\t\t{Name: \"Host\", EnvName: \"SNOWFLAKE_TEST_HOST\", FailOnMissing: false},\n\t\t{Name: \"Port\", EnvName: \"SNOWFLAKE_TEST_PORT\", FailOnMissing: false},\n\t\t{Name: \"Protocol\", EnvName: \"SNOWFLAKE_TEST_PROTOCOL\", FailOnMissing: false},\n\t})\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to create Config, err: %v\", err)\n\t}\n\n\tconnector := sf.NewConnector(sf.SnowflakeDriver{}, *cfg)\n\tdb := sql.OpenDB(connector)\n\tdefer db.Close()\n\n\tconn, err := db.Conn(context.Background())\n\tif err != nil {\n\t\tlog.Fatalf(\"cannot create a connection. %v\", err)\n\t}\n\tdefer conn.Close()\n\n\t_, err = conn.ExecContext(context.Background(), \"ALTER SESSION SET GO_QUERY_RESULT_FORMAT = json\")\n\tif err != nil {\n\t\tlog.Fatalf(\"cannot force JSON as result format. %v\", err)\n\t}\n\n\tvar rows driver.Rows\n\terr = conn.Raw(func(x any) error {\n\t\trows, err = x.(driver.QueryerContext).QueryContext(arrowbatches.WithArrowBatches(context.Background()), \"SELECT 1, 'hello' UNION SELECT 2, 'hi' UNION SELECT 3, 'howdy'\", nil)\n\t\treturn err\n\t})\n\tif err != nil {\n\t\tlog.Fatalf(\"cannot run a query. 
%v\", err)\n\t}\n\tdefer rows.Close()\n\n\t_, err = arrowbatches.GetArrowBatches(rows.(sf.SnowflakeRows))\n\tvar se *sf.SnowflakeError\n\tif !errors.As(err, &se) || se.Number != sf.ErrNonArrowResponseInArrowBatches {\n\t\tlog.Fatalf(\"expected to fail while retrieving arrow batches\")\n\t}\n\n\tres := make([]driver.Value, 2)\n\tfor {\n\t\terr = rows.Next(res)\n\t\tif err == io.EOF {\n\t\t\tbreak\n\t\t}\n\t\tif err != nil {\n\t\t\tlog.Fatalf(\"failed to fetch rows. %v\", err)\n\t\t}\n\t\tprintln(res[0].(string), res[1].(string))\n\t}\n}\n"
  },
  {
    "path": "cmd/logger/Makefile",
    "content": "include ../../gosnowflake.mak\nCMD_TARGET=logger\n\n## Install\ninstall: cinstall\n\n## Run\nrun: crun\n\n## Lint\nlint: clint\n\n## Format source codes\nfmt: cfmt\n\n.PHONY: install run lint fmt\n"
  },
  {
    "path": "cmd/logger/logger.go",
    "content": "package main\n\nimport (\n\t\"bytes\"\n\tsf \"github.com/snowflakedb/gosnowflake/v2\"\n\t\"log\"\n\t\"strings\"\n)\n\nfunc main() {\n\tbuf := &bytes.Buffer{}\n\tbuf2 := &bytes.Buffer{}\n\n\tvar mylog = sf.GetLogger()\n\tmylog.SetOutput(buf)\n\tmylog.Info(\"Hello I am default\")\n\tmylog.Info(\"Hello II amm default\")\n\tmylog.Debug(\"Default I am debug NOT SHOWN\")\n\t_ = mylog.SetLogLevel(\"debug\")\n\tmylog.Debug(\"Default II amm debug TO SHOW\")\n\n\tvar testlog = sf.CreateDefaultLogger()\n\t_ = testlog.SetLogLevel(\"debug\")\n\ttestlog.SetOutput(buf)\n\ttestlog.SetOutput(buf2)\n\tsf.SetLogger(testlog)\n\n\tvar mylog2 = sf.GetLogger()\n\tmylog2.Debug(\"test debug log is shown\")\n\t_ = mylog2.SetLogLevel(\"info\")\n\tmylog2.Debug(\"test debug log is not shownII\")\n\tlog.Print(\"Expect all true values:\")\n\n\t// verify logger switch\n\tvar strbuf = buf.String()\n\tlog.Printf(\"%t:%t:%t:%t\", strings.Contains(strbuf, \"I am default\"),\n\t\tstrings.Contains(strbuf, \"II amm default\"),\n\t\t!strings.Contains(strbuf, \"test debug log is shown\"),\n\t\tstrings.Contains(buf2.String(), \"test debug log is shown\"))\n\n\t// verify log level switch\n\tlog.Printf(\"%t:%t:%t:%t\", !strings.Contains(strbuf, \"Default I am debug NOT SHOWN\"),\n\t\tstrings.Contains(strbuf, \"Default II amm debug TO SHOW\"),\n\t\tstrings.Contains(buf2.String(), \"test debug log is shown\"),\n\t\t!strings.Contains(buf2.String(), \"test debug log is not shownII\"))\n\n}\n"
  },
  {
    "path": "cmd/mfa/Makefile",
    "content": "include ../../gosnowflake.mak\nCMD_TARGET=mfa\n\n## Install\ninstall: cinstall\n\n## Run\nrun: crun\n\n## Lint\nlint: clint\n\n## Format source codes\nfmt: cfmt\n\n.PHONY: install run lint fmt\n"
  },
  {
    "path": "cmd/mfa/mfa.go",
    "content": "package main\n\nimport (\n\t\"database/sql\"\n\t\"flag\"\n\t\"fmt\"\n\t\"log\"\n\n\tsf \"github.com/snowflakedb/gosnowflake/v2\"\n)\n\nfunc main() {\n\tif !flag.Parsed() {\n\t\tflag.Parse()\n\t}\n\n\tcfg, err := sf.GetConfigFromEnv([]*sf.ConfigParam{\n\t\t{Name: \"Account\", EnvName: \"SNOWFLAKE_TEST_ACCOUNT\", FailOnMissing: true},\n\t\t{Name: \"User\", EnvName: \"SNOWFLAKE_TEST_USER\", FailOnMissing: true},\n\t\t{Name: \"Password\", EnvName: \"SNOWFLAKE_TEST_PASSWORD\", FailOnMissing: true},\n\t\t{Name: \"Host\", EnvName: \"SNOWFLAKE_TEST_HOST\", FailOnMissing: false},\n\t\t{Name: \"Port\", EnvName: \"SNOWFLAKE_TEST_PORT\", FailOnMissing: false},\n\t\t{Name: \"Protocol\", EnvName: \"SNOWFLAKE_TEST_PROTOCOL\", FailOnMissing: false},\n\t})\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to create Config, err: %v\", err)\n\t}\n\tcfg.Authenticator = sf.AuthTypeUsernamePasswordMFA\n\tdsn, err := sf.DSN(cfg)\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to create DSN from Config. err: %v\", err)\n\t}\n\n\t// The MFA flow starts with the call to Open\n\tdb, err := sql.Open(\"snowflake\", dsn)\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to connect. err: %v\", err)\n\t}\n\tdefer db.Close()\n\tquery := \"SELECT 1\"\n\trows, err := db.Query(query)\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to run a query. %v, err: %v\", query, err)\n\t}\n\tdefer rows.Close()\n\tvar v int\n\tfor rows.Next() {\n\t\terr := rows.Scan(&v)\n\t\tif err != nil {\n\t\t\tlog.Fatalf(\"failed to get result. err: %v\", err)\n\t\t}\n\t\tif v != 1 {\n\t\t\tlog.Fatalf(\"failed to get 1. got: %v\", v)\n\t\t}\n\t\tfmt.Printf(\"Congrats! You have successfully run %v with Snowflake DB!\\n\", query)\n\t}\n}\n"
  },
  {
    "path": "cmd/programmatic_access_token/.gitignore",
    "content": "pat\n"
  },
  {
    "path": "cmd/programmatic_access_token/Makefile",
    "content": "include ../../gosnowflake.mak\nCMD_TARGET=pat\n\n## Install\ninstall: cinstall\n\n## Run\nrun: crun\n\n## Lint\nlint: clint\n\n## Format source codes\nfmt: cfmt\n\n.PHONY: install run lint fmt\n"
  },
  {
    "path": "cmd/programmatic_access_token/pat.go",
    "content": "// you have to configure PAT on your user\n\npackage main\n\nimport (\n\t\"database/sql\"\n\t\"flag\"\n\t\"fmt\"\n\tsf \"github.com/snowflakedb/gosnowflake/v2\"\n\t\"log\"\n)\n\nfunc main() {\n\tif !flag.Parsed() {\n\t\tflag.Parse()\n\t}\n\n\tcfg, err := sf.GetConfigFromEnv([]*sf.ConfigParam{\n\t\t{Name: \"Account\", EnvName: \"SNOWFLAKE_TEST_ACCOUNT\", FailOnMissing: true},\n\t\t{Name: \"User\", EnvName: \"SNOWFLAKE_TEST_USER\", FailOnMissing: true},\n\t\t{Name: \"Token\", EnvName: \"SNOWFLAKE_TEST_PAT\", FailOnMissing: true},\n\t\t{Name: \"Host\", EnvName: \"SNOWFLAKE_TEST_HOST\", FailOnMissing: false},\n\t\t{Name: \"Port\", EnvName: \"SNOWFLAKE_TEST_PORT\", FailOnMissing: false},\n\t\t{Name: \"Protocol\", EnvName: \"SNOWFLAKE_TEST_PROTOCOL\", FailOnMissing: false},\n\t})\n\tif err != nil {\n\t\tlog.Fatalf(\"cannot build config. %v\", err)\n\t}\n\tcfg.Authenticator = sf.AuthTypePat\n\n\tconnector := sf.NewConnector(sf.SnowflakeDriver{}, *cfg)\n\tdb := sql.OpenDB(connector)\n\tdefer db.Close()\n\n\tquery := \"SELECT 1\"\n\trows, err := db.Query(query)\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to run a query. %v, err: %v\", query, err)\n\t}\n\tdefer rows.Close()\n\tvar v int\n\tif !rows.Next() {\n\t\tlog.Fatalf(\"no rows returned\")\n\t}\n\tif err = rows.Scan(&v); err != nil {\n\t\tlog.Fatalf(\"failed to scan rows. %v\", err)\n\t}\n\tif v != 1 {\n\t\tlog.Fatalf(\"unexpected result, expected 1, got %v\", v)\n\t}\n\tfmt.Printf(\"Congrats! You have successfully run %v with Snowflake DB!\\n\", query)\n}\n"
  },
  {
    "path": "cmd/tomlfileconnection/.gitignore",
    "content": "tomlfileconnection.go"
  },
  {
    "path": "cmd/tomlfileconnection/Makefile",
    "content": "include ../../gosnowflake.mak\nCMD_TARGET=tomlfileconnection\n\n## Install\ninstall: cinstall\n\n## Run\nrun: crun\n\n## Lint\nlint: clint\n\n## Format source codes\nfmt: cfmt\n\n.PHONY: install run lint fmt\n"
  },
  {
    "path": "cmd/variant/Makefile",
    "content": "include ../../gosnowflake.mak\nCMD_TARGET=variant\n\n## Install\ninstall: cinstall\n\n## Run\nrun: crun\n\n## Lint\nlint: clint\n\n## Format source codes\nfmt: cfmt\n\n.PHONY: install run lint fmt\n"
  },
  {
    "path": "cmd/variant/insertvariantobject.go",
    "content": "package main\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"encoding/json\"\n\t\"flag\"\n\t\"fmt\"\n\t\"log\"\n\t\"strconv\"\n\t\"time\"\n\n\tsf \"github.com/snowflakedb/gosnowflake/v2\"\n)\n\nfunc main() {\n\tif !flag.Parsed() {\n\t\tflag.Parse()\n\t}\n\n\tcfg, err := sf.GetConfigFromEnv([]*sf.ConfigParam{\n\t\t{Name: \"Account\", EnvName: \"SNOWFLAKE_TEST_ACCOUNT\", FailOnMissing: true},\n\t\t{Name: \"User\", EnvName: \"SNOWFLAKE_TEST_USER\", FailOnMissing: true},\n\t\t{Name: \"Password\", EnvName: \"SNOWFLAKE_TEST_PASSWORD\", FailOnMissing: true},\n\t\t{Name: \"Warehouse\", EnvName: \"SNOWFLAKE_TEST_WAREHOUSE\", FailOnMissing: true},\n\t\t{Name: \"Database\", EnvName: \"SNOWFLAKE_TEST_DATABASE\", FailOnMissing: true},\n\t\t{Name: \"Schema\", EnvName: \"SNOWFLAKE_TEST_SCHEMA\", FailOnMissing: true},\n\t\t{Name: \"Host\", EnvName: \"SNOWFLAKE_TEST_HOST\", FailOnMissing: false},\n\t\t{Name: \"Port\", EnvName: \"SNOWFLAKE_TEST_PORT\", FailOnMissing: false},\n\t\t{Name: \"Protocol\", EnvName: \"SNOWFLAKE_TEST_PROTOCOL\", FailOnMissing: false},\n\t})\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to create Config, err: %v\", err)\n\t}\n\tdsn, err := sf.DSN(cfg)\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to create DSN from Config: %v, err: %v\", cfg, err)\n\t}\n\n\tdb, err := sql.Open(\"snowflake\", dsn)\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to connect. %v, err: %v\", dsn, err)\n\t}\n\tdefer db.Close()\n\n\tctx := context.Background()\n\tconn, err := db.Conn(ctx)\n\tif err != nil {\n\t\tlog.Fatalf(\"Failed to acquire connection. err: %v\", err)\n\t}\n\tdefer conn.Close()\n\n\ttablename := \"insert_variant_object_\" + strconv.FormatInt(time.Now().UnixNano(), 10)\n\tparam := map[string]string{\"key\": \"value\"}\n\tjsonStr, err := json.Marshal(param)\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to marshal json. 
err: %v\", err)\n\t}\n\n\tcreateTableQuery := \"CREATE TABLE \" + tablename + \" (c1 VARIANT, c2 OBJECT)\"\n\n\t// https://docs.snowflake.com/en/sql-reference/functions/parse_json\n\t// can do with TO_VARIANT(PARSE_JSON(..)) as well, but PARSE_JSON already produces VARIANT\n\tinsertQuery := \"INSERT INTO \" + tablename + \" (c1, c2) SELECT PARSE_JSON(?), TO_OBJECT(PARSE_JSON(?))\"\n\t// https://docs.snowflake.com/en/sql-reference/data-types-semistructured#object\n\tinsertOnlyObject := \"INSERT INTO \" + tablename + \" (c2) SELECT OBJECT_CONSTRUCT('name', 'Jones'::VARIANT, 'age',  42::VARIANT)\"\n\n\tselectQuery := \"SELECT c1, c2 FROM \" + tablename\n\n\tdropQuery := \"DROP TABLE \" + tablename\n\n\tfmt.Printf(\"Creating table: %v\\n\", createTableQuery)\n\t_, err = conn.ExecContext(ctx, createTableQuery)\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to run the query. %v, err: %v\", createTableQuery, err)\n\t}\n\tdefer func() {\n\t\tfmt.Printf(\"Dropping the table: %v\\n\", dropQuery)\n\t\t_, err = conn.ExecContext(ctx, dropQuery)\n\t\tif err != nil {\n\t\t\tlog.Fatalf(\"failed to run the query. %v, err: %v\", dropQuery, err)\n\t\t}\n\t}()\n\tfmt.Printf(\"Inserting VARIANT and OBJECT data into table: %v\\n\", insertQuery)\n\t_, err = conn.ExecContext(ctx, insertQuery,\n\t\tstring(jsonStr),\n\t\tstring(jsonStr),\n\t)\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to run the query. %v, err: %v\", insertQuery, err)\n\t}\n\tfmt.Printf(\"Now for another approach: %v\\n\", insertOnlyObject)\n\t_, err = conn.ExecContext(ctx, insertOnlyObject)\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to run the query. %v, err: %v\", insertOnlyObject, err)\n\t}\n\n\tfmt.Printf(\"Querying the table into which we just inserted the data: %v\\n\", selectQuery)\n\trows, err := conn.QueryContext(ctx, selectQuery)\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to run the query. 
%v, err: %v\", selectQuery, err)\n\t}\n\tdefer rows.Close()\n\tvar c1, c2 any\n\tfor rows.Next() {\n\t\terr := rows.Scan(&c1, &c2)\n\t\tif err != nil {\n\t\t\tlog.Fatalf(\"failed to get result. err: %v\", err)\n\t\t}\n\t\tfmt.Printf(\"%v (type: %T), %v (type: %T)\\n\", c1, c1, c2, c2)\n\t}\n\tif rows.Err() != nil {\n\t\tfmt.Printf(\"ERROR: %v\\n\", rows.Err())\n\t\treturn\n\t}\n\n}\n"
  },
  {
    "path": "codecov.yml",
    "content": "parsers:\n  go:\n    partials_as_hits: true\n\nignore:\n  - \"cmd/\"\n"
  },
  {
    "path": "connection.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync/atomic\"\n\t\"time\"\n\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\n\tia \"github.com/snowflakedb/gosnowflake/v2/internal/arrow\"\n\t\"go.opentelemetry.io/otel/propagation\"\n)\n\nconst (\n\thttpHeaderContentType      = \"Content-Type\"\n\thttpHeaderAccept           = \"accept\"\n\thttpHeaderUserAgent        = \"User-Agent\"\n\thttpHeaderServiceName      = \"X-Snowflake-Service\"\n\thttpHeaderContentLength    = \"Content-Length\"\n\thttpHeaderHost             = \"Host\"\n\thttpHeaderValueOctetStream = \"application/octet-stream\"\n\thttpHeaderContentEncoding  = \"Content-Encoding\"\n\thttpClientAppID            = \"CLIENT_APP_ID\"\n\thttpClientAppVersion       = \"CLIENT_APP_VERSION\"\n)\n\nconst (\n\tstatementTypeIDSelect           = int64(0x1000)\n\tstatementTypeIDDml              = int64(0x3000)\n\tstatementTypeIDMultiTableInsert = statementTypeIDDml + int64(0x500)\n\tstatementTypeIDMultistatement   = int64(0xA000)\n)\n\nconst (\n\tsessionClientSessionKeepAlive                   = \"client_session_keep_alive\"\n\tsessionClientSessionKeepAliveHeartbeatFrequency = \"client_session_keep_alive_heartbeat_frequency\"\n\tsessionClientValidateDefaultParameters          = \"CLIENT_VALIDATE_DEFAULT_PARAMETERS\"\n\tsessionArrayBindStageThreshold                  = \"client_stage_array_binding_threshold\"\n\tserviceName                                     = \"service_name\"\n)\n\ntype resultType string\n\nconst (\n\tsnowflakeResultType ContextKey = \"snowflakeResultType\"\n\texecResultType      resultType = \"exec\"\n\tqueryResultType     resultType = \"query\"\n)\n\ntype execKey string\n\nconst (\n\texecutionType          execKey = 
\"executionType\"\n\texecutionTypeStatement string  = \"statement\"\n)\n\n// snowflakeConn manages its own context.\n// External cancellation should not be supported because the connection\n// may be reused after the original query/request has completed.\ntype snowflakeConn struct {\n\tctx                 context.Context\n\tcfg                 *Config\n\trest                *snowflakeRestful\n\tsequenceCounter     uint64\n\ttelemetry           *snowflakeTelemetry\n\tinternal            InternalClient\n\tqueryContextCache   queryContextCache\n\tcurrentTimeProvider currentTimeProvider\n\tsyncParams          syncParams\n\tidToken             string\n\tmfaToken            string\n}\n\nvar (\n\tqueryIDPattern = `[\\w\\-_]+`\n\tqueryIDRegexp  = regexp.MustCompile(queryIDPattern)\n)\n\nfunc (sc *snowflakeConn) exec(\n\tctx context.Context,\n\tquery string,\n\tnoResult bool,\n\tisInternal bool,\n\tdescribeOnly bool,\n\tbindings []driver.NamedValue) (\n\t*execResponse, error) {\n\tif sc.cfg.LogQueryText || isLogQueryTextEnabled(ctx) {\n\t\tif len(bindings) > 0 && (sc.cfg.LogQueryParameters || isLogQueryParametersEnabled(ctx)) {\n\t\t\tlogger.WithContext(ctx).Infof(\"Executing query: %v with bindings: %v\", query, bindings)\n\t\t} else {\n\t\t\tlogger.WithContext(ctx).Infof(\"Executing query: %v\", query)\n\t\t}\n\t} else {\n\t\tlogger.WithContext(ctx).Infof(\"Executing query\")\n\t}\n\n\tvar err error\n\tcounter := atomic.AddUint64(&sc.sequenceCounter, 1) // query sequence counter\n\t_, _, sessionID := safeGetTokens(sc.rest)\n\tctx = context.WithValue(ctx, SFSessionIDKey, sessionID)\n\tqueryContext, err := buildQueryContext(&sc.queryContextCache)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"error while building query context: %v\", err)\n\t}\n\treq := execRequest{\n\t\tSQLText:      query,\n\t\tAsyncExec:    noResult,\n\t\tParameters:   map[string]any{},\n\t\tIsInternal:   isInternal,\n\t\tDescribeOnly: describeOnly,\n\t\tSequenceID:   
counter,\n\t\tQueryContext: queryContext,\n\t}\n\tif key := ctx.Value(multiStatementCount); key != nil {\n\t\treq.Parameters[string(multiStatementCount)] = key\n\t}\n\tif tag := ctx.Value(queryTag); tag != nil {\n\t\treq.Parameters[string(queryTag)] = tag\n\t}\n\tlogger.WithContext(ctx).Debugf(\"parameters: %v\", req.Parameters)\n\n\t// handle bindings, if required\n\trequestID := getOrGenerateRequestIDFromContext(ctx)\n\tif len(bindings) > 0 {\n\t\tif err = sc.processBindings(ctx, bindings, describeOnly, requestID, &req); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\tlogger.WithContext(ctx).Debugf(\"bindings: %v\", req.Bindings)\n\n\t// populate headers\n\theaders := getHeaders()\n\tif isFileTransfer(query) {\n\t\theaders[httpHeaderAccept] = headerContentTypeApplicationJSON\n\t}\n\n\t// propagate traceID and spanID via traceparent header. this is a no-op if invalid IDs\n\tpropagator := propagation.TraceContext{}\n\tpropagator.Inject(ctx, propagation.MapCarrier(headers))\n\n\tif sn, ok := sc.syncParams.get(serviceName); ok {\n\t\theaders[httpHeaderServiceName] = *sn\n\t}\n\n\tjsonBody, err := json.Marshal(req)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tdata, err := sc.rest.FuncPostQuery(ctx, sc.rest, &url.Values{}, headers,\n\t\tjsonBody, sc.rest.RequestTimeout, requestID, sc.cfg)\n\tif err != nil {\n\t\treturn data, err\n\t}\n\tcode := -1\n\tif data.Code != \"\" {\n\t\tcode, err = strconv.Atoi(data.Code)\n\t\tif err != nil {\n\t\t\treturn data, err\n\t\t}\n\t}\n\tlogger.WithContext(ctx).Debugf(\"Success: %v, Code: %v\", data.Success, code)\n\n\tif !sc.cfg.DisableQueryContextCache && data.Data.QueryContext != nil {\n\t\tqueryContext, err := extractQueryContext(data)\n\t\tif err != nil {\n\t\t\tlogger.WithContext(ctx).Errorf(\"error while decoding query context: %v\", err)\n\t\t} else {\n\t\t\tsc.queryContextCache.add(sc, queryContext.Entries...)\n\t\t}\n\t}\n\n\tif !data.Success {\n\t\terr = exceptionTelemetry(populateErrorFields(code, data), 
sc)\n\t\treturn nil, err\n\t}\n\n\t// handle PUT/GET commands\n\tfileTransferChan := make(chan error, 1)\n\tif isFileTransfer(query) {\n\t\tgo func() {\n\t\t\tdata, err = sc.processFileTransfer(ctx, data, query, isInternal)\n\t\t\tfileTransferChan <- err\n\t\t}()\n\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\tlogger.WithContext(ctx).Debugf(\"File transfer has been cancelled\")\n\t\t\treturn nil, ctx.Err()\n\t\tcase err := <-fileTransferChan:\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\t}\n\n\tlogger.WithContext(ctx).Debugf(\"Exec/Query: queryId=%v SUCCESS with total=%v, returned=%v \", data.Data.QueryID, data.Data.Total, data.Data.Returned)\n\tif data.Data.FinalDatabaseName != \"\" {\n\t\tsc.cfg.Database = data.Data.FinalDatabaseName\n\t}\n\tif data.Data.FinalSchemaName != \"\" {\n\t\tsc.cfg.Schema = data.Data.FinalSchemaName\n\t}\n\tif data.Data.FinalWarehouseName != \"\" {\n\t\tsc.cfg.Warehouse = data.Data.FinalWarehouseName\n\t}\n\tif data.Data.FinalRoleName != \"\" {\n\t\tsc.cfg.Role = data.Data.FinalRoleName\n\t}\n\tsc.populateSessionParameters(data.Data.Parameters)\n\treturn data, err\n}\n\nfunc extractQueryContext(data *execResponse) (queryContext, error) {\n\tvar queryContext queryContext\n\terr := json.Unmarshal(data.Data.QueryContext, &queryContext)\n\treturn queryContext, err\n}\n\nfunc buildQueryContext(qcc *queryContextCache) (requestQueryContext, error) {\n\trqc := requestQueryContext{}\n\tif qcc == nil || len(qcc.entries) == 0 {\n\t\tlogger.Debugf(\"empty qcc\")\n\t\treturn rqc, nil\n\t}\n\tfor _, qce := range qcc.entries {\n\t\tcontextData := contextData{}\n\t\tif qce.Context == \"\" {\n\t\t\tcontextData.Base64Data = qce.Context\n\t\t}\n\t\trqc.Entries = append(rqc.Entries, requestQueryContextEntry{\n\t\t\tID:        qce.ID,\n\t\t\tPriority:  qce.Priority,\n\t\t\tTimestamp: qce.Timestamp,\n\t\t\tContext:   contextData,\n\t\t})\n\t}\n\treturn rqc, nil\n}\n\nfunc (sc *snowflakeConn) Begin() (driver.Tx, error) {\n\treturn 
sc.BeginTx(context.Background(), driver.TxOptions{})\n}\n\nfunc (sc *snowflakeConn) BeginTx(\n\tctx context.Context,\n\topts driver.TxOptions) (\n\tdriver.Tx, error) {\n\tlogger.WithContext(ctx).Debug(\"BeginTx\")\n\tif opts.ReadOnly {\n\t\treturn nil, exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber:   ErrNoReadOnlyTransaction,\n\t\t\tSQLState: SQLStateFeatureNotSupported,\n\t\t\tMessage:  errors.ErrMsgNoReadOnlyTransaction,\n\t\t}, sc)\n\t}\n\tif int(opts.Isolation) != int(sql.LevelDefault) {\n\t\treturn nil, exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber:   ErrNoDefaultTransactionIsolationLevel,\n\t\t\tSQLState: SQLStateFeatureNotSupported,\n\t\t\tMessage:  errors.ErrMsgNoDefaultTransactionIsolationLevel,\n\t\t}, sc)\n\t}\n\tif sc.rest == nil {\n\t\treturn nil, driver.ErrBadConn\n\t}\n\tisDesc := isDescribeOnly(ctx)\n\tisInternal := isInternal(ctx)\n\tif _, err := sc.exec(ctx, \"BEGIN\", false, /* noResult */\n\t\tisInternal, isDesc, nil); err != nil {\n\t\treturn nil, err\n\t}\n\treturn &snowflakeTx{sc, ctx}, nil\n}\n\nfunc (sc *snowflakeConn) cleanup() {\n\t// must flush log buffer while the process is running.\n\tlogger.WithContext(sc.ctx).Debug(\"Snowflake connection closing.\")\n\tif sc.rest != nil && sc.rest.Client != nil {\n\t\tsc.rest.Client.CloseIdleConnections()\n\t}\n}\n\nfunc (sc *snowflakeConn) Close() (err error) {\n\tlogger.WithContext(sc.ctx).Info(\"Closing connection\")\n\tif err := sc.telemetry.sendBatch(); err != nil {\n\t\tlogger.WithContext(sc.ctx).Warnf(\"error while sending telemetry. 
%v\", err)\n\t}\n\tsc.stopHeartBeat()\n\tsc.rest.HeartBeat = nil\n\tdefer sc.cleanup()\n\n\tif sc.cfg != nil && !sc.cfg.ServerSessionKeepAlive {\n\t\tlogger.WithContext(sc.ctx).Debug(\"Closing session since ServerSessionKeepAlive is false\")\n\t\t// we have to replace the context with a background one, otherwise we could use one that is cancelled or timed out\n\t\tif err = sc.rest.FuncCloseSession(context.Background(), sc.rest, sc.rest.RequestTimeout); err != nil {\n\t\t\tlogger.WithContext(sc.ctx).Errorf(\"error while closing session: %v\", err)\n\t\t}\n\t} else {\n\t\tlogger.WithContext(sc.ctx).Info(\"Skipping session close since ServerSessionKeepAlive is true\")\n\t}\n\treturn nil\n}\n\nfunc (sc *snowflakeConn) PrepareContext(\n\tctx context.Context,\n\tquery string) (\n\tdriver.Stmt, error) {\n\tlogger.WithContext(sc.ctx).Debugf(\"Prepare Context\")\n\tif sc.rest == nil {\n\t\treturn nil, driver.ErrBadConn\n\t}\n\tstmt := &snowflakeStmt{\n\t\tsc:    sc,\n\t\tquery: query,\n\t}\n\treturn stmt, nil\n}\n\nfunc (sc *snowflakeConn) ExecContext(\n\tctx context.Context,\n\tquery string,\n\targs []driver.NamedValue) (\n\tdriver.Result, error) {\n\tif sc.rest == nil {\n\t\treturn nil, driver.ErrBadConn\n\t}\n\t_, _, sessionID := safeGetTokens(sc.rest)\n\tctx = context.WithValue(ctx, SFSessionIDKey, sessionID)\n\tlogger.WithContext(ctx).Debug(\"ExecContext:\")\n\tnoResult := isAsyncMode(ctx)\n\tisDesc := isDescribeOnly(ctx)\n\tisInternal := isInternal(ctx)\n\tctx = setResultType(ctx, execResultType)\n\tdata, err := sc.exec(ctx, query, noResult, isInternal, isDesc, args)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"error: %v\", err)\n\t\tif data != nil {\n\t\t\tcode, e := strconv.Atoi(data.Code)\n\t\t\tif e != nil {\n\t\t\t\treturn nil, e\n\t\t\t}\n\t\t\treturn nil, exceptionTelemetry(&SnowflakeError{\n\t\t\t\tNumber:   code,\n\t\t\t\tSQLState: data.Data.SQLState,\n\t\t\t\tMessage:  err.Error(),\n\t\t\t\tQueryID:  data.Data.QueryID,\n\t\t\t}, sc)\n\t\t}\n\t\treturn nil, 
err\n\t}\n\n\t// if async exec, return result object right away\n\tif noResult {\n\t\treturn data.Data.AsyncResult, nil\n\t}\n\n\tif isDml(data.Data.StatementTypeID) {\n\t\t// collects all values from the returned row sets\n\t\tupdatedRows, err := updateRows(data.Data)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tlogger.WithContext(ctx).Debugf(\"number of updated rows: %#v\", updatedRows)\n\t\treturn &snowflakeResult{\n\t\t\taffectedRows: updatedRows,\n\t\t\tinsertID:     -1,\n\t\t\tqueryID:      data.Data.QueryID,\n\t\t}, nil // last insert id is not supported by Snowflake\n\t} else if isMultiStmt(&data.Data) {\n\t\treturn sc.handleMultiExec(ctx, data.Data)\n\t} else if isDql(&data.Data) {\n\t\tlogger.WithContext(ctx).Debug(\"This query is DQL\")\n\t\tif isStatementContext(ctx) {\n\t\t\treturn &snowflakeResultNoRows{queryID: data.Data.QueryID}, nil\n\t\t}\n\t\treturn driver.ResultNoRows, nil\n\t}\n\tlogger.WithContext(ctx).Debug(\"This query is DDL\")\n\tif isStatementContext(ctx) {\n\t\treturn &snowflakeResultNoRows{queryID: data.Data.QueryID}, nil\n\t}\n\treturn driver.ResultNoRows, nil\n}\n\nfunc (sc *snowflakeConn) QueryContext(\n\tctx context.Context,\n\tquery string,\n\targs []driver.NamedValue) (\n\tdriver.Rows, error) {\n\tqid, err := getResumeQueryID(ctx)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif qid == \"\" {\n\t\treturn sc.queryContextInternal(ctx, query, args)\n\t}\n\n\t// check the query status to find out if there is a result to fetch\n\t_, err = sc.checkQueryStatus(ctx, qid)\n\tsnowflakeErr, isSnowflakeError := err.(*SnowflakeError)\n\tif err == nil || (isSnowflakeError && snowflakeErr.Number == ErrQueryIsRunning) {\n\t\t// the query is running. 
Rows object will be returned from here.\n\t\treturn sc.buildRowsForRunningQuery(ctx, qid)\n\t}\n\treturn nil, err\n}\n\nfunc (sc *snowflakeConn) queryContextInternal(\n\tctx context.Context,\n\tquery string,\n\targs []driver.NamedValue) (\n\tdriver.Rows, error) {\n\tif sc.rest == nil {\n\t\treturn nil, driver.ErrBadConn\n\t}\n\n\t_, _, sessionID := safeGetTokens(sc.rest)\n\tctx = context.WithValue(setResultType(ctx, queryResultType), SFSessionIDKey, sessionID)\n\tlogger.WithContext(ctx).Debug(\"QueryContextInternal\")\n\tnoResult := isAsyncMode(ctx)\n\tisDesc := isDescribeOnly(ctx)\n\tisInternal := isInternal(ctx)\n\tdata, err := sc.exec(ctx, query, noResult, isInternal, isDesc, args)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"error: %v\", err)\n\t\tif data != nil {\n\t\t\tcode, e := strconv.Atoi(data.Code)\n\t\t\tif e != nil {\n\t\t\t\treturn nil, e\n\t\t\t}\n\t\t\treturn nil, exceptionTelemetry(&SnowflakeError{\n\t\t\t\tNumber:   code,\n\t\t\t\tSQLState: data.Data.SQLState,\n\t\t\t\tMessage:  err.Error(),\n\t\t\t\tQueryID:  data.Data.QueryID,\n\t\t\t}, sc)\n\t\t}\n\t\treturn nil, err\n\t}\n\n\t// if async query, return row object right away\n\tif noResult {\n\t\treturn data.Data.AsyncRows, nil\n\t}\n\n\trows := new(snowflakeRows)\n\trows.sc = sc\n\trows.queryID = data.Data.QueryID\n\trows.ctx = ctx\n\n\tif isMultiStmt(&data.Data) {\n\t\t// handleMultiQuery is responsible to fill rows with childResults\n\t\tif err = sc.handleMultiQuery(ctx, data.Data, rows); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t} else {\n\t\trows.addDownloader(populateChunkDownloader(ctx, sc, data.Data))\n\t}\n\n\terr = rows.ChunkDownloader.start()\n\treturn rows, err\n}\n\nfunc (sc *snowflakeConn) Prepare(query string) (driver.Stmt, error) {\n\treturn sc.PrepareContext(context.Background(), query)\n}\n\nfunc (sc *snowflakeConn) Exec(\n\tquery string,\n\targs []driver.Value) (\n\tdriver.Result, error) {\n\treturn sc.ExecContext(context.Background(), query, 
toNamedValues(args))\n}\n\nfunc (sc *snowflakeConn) Query(\n\tquery string,\n\targs []driver.Value) (\n\tdriver.Rows, error) {\n\treturn sc.QueryContext(context.Background(), query, toNamedValues(args))\n}\n\nfunc (sc *snowflakeConn) Ping(ctx context.Context) error {\n\tlogger.WithContext(ctx).Debug(\"Ping\")\n\tif sc.rest == nil {\n\t\treturn driver.ErrBadConn\n\t}\n\tnoResult := isAsyncMode(ctx)\n\tisDesc := isDescribeOnly(ctx)\n\tisInternal := isInternal(ctx)\n\tctx = setResultType(ctx, execResultType)\n\t_, err := sc.exec(ctx, \"SELECT 1\", noResult, isInternal,\n\t\tisDesc, []driver.NamedValue{})\n\treturn err\n}\n\n// CheckNamedValue determines which types are handled by this driver aside from\n// the instances captured by driver.Value\nfunc (sc *snowflakeConn) CheckNamedValue(nv *driver.NamedValue) error {\n\tif supportedNullBind(nv) || supportedDecfloatBind(nv) || supportedArrayBind(nv) || supportedStructuredObjectWriterBind(nv) || supportedStructuredArrayBind(nv) || supportedStructuredMapBind(nv) {\n\t\treturn nil\n\t}\n\treturn driver.ErrSkip\n}\n\nfunc (sc *snowflakeConn) GetQueryStatus(\n\tctx context.Context,\n\tqueryID string) (\n\t*SnowflakeQueryStatus, error) {\n\tqueryRet, err := sc.checkQueryStatus(ctx, queryID)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn &SnowflakeQueryStatus{\n\t\tqueryRet.SQLText,\n\t\tqueryRet.StartTime,\n\t\tqueryRet.EndTime,\n\t\tqueryRet.ErrorCode,\n\t\tqueryRet.ErrorMessage,\n\t\tqueryRet.Stats.ScanBytes,\n\t\tqueryRet.Stats.ProducedRows,\n\t}, nil\n}\n\nfunc (sc *snowflakeConn) AddTelemetryData(_ context.Context, eventDate time.Time, data map[string]string) error {\n\ttd := &telemetryData{\n\t\tTimestamp: eventDate.UnixMilli(),\n\t\tMessage:   data,\n\t}\n\treturn sc.telemetry.addLog(td)\n}\n\n// QueryArrowStream executes a query and returns an ArrowStreamLoader for\n// streaming raw Arrow IPC record batches from the result.\nfunc (sc *snowflakeConn) QueryArrowStream(ctx context.Context, query string, bindings 
...driver.NamedValue) (ArrowStreamLoader, error) {\n\tctx = ia.EnableArrowBatches(context.WithValue(ctx, asyncMode, false))\n\tctx = setResultType(ctx, queryResultType)\n\tisDesc := isDescribeOnly(ctx)\n\tisInternal := isInternal(ctx)\n\tdata, err := sc.exec(ctx, query, false, isInternal, isDesc, bindings)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"error: %v\", err)\n\t\tif data != nil {\n\t\t\tcode, e := strconv.Atoi(data.Code)\n\t\t\tif e != nil {\n\t\t\t\treturn nil, e\n\t\t\t}\n\t\t\treturn nil, exceptionTelemetry(&SnowflakeError{\n\t\t\t\tNumber:   code,\n\t\t\t\tSQLState: data.Data.SQLState,\n\t\t\t\tMessage:  err.Error(),\n\t\t\t\tQueryID:  data.Data.QueryID,\n\t\t\t}, sc)\n\t\t}\n\t\treturn nil, err\n\t}\n\n\tvar resultIDs []string\n\tif len(data.Data.ResultIDs) > 0 {\n\t\tresultIDs = strings.Split(data.Data.ResultIDs, \",\")\n\t}\n\n\tscd := &snowflakeArrowStreamChunkDownloader{\n\t\tsc:          sc,\n\t\tChunkMetas:  data.Data.Chunks,\n\t\tTotal:       data.Data.Total,\n\t\tQrmk:        data.Data.Qrmk,\n\t\tChunkHeader: data.Data.ChunkHeaders,\n\t\tFuncGet:     getChunk,\n\t\tRowSet: rowSetType{\n\t\t\tRowType:      data.Data.RowType,\n\t\t\tJSON:         data.Data.RowSet,\n\t\t\tRowSetBase64: data.Data.RowSetBase64,\n\t\t},\n\t\tresultIDs: resultIDs,\n\t}\n\n\tif scd.hasNextResultSet() {\n\t\tif err = scd.NextResultSet(ctx); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\treturn scd, nil\n}\n\n// buildSnowflakeConn creates a new snowflakeConn.\n// The provided context is used only for establishing the initial connection.\nfunc buildSnowflakeConn(ctx context.Context, config Config) (*snowflakeConn, error) {\n\tsc := &snowflakeConn{\n\t\tsequenceCounter:     0,\n\t\tctx:                 ctx,\n\t\tcfg:                 &config,\n\t\tcurrentTimeProvider: defaultTimeProvider,\n\t}\n\tinitPlatformDetection()\n\terr := initEasyLogging(config.ClientConfigFile)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tlogger.Debugf(\"Building 
snowflakeConn: %v\", fmt.Sprintf(\"host: %v, account: %v, user: %v, password existed: %v, role: %v, database: %v, schema: %v, warehouse: %v, %v\",\n\t\tconfig.Host, config.Account, config.User, config.Password != \"\", config.Role, config.Database, config.Schema, config.Warehouse, sfconfig.DescribeProxy(&config)))\n\ttelemetry := &snowflakeTelemetry{}\n\n\ttransportFactory := newTransportFactory(&config, telemetry)\n\tst, err := transportFactory.createTransport(defaultTransportConfigs.forTransportType(transportTypeSnowflake))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvar tokenAccessor TokenAccessor\n\tif sc.cfg.TokenAccessor != nil {\n\t\ttokenAccessor = sc.cfg.TokenAccessor\n\t} else {\n\t\ttokenAccessor = getSimpleTokenAccessor()\n\t}\n\n\t// authenticate\n\tsc.rest = &snowflakeRestful{\n\t\tHost:     sc.cfg.Host,\n\t\tPort:     sc.cfg.Port,\n\t\tProtocol: sc.cfg.Protocol,\n\t\tClient: &http.Client{\n\t\t\t// request timeout including reading response body\n\t\t\tTimeout:   sc.cfg.ClientTimeout,\n\t\t\tTransport: st,\n\t\t},\n\t\tJWTClient: &http.Client{\n\t\t\tTimeout:   sc.cfg.JWTClientTimeout,\n\t\t\tTransport: st,\n\t\t},\n\t\tTokenAccessor:       tokenAccessor,\n\t\tLoginTimeout:        sc.cfg.LoginTimeout,\n\t\tRequestTimeout:      sc.cfg.RequestTimeout,\n\t\tMaxRetryCount:       sc.cfg.MaxRetryCount,\n\t\tFuncPost:            postRestful,\n\t\tFuncGet:             getRestful,\n\t\tFuncAuthPost:        postAuthRestful,\n\t\tFuncPostQuery:       postRestfulQuery,\n\t\tFuncPostQueryHelper: postRestfulQueryHelper,\n\t\tFuncRenewSession:    renewRestfulSession,\n\t\tFuncPostAuth:        postAuth,\n\t\tFuncCloseSession:    closeSession,\n\t\tFuncCancelQuery:     cancelQuery,\n\t\tFuncPostAuthSAML:    postAuthSAML,\n\t\tFuncPostAuthOKTA:    postAuthOKTA,\n\t\tFuncGetSSO:          getSSO,\n\t}\n\n\ttelemetry.sr = sc.rest\n\tsc.telemetry = telemetry\n\tsc.syncParams = newSyncParams(sc.cfg.Params)\n\n\treturn sc, nil\n}\n"
  },
  {
    "path": "connection_configuration_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"database/sql\"\n\ttoml \"github.com/BurntSushi/toml\"\n\t\"os\"\n\t\"strconv\"\n\t\"testing\"\n)\n\n// TODO move this test to config package when we have wiremock support in an internal package\nfunc TestTomlConnection(t *testing.T) {\n\tos.Setenv(\"SNOWFLAKE_HOME\", \"./test_data/\")                       // TODO replace with snowflakeHome const\n\tos.Setenv(\"SNOWFLAKE_DEFAULT_CONNECTION_NAME\", \"toml-connection\") // TODO replace with snowflakeConnectionName const\n\n\tdefer os.Unsetenv(\"SNOWFLAKE_HOME\")                    // TODO replace with snowflakeHome const\n\tdefer os.Unsetenv(\"SNOWFLAKE_DEFAULT_CONNECTION_NAME\") // TODO replace with snowflakeConnectionName const\n\twiremock.registerMappings(t,\n\t\twiremockMapping{filePath: \"auth/password/successful_flow.json\"},\n\t\twiremockMapping{filePath: \"select1.json\", params: map[string]string{\n\t\t\t\"%AUTHORIZATION_HEADER%\": \"session token\",\n\t\t}},\n\t)\n\ttype Connection struct {\n\t\tAccount  string `toml:\"account\"`\n\t\tUser     string `toml:\"user\"`\n\t\tPassword string `toml:\"password\"`\n\t\tHost     string `toml:\"host\"`\n\t\tPort     string `toml:\"port\"`\n\t\tProtocol string `toml:\"protocol\"`\n\t}\n\n\ttype TomlStruct struct {\n\t\tConnection Connection `toml:\"toml-connection\"`\n\t}\n\n\tcfg := wiremock.connectionConfig()\n\tconnection := &TomlStruct{\n\t\tConnection: Connection{\n\t\t\tAccount:  cfg.Account,\n\t\t\tUser:     cfg.User,\n\t\t\tPassword: cfg.Password,\n\t\t\tHost:     cfg.Host,\n\t\t\tPort:     strconv.Itoa(cfg.Port),\n\t\t\tProtocol: cfg.Protocol,\n\t\t},\n\t}\n\n\tf, err := os.OpenFile(\"./test_data/connections.toml\", os.O_APPEND|os.O_WRONLY, 0600)\n\tassertNilF(t, err, \"Failed to open connections.toml file\")\n\tdefer f.Close()\n\n\tencoder := toml.NewEncoder(f)\n\terr = encoder.Encode(connection)\n\tassertNilF(t, err, \"Failed to encode the config as TOML\")\n\n\tif !isWindows {\n
\t\terr = os.Chmod(\"./test_data/connections.toml\", 0600)\n\t\tassertNilF(t, err, \"Failed to change permissions on connections.toml\")\n\t}\n\n\tdb, err := sql.Open(\"snowflake\", \"autoConfig\")\n\tassertNilF(t, err, \"Failed to open database connection\")\n\trunSmokeQuery(t, db)\n}\n"
  },
  {
    "path": "connection_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\terrors2 \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"math/big\"\n\t\"strconv\"\n\n\t\"net/http\"\n\t\"net/url\"\n\t\"strings\"\n\t\"sync\"\n\t\"sync/atomic\"\n\t\"testing\"\n\t\"time\"\n\n\t\"go.opentelemetry.io/otel\"\n\t\"go.opentelemetry.io/otel/sdk/trace\"\n)\n\nconst (\n\tserviceNameStub   = \"SV\"\n\tserviceNameAppend = \"a\"\n)\n\nfunc TestInvalidConnection(t *testing.T) {\n\tdb := openDB(t)\n\tif err := db.Close(); err != nil {\n\t\tt.Error(\"should not cause error in Close\")\n\t}\n\tif err := db.Close(); err != nil {\n\t\tt.Error(\"should not cause error in the second call of Close\")\n\t}\n\tif _, err := db.ExecContext(context.Background(), \"CREATE TABLE OR REPLACE test0(c1 int)\"); err == nil {\n\t\tt.Error(\"should fail to run Exec\")\n\t}\n\tif _, err := db.QueryContext(context.Background(), \"SELECT CURRENT_TIMESTAMP()\"); err == nil {\n\t\tt.Error(\"should fail to run Query\")\n\t}\n\tif _, err := db.BeginTx(context.Background(), nil); err == nil {\n\t\tt.Error(\"should fail to run Begin\")\n\t}\n}\n\n// postQueryMock generates a response based on the X-Snowflake-Service header,\n// to generate a response with the SERVICE_NAME field appending a character at\n// the end of the header. 
This way it could test both the send and receive logic\nfunc postQueryMock(_ context.Context, _ *snowflakeRestful, _ *url.Values,\n\theaders map[string]string, _ []byte, _ time.Duration, _ UUID,\n\t_ *Config) (*execResponse, error) {\n\tvar serviceName string\n\tif serviceHeader, ok := headers[httpHeaderServiceName]; ok {\n\t\tserviceName = serviceHeader + serviceNameAppend\n\t} else {\n\t\tserviceName = serviceNameStub\n\t}\n\n\tdd := &execResponseData{\n\t\tParameters: []nameValueParameter{{\"SERVICE_NAME\", serviceName}},\n\t}\n\treturn &execResponse{\n\t\tData:    *dd,\n\t\tMessage: \"\",\n\t\tCode:    \"0\",\n\t\tSuccess: true,\n\t}, nil\n}\n\nfunc TestExecWithEmptyRequestID(t *testing.T) {\n\tctx := WithRequestID(context.Background(), nilUUID)\n\tpostQueryMock := func(_ context.Context, _ *snowflakeRestful,\n\t\t_ *url.Values, _ map[string]string, _ []byte, _ time.Duration,\n\t\trequestID UUID, _ *Config) (*execResponse, error) {\n\t\t// ensure the same requestID from context is used\n\t\tif len(requestID) == 0 {\n\t\t\tt.Fatal(\"requestID is empty\")\n\t\t}\n\t\tdd := &execResponseData{}\n\t\treturn &execResponse{\n\t\t\tData:    *dd,\n\t\t\tMessage: \"\",\n\t\t\tCode:    \"0\",\n\t\t\tSuccess: true,\n\t\t}, nil\n\t}\n\n\tsr := &snowflakeRestful{\n\t\tFuncPostQuery: postQueryMock,\n\t}\n\n\tsc := &snowflakeConn{\n\t\tcfg:  &Config{},\n\t\trest: sr,\n\t}\n\tif _, err := sc.exec(ctx, \"\", false /* noResult */, false, /* isInternal */\n\t\tfalse /* describeOnly */, nil); err != nil {\n\t\tt.Fatalf(\"err: %v\", err)\n\t}\n}\n\nfunc TestGetQueryResultUsesTokenFromTokenAccessor(t *testing.T) {\n\tta := getSimpleTokenAccessor()\n\ttoken := \"snowflake-test-token\"\n\tta.SetTokens(token, \"\", 1)\n\tfuncGetMock := func(_ context.Context, _ *snowflakeRestful, _ *url.URL,\n\t\theaders map[string]string, _ time.Duration) (*http.Response, error) {\n\t\tif headers[headerAuthorizationKey] != fmt.Sprintf(headerSnowflakeToken, token) {\n\t\t\tt.Fatalf(\"header 
authorization key is not correct: %v\", headers[headerAuthorizationKey])\n\t\t}\n\t\tdd := &execResponseData{}\n\t\ter := &execResponse{\n\t\t\tData:    *dd,\n\t\t\tMessage: \"\",\n\t\t\tCode:    \"0\",\n\t\t\tSuccess: true,\n\t\t}\n\t\tba, err := json.Marshal(er)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"err: %v\", err)\n\t\t}\n\t\treturn &http.Response{\n\t\t\tStatusCode: http.StatusOK,\n\t\t\tBody:       &fakeResponseBody{body: ba},\n\t\t}, nil\n\t}\n\tsr := &snowflakeRestful{\n\t\tFuncGet:       funcGetMock,\n\t\tTokenAccessor: ta,\n\t}\n\tsc := &snowflakeConn{\n\t\tcfg:                 &Config{},\n\t\trest:                sr,\n\t\tcurrentTimeProvider: defaultTimeProvider,\n\t}\n\tif _, err := sc.getQueryResultResp(context.Background(), \"\"); err != nil {\n\t\tt.Fatalf(\"err: %v\", err)\n\t}\n}\n\nfunc TestGetQueryResultTokenExpiry(t *testing.T) {\n\tta := getSimpleTokenAccessor()\n\ttoken := \"snowflake-test-token\"\n\tta.SetTokens(token, \"\", 1)\n\tfuncGetMock := func(_ context.Context, _ *snowflakeRestful, _ *url.URL,\n\t\theaders map[string]string, _ time.Duration) (*http.Response, error) {\n\t\trespData := execResponseData{}\n\t\ter := &execResponse{\n\t\t\tData:    respData,\n\t\t\tMessage: \"\",\n\t\t\tCode:    sessionExpiredCode,\n\t\t\tSuccess: true,\n\t\t}\n\t\tba, err := json.Marshal(er)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"err: %v\", err)\n\t\t}\n\t\treturn &http.Response{\n\t\t\tStatusCode: http.StatusOK,\n\t\t\tBody:       &fakeResponseBody{body: ba},\n\t\t}, nil\n\t}\n\n\texpectedToken := \"new token\"\n\texpectedMaster := \"new master\"\n\texpectedSession := int64(321)\n\n\trenewSessionDummy := func(_ context.Context, sr *snowflakeRestful, _ time.Duration) error {\n\t\tta.SetTokens(expectedToken, expectedMaster, expectedSession)\n\t\treturn nil\n\t}\n\n\tsr := &snowflakeRestful{\n\t\tFuncGet:          funcGetMock,\n\t\tFuncRenewSession: renewSessionDummy,\n\t\tTokenAccessor:    ta,\n\t}\n\tsc := &snowflakeConn{\n\t\tcfg:                 
&Config{},\n\t\trest:                sr,\n\t\tcurrentTimeProvider: defaultTimeProvider,\n\t}\n\t_, err := sc.getQueryResultResp(context.Background(), \"\")\n\tassertNilF(t, err, fmt.Sprintf(\"err: %v\", err))\n\n\tupdatedToken, updatedMaster, updatedSession := ta.GetTokens()\n\tassertEqualF(t, updatedToken, expectedToken)\n\tassertEqualF(t, updatedMaster, expectedMaster)\n\tassertEqualF(t, updatedSession, expectedSession)\n}\n\nfunc TestGetQueryResultTokenNotSet(t *testing.T) {\n\tta := getSimpleTokenAccessor()\n\tfuncGetMock := func(_ context.Context, _ *snowflakeRestful, _ *url.URL,\n\t\theaders map[string]string, _ time.Duration) (*http.Response, error) {\n\t\trespData := execResponseData{}\n\t\ter := &execResponse{\n\t\t\tData:    respData,\n\t\t\tMessage: \"\",\n\t\t\tCode:    sessionExpiredCode,\n\t\t\tSuccess: true,\n\t\t}\n\t\tba, err := json.Marshal(er)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"err: %v\", err)\n\t\t}\n\t\treturn &http.Response{\n\t\t\tStatusCode: http.StatusOK,\n\t\t\tBody:       &fakeResponseBody{body: ba},\n\t\t}, nil\n\t}\n\n\texpectedToken := \"new token\"\n\texpectedMaster := \"new master\"\n\texpectedSession := int64(321)\n\n\trenewSessionDummy := func(_ context.Context, sr *snowflakeRestful, _ time.Duration) error {\n\t\tta.SetTokens(expectedToken, expectedMaster, expectedSession)\n\t\treturn nil\n\t}\n\n\tsr := &snowflakeRestful{\n\t\tFuncGet:          funcGetMock,\n\t\tFuncRenewSession: renewSessionDummy,\n\t\tTokenAccessor:    ta,\n\t}\n\tsc := &snowflakeConn{\n\t\tcfg:                 &Config{},\n\t\trest:                sr,\n\t\tcurrentTimeProvider: defaultTimeProvider,\n\t}\n\t_, err := sc.getQueryResultResp(context.Background(), \"\")\n\tassertNilF(t, err, fmt.Sprintf(\"err: %v\", err))\n\n\tupdatedToken, updatedMaster, updatedSession := ta.GetTokens()\n\tassertEqualF(t, updatedToken, expectedToken)\n\tassertEqualF(t, updatedMaster, expectedMaster)\n\tassertEqualF(t, updatedSession, expectedSession)\n}\n\nfunc 
TestCheckNamedValue(t *testing.T) {\n\tsc := &snowflakeConn{}\n\n\tt.Run(\"dont panic on nil UUID\", func(t *testing.T) {\n\t\tdefer func() {\n\t\t\tif r := recover(); r != nil {\n\t\t\t\tt.Errorf(\"expected not to panic, but did panic\")\n\t\t\t}\n\t\t}()\n\t\tvar nilUUID *UUID\n\t\tnv := driver.NamedValue{Value: nilUUID}\n\t\terr := sc.CheckNamedValue(&nv) // should not panic and return false\n\t\tassertErrIsE(t, err, driver.ErrSkip, \"expected not to support binding nil *UUID\")\n\t})\n\n\tt.Run(\"dont panic on nil pointer array\", func(t *testing.T) {\n\t\tdefer func() {\n\t\t\tif r := recover(); r != nil {\n\t\t\t\tt.Errorf(\"expected not to panic, but did panic\")\n\t\t\t}\n\t\t}()\n\t\tvar nilArray *[]string\n\t\tnv := driver.NamedValue{Value: nilArray}\n\t\terr := sc.CheckNamedValue(&nv) // should not panic and return false\n\t\tassertErrIsE(t, err, driver.ErrSkip, \"expected not to support binding nil []string\")\n\t})\n\n\tt.Run(\"dont panic on nil pointer\", func(t *testing.T) {\n\t\tdefer func() {\n\t\t\tif r := recover(); r != nil {\n\t\t\t\tt.Errorf(\"expected not to panic, but did panic\")\n\t\t\t}\n\t\t}()\n\t\tvar nilTime *time.Time\n\t\tnv := driver.NamedValue{Value: nilTime}\n\t\terr := sc.CheckNamedValue(&nv) // should not panic and return false\n\t\tassertErrIsE(t, err, driver.ErrSkip, \"expected not to support binding nil *time.Time\")\n\t})\n\n\tt.Run(\"dont panic on nil *big.Float\", func(t *testing.T) {\n\t\tdefer func() {\n\t\t\tif r := recover(); r != nil {\n\t\t\t\tt.Errorf(\"expected not to panic, but did panic\")\n\t\t\t}\n\t\t}()\n\t\tvar nilBigFloat *big.Float\n\t\tnv := driver.NamedValue{Value: nilBigFloat}\n\t\terr := sc.CheckNamedValue(&nv) // should not panic and return false\n\t\tassertErrIsE(t, err, driver.ErrSkip, \"expected not to support binding nil *big.Float\")\n\t})\n\n\tt.Run(\"Is Valid for big.Float\", func(t *testing.T) {\n\t\tval := big.NewFloat(123.456)\n\t\tnv := driver.NamedValue{Value: val}\n\t\terr := 
sc.CheckNamedValue(&nv)\n\t\tassertNilE(t, err, \"expected to support binding big.Float\")\n\t})\n\n\tt.Run(\"Is Not Valid for other types\", func(t *testing.T) {\n\t\tval := 123.456 // float64\n\t\tnv := driver.NamedValue{Value: val}\n\t\terr := sc.CheckNamedValue(&nv)\n\t\tassertErrIsE(t, err, driver.ErrSkip, \"expected not to support binding float64\")\n\t})\n}\n\nfunc TestExecWithSpecificRequestID(t *testing.T) {\n\torigRequestID := NewUUID()\n\tctx := WithRequestID(context.Background(), origRequestID)\n\tpostQueryMock := func(_ context.Context, _ *snowflakeRestful,\n\t\t_ *url.Values, _ map[string]string, _ []byte, _ time.Duration,\n\t\trequestID UUID, _ *Config) (*execResponse, error) {\n\t\t// ensure the same requestID from context is used\n\t\tif requestID != origRequestID {\n\t\t\tt.Fatal(\"requestID doesn't match\")\n\t\t}\n\t\tdd := &execResponseData{}\n\t\treturn &execResponse{\n\t\t\tData:    *dd,\n\t\t\tMessage: \"\",\n\t\t\tCode:    \"0\",\n\t\t\tSuccess: true,\n\t\t}, nil\n\t}\n\n\tsr := &snowflakeRestful{\n\t\tFuncPostQuery: postQueryMock,\n\t}\n\n\tsc := &snowflakeConn{\n\t\tcfg:  &Config{},\n\t\trest: sr,\n\t}\n\tif _, err := sc.exec(ctx, \"\", false /* noResult */, false, /* isInternal */\n\t\tfalse /* describeOnly */, nil); err != nil {\n\t\tt.Fatalf(\"err: %v\", err)\n\t}\n}\n\nfunc TestExecContextPropagationIntegrationTest(t *testing.T) {\n\toriginalTracerProvider := otel.GetTracerProvider()\n\n\ttp := trace.NewTracerProvider()\n\totel.SetTracerProvider(tp)\n\tt.Cleanup(func() {\n\t\totel.SetTracerProvider(originalTracerProvider)\n\t})\n\n\ttracer := otel.Tracer(\"TestExecContextPropagationTracer\")\n\n\tctx, span := tracer.Start(context.Background(), \"test-span\")\n\tdefer span.End()\n\n\ttraceID := span.SpanContext().TraceID().String()\n\tspanID := span.SpanContext().SpanID().String()\n\n\t// expected header values\n\texpectedTraceparent := fmt.Sprintf(\"00-%s-%s-01\", traceID, spanID)\n\n\tpostQueryMock := func(_ context.Context, _ 
*snowflakeRestful,\n\t\t_ *url.Values, headers map[string]string, _ []byte, _ time.Duration,\n\t\t_ UUID, _ *Config) (*execResponse, error) {\n\n\t\t// ensure the traceID and spanID from the ctx passed in has been injected into the headers\n\t\t// in W3 Trace Context format\n\t\tassertEqualE(t, headers[\"traceparent\"], expectedTraceparent)\n\n\t\tdd := &execResponseData{}\n\t\treturn &execResponse{\n\t\t\tData:    *dd,\n\t\t\tMessage: \"\",\n\t\t\tCode:    \"0\",\n\t\t\tSuccess: true,\n\t\t}, nil\n\t}\n\n\tsr := &snowflakeRestful{\n\t\tFuncPostQuery: postQueryMock,\n\t}\n\n\tsc := &snowflakeConn{\n\t\tcfg:  &Config{},\n\t\trest: sr,\n\t}\n\n\t_, err := sc.exec(ctx, \"\", false /* noResult */, false, /* isInternal */\n\t\tfalse /* describeOnly */, nil)\n\tassertNilF(t, err)\n}\n\n// TestServiceName tests two things:\n// 1. request header contains X-Snowflake-Service if the cfg parameters\n// contains SERVICE_NAME\n// 2. SERVICE_NAME is updated by response payload\n// Uses interactive postQueryMock that generates a response based on header\nfunc TestServiceName(t *testing.T) {\n\tsr := &snowflakeRestful{\n\t\tFuncPostQuery: postQueryMock,\n\t}\n\n\tsc := &snowflakeConn{\n\t\tcfg:  &Config{},\n\t\trest: sr,\n\t}\n\n\texpectServiceName := serviceNameStub\n\tfor range 5 {\n\t\t_, err := sc.exec(context.Background(), \"\", false, /* noResult */\n\t\t\tfalse /* isInternal */, false /* describeOnly */, nil)\n\t\tassertNilF(t, err)\n\t\tif actualServiceName, ok := sc.syncParams.get(serviceName); ok {\n\t\t\tif *actualServiceName != expectServiceName {\n\t\t\t\tt.Errorf(\"service name mis-match. 
expected %v, actual %v\",\n\t\t\t\t\texpectServiceName, actualServiceName)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Error(\"No service name in the response\")\n\t\t}\n\t\texpectServiceName += serviceNameAppend\n\t}\n}\n\nvar closedSessionCount = 0\n\nvar testTelemetry = &snowflakeTelemetry{\n\tmutex: &sync.Mutex{},\n}\n\nfunc closeSessionMock(_ context.Context, _ *snowflakeRestful, _ time.Duration) error {\n\tclosedSessionCount++\n\treturn &SnowflakeError{\n\t\tNumber: ErrSessionGone,\n\t}\n}\n\nfunc TestCloseIgnoreSessionGone(t *testing.T) {\n\tsr := &snowflakeRestful{\n\t\tFuncCloseSession: closeSessionMock,\n\t}\n\tsc := &snowflakeConn{\n\t\tcfg:       &Config{},\n\t\trest:      sr,\n\t\ttelemetry: testTelemetry,\n\t}\n\n\tif sc.Close() != nil {\n\t\tt.Error(\"Close should let go session gone error\")\n\t}\n}\n\nfunc TestClientSessionPersist(t *testing.T) {\n\tsr := &snowflakeRestful{\n\t\tFuncCloseSession: closeSessionMock,\n\t}\n\tsc := &snowflakeConn{\n\t\tcfg:       &Config{},\n\t\trest:      sr,\n\t\ttelemetry: testTelemetry,\n\t}\n\tsc.cfg.ServerSessionKeepAlive = true\n\tcount := closedSessionCount\n\tif sc.Close() != nil {\n\t\tt.Error(\"Connection close should not return error\")\n\t}\n\tif count != closedSessionCount {\n\t\tt.Fatal(\"close session was called\")\n\t}\n}\n\nfunc TestFetchResultByQueryID(t *testing.T) {\n\twiremock.registerMappings(t,\n\t\twiremockMapping{filePath: \"auth/password/successful_flow.json\"},\n\t\twiremockMapping{filePath: \"query/query_execution.json\"},\n\t\twiremockMapping{filePath: \"query/query_monitoring.json\"},\n\t)\n\n\tcfg := wiremock.connectionConfig()\n\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\tdb := sql.OpenDB(connector)\n\tdefer db.Close()\n\n\tconn, err := db.Conn(context.Background())\n\tassertNilF(t, err)\n\tdefer conn.Close()\n\n\tvar qid string\n\terr = conn.Raw(func(x any) error {\n\t\trows1, err := x.(driver.QueryerContext).QueryContext(context.Background(), \"SELECT 1\", nil)\n\t\tif err != nil 
{\n\t\t\treturn err\n\t\t}\n\t\tdefer rows1.Close()\n\t\tqid = rows1.(SnowflakeRows).GetQueryID()\n\t\treturn nil\n\t})\n\tassertNilF(t, err)\n\n\tctx := WithFetchResultByID(context.Background(), qid)\n\trows2, err := db.QueryContext(ctx, \"\")\n\tassertNilF(t, err)\n\tcloseCh := make(chan bool, 1)\n\trows2ext := &RowsExtended{rows: rows2, closeChan: &closeCh, t: t}\n\tdefer rows2ext.Close()\n\n\tvar ms, sum int\n\trows2ext.mustNext()\n\trows2ext.mustScan(&ms, &sum)\n\tassertEqualE(t, ms, 1)\n\tassertEqualE(t, sum, 5050)\n}\n\nfunc TestFetchRunningQueryByID(t *testing.T) {\n\twiremock.registerMappings(t,\n\t\twiremockMapping{filePath: \"auth/password/successful_flow.json\"},\n\t\twiremockMapping{filePath: \"query/query_execution.json\"},\n\t\twiremockMapping{filePath: \"query/query_monitoring_running.json\"},\n\t)\n\n\tcfg := wiremock.connectionConfig()\n\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\tdb := sql.OpenDB(connector)\n\tdefer db.Close()\n\n\tconn, err := db.Conn(context.Background())\n\tassertNilF(t, err)\n\tdefer conn.Close()\n\n\tvar qid string\n\terr = conn.Raw(func(x any) error {\n\t\trows1, err := x.(driver.QueryerContext).QueryContext(context.Background(), \"SELECT 1\", nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tdefer rows1.Close()\n\t\tqid = rows1.(SnowflakeRows).GetQueryID()\n\t\treturn nil\n\t})\n\tassertNilF(t, err)\n\n\tctx := WithFetchResultByID(context.Background(), qid)\n\trows2, err := db.QueryContext(ctx, \"\")\n\tassertNilF(t, err)\n\tcloseCh := make(chan bool, 1)\n\trows2ext := &RowsExtended{rows: rows2, closeChan: &closeCh, t: t}\n\tdefer rows2ext.Close()\n\n\tvar ms, sum int\n\trows2ext.mustNext()\n\trows2ext.mustScan(&ms, &sum)\n\tassertEqualE(t, ms, 1)\n\tassertEqualE(t, sum, 5050)\n}\n\nfunc TestFetchErrorQueryByID(t *testing.T) {\n\twiremock.registerMappings(t,\n\t\twiremockMapping{filePath: \"auth/password/successful_flow.json\"},\n\t\twiremockMapping{filePath: 
\"query/query_execution.json\"},\n\t\twiremockMapping{filePath: \"query/query_monitoring_error.json\"},\n\t)\n\n\tcfg := wiremock.connectionConfig()\n\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\tdb := sql.OpenDB(connector)\n\tdefer db.Close()\n\n\tconn, err := db.Conn(context.Background())\n\tassertNilF(t, err)\n\tdefer conn.Close()\n\n\tvar qid string\n\terr = conn.Raw(func(x any) error {\n\t\trows1, err := x.(driver.QueryerContext).QueryContext(context.Background(), \"SELECT 1\", nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tdefer rows1.Close()\n\t\tqid = rows1.(SnowflakeRows).GetQueryID()\n\t\treturn nil\n\t})\n\tassertNilF(t, err)\n\n\tctx := WithFetchResultByID(context.Background(), qid)\n\t_, err = db.QueryContext(ctx, \"\")\n\tassertNotNilF(t, err, \"Expected error when fetching failed query\")\n\n\tvar se *SnowflakeError\n\tassertErrorsAsF(t, err, &se)\n\tassertEqualE(t, se.Number, ErrQueryReportedError)\n}\n\nfunc TestFetchMalformedJsonQueryByID(t *testing.T) {\n\twiremock.registerMappings(t,\n\t\twiremockMapping{filePath: \"auth/password/successful_flow.json\"},\n\t\twiremockMapping{filePath: \"query/query_execution.json\"},\n\t\twiremockMapping{filePath: \"query/query_monitoring_malformed.json\"},\n\t)\n\n\tcfg := wiremock.connectionConfig()\n\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\tdb := sql.OpenDB(connector)\n\tdefer db.Close()\n\n\t// Execute a query to get a query ID using raw connection\n\tconn, err := db.Conn(context.Background())\n\tassertNilF(t, err)\n\tdefer conn.Close()\n\n\tvar qid string\n\terr = conn.Raw(func(x any) error {\n\t\trows1, err := x.(driver.QueryerContext).QueryContext(context.Background(), \"SELECT 1\", nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tdefer rows1.Close()\n\t\tqid = rows1.(SnowflakeRows).GetQueryID()\n\t\treturn nil\n\t})\n\tassertNilF(t, err)\n\n\tctx := WithFetchResultByID(context.Background(), qid)\n\t_, err = db.QueryContext(ctx, \"\")\n\tassertNotNilF(t, err, 
\"Expected error when fetching malformed JSON\")\n\n\tassertStringContainsF(t, err.Error(), \"invalid character\")\n}\n\nfunc TestIsPrivateLink(t *testing.T) {\n\tfor _, tc := range []struct {\n\t\thost          string\n\t\tisPrivatelink bool\n\t}{\n\t\t{\"testaccount.us-east-1.snowflakecomputing.com\", false},\n\t\t{\"testaccount-no-privatelink.snowflakecomputing.com\", false},\n\t\t{\"testaccount.us-east-1.privatelink.snowflakecomputing.com\", true},\n\t\t{\"testaccount.cn-region.snowflakecomputing.cn\", false},\n\t\t{\"testaccount.cn-region.privaTELINk.snowflakecomputing.cn\", true},\n\t\t{\"testaccount.some-region.privatelink.snowflakecomputing.mil\", true},\n\t\t{\"testaccount.us-east-1.privatelink.snowflakecOMPUTING.com\", true},\n\t\t{\"snowhouse.snowflakecomputing.xyz\", false},\n\t\t{\"snowhouse.privatelink.snowflakecomputing.xyz\", true},\n\t\t{\"snowhouse.PRIVATELINK.snowflakecomputing.xyz\", true},\n\t} {\n\t\tt.Run(tc.host, func(t *testing.T) {\n\t\t\tassertEqualE(t, checkIsPrivateLink(tc.host), tc.isPrivatelink)\n\t\t})\n\t}\n}\n\nfunc TestBuildPrivatelinkConn(t *testing.T) {\n\tov := newOcspValidator(&Config{\n\t\tHost:     \"testaccount.us-east-1.privatelink.snowflakecomputing.com\",\n\t\tAccount:  \"testaccount\",\n\t\tUser:     \"testuser\",\n\t\tPassword: \"testpassword\",\n\t})\n\tassertEqualE(t, ov.cacheServerURL, \"http://ocsp.testaccount.us-east-1.privatelink.snowflakecomputing.com/ocsp_response_cache.json\")\n\tassertEqualE(t, ov.retryURL, \"http://ocsp.testaccount.us-east-1.privatelink.snowflakecomputing.com/retry/%v/%v\")\n}\n\nfunc TestOcspAddressesSetup(t *testing.T) {\n\tfor _, tc := range []struct {\n\t\thost                string\n\t\tcacheURL            string\n\t\tprivateLinkRetryURL string\n\t}{\n\t\t{\n\t\t\thost:                \"testaccount.us-east-1.snowflakecomputing.com\",\n\t\t\tcacheURL:            fmt.Sprintf(\"%v/%v\", defaultCacheServerHost, cacheFileBaseName),\n\t\t\tprivateLinkRetryURL: 
\"\",\n\t\t},\n\t\t{\n\t\t\thost:                \"testaccount-no-privatelink.snowflakecomputing.com\",\n\t\t\tcacheURL:            fmt.Sprintf(\"%v/%v\", defaultCacheServerHost, cacheFileBaseName),\n\t\t\tprivateLinkRetryURL: \"\",\n\t\t},\n\t\t{\n\t\t\thost:                \"testaccount.us-east-1.privatelink.snowflakecomputing.com\",\n\t\t\tcacheURL:            \"http://ocsp.testaccount.us-east-1.privatelink.snowflakecomputing.com/ocsp_response_cache.json\",\n\t\t\tprivateLinkRetryURL: \"http://ocsp.testaccount.us-east-1.privatelink.snowflakecomputing.com/retry/%v/%v\",\n\t\t},\n\t\t{\n\t\t\thost:                \"testaccount.cn-region.snowflakecomputing.cn\",\n\t\t\tcacheURL:            \"http://ocsp.testaccount.cn-region.snowflakecomputing.cn/ocsp_response_cache.json\",\n\t\t\tprivateLinkRetryURL: \"\", // not a privatelink env, no need to setup retry URL\n\t\t},\n\t\t{\n\t\t\thost:                \"testaccount.cn-region.privaTELINk.snowflakecomputing.cn\",\n\t\t\tcacheURL:            \"http://ocsp.testaccount.cn-region.privatelink.snowflakecomputing.cn/ocsp_response_cache.json\",\n\t\t\tprivateLinkRetryURL: \"http://ocsp.testaccount.cn-region.privatelink.snowflakecomputing.cn/retry/%v/%v\",\n\t\t},\n\t\t{\n\t\t\thost:                \"testaccount.some-region.privatelink.snowflakecomputing.mil\",\n\t\t\tcacheURL:            \"http://ocsp.testaccount.some-region.privatelink.snowflakecomputing.mil/ocsp_response_cache.json\",\n\t\t\tprivateLinkRetryURL: \"http://ocsp.testaccount.some-region.privatelink.snowflakecomputing.mil/retry/%v/%v\",\n\t\t},\n\t} {\n\t\tt.Run(tc.host, func(t *testing.T) {\n\t\t\tov := newOcspValidator(&Config{\n\t\t\t\tHost: tc.host,\n\t\t\t})\n\t\t\tassertEqualE(t, ov.cacheServerURL, tc.cacheURL)\n\t\t\tassertEqualE(t, ov.retryURL, tc.privateLinkRetryURL)\n\n\t\t})\n\t}\n}\n\nfunc TestGetQueryStatus(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsct.mustExec(`create or replace table ut_conn(c1 number, c2 
string)\n\t\t\t\t\t\tas (select seq4() as seq, concat('str',to_varchar(seq)) as str1 \n\t\t\t\t\t\tfrom table(generator(rowcount => 100)))`,\n\t\t\tnil)\n\n\t\trows := sct.mustQueryContext(sct.sc.ctx, \"select min(c1) as ms, sum(c1) from ut_conn group by (c1 % 10) order by ms\", nil)\n\t\tqid := rows.(SnowflakeResult).GetQueryID()\n\n\t\t// use conn as type holder for SnowflakeConnection placeholder\n\t\tvar conn any = sct.sc\n\t\tqStatus, err := conn.(SnowflakeConnection).GetQueryStatus(sct.sc.ctx, qid)\n\t\tassertNilF(t, err, \"failed to get query status\")\n\t\tassertNotNilF(t, qStatus, \"there was no query status returned\")\n\t\tassertEqualE(t, qStatus.ErrorCode, \"\", \"expected no error code\")\n\t\tassertTrueE(t, qStatus.ScanBytes > 0, \"expected positive scan bytes\")\n\t\tassertTrueE(t, qStatus.ProducedRows == 10, \"expected 10 produced rows\")\n\t})\n}\n\nfunc TestAddTelemetryDataViaSnowflakeConnection(t *testing.T) {\n\twiremock.registerMappings(t,\n\t\tnewWiremockMapping(\"auth/password/successful_flow.json\"),\n\t\tnewWiremockMapping(\"telemetry/custom_telemetry.json\"))\n\tcfg := wiremock.connectionConfig()\n\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\tdb := sql.OpenDB(connector)\n\tdefer db.Close()\n\tconn, err := db.Conn(context.Background())\n\tassertNilF(t, err)\n\terr = conn.Raw(func(x any) error {\n\t\tm := map[string]string{}\n\t\tm[\"test_key\"] = \"test_value\"\n\t\treturn x.(SnowflakeConnection).AddTelemetryData(context.Background(), time.Now(), m)\n\t})\n\tassertNilF(t, err)\n}\n\nfunc TestConfigureTelemetry(t *testing.T) {\n\tfor _, enabled := range []bool{true, false} {\n\t\tt.Run(strconv.FormatBool(enabled), func(t *testing.T) {\n\t\t\twiremock.registerMappings(t,\n\t\t\t\twiremockMapping{\n\t\t\t\t\tfilePath: \"auth/password/successful_flow_with_telemetry.json\",\n\t\t\t\t\tparams:   
map[string]string{\"%CLIENT_TELEMETRY_ENABLED%\": strconv.FormatBool(enabled)},\n\t\t\t\t},\n\t\t\t)\n\t\t\tcfg := wiremock.connectionConfig()\n\t\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\t\tdb := sql.OpenDB(connector)\n\t\t\tdefer db.Close()\n\t\t\tconn, err := db.Conn(context.Background())\n\t\t\tassertNilF(t, err)\n\t\t\terr = conn.Raw(func(x any) error {\n\t\t\t\tsc := x.(*snowflakeConn)\n\t\t\t\tassertEqualE(t, sc.telemetry.enabled, enabled)\n\t\t\t\treturn nil\n\t\t\t})\n\t\t\tassertNilF(t, err)\n\t\t})\n\t}\n}\n\nfunc TestGetInvalidQueryStatus(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsct.sc.rest.RequestTimeout = 1 * time.Second\n\n\t\tqStatus, err := sct.sc.checkQueryStatus(sct.sc.ctx, \"1234\")\n\t\tassertNotNilF(t, err, \"expected an error\")\n\t\tassertNilE(t, qStatus, \"expected no query status\")\n\t})\n}\n\nfunc TestExecWithServerSideError(t *testing.T) {\n\tpostQueryMock := func(_ context.Context, _ *snowflakeRestful,\n\t\t_ *url.Values, _ map[string]string, _ []byte, _ time.Duration,\n\t\trequestID UUID, _ *Config) (*execResponse, error) {\n\t\tdd := &execResponseData{}\n\t\treturn &execResponse{\n\t\t\tData:    *dd,\n\t\t\tMessage: \"\",\n\t\t\tCode:    \"\",\n\t\t\tSuccess: false,\n\t\t}, nil\n\t}\n\n\tsr := &snowflakeRestful{\n\t\tFuncPostQuery: postQueryMock,\n\t}\n\tsc := &snowflakeConn{\n\t\tcfg:       &Config{},\n\t\trest:      sr,\n\t\ttelemetry: testTelemetry,\n\t}\n\t_, err := sc.exec(context.Background(), \"\", false, /* noResult */\n\t\tfalse /* isInternal */, false /* describeOnly */, nil)\n\tassertNotNilF(t, err, \"expected a server side error\")\n\tsfe := err.(*SnowflakeError)\n\terrUnknownError := errors2.ErrUnknownError()\n\tif sfe.Number != -1 || sfe.SQLState != \"-1\" || sfe.QueryID != \"-1\" {\n\t\tt.Errorf(\"incorrect snowflake error. 
expected: %v, got: %v\", errUnknownError, *sfe)\n\t}\n\tassertStringContainsF(t, sfe.Message, \"an unknown server side error occurred\")\n}\n\nfunc TestConcurrentReadOnParams(t *testing.T) {\n\tconfig, err := ParseDSN(dsn)\n\tassertNilF(t, err, \"Failed to parse dsn\")\n\tconnector := NewConnector(SnowflakeDriver{}, *config)\n\tdb := sql.OpenDB(connector)\n\tdefer db.Close()\n\n\tvar successCount, failureCount int32\n\twg := sync.WaitGroup{}\n\tfor range 10 {\n\t\twg.Add(1)\n\t\tgo func() {\n\t\t\tfor range 10 {\n\t\t\t\tfunc() {\n\t\t\t\t\tstmt, err := db.PrepareContext(context.Background(), \"SELECT table_schema FROM information_schema.columns WHERE table_schema = ? LIMIT 1\")\n\t\t\t\t\tif err != nil || stmt == nil {\n\t\t\t\t\t\tatomic.AddInt32(&failureCount, 1)\n\t\t\t\t\t\treturn // Skip this iteration if PrepareContext fails\n\t\t\t\t\t}\n\t\t\t\t\tdefer stmt.Close()\n\n\t\t\t\t\trows, err := stmt.Query(\"INFORMATION_SCHEMA\")\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tatomic.AddInt32(&failureCount, 1)\n\t\t\t\t\t\treturn\n\t\t\t\t\t}\n\t\t\t\t\tdefer rows.Close()\n\n\t\t\t\t\trows.Next()\n\t\t\t\t\tvar tableName string\n\t\t\t\t\terr = rows.Scan(&tableName)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tatomic.AddInt32(&failureCount, 1)\n\t\t\t\t\t} else {\n\t\t\t\t\t\tatomic.AddInt32(&successCount, 1)\n\t\t\t\t\t}\n\t\t\t\t}()\n\t\t\t}\n\t\t\twg.Done()\n\t\t}()\n\t}\n\twg.Wait()\n\n\ttotalOperations := int32(100) // 10 goroutines × 10 operations each\n\tif successCount != totalOperations {\n\t\tt.Errorf(\"Expected all %d concurrent operations to succeed, got %d successes, %d failures\",\n\t\t\ttotalOperations, successCount, failureCount)\n\t} else {\n\t\tt.Logf(\"All %d concurrent operations completed successfully\", successCount)\n\t}\n}\n\nfunc postQueryTest(_ context.Context, _ *snowflakeRestful, _ *url.Values, headers map[string]string, _ []byte, _ 
time.Duration, _ UUID, _ *Config) (*execResponse, error) {\n\treturn nil, errors.New(\"failed to get query response\")\n}\n\nfunc postQueryFail(_ context.Context, _ *snowflakeRestful, _ *url.Values, headers map[string]string, _ []byte, _ time.Duration, _ UUID, _ *Config) (*execResponse, error) {\n\tdd := &execResponseData{\n\t\tQueryID:  \"1eFhmhe23242kmfd540GgGre\",\n\t\tSQLState: \"22008\",\n\t}\n\treturn &execResponse{\n\t\tData:    *dd,\n\t\tMessage: \"failed to get query response\",\n\t\tCode:    \"12345\",\n\t\tSuccess: false,\n\t}, errors.New(\"failed to get query response\")\n}\n\nfunc TestErrorReportingOnConcurrentFails(t *testing.T) {\n\tdb := openDB(t)\n\tdefer db.Close()\n\tvar wg sync.WaitGroup\n\tn := 5\n\twg.Add(3 * n)\n\tfor range n {\n\t\tgo executeQueryAndConfirmMessage(db, \"SELECT * FROM TABLE_ABC\", \"TABLE_ABC\", t, &wg)\n\t\tgo executeQueryAndConfirmMessage(db, \"SELECT * FROM TABLE_DEF\", \"TABLE_DEF\", t, &wg)\n\t\tgo executeQueryAndConfirmMessage(db, \"SELECT * FROM TABLE_GHI\", \"TABLE_GHI\", t, &wg)\n\t}\n\twg.Wait()\n}\n\nfunc executeQueryAndConfirmMessage(db *sql.DB, query string, expectedErrorTable string, t *testing.T, wg *sync.WaitGroup) {\n\tdefer wg.Done()\n\t_, err := db.Exec(query)\n\tmessage := err.(*SnowflakeError).Message\n\tif !strings.Contains(message, expectedErrorTable) {\n\t\tt.Errorf(\"QueryID: %s, Message %s ###### Expected error message table name: %s\",\n\t\t\terr.(*SnowflakeError).QueryID, err.(*SnowflakeError).Message, expectedErrorTable)\n\t}\n}\n\nfunc TestQueryArrowStreamError(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tnumrows := 50000\n\t\tquery := fmt.Sprintf(selectRandomGenerator, numrows)\n\t\tsct.sc.rest = &snowflakeRestful{\n\t\t\tFuncPostQuery:    postQueryTest,\n\t\t\tFuncCloseSession: closeSessionMock,\n\t\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t\t\tRequestTimeout:   10,\n\t\t}\n\t\t_, err := sct.sc.QueryArrowStream(sct.sc.ctx, query)\n\t\tif err == nil 
{\n\t\t\tt.Error(\"should have raised an error\")\n\t\t}\n\n\t\tsct.sc.rest.FuncPostQuery = postQueryFail\n\t\t_, err = sct.sc.QueryArrowStream(sct.sc.ctx, query)\n\t\tassertNotNilF(t, err, \"should have raised an error\")\n\t\t_, ok := err.(*SnowflakeError)\n\t\tassertTrueF(t, ok, \"should be snowflake error\")\n\t})\n}\n\nfunc TestExecContextError(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsct.sc.rest = &snowflakeRestful{\n\t\t\tFuncPostQuery:    postQueryTest,\n\t\t\tFuncCloseSession: closeSessionMock,\n\t\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t\t\tRequestTimeout:   10,\n\t\t}\n\n\t\t_, err := sct.sc.ExecContext(sct.sc.ctx, \"SELECT 1\", []driver.NamedValue{})\n\t\tassertNotNilF(t, err, \"should have raised an error\")\n\n\t\tsct.sc.rest.FuncPostQuery = postQueryFail\n\t\t_, err = sct.sc.ExecContext(sct.sc.ctx, \"SELECT 1\", []driver.NamedValue{})\n\t\tassertNotNilF(t, err, \"should have raised an error\")\n\t})\n}\n\nfunc TestQueryContextError(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsct.sc.rest = &snowflakeRestful{\n\t\t\tFuncPostQuery:    postQueryTest,\n\t\t\tFuncCloseSession: closeSessionMock,\n\t\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t\t\tRequestTimeout:   10,\n\t\t}\n\t\t_, err := sct.sc.QueryContext(sct.sc.ctx, \"SELECT 1\", []driver.NamedValue{})\n\t\tassertNotNilF(t, err, \"should have raised an error\")\n\n\t\tsct.sc.rest.FuncPostQuery = postQueryFail\n\t\t_, err = sct.sc.QueryContext(sct.sc.ctx, \"SELECT 1\", []driver.NamedValue{})\n\t\tassertNotNilF(t, err, \"should have raised an error\")\n\t\t_, ok := err.(*SnowflakeError)\n\t\tassertTrueF(t, ok, \"should be snowflake error\")\n\t})\n}\n\nfunc TestPrepareQuery(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\t_, err := sct.sc.Prepare(\"SELECT 1\")\n\t\tassertNilF(t, err, \"failed to prepare query\")\n\t})\n}\n\nfunc TestBeginCreatesTransaction(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\ttx, err := sct.sc.Begin()\n\t\tassertNilF(t, err, \"Begin should not return an error\")\n\t\tassertNotNilF(t, tx, \"should have created a transaction with connection\")\n\t})\n}\n\ntype EmptyTransporter struct{}\n\nfunc (t EmptyTransporter) RoundTrip(*http.Request) (*http.Response, error) {\n\treturn nil, nil\n}\n\n// castToTransport safely casts http.RoundTripper to *http.Transport\n// Returns nil if the cast fails\nfunc castToTransport(rt http.RoundTripper) *http.Transport {\n\tif transport, ok := rt.(*http.Transport); ok {\n\t\treturn transport\n\t}\n\treturn nil\n}\n\nfunc TestGetTransport(t *testing.T) {\n\ttestcases := []struct {\n\t\tname              string\n\t\tcfg               *Config\n\t\ttransportCheck    func(t *testing.T, transport *http.Transport)\n\t\troundTripperCheck func(t *testing.T, roundTripper http.RoundTripper)\n\t}{\n\t\t{\n\t\t\tname: \"DisableOCSPChecks\",\n\t\t\tcfg:  &Config{Account: \"one\", DisableOCSPChecks: false},\n\t\t\ttransportCheck: func(t *testing.T, transport *http.Transport) {\n\t\t\t\t// We should have a verifier function\n\t\t\t\tassertNotNilF(t, transport)\n\t\t\t\tassertNotNilF(t, transport.TLSClientConfig)\n\t\t\t\tassertNotNilF(t, transport.TLSClientConfig.VerifyPeerCertificate)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"DisableOCSPChecks missing from Config\",\n\t\t\tcfg:  &Config{Account: \"four\"},\n\t\t\ttransportCheck: func(t *testing.T, transport *http.Transport) {\n\t\t\t\t// We should have a verifier function\n\t\t\t\tassertNotNilF(t, transport)\n\t\t\t\tassertNotNilF(t, transport.TLSClientConfig)\n\t\t\t\tassertNotNilF(t, 
transport.TLSClientConfig.VerifyPeerCertificate)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"whole Config is missing\",\n\t\t\tcfg:  nil,\n\t\t\ttransportCheck: func(t *testing.T, transport *http.Transport) {\n\t\t\t\t// We should not have a TLSClientConfig\n\t\t\t\tassertNotNilF(t, transport)\n\t\t\t\tassertNilF(t, transport.TLSClientConfig)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"Using custom Transporter\",\n\t\t\tcfg:  &Config{Account: \"five\", DisableOCSPChecks: true, Transporter: EmptyTransporter{}},\n\t\t\troundTripperCheck: func(t *testing.T, roundTripper http.RoundTripper) {\n\t\t\t\t// We should have a custom Transporter\n\t\t\t\tassertNotNilF(t, roundTripper)\n\t\t\t\tassertTrueE(t, roundTripper == EmptyTransporter{})\n\t\t\t},\n\t\t},\n\t}\n\tfor _, test := range testcases {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tresult, err := newTransportFactory(test.cfg, nil).createTransport(transportConfigFor(transportTypeSnowflake))\n\t\t\tassertNilE(t, err)\n\t\t\tif test.transportCheck != nil {\n\t\t\t\ttest.transportCheck(t, castToTransport(result))\n\t\t\t}\n\t\t\tif test.roundTripperCheck != nil {\n\t\t\t\ttest.roundTripperCheck(t, result)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestGetCRLTransport(t *testing.T) {\n\tt.Run(\"Using CRLs\", func(t *testing.T) {\n\t\tcrlCfg := &Config{\n\t\t\tCertRevocationCheckMode: CertRevocationCheckEnabled,\n\t\t\tDisableOCSPChecks:       true,\n\t\t}\n\t\ttransportFactory := newTransportFactory(crlCfg, nil)\n\t\tcrlRoundTripper, err := transportFactory.createTransport(transportConfigFor(transportTypeCRL))\n\t\tassertNilF(t, err)\n\t\ttransport := castToTransport(crlRoundTripper)\n\t\tassertNotNilF(t, transport, \"Expected http.Transport\")\n\t\tassertEqualE(t, transport.MaxIdleConns, defaultTransportConfigs.forTransportType(transportTypeCRL).MaxIdleConns)\n\t})\n}\n"
  },
  {
    "path": "connection_util.go",
"content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"maps\"\n\t\"runtime\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n)\n\nfunc (sc *snowflakeConn) isClientSessionKeepAliveEnabled() bool {\n\tv, ok := sc.syncParams.get(sessionClientSessionKeepAlive)\n\tif !ok {\n\t\treturn false\n\t}\n\treturn *v == \"true\"\n}\n\nfunc (sc *snowflakeConn) getClientSessionKeepAliveHeartbeatFrequency() (time.Duration, bool) {\n\tv, ok := sc.syncParams.get(sessionClientSessionKeepAliveHeartbeatFrequency)\n\n\tif !ok {\n\t\treturn 0, false\n\t}\n\n\tnum, err := strconv.Atoi(*v)\n\tif err != nil {\n\t\tlogger.Warnf(\"Failed to parse client session keepalive heartbeat frequency: %v. Falling back to default.\", err)\n\t\treturn 0, false\n\t}\n\n\treturn time.Duration(num) * time.Second, true\n}\n\nfunc (sc *snowflakeConn) startHeartBeat() {\n\tif sc.cfg != nil && !sc.isClientSessionKeepAliveEnabled() {\n\t\treturn\n\t}\n\tif sc.rest != nil {\n\t\tif heartbeatFrequency, ok := sc.getClientSessionKeepAliveHeartbeatFrequency(); ok {\n\t\t\tsc.rest.HeartBeat = newHeartBeat(sc.rest, heartbeatFrequency)\n\t\t} else {\n\t\t\tsc.rest.HeartBeat = newDefaultHeartBeat(sc.rest)\n\t\t}\n\t\tlogger.WithContext(sc.ctx).Debug(\"Start heart beat\")\n\t\tsc.rest.HeartBeat.start()\n\t}\n}\n\nfunc (sc *snowflakeConn) stopHeartBeat() {\n\tif sc.cfg != nil && !sc.isClientSessionKeepAliveEnabled() {\n\t\treturn\n\t}\n\tif sc.rest != nil && sc.rest.HeartBeat != nil {\n\t\tlogger.WithContext(sc.ctx).Debug(\"Stop heart beat\")\n\t\tsc.rest.HeartBeat.stop()\n\t}\n}\n\nfunc (sc *snowflakeConn) getArrayBindStageThreshold() int {\n\tv, ok := sc.syncParams.get(sessionArrayBindStageThreshold)\n\tif !ok {\n\t\treturn 0\n\t}\n\tnum, err := strconv.Atoi(*v)\n\tif err != nil {\n\t\treturn 0\n\t}\n\treturn num\n}\n\nfunc (sc *snowflakeConn) connectionTelemetry(cfg *Config) {\n\tdata := &telemetryData{\n\t\tMessage: 
map[string]string{\n\t\t\ttypeKey:          connectionParameters,\n\t\t\tsourceKey:        telemetrySource,\n\t\t\tdriverTypeKey:    \"Go\",\n\t\t\tdriverVersionKey: SnowflakeGoDriverVersion,\n\t\t\tgolangVersionKey: runtime.Version(),\n\t\t},\n\t\tTimestamp: time.Now().UnixNano() / int64(time.Millisecond),\n\t}\n\tmaps.Insert(data.Message, sc.syncParams.All())\n\tif err := sc.telemetry.addLog(data); err != nil {\n\t\tlogger.WithContext(sc.ctx).Warnf(\"cannot add telemetry log: %v\", err)\n\t}\n\tif err := sc.telemetry.sendBatch(); err != nil {\n\t\tlogger.WithContext(sc.ctx).Warnf(\"cannot send telemetry batch: %v\", err)\n\t}\n}\n\n// processFileTransfer creates a snowflakeFileTransferAgent object to process\n// any PUT/GET commands with their specified options\nfunc (sc *snowflakeConn) processFileTransfer(\n\tctx context.Context,\n\tdata *execResponse,\n\tquery string,\n\tisInternal bool) (\n\t*execResponse, error) {\n\toptions := &SnowflakeFileTransferOptions{}\n\tsfa := snowflakeFileTransferAgent{\n\t\tctx:          ctx,\n\t\tsc:           sc,\n\t\tdata:         &data.Data,\n\t\tcommand:      query,\n\t\toptions:      options,\n\t\tstreamBuffer: new(bytes.Buffer),\n\t}\n\tfs, err := getFileStream(ctx)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif fs != nil {\n\t\tsfa.sourceStream = fs\n\t\tif isInternal {\n\t\t\tsfa.data.AutoCompress = false\n\t\t}\n\t}\n\tif op := getFileTransferOptions(ctx); op != nil {\n\t\tsfa.options = op\n\t}\n\tif sfa.options.MultiPartThreshold == 0 {\n\t\tsfa.options.MultiPartThreshold = multiPartThreshold\n\t\t// for streaming download, use a smaller default part size\n\t\tif sfa.commandType == downloadCommand && isFileGetStream(ctx) {\n\t\t\tsfa.options.MultiPartThreshold = streamingMultiPartThreshold\n\t\t}\n\t}\n\tif err := sfa.execute(); err != nil {\n\t\treturn nil, err\n\t}\n\tdata, err = sfa.result()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif sfa.options != nil && isFileGetStream(ctx) {\n\t\tif err := 
writeFileStream(ctx, sfa.streamBuffer); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\treturn data, nil\n}\n\nfunc getFileStream(ctx context.Context) (io.Reader, error) {\n\ts := ctx.Value(filePutStream)\n\tif s == nil {\n\t\treturn nil, nil\n\t}\n\tr, ok := s.(io.Reader)\n\tif !ok {\n\t\treturn nil, errors.New(\"incorrect io.Reader\")\n\t}\n\treturn r, nil\n}\n\nfunc isFileGetStream(ctx context.Context) bool {\n\tv := ctx.Value(fileGetStream)\n\treturn v != nil\n}\n\nfunc getFileTransferOptions(ctx context.Context) *SnowflakeFileTransferOptions {\n\tv := ctx.Value(fileTransferOptions)\n\tif v == nil {\n\t\treturn nil\n\t}\n\to, ok := v.(*SnowflakeFileTransferOptions)\n\tif !ok {\n\t\treturn nil\n\t}\n\treturn o\n}\n\nfunc writeFileStream(ctx context.Context, streamBuf *bytes.Buffer) error {\n\ts := ctx.Value(fileGetStream)\n\tw, ok := s.(io.Writer)\n\tif !ok {\n\t\treturn errors.New(\"expected an io.Writer\")\n\t}\n\t_, err := streamBuf.WriteTo(w)\n\treturn err\n}\n\nfunc (sc *snowflakeConn) populateSessionParameters(parameters []nameValueParameter) {\n\t// other session parameters (not all)\n\tlogger.WithContext(sc.ctx).Tracef(\"params: %#v\", parameters)\n\tfor _, param := range parameters {\n\t\tv := \"\"\n\t\tswitch vv := param.Value.(type) {\n\t\tcase int64:\n\t\t\tv = strconv.FormatInt(vv, 10)\n\t\tcase float64:\n\t\t\tv = strconv.FormatFloat(vv, 'g', -1, 64)\n\t\tcase bool:\n\t\t\tv = strconv.FormatBool(vv)\n\t\tcase string:\n\t\t\tv = vv\n\t\t}\n\t\tlogger.WithContext(sc.ctx).Tracef(\"parameter. 
name: %v, value: %v\", param.Name, v)\n\t\tsc.syncParams.set(strings.ToLower(param.Name), &v)\n\t}\n}\n\nfunc (sc *snowflakeConn) configureTelemetry() {\n\ttelemetryEnabled, ok := sc.syncParams.get(\"client_telemetry_enabled\")\n\t// In-band telemetry is enabled by default on the backend side.\n\tif ok && telemetryEnabled != nil && *telemetryEnabled == \"true\" {\n\t\tsc.telemetry.flushSize = defaultFlushSize\n\t\tsc.telemetry.sr = sc.rest\n\t\tsc.telemetry.mutex = &sync.Mutex{}\n\t\tsc.telemetry.enabled = true\n\t}\n}\n\nfunc isAsyncMode(ctx context.Context) bool {\n\treturn isBooleanContextEnabled(ctx, asyncMode)\n}\n\nfunc isDescribeOnly(ctx context.Context) bool {\n\treturn isBooleanContextEnabled(ctx, describeOnly)\n}\n\nfunc isInternal(ctx context.Context) bool {\n\treturn isBooleanContextEnabled(ctx, internalQuery)\n}\n\nfunc isLogQueryTextEnabled(ctx context.Context) bool {\n\treturn isBooleanContextEnabled(ctx, logQueryText)\n}\n\nfunc isLogQueryParametersEnabled(ctx context.Context) bool {\n\treturn isBooleanContextEnabled(ctx, logQueryParameters)\n}\n\nfunc isBooleanContextEnabled(ctx context.Context, key ContextKey) bool {\n\tv := ctx.Value(key)\n\tif v == nil {\n\t\treturn false\n\t}\n\td, ok := v.(bool)\n\treturn ok && d\n}\n\nfunc setResultType(ctx context.Context, resType resultType) context.Context {\n\treturn context.WithValue(ctx, snowflakeResultType, resType)\n}\n\nfunc getResultType(ctx context.Context) resultType {\n\treturn ctx.Value(snowflakeResultType).(resultType)\n}\n\n// isDml returns true if the statement type code is in the range of DML.\nfunc isDml(v int64) bool {\n\treturn statementTypeIDDml <= v && v <= statementTypeIDMultiTableInsert\n}\n\nfunc isDql(data *execResponseData) bool {\n\treturn data.StatementTypeID == statementTypeIDSelect && !isMultiStmt(data)\n}\n\nfunc updateRows(data execResponseData) (int64, error) {\n\tif data.RowSet == nil {\n\t\treturn 0, nil\n\t}\n\tvar count int64\n\tfor i, n := 0, len(data.RowType); i < n; 
i++ {\n\t\tv, err := strconv.ParseInt(*data.RowSet[0][i], 10, 64)\n\t\tif err != nil {\n\t\t\treturn -1, err\n\t\t}\n\t\tcount += v\n\t}\n\treturn count, nil\n}\n\n// isMultiStmt returns true if the statement code is of type multistatement\n// Note that the statement type code is also equivalent to type INSERT, so an\n// additional check of the name is required\nfunc isMultiStmt(data *execResponseData) bool {\n\tvar isMultistatementByReturningSelect = data.StatementTypeID == statementTypeIDSelect && data.RowType[0].Name == \"multiple statement execution\"\n\treturn isMultistatementByReturningSelect || data.StatementTypeID == statementTypeIDMultistatement\n}\n\nfunc getResumeQueryID(ctx context.Context) (string, error) {\n\tval := ctx.Value(fetchResultByID)\n\tif val == nil {\n\t\treturn \"\", nil\n\t}\n\tstrVal, ok := val.(string)\n\tif !ok {\n\t\treturn \"\", fmt.Errorf(\"failed to cast val %+v to string\", val)\n\t}\n\t// so there is a queryID in context for which we want to fetch the result\n\tif !queryIDRegexp.MatchString(strVal) {\n\t\treturn strVal, &SnowflakeError{\n\t\t\tNumber:  ErrQueryIDFormat,\n\t\t\tMessage: \"Invalid QID\",\n\t\t\tQueryID: strVal,\n\t\t}\n\t}\n\treturn strVal, nil\n}\n\n// returns snowflake chunk downloader by default or stream based chunk\n// downloader if option provided through context\nfunc populateChunkDownloader(\n\tctx context.Context,\n\tsc *snowflakeConn,\n\tdata execResponseData) chunkDownloader {\n\n\treturn &snowflakeChunkDownloader{\n\t\tsc:                 sc,\n\t\tctx:                ctx,\n\t\tpool:               getAllocator(ctx),\n\t\tCurrentChunk:       make([]chunkRowType, len(data.RowSet)),\n\t\tChunkMetas:         data.Chunks,\n\t\tTotal:              data.Total,\n\t\tTotalRowIndex:      int64(-1),\n\t\tCellCount:          len(data.RowType),\n\t\tQrmk:               data.Qrmk,\n\t\tQueryResultFormat:  data.QueryResultFormat,\n\t\tChunkHeader:        data.ChunkHeaders,\n\t\tFuncDownload:       
downloadChunk,\n\t\tFuncDownloadHelper: downloadChunkHelper,\n\t\tFuncGet:            getChunk,\n\t\tRowSet: rowSetType{\n\t\t\tRowType:      data.RowType,\n\t\t\tJSON:         data.RowSet,\n\t\t\tRowSetBase64: data.RowSetBase64,\n\t\t},\n\t}\n}\n\n// checkIsPrivateLink reports whether the host is a private link host.\n// We can only tell that private link is enabled for certain hosts when the hostname contains the subdomain\n// 'privatelink.snowflakecomputing.', but we don't have a good way of telling if a private link connection is\n// expected for internal stages, for example.\nfunc checkIsPrivateLink(host string) bool {\n\treturn strings.Contains(strings.ToLower(host), \".privatelink.snowflakecomputing.\")\n}\n\nfunc isStatementContext(ctx context.Context) bool {\n\tv := ctx.Value(executionType)\n\treturn v == executionTypeStatement\n}\n"
  },
  {
    "path": "connectivity_diagnosis.go",
"content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"crypto/x509\"\n\t\"encoding/json\"\n\t\"encoding/pem\"\n\t\"errors\"\n\t\"fmt\"\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"slices\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n)\n\ntype connectivityDiagnoser struct {\n\tdiagnosticClient *http.Client\n}\n\nfunc newConnectivityDiagnoser(cfg *Config) *connectivityDiagnoser {\n\treturn &connectivityDiagnoser{\n\t\tdiagnosticClient: createDiagnosticClient(cfg),\n\t}\n}\n\ntype allowlistEntry struct {\n\tHost string `json:\"host\"`\n\tPort int    `json:\"port\"`\n\tType string `json:\"type\"`\n}\n\ntype allowlist struct {\n\tEntries []allowlistEntry\n}\n\n// acceptable HTTP status codes for connectivity diagnosis.\n// From a connectivity standpoint, e.g. HTTP 403 from AWS S3 is perfectly fine;\n// GCS buckets and Azure blobs respond with HTTP 400 to a plain GET, which is also okay.\nvar connDiagAcceptableStatusCodes = []int{http.StatusOK, http.StatusForbidden, http.StatusBadRequest}\n\n// map of already-fetched CRLs so that we do not test them more than once, as they can be quite large\nvar connDiagTestedCrls = make(map[string]string)\n\n// create a diagnostic client with the appropriate transport for the given config\nfunc createDiagnosticClient(cfg *Config) *http.Client {\n\ttransport := createDiagnosticTransport(cfg)\n\n\tclientTimeout := cfg.ClientTimeout\n\tif clientTimeout == 0 {\n\t\tclientTimeout = time.Duration(sfconfig.DefaultClientTimeout)\n\t}\n\n\treturn &http.Client{\n\t\tTimeout:   clientTimeout,\n\t\tTransport: transport,\n\t}\n}\n\n// necessary to be able to log the IP address of the remote host to which we actually connected,\n// which may even differ from the result of DNS resolution\nfunc createDiagnosticDialContext() func(ctx context.Context, network, addr string) (net.Conn, error) {\n\tdialer := &net.Dialer{\n\t\tTimeout:   30 * 
time.Second,\n\t\tKeepAlive: 30 * time.Second,\n\t}\n\n\treturn func(ctx context.Context, network, addr string) (net.Conn, error) {\n\t\tconn, err := dialer.DialContext(ctx, network, addr)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tif remoteAddr := conn.RemoteAddr(); remoteAddr != nil {\n\t\t\tremoteIPStr := remoteAddr.String()\n\t\t\t// parse out just the IP (maybe port is present)\n\t\t\tif host, _, err := net.SplitHostPort(remoteIPStr); err == nil {\n\t\t\t\tremoteIPStr = host\n\t\t\t}\n\n\t\t\t// get hostname\n\t\t\thostname, _, _ := net.SplitHostPort(addr)\n\t\t\tif hostname == \"\" {\n\t\t\t\thostname = addr\n\t\t\t}\n\n\t\t\tlogger.Infof(\"[createDiagnosticDialContext] Connected to %s (remote IP: %s)\", hostname, remoteIPStr)\n\t\t}\n\n\t\treturn conn, nil\n\t}\n}\n\n// enhance the transport with IP logging\nfunc createDiagnosticTransport(cfg *Config) *http.Transport {\n\tbaseTransport, err := newTransportFactory(cfg, &snowflakeTelemetry{enabled: false}).createTransport(transportConfigFor(transportTypeSnowflake))\n\tif err != nil {\n\t\tlogger.Fatalf(\"[createDiagnosticTransport] failed to get the transport from the config: %v\", err)\n\t}\n\tif baseTransport == nil {\n\t\tlogger.Fatal(\"[createDiagnosticTransport] transport from config is nil\")\n\t}\n\n\tvar httpTransport = baseTransport.(*http.Transport)\n\n\t// return a new transport enhanced with remote IP logging\n\t// for SnowflakeNoOcspTransport, TLSClientConfig is nil\n\treturn &http.Transport{\n\t\tTLSClientConfig: httpTransport.TLSClientConfig,\n\t\tMaxIdleConns:    httpTransport.MaxIdleConns,\n\t\tIdleConnTimeout: httpTransport.IdleConnTimeout,\n\t\tProxy:           httpTransport.Proxy,\n\t\tDialContext:     createDiagnosticDialContext(),\n\t}\n}\n\nfunc (cd *connectivityDiagnoser) openAndReadAllowlistJSON(filePath string) (allowlist allowlist, err error) {\n\tif filePath == \"\" {\n\t\tlogger.Info(\"[openAndReadAllowlistJSON] allowlist.json location not specified, trying to load 
from current directory.\")\n\t\tfilePath = \"allowlist.json\"\n\t}\n\tlogger.Infof(\"[openAndReadAllowlistJSON] reading allowlist from %s.\", filePath)\n\tfileContent, err := os.ReadFile(filePath)\n\tif err != nil {\n\t\treturn allowlist, err\n\t}\n\n\tlogger.Debug(\"[openAndReadAllowlistJSON] parsing allowlist.json\")\n\terr = json.Unmarshal(fileContent, &allowlist.Entries)\n\treturn allowlist, err\n}\n\n// look up the host, using the local resolver\nfunc (cd *connectivityDiagnoser) resolveHostname(hostname string) {\n\tips, err := net.LookupIP(hostname)\n\tif err != nil {\n\t\tlogger.Errorf(\"[resolveHostname] error resolving hostname %s: %s\", hostname, err)\n\t\treturn\n\t}\n\tfor _, ip := range ips {\n\t\tlogger.Infof(\"[resolveHostname] resolved hostname %s to %s\", hostname, ip.String())\n\t\tif checkIsPrivateLink(hostname) && !ip.IsPrivate() {\n\t\t\tlogger.Errorf(\"[resolveHostname] this hostname %s should resolve to a private IP, but %s is public IP. Please, check your DNS configuration.\", hostname, ip.String())\n\t\t}\n\t}\n}\n\nfunc (cd *connectivityDiagnoser) isAcceptableStatusCode(statusCode int, acceptableCodes []int) bool {\n\treturn slices.Contains(acceptableCodes, statusCode)\n}\n\nfunc (cd *connectivityDiagnoser) fetchCRL(uri string) error {\n\tif _, ok := connDiagTestedCrls[uri]; ok {\n\t\tlogger.Infof(\"[fetchCRL] CRL for %s already fetched and parsed.\", uri)\n\t\treturn nil\n\t}\n\tlogger.Infof(\"[fetchCRL] fetching %s\", uri)\n\treq, err := cd.createRequest(uri)\n\tif err != nil {\n\t\tlogger.Errorf(\"[fetchCRL] error creating request: %v\", err)\n\t\treturn err\n\t}\n\tresp, err := cd.diagnosticClient.Do(req)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"[fetchCRL] HTTP GET to %s endpoint failed: %w\", uri, err)\n\t}\n\t// if closing response body is unsuccessful for some reason\n\tdefer func(Body io.ReadCloser) {\n\t\terr := Body.Close()\n\t\tif err != nil {\n\t\t\tlogger.Errorf(\"[fetchCRL] Failed to close response body: %v\", 
err)\n\t\t\treturn\n\t\t}\n\t}(resp.Body)\n\n\tif resp.StatusCode != http.StatusOK {\n\t\treturn fmt.Errorf(\"[fetchCRL] HTTP response status from endpoint: %s\", resp.Status)\n\t}\n\n\tbody, err := io.ReadAll(resp.Body)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"[fetchCRL] failed to read response body: %w\", err)\n\t}\n\tlogger.Infof(\"[fetchCRL] %s retrieved successfully (%d bytes)\", uri, len(body))\n\tlogger.Infof(\"[fetchCRL] Parsing CRL fetched from %s\", uri)\n\tcrl, err := x509.ParseRevocationList(body)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"[fetchCRL] Failed to parse CRL: %w\", err)\n\t}\n\tlogger.Infof(\"    CRL Issuer: %s\", crl.Issuer)\n\tlogger.Infof(\"    This Update: %s\", crl.ThisUpdate)\n\tlogger.Infof(\"    Next Update: %s\", crl.NextUpdate)\n\tlogger.Infof(\"    Revoked Certificates#: %s\", strconv.Itoa(len(crl.RevokedCertificateEntries)))\n\n\tconnDiagTestedCrls[uri] = \"\"\n\treturn nil\n}\n\nfunc (cd *connectivityDiagnoser) doHTTP(request *http.Request) error {\n\tif strings.HasPrefix(request.URL.Host, \"ocsp.snowflakecomputing.\") {\n\t\tfullOCSPCacheURI := request.URL.String() + \"/ocsp_response_cache.json\"\n\t\tnewURL, err := url.Parse(fullOCSPCacheURI)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"failed to parse the full OCSP cache URL: %w\", err)\n\t\t}\n\t\trequest.URL = newURL\n\t}\n\tlogger.Infof(\"[doHTTP] testing HTTP connection to %s\", request.URL.String())\n\tresp, err := cd.diagnosticClient.Do(request)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"HTTP GET to %s endpoint failed: %w\", request.URL.String(), err)\n\t}\n\tdefer func(Body io.ReadCloser) {\n\t\terr := Body.Close()\n\t\tif err != nil {\n\t\t\tlogger.Errorf(\"[doHTTP] Failed to close response body: %v\", err)\n\t\t\treturn\n\t\t}\n\t}(resp.Body)\n\n\tif !cd.isAcceptableStatusCode(resp.StatusCode, connDiagAcceptableStatusCodes) {\n\t\treturn fmt.Errorf(\"HTTP response status from %s endpoint: %s\", request.URL.String(), 
resp.Status)\n\t}\n\tlogger.Infof(\"[doHTTP] Successfully connected to %s, HTTP response status: %s\", request.URL.String(), resp.Status)\n\treturn nil\n}\n\nfunc (cd *connectivityDiagnoser) doHTTPSGetCerts(request *http.Request, downloadCRLs bool) error {\n\tlogger.Infof(\"[doHTTPSGetCerts] connecting to %s\", request.URL.String())\n\tresp, err := cd.diagnosticClient.Do(request)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"failed to connect: %w\", err)\n\t}\n\tdefer func(Body io.ReadCloser) {\n\t\terr := Body.Close()\n\t\tif err != nil {\n\t\t\tlogger.Errorf(\"[doHTTPSGetCerts] Failed to close response body: %v\", err)\n\t\t\treturn\n\t\t}\n\t}(resp.Body)\n\n\tif !cd.isAcceptableStatusCode(resp.StatusCode, connDiagAcceptableStatusCodes) {\n\t\treturn fmt.Errorf(\"HTTP response status from %s endpoint: %s\", request.URL.String(), resp.Status)\n\t}\n\tlogger.Infof(\"[doHTTPSGetCerts] Successfully connected to %s, HTTP response status: %s\", request.URL.String(), resp.Status)\n\n\tlogger.Debug(\"[doHTTPSGetCerts] getting TLS connection state\")\n\ttlsState := resp.TLS\n\tif tlsState == nil {\n\t\treturn errors.New(\"no TLS connection state available\")\n\t}\n\n\tlogger.Debug(\"[doHTTPSGetCerts] getting certificate chain\")\n\tcerts := tlsState.PeerCertificates\n\tlogger.Infof(\"[doHTTPSGetCerts] Retrieved %d certificate(s).\", len(certs))\n\n\t// log individual cert details\n\tfor i, cert := range certs {\n\t\tlogger.Infof(\"[doHTTPSGetCerts] Certificate %d, serial number: %x\", i+1, cert.SerialNumber)\n\t\tlogger.Infof(\"[doHTTPSGetCerts]   Subject: %s\", cert.Subject)\n\t\tlogger.Infof(\"[doHTTPSGetCerts]   Issuer:  %s\", cert.Issuer)\n\t\tlogger.Infof(\"[doHTTPSGetCerts]   Valid:   %s to %s\", cert.NotBefore, cert.NotAfter)\n\t\tlogger.Infof(\"[doHTTPSGetCerts]   For further details, check https://crt.sh/?serial=%x (non-Snowflake site)\", cert.SerialNumber)\n\n\t\t// if cert has CRL endpoint, log them too\n\t\tif len(cert.CRLDistributionPoints) > 0 
{\n\t\t\tlogger.Infof(\"[doHTTPSGetCerts]   CRL Distribution Points:\")\n\t\t\tfor _, dp := range cert.CRLDistributionPoints {\n\t\t\t\tlogger.Infof(\"[doHTTPSGetCerts]    - %s\", dp)\n\t\t\t\t// only try to download the actual CRL if configured to do so\n\t\t\t\tif downloadCRLs {\n\t\t\t\t\tif err := cd.fetchCRL(dp); err != nil {\n\t\t\t\t\t\tlogger.Errorf(\"[doHTTPSGetCerts]      Failed to fetch or parse CRL: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tlogger.Infof(\"[doHTTPSGetCerts]   CRL Distribution Points not included in the certificate.\")\n\t\t}\n\n\t\t// dump the full PEM data too on DEBUG loglevel\n\t\tpemData := pem.EncodeToMemory(&pem.Block{\n\t\t\tType:  \"CERTIFICATE\",\n\t\t\tBytes: cert.Raw,\n\t\t})\n\t\tlogger.Debug(\"[doHTTPSGetCerts]   certificate PEM:\")\n\t\tlogger.Debug(string(pemData))\n\t}\n\treturn nil\n}\n\nfunc (cd *connectivityDiagnoser) createRequest(uri string) (*http.Request, error) {\n\tlogger.Infof(\"[createRequest] creating GET request to %s\", uri)\n\treq, err := http.NewRequest(\"GET\", uri, nil)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn req, nil\n}\n\nfunc (cd *connectivityDiagnoser) checkProxy(req *http.Request) {\n\tdiagnosticTransport := cd.diagnosticClient.Transport.(*http.Transport)\n\tif diagnosticTransport == nil {\n\t\tlogger.Errorf(\"[checkProxy] diagnosticTransport is nil\")\n\t\treturn\n\t}\n\tif diagnosticTransport.Proxy == nil {\n\t\t// no proxy configured, nothing to log\n\t\treturn\n\t}\n\tp, err := diagnosticTransport.Proxy(req)\n\tif err != nil {\n\t\tlogger.Errorf(\"[checkProxy] problem determining PROXY: %v\", err)\n\t}\n\tif p != nil {\n\t\tlogger.Infof(\"[checkProxy] PROXY detected in the connection: %v\", p)\n\t}\n}\n\nfunc (cd *connectivityDiagnoser) performConnectivityCheck(entryType, host string, port int, downloadCRLs bool) (err error) {\n\tvar protocol string\n\tvar req *http.Request\n\n\tswitch port {\n\tcase 80:\n\t\tprotocol = \"http\"\n\tcase 443:\n\t\tprotocol = 
\"https\"\n\tdefault:\n\t\treturn fmt.Errorf(\"[performConnectivityCheck] unsupported port: %d\", port)\n\t}\n\n\tlogger.Infof(\"[performConnectivityCheck] %s check for %s %s\", strings.ToUpper(protocol), entryType, host)\n\treq, err = cd.createRequest(fmt.Sprintf(\"%s://%s\", protocol, host))\n\tif err != nil {\n\t\tlogger.Errorf(\"[performConnectivityCheck] error creating request: %v\", err)\n\t\treturn err\n\t}\n\n\tcd.checkProxy(req)\n\n\tswitch protocol {\n\tcase \"http\":\n\t\terr = cd.doHTTP(req)\n\tcase \"https\":\n\t\terr = cd.doHTTPSGetCerts(req, downloadCRLs)\n\t}\n\n\tif err != nil {\n\t\tlogger.Errorf(\"[performConnectivityCheck] error performing %s check: %v\", strings.ToUpper(protocol), err)\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\nfunc performDiagnosis(cfg *Config, downloadCRLs bool) {\n\tallowlistFile := cfg.ConnectionDiagnosticsAllowlistFile\n\n\tlogger.Info(\"[performDiagnosis] starting connectivity diagnosis based on allowlist file.\")\n\tif downloadCRLs {\n\t\tlogger.Info(\"[performDiagnosis] CRLs will be attempted to be downloaded and parsed during https tests.\")\n\t}\n\n\tdiag := newConnectivityDiagnoser(cfg)\n\n\tallowlist, err := diag.openAndReadAllowlistJSON(allowlistFile)\n\tif err != nil {\n\t\tlogger.Errorf(\"[performDiagnosis] error opening and parsing allowlist file: %v\", err)\n\t\treturn\n\t}\n\n\tfor _, entry := range allowlist.Entries {\n\t\thost := entry.Host\n\t\tport := entry.Port\n\t\tentryType := entry.Type\n\t\tlogger.Infof(\"[performDiagnosis] DNS check - resolving %s hostname %s\", entryType, host)\n\t\tdiag.resolveHostname(host)\n\n\t\tif port == 80 || port == 443 {\n\t\t\terr := diag.performConnectivityCheck(entryType, host, port, downloadCRLs)\n\t\t\tif err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "connectivity_diagnosis_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"crypto/tls\"\n\t\"encoding/pem\"\n\t\"fmt\"\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"net/url\"\n\t\"os\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n)\n\n/*\n * for the tests, we need to capture log output and assert on their content\n * this is done by creating a fresh logger to log into a buffer and look at that buffer\n * but we also need to preserve the original global logger to not modify that with tests\n * and restore original logger after the tests\n */\nfunc setupTestLogger() (buffer *bytes.Buffer, cleanup func()) {\n\toriginalLogger := logger\n\ttestLogger := CreateDefaultLogger() // from log.go\n\tbuffer = &bytes.Buffer{}\n\ttestLogger.SetOutput(buffer)\n\t_ = testLogger.SetLogLevel(\"INFO\")\n\tlogger = testLogger\n\n\tcleanup = func() {\n\t\tlogger = originalLogger\n\t}\n\n\treturn buffer, cleanup\n}\n\nfunc TestSetupTestLogger(t *testing.T) {\n\t// save  original global logger\n\toriginalLogger := logger\n\t// and restore it after test\n\tdefer func() { logger = originalLogger }()\n\n\tbuffer, cleanup := setupTestLogger()\n\n\tassertNotNilE(t, buffer, \"buffer should not be nil\")\n\tassertNotNilE(t, cleanup, \"cleanup function should not be nil\")\n\n\t// the test message should be in the buffer\n\ttestMessage := \"test log message for setupTestLogger\"\n\tlogger.Info(testMessage)\n\tlogOutput := buffer.String()\n\tassertStringContainsE(t, logOutput, testMessage, \"buffer should capture log output\")\n\n\t// now cleanup\n\tcleanup()\n\tassertEqualE(t, logger, originalLogger, \"cleanup should restore original logger\")\n\n\t// clear the buffer, log a new message into it\n\t// logs should not go to the test logger anymore\n\tbuffer.Reset()\n\tlogger.Info(\"this should not appear in test buffer\")\n\tassertEqualE(t, buffer.String(), \"\", \"buffer should be empty after cleanup\")\n}\n\n// test case 
types\ntype tcDiagnosticClient struct {\n\tname            string\n\tconfig          *Config\n\texpectedTimeout time.Duration\n}\n\ntype tcOpenAllowlistJSON struct {\n\tname           string\n\tsetup          func() (string, func())\n\tshouldError    bool\n\texpectedLength int\n}\n\ntype tcAcceptableStatusCode struct {\n\tstatusCode   int\n\tisAcceptable bool\n}\n\ntype tcFetchCRL struct {\n\tname          string\n\tsetupServer   func() *httptest.Server\n\tshouldError   bool\n\terrorContains string\n}\n\ntype tcCreateRequest struct {\n\tname        string\n\turi         string\n\tshouldError bool\n}\n\ntype tcDoHTTP struct {\n\tname          string\n\tsetupServer   func() *httptest.Server\n\tsetupRequest  func(serverURL string) *http.Request\n\tshouldError   bool\n\terrorContains string\n}\n\ntype tcDoHTTPSGetCerts struct {\n\tname          string\n\tsetupServer   func() *httptest.Server\n\tdownloadCRLs  bool\n\tshouldError   bool\n\terrorContains string\n}\n\ntype tcResolveHostname struct {\n\tname     string\n\thostname string\n}\n\ntype tcPerformConnectivityCheck struct {\n\tname         string\n\tentryType    string\n\thost         string\n\tport         int\n\tdownloadCRLs bool\n\texpectedLog  string\n}\n\nfunc TestCreateDiagnosticClient(t *testing.T) {\n\ttestcases := []tcDiagnosticClient{\n\t\t{\n\t\t\tname: \"Diagnostic Client with default timeout\",\n\t\t\tconfig: &Config{\n\t\t\t\tClientTimeout: 0,\n\t\t\t},\n\t\t\texpectedTimeout: sfconfig.DefaultClientTimeout,\n\t\t},\n\t\t{\n\t\t\tname: \"Diagnostic Client with custom timeout\",\n\t\t\tconfig: &Config{\n\t\t\t\tClientTimeout: 60 * time.Second,\n\t\t\t},\n\t\t\texpectedTimeout: 60 * time.Second,\n\t\t},\n\t}\n\n\tfor _, tc := range testcases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tclient := createDiagnosticClient(tc.config)\n\n\t\t\tassertNotNilE(t, client, \"client should not be nil\")\n\t\t\tassertEqualE(t, client.Timeout, tc.expectedTimeout, \"timeout did not 
match\")\n\t\t\tassertNotNilE(t, client.Transport, \"transport should not be nil\")\n\t\t})\n\t}\n}\n\nfunc TestCreateDiagnosticDialContext(t *testing.T) {\n\tdialContext := createDiagnosticDialContext()\n\n\tassertNotNilE(t, dialContext, \"dialContext should not be nil\")\n\n\t// new simple server to test basic connectivity\n\tserver := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\tw.WriteHeader(http.StatusOK)\n\t}))\n\tdefer server.Close()\n\n\tu, _ := url.Parse(server.URL)\n\n\tctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)\n\tdefer cancel()\n\n\t_, err := dialContext(ctx, \"tcp\", u.Host)\n\tassertNilE(t, err, \"error should be nil\")\n}\n\nfunc TestOpenAndReadAllowlistJSON(t *testing.T) {\n\tvar diagTest connectivityDiagnoser\n\ttestcases := []tcOpenAllowlistJSON{\n\t\t{\n\t\t\tname: \"Open and Read Allowlist - valid file path, 2 entries\",\n\t\t\t// create a temp allowlist file and then delete it\n\t\t\tsetup: func() (filePath string, cleanup func()) {\n\t\t\t\tcontent := `[{\"host\":\"myaccount.snowflakecomputing.com\",\"port\":443,\"type\":\"SNOWFLAKE_DEPLOYMENT\"},{\"host\":\"ocsp.snowflakecomputing.com\",\"port\":80,\"type\":\"OCSP_CACHE\"}]`\n\t\t\t\ttmpFile, err := os.CreateTemp(\"\", \"allowlist_*.json\")\n\t\t\t\tassertNilF(t, err, \"Error during creating temp allowlist file.\")\n\t\t\t\t_, err = tmpFile.WriteString(content)\n\t\t\t\tassertNilF(t, err, \"Error during writing temp allowlist file.\")\n\t\t\t\ttmpFile.Close()\n\n\t\t\t\treturn tmpFile.Name(), func() { os.Remove(tmpFile.Name()) }\n\t\t\t},\n\t\t\tshouldError:    false,\n\t\t\texpectedLength: 2,\n\t\t},\n\t\t{\n\t\t\tname: \"Open and Read Allowlist - empty file path\",\n\t\t\tsetup: func() (filePath string, cleanup func()) {\n\t\t\t\tcontent := `[{\"host\":\"myaccount.snowflakecomputing.com\",\"port\":443,\"type\":\"SNOWFLAKE_DEPLOYMENT\"}]`\n\t\t\t\t_ = os.WriteFile(\"allowlist.json\", []byte(content), 
0644)\n\n\t\t\t\treturn \"\", func() { os.Remove(\"allowlist.json\") }\n\t\t\t},\n\t\t\tshouldError:    false,\n\t\t\texpectedLength: 1,\n\t\t},\n\t\t{\n\t\t\tname: \"Open and Read Allowlist - non existent file\",\n\t\t\tsetup: func() (filePath string, cleanup func()) {\n\t\t\t\treturn \"/non/existent/file.json\", func() {}\n\t\t\t},\n\t\t\tshouldError:    true,\n\t\t\texpectedLength: 0,\n\t\t},\n\t}\n\n\tfor _, tc := range testcases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tfilePath, cleanup := tc.setup()\n\t\t\tdefer cleanup()\n\n\t\t\tallowlist, err := diagTest.openAndReadAllowlistJSON(filePath)\n\n\t\t\tif tc.shouldError {\n\t\t\t\tassertNotNilE(t, err, \"error should not be nil\")\n\t\t\t} else {\n\t\t\t\tassertNilE(t, err, \"error should be nil\")\n\t\t\t\tassertNotNilE(t, allowlist, \"file content should not be nil\")\n\t\t\t\tassertEqualE(t, len(allowlist.Entries), tc.expectedLength, \"allowlist length did not match\")\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestIsAcceptableStatusCode(t *testing.T) {\n\tvar diagTest connectivityDiagnoser\n\tacceptableCodes := []int{http.StatusOK, http.StatusForbidden, http.StatusBadRequest}\n\n\ttestcases := []tcAcceptableStatusCode{\n\t\t{http.StatusOK, true},\n\t\t{http.StatusForbidden, true},\n\t\t{http.StatusInternalServerError, false},\n\t\t{http.StatusUnauthorized, false},\n\t\t{http.StatusBadRequest, true},\n\t}\n\n\tfor _, tc := range testcases {\n\t\tt.Run(fmt.Sprintf(\"Is Acceptable Status Code - status %d\", tc.statusCode), func(t *testing.T) {\n\t\t\tresult := diagTest.isAcceptableStatusCode(tc.statusCode, acceptableCodes)\n\t\t\tassertEqualE(t, result, tc.isAcceptable, \"http status code acceptance is wrong\")\n\t\t})\n\t}\n}\n\nfunc TestFetchCRL(t *testing.T) {\n\tconfig := &Config{\n\t\tClientTimeout: 30 * time.Second,\n\t}\n\tdiagTest := newConnectivityDiagnoser(config)\n\tcrlPEM := `-----BEGIN X509 
CRL-----\nMIIBuDCBoQIBATANBgkqhkiG9w0BAQsFADBeMQswCQYDVQQGEwJVUzELMAkGA1UE\nCAwCQ0ExDTALBgNVBAcMBFRlc3QxEDAOBgNVBAoMB0V4YW1wbGUxDzANBgNVBAsM\nBlRlc3RDQTEQMA4GA1UEAwwHVGVzdCBDQRcNMjUwNzI1MTYyMTQzWhcNMzMxMDEx\nMTYyMTQzWqAPMA0wCwYDVR0UBAQCAhAAMA0GCSqGSIb3DQEBCwUAA4IBAQCakfe4\nyaYe6jhSVZc177/y7a+qV6Vs8Ly+CwQiYCM/LieEI7coUpcMtF43ShfzG5FawrMI\nxa3L2ew5EHDPelrMAdc56GzGCZFlOp16++3HS8qUpodctMdWWcR9Jn0OAfR1I3cY\nKtLfQbYqwr+Ti6LT0SDp8kXhltH8ZfUcDWH779WF1IQatu5J+GoyHnfFCsP9gI3H\nAacyfk7Pp7MyAUChvbM6miyUbWm5NLW9nZgmMxqi9VpMnGZSCwqpS9J01k8YAbwS\nS3HAS4o7ePBmhiERTPjqmwqEUdrKzEYMtdCFHHfnnDSZxdAmb+Ep6WjRgU1AHxak\n6aJpJF0+Ic2kaXXI\n-----END X509 CRL-----`\n\tblock, _ := pem.Decode([]byte(crlPEM))\n\ttestcases := []tcFetchCRL{\n\t\t{\n\t\t\tname: \"Fetch CRL - successful fetch\",\n\t\t\tsetupServer: func() *httptest.Server {\n\t\t\t\treturn httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t\t\t\tw.WriteHeader(http.StatusOK)\n\t\t\t\t\t_, _ = w.Write(block.Bytes)\n\t\t\t\t}))\n\t\t\t},\n\t\t\tshouldError: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Fetch CRL - server returns 404\",\n\t\t\tsetupServer: func() *httptest.Server {\n\t\t\t\treturn httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t\t\t\tw.WriteHeader(http.StatusNotFound)\n\t\t\t\t}))\n\t\t\t},\n\t\t\tshouldError:   true,\n\t\t\terrorContains: \"HTTP response status\",\n\t\t},\n\t\t{\n\t\t\tname: \"Fetch CRL - server returns 500\",\n\t\t\tsetupServer: func() *httptest.Server {\n\t\t\t\treturn httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t\t\t\tw.WriteHeader(http.StatusInternalServerError)\n\t\t\t\t}))\n\t\t\t},\n\t\t\tshouldError:   true,\n\t\t\terrorContains: \"HTTP response status\",\n\t\t},\n\t}\n\n\tfor _, tc := range testcases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tserver := tc.setupServer()\n\t\t\tdefer server.Close()\n\n\t\t\terr := diagTest.fetchCRL(server.URL)\n\n\t\t\tif tc.shouldError 
{\n\t\t\t\tassertNotNilE(t, err, \"error should not be nil\")\n\t\t\t\tif tc.errorContains != \"\" {\n\t\t\t\t\tassertStringContainsE(t, err.Error(), tc.errorContains, \"error message should contain the expected string\")\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tassertNilE(t, err, \"error should be nil\")\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestCreateRequest(t *testing.T) {\n\tvar diagTest connectivityDiagnoser\n\ttestcases := []tcCreateRequest{\n\t\t{\n\t\t\tname:        \"Create Request - valid http url\",\n\t\t\turi:         \"http://ocsp.snowflakecomputing.com\",\n\t\t\tshouldError: false,\n\t\t},\n\t\t{\n\t\t\tname:        \"Create Request - valid https url\",\n\t\t\turi:         \"https://myaccount.snowflakecomputing.com\",\n\t\t\tshouldError: false,\n\t\t},\n\t\t{\n\t\t\tname:        \"Create Request - invalid url\",\n\t\t\turi:         \":/invalid-url\",\n\t\t\tshouldError: true,\n\t\t},\n\t}\n\n\tfor _, tc := range testcases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\treq, err := diagTest.createRequest(tc.uri)\n\n\t\t\tif tc.shouldError {\n\t\t\t\tassertNotNilE(t, err, \"error should not be nil\")\n\t\t\t} else {\n\t\t\t\tassertNilE(t, err, \"error should be nil\")\n\t\t\t\tassertNotNilE(t, req, \"request should not be nil\")\n\t\t\t\tassertEqualE(t, req.Method, \"GET\", \"method should be GET\")\n\t\t\t\tassertEqualE(t, req.URL.String(), tc.uri, \"url should match\")\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestDoHTTP(t *testing.T) {\n\tvar diagTest connectivityDiagnoser\n\ttestcases := []tcDoHTTP{\n\t\t// simple disposable server to test basic connectivity\n\t\t{\n\t\t\tname: \"Do HTTP - successful http request\",\n\t\t\tsetupServer: func() *httptest.Server {\n\t\t\t\treturn httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t\t\t\tw.WriteHeader(http.StatusOK)\n\t\t\t\t}))\n\t\t\t},\n\t\t\tsetupRequest: func(serverURL string) *http.Request {\n\t\t\t\treq, _ := http.NewRequest(\"GET\", serverURL, nil)\n\t\t\t\treturn 
req\n\t\t\t},\n\t\t\tshouldError: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Do HTTP - ocsp.snowflakecomputing.com url modification\",\n\t\t\tsetupServer: func() *httptest.Server {\n\t\t\t\treturn httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t\t\t\t// doHTTP should automatically add ocsp_response_cache.json to the full url\n\t\t\t\t\tassertStringContainsE(t, r.URL.Path, \"ocsp_response_cache.json\", \"url path should contain ocsp_response_cache.json added\")\n\t\t\t\t\tw.WriteHeader(http.StatusOK)\n\t\t\t\t}))\n\t\t\t},\n\t\t\tsetupRequest: func(serverURL string) *http.Request {\n\t\t\t\treq, _ := http.NewRequest(\"GET\", serverURL, nil)\n\t\t\t\treq.URL.Host = \"ocsp.snowflakecomputing.com\"\n\t\t\t\treturn req\n\t\t\t},\n\t\t\tshouldError: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Do HTTP - (CHINA) ocsp.snowflakecomputing.cn url modification\",\n\t\t\tsetupServer: func() *httptest.Server {\n\t\t\t\treturn httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t\t\t\tassertStringContainsE(t, r.URL.Path, \"ocsp_response_cache.json\", \"url path should contain ocsp_response_cache.json added\")\n\t\t\t\t\tw.WriteHeader(http.StatusOK)\n\t\t\t\t}))\n\t\t\t},\n\t\t\tsetupRequest: func(serverURL string) *http.Request {\n\t\t\t\treq, _ := http.NewRequest(\"GET\", serverURL, nil)\n\t\t\t\treq.URL.Host = \"ocsp.snowflakecomputing.cn\"\n\t\t\t\treturn req\n\t\t\t},\n\t\t\t// http://ocsp.snowflakecomputing.cn/ocsp_response_cache.json throws HTTP404\n\t\t\tshouldError: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Do HTTP - server returns forbidden, acceptable\",\n\t\t\tsetupServer: func() *httptest.Server {\n\t\t\t\treturn httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t\t\t\tw.WriteHeader(http.StatusForbidden)\n\t\t\t\t}))\n\t\t\t},\n\t\t\tsetupRequest: func(serverURL string) *http.Request {\n\t\t\t\treq, _ := http.NewRequest(\"GET\", serverURL, nil)\n\t\t\t\treturn 
req\n\t\t\t},\n\t\t\tshouldError: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Do HTTP - server returns internal server error, not acceptable\",\n\t\t\tsetupServer: func() *httptest.Server {\n\t\t\t\treturn httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t\t\t\tw.WriteHeader(http.StatusInternalServerError)\n\t\t\t\t}))\n\t\t\t},\n\t\t\tsetupRequest: func(serverURL string) *http.Request {\n\t\t\t\treq, _ := http.NewRequest(\"GET\", serverURL, nil)\n\t\t\t\treturn req\n\t\t\t},\n\t\t\tshouldError:   true,\n\t\t\terrorContains: \"HTTP response status\",\n\t\t},\n\t}\n\n\tfor _, tc := range testcases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tserver := tc.setupServer()\n\t\t\tdefer server.Close()\n\n\t\t\t// modify the diagnostic client to use a shorter timeout\n\t\t\tdiagTest.diagnosticClient = &http.Client{Timeout: 10 * time.Second}\n\n\t\t\treq := tc.setupRequest(server.URL)\n\t\t\terr := diagTest.doHTTP(req)\n\n\t\t\tif tc.shouldError {\n\t\t\t\tassertNotNilE(t, err, \"error should not be nil\")\n\t\t\t\tif tc.errorContains != \"\" {\n\t\t\t\t\tassertStringContainsE(t, err.Error(), tc.errorContains, \"error message should contain the expected string\")\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tassertNilE(t, err, \"error should be nil\")\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestDoHTTPSGetCerts(t *testing.T) {\n\tvar diagTest connectivityDiagnoser\n\ttestcases := []tcDoHTTPSGetCerts{\n\t\t// simple disposable server with TLS to test basic connectivity\n\t\t{\n\t\t\tname: \"Do HTTPS - successful https request\",\n\t\t\tsetupServer: func() *httptest.Server {\n\t\t\t\treturn httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t\t\t\tw.WriteHeader(http.StatusOK)\n\t\t\t\t}))\n\t\t\t},\n\t\t\tdownloadCRLs: false,\n\t\t\tshouldError:  false,\n\t\t},\n\t\t{\n\t\t\tname: \"Do HTTPS - server returns forbidden, acceptable\",\n\t\t\tsetupServer: func() *httptest.Server {\n\t\t\t\treturn 
httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t\t\t\tw.WriteHeader(http.StatusForbidden)\n\t\t\t\t}))\n\t\t\t},\n\t\t\tdownloadCRLs: false,\n\t\t\tshouldError:  false,\n\t\t},\n\t\t{\n\t\t\tname: \"Do HTTPS - server returns internal server error, not acceptable\",\n\t\t\tsetupServer: func() *httptest.Server {\n\t\t\t\treturn httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t\t\t\tw.WriteHeader(http.StatusInternalServerError)\n\t\t\t\t}))\n\t\t\t},\n\t\t\tdownloadCRLs:  false,\n\t\t\tshouldError:   true,\n\t\t\terrorContains: \"HTTP response status\",\n\t\t},\n\t}\n\n\tfor _, tc := range testcases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tserver := tc.setupServer()\n\t\t\tdefer server.Close()\n\n\t\t\t// modify the diagnostic client to use a shorter timeout\n\t\t\t// and to ignore the server's certificate\n\t\t\tdiagTest.diagnosticClient = &http.Client{\n\t\t\t\tTimeout: 10 * time.Second,\n\t\t\t\tTransport: &http.Transport{\n\t\t\t\t\tTLSClientConfig: &tls.Config{InsecureSkipVerify: true},\n\t\t\t\t},\n\t\t\t}\n\n\t\t\treq, _ := http.NewRequest(\"GET\", server.URL, nil)\n\t\t\terr := diagTest.doHTTPSGetCerts(req, tc.downloadCRLs)\n\n\t\t\tif tc.shouldError {\n\t\t\t\tassertNotNilE(t, err, \"error should not be nil\")\n\t\t\t\tif tc.errorContains != \"\" {\n\t\t\t\t\tassertStringContainsE(t, err.Error(), tc.errorContains, \"error message should contain the expected string\")\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tassertNilE(t, err, \"error should be nil\")\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestCheckProxy(t *testing.T) {\n\tconfig := &Config{\n\t\tClientTimeout: 30 * time.Second,\n\t}\n\tdiagTest := newConnectivityDiagnoser(config)\n\n\tt.Run(\"Check Proxy - with proxy configured\", func(t *testing.T) {\n\t\t// setup test logger then restore original after test\n\t\tbuffer, cleanup := setupTestLogger()\n\t\tdefer cleanup()\n\n\t\t// set up transport with proxy\n\t\tproxyURL, _ := 
url.Parse(\"http://my.pro.xy:8080\")\n\t\tdiagTest.diagnosticClient.Transport = &http.Transport{\n\t\t\tProxy: func(req *http.Request) (*url.URL, error) {\n\t\t\t\treturn proxyURL, nil\n\t\t\t},\n\t\t}\n\n\t\t// this should generate a log output which indicates we use a proxy\n\t\treq, _ := http.NewRequest(\"GET\", \"https://myaccount.snowflakecomputing.com\", nil)\n\t\tdiagTest.checkProxy(req)\n\n\t\tlogOutput := buffer.String()\n\t\tassertStringContainsE(t, logOutput, \"[checkProxy] PROXY detected in the connection:\", \"log should contain proxy detection message\")\n\t\tassertStringContainsE(t, logOutput, \"http://my.pro.xy:8080\", \"log should contain the proxy URL\")\n\t})\n\n\tt.Run(\"Check Proxy - no proxy configured\", func(t *testing.T) {\n\t\t// setup test logger then restore original after test\n\t\tbuffer, cleanup := setupTestLogger()\n\t\tdefer cleanup()\n\n\t\t// set up transport without proxy\n\t\tdiagTest.diagnosticClient.Transport = &http.Transport{\n\t\t\tProxy: nil,\n\t\t}\n\n\t\treq, _ := http.NewRequest(\"GET\", \"https://myaccount.snowflakecomputing.com\", nil)\n\t\tdiagTest.checkProxy(req)\n\n\t\t// verify log output does NOT contain proxy detection\n\t\tlogOutput := buffer.String()\n\t\tif strings.Contains(logOutput, \"[checkProxy] PROXY detected\") {\n\t\t\tt.Errorf(\"log should not contain proxy detection message when no proxy is configured, but got: %s\", logOutput)\n\t\t}\n\t})\n\n\tt.Run(\"Check Proxy - proxy function returns error\", func(t *testing.T) {\n\t\t// setup test logger then restore original after test\n\t\tbuffer, cleanup := setupTestLogger()\n\t\tdefer cleanup()\n\n\t\t// deliberately return an error from the proxy function\n\t\tdiagTest.diagnosticClient.Transport = &http.Transport{\n\t\t\tProxy: func(req *http.Request) (*url.URL, error) {\n\t\t\t\treturn nil, fmt.Errorf(\"proxy configuration error\")\n\t\t\t},\n\t\t}\n\n\t\treq, _ := http.NewRequest(\"GET\", \"https://myaccount.snowflakecomputing.com\", 
nil)\n\t\tdiagTest.checkProxy(req)\n\n\t\t// verify log output contains error message\n\t\tlogOutput := buffer.String()\n\t\tassertStringContainsE(t, logOutput, \"[checkProxy] problem determining PROXY:\", \"log should contain proxy error message\")\n\t\tassertStringContainsE(t, logOutput, \"proxy configuration error\", \"log should contain the specific error\")\n\t})\n}\n\nfunc TestResolveHostname(t *testing.T) {\n\tvar diagTest connectivityDiagnoser\n\ttestcases := []tcResolveHostname{\n\t\t{\n\t\t\tname:     \"Resolve Hostname - valid hostname myaccount.snowflakecomputing.com\",\n\t\t\thostname: \"myaccount.snowflakecomputing.com\",\n\t\t},\n\t\t{\n\t\t\tname:     \"Resolve Hostname - invalid hostname\",\n\t\t\thostname: \"this.is.invalid\",\n\t\t},\n\t}\n\n\tfor _, tc := range testcases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t// setup test logger then restore original after test\n\t\t\tbuffer, cleanup := setupTestLogger()\n\t\t\tdefer cleanup()\n\n\t\t\tdiagTest.resolveHostname(tc.hostname)\n\n\t\t\tlogOutput := buffer.String()\n\n\t\t\t// check for expected log patterns based on hostname\n\t\t\tif tc.hostname == \"this.is.invalid\" {\n\t\t\t\tassertStringContainsE(t, logOutput, \"[resolveHostname] error resolving hostname\", \"should contain error message for invalid hostname\")\n\t\t\t\tassertStringContainsE(t, logOutput, tc.hostname, \"should contain the hostname in error message\")\n\t\t\t} else {\n\t\t\t\t// expect success message\n\t\t\t\tassertStringContainsE(t, logOutput, \"[resolveHostname] resolved hostname\", \"should contain success message for valid hostname\")\n\t\t\t\tassertStringContainsE(t, logOutput, tc.hostname, \"should contain the hostname in success message\")\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestPerformConnectivityCheck(t *testing.T) {\n\tconfig := &Config{\n\t\tClientTimeout: 30 * time.Second,\n\t}\n\tdiagTest := newConnectivityDiagnoser(config)\n\n\ttestcases := []tcPerformConnectivityCheck{\n\t\t{\n\t\t\tname:         \"HTTP 
check for port 80\",\n\t\t\tentryType:    \"OCSP_CACHE\",\n\t\t\thost:         \"ocsp.snowflakecomputing.com\",\n\t\t\tport:         80,\n\t\t\tdownloadCRLs: false,\n\t\t\texpectedLog:  \"[performConnectivityCheck] HTTP check\",\n\t\t},\n\t\t{\n\t\t\tname:         \"HTTPS check for port 443\",\n\t\t\tentryType:    \"DUMMY_SNOWFLAKE_DEPLOYMENT\",\n\t\t\thost:         \"www.snowflake.com\",\n\t\t\tport:         443,\n\t\t\tdownloadCRLs: false,\n\t\t\texpectedLog:  \"[performConnectivityCheck] HTTPS check\",\n\t\t},\n\t}\n\n\tfor _, tc := range testcases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t// setup test logger then restore original after test\n\t\t\tbuffer, cleanup := setupTestLogger()\n\t\t\tdefer cleanup()\n\n\t\t\terr := diagTest.performConnectivityCheck(tc.entryType, tc.host, tc.port, tc.downloadCRLs)\n\n\t\t\tlogOutput := buffer.String()\n\n\t\t\t// verify expected log message appears\n\t\t\tassertStringContainsE(t, logOutput, tc.expectedLog, fmt.Sprintf(\"should contain '%s' log message\", tc.expectedLog))\n\t\t\tassertStringContainsE(t, logOutput, tc.entryType, \"should contain entry type in log\")\n\t\t\tassertStringContainsE(t, logOutput, tc.host, \"should contain host in log\")\n\n\t\t\t// if error occurred, verify error log format\n\t\t\tif err != nil {\n\t\t\t\tassertStringContainsE(t, logOutput, \"[performConnectivityCheck] error performing\", \"should contain error log message\")\n\t\t\t}\n\t\t})\n\t}\n\n}\n\nfunc TestPerformDiagnosis(t *testing.T) {\n\tt.Run(\"Perform Diagnosis - CRL download disabled\", func(t *testing.T) {\n\t\t// setup test logger then restore original after test\n\t\tbuffer, cleanup := setupTestLogger()\n\t\tdefer cleanup()\n\n\t\tallowlistContent := `[\n\t\t\t{\"host\":\"ocsp.snowflakecomputing.com\",\"port\":80,\"type\":\"OCSP_CACHE\"},\n\t\t\t{\"host\":\"www.snowflake.com\",\"port\":443,\"type\":\"DUMMY_SNOWFLAKE_DEPLOYMENT\"}\n\t\t]`\n\n\t\ttmpFile, err := os.CreateTemp(\"\", 
\"test_allowlist_*.json\")\n\t\tassertNilE(t, err, \"failed to create temp allowlist file\")\n\t\tdefer os.Remove(tmpFile.Name())\n\n\t\t_, _ = tmpFile.WriteString(allowlistContent)\n\t\ttmpFile.Close()\n\n\t\tconfig := &Config{\n\t\t\tConnectionDiagnosticsAllowlistFile: tmpFile.Name(),\n\t\t\tClientTimeout:                      30 * time.Second,\n\t\t}\n\n\t\t// perform the diagnosis without downloading CRL\n\t\tperformDiagnosis(config, false)\n\n\t\t// verify expected log messages from performDiagnosis and underlying functions\n\t\tlogOutput := buffer.String()\n\t\tassertStringContainsE(t, logOutput, \"[performDiagnosis] starting connectivity diagnosis\", \"should contain diagnosis start message\")\n\n\t\t// DNS resolution\n\t\tassertStringContainsE(t, logOutput, \"[performDiagnosis] DNS check - resolving OCSP_CACHE hostname ocsp.snowflakecomputing.com\", \"should contain DNS check for OCSP cache\")\n\t\tassertStringContainsE(t, logOutput, \"[performDiagnosis] DNS check - resolving DUMMY_SNOWFLAKE_DEPLOYMENT hostname www.snowflake.com\", \"should contain DNS check for Snowflake host\")\n\t\tassertStringContainsE(t, logOutput, \"[resolveHostname] resolved hostname\", \"should contain hostname resolution results\")\n\n\t\t// HTTP check\n\t\tassertStringContainsE(t, logOutput, \"[performConnectivityCheck] HTTP check for OCSP_CACHE ocsp.snowflakecomputing.com\", \"should contain HTTP check message\")\n\t\tassertStringContainsE(t, logOutput, \"[createRequest] creating GET request to http://ocsp.snowflakecomputing.com\", \"should contain request creation log\")\n\t\tassertStringContainsE(t, logOutput, \"[doHTTP] testing HTTP connection to\", \"should contain HTTP connection test log\")\n\n\t\t// HTTPS check\n\t\tassertStringContainsE(t, logOutput, \"[performConnectivityCheck] HTTPS check for DUMMY_SNOWFLAKE_DEPLOYMENT www.snowflake.com\", \"should contain HTTPS check message\")\n\t\tassertStringContainsE(t, logOutput, \"[createRequest] creating GET request to 
https://www.snowflake.com\", \"should contain HTTPS request creation log\")\n\t\tassertStringContainsE(t, logOutput, \"[doHTTPSGetCerts] connecting to https://www.snowflake.com\", \"should contain HTTPS connection log\")\n\n\t\t// diagnostic dial context\n\t\tassertStringContainsE(t, logOutput, \"[createDiagnosticDialContext] Connected to\", \"should contain dial context connection logs\")\n\t\tassertStringContainsE(t, logOutput, \"remote IP:\", \"should contain remote IP information\")\n\n\t\t// should NOT contain CRL download messages when disabled\n\t\tif strings.Contains(logOutput, \"[performDiagnosis] CRLs will be attempted to be downloaded\") {\n\t\t\tt.Errorf(\"should not contain CRL download message when disabled, but got: %s\", logOutput)\n\t\t}\n\t})\n\n\tt.Run(\"Perform Diagnosis - CRL download enabled\", func(t *testing.T) {\n\t\t// setup test logger then restore original after test\n\t\tbuffer, cleanup := setupTestLogger()\n\t\tdefer cleanup()\n\n\t\t// Create a temporary allowlist file with HTTPS entries to trigger CRL download attempts\n\t\tallowlistContent := `[\n\t\t\t{\"host\":\"ocsp.snowflakecomputing.com\",\"port\":80,\"type\":\"OCSP_CACHE\"},\n\t\t\t{\"host\":\"www.snowflake.com\",\"port\":443,\"type\":\"DUMMY_SNOWFLAKE_DEPLOYMENT\"}\n\t\t]`\n\n\t\ttmpFile, err := os.CreateTemp(\"\", \"test_allowlist_*.json\")\n\t\tassertNilE(t, err, \"failed to create temp allowlist file\")\n\t\tdefer os.Remove(tmpFile.Name())\n\n\t\t_, err = tmpFile.WriteString(allowlistContent)\n\t\tassertNilF(t, err, \"Failed to write temp allowlist.json.\")\n\t\ttmpFile.Close()\n\n\t\tconfig := &Config{\n\t\t\tConnectionDiagnosticsAllowlistFile: tmpFile.Name(),\n\t\t\tCertRevocationCheckMode:            CertRevocationCheckAdvisory,\n\t\t\tClientTimeout:                      30 * time.Second,\n\t\t\tDisableOCSPChecks:                  true,\n\t\t}\n\t\tdownloadCRLs := config.CertRevocationCheckMode.String() == \"ADVISORY\"\n\t\t// driver should download CRLs due to ADVISORY 
CRL mode\n\t\t// Note that there's a log.Fatalf in performDiagnosis that may cause the test to fail.\n\t\tperformDiagnosis(config, downloadCRLs)\n\n\t\t// verify expected log messages including CRL download\n\t\tlogOutput := buffer.String()\n\t\tassertStringContainsE(t, logOutput, \"[performDiagnosis] starting connectivity diagnosis\", \"should contain diagnosis start message\")\n\t\tassertStringContainsE(t, logOutput, \"[performDiagnosis] CRLs will be attempted to be downloaded and parsed during https tests\", \"should contain CRL download enabled message\")\n\n\t\t// DNS resolution\n\t\tassertStringContainsE(t, logOutput, \"[performDiagnosis] DNS check - resolving OCSP_CACHE hostname ocsp.snowflakecomputing.com\", \"should contain DNS check for OCSP cache\")\n\t\tassertStringContainsE(t, logOutput, \"[performDiagnosis] DNS check - resolving DUMMY_SNOWFLAKE_DEPLOYMENT hostname www.snowflake.com\", \"should contain DNS check for Snowflake host\")\n\t\tassertStringContainsE(t, logOutput, \"[resolveHostname] resolved hostname\", \"should contain hostname resolution results\")\n\n\t\t// HTTPS check\n\t\tassertStringContainsE(t, logOutput, \"[performConnectivityCheck] HTTPS check for DUMMY_SNOWFLAKE_DEPLOYMENT www.snowflake.com\", \"should contain HTTPS check message\")\n\t\tassertStringContainsE(t, logOutput, \"[doHTTPSGetCerts] connecting to https://www.snowflake.com\", \"should contain HTTPS connection log\")\n\t\tassertStringContainsE(t, logOutput, \"[doHTTPSGetCerts] Retrieved\", \"should contain certificate retrieval log\")\n\t\tassertStringContainsE(t, logOutput, \"certificate(s)\", \"should contain certificate count information\")\n\n\t\t// diagnostic dial context\n\t\tassertStringContainsE(t, logOutput, \"[createDiagnosticDialContext] Connected to\", \"should contain dial context connection logs\")\n\t\tassertStringContainsE(t, logOutput, \"remote IP:\", \"should contain remote IP information\")\n\n\t\t// CRL download\n\t\t// if certificate has 
CRLDistributionPoints this message is logged\n\t\tif strings.Contains(logOutput, \"CRL Distribution Points:\") {\n\t\t\t// and we should see CRL fetch attempts logged; we don't care whether the fetch succeeds, we just want to see the log\n\t\t\tassertStringContainsE(t, logOutput, \"[fetchCRL] fetching\", \"should contain CRL fetch attempt log\")\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "connector.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql/driver\"\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n)\n\n// InternalSnowflakeDriver is the interface for an internal Snowflake driver\n// Deprecated: will be removed in a future release.\ntype InternalSnowflakeDriver interface {\n\tOpen(dsn string) (driver.Conn, error)\n\tOpenWithConfig(ctx context.Context, config Config) (driver.Conn, error)\n}\n\n// Connector creates Driver with the specified Config\ntype Connector struct {\n\tdriver InternalSnowflakeDriver\n\tcfg    Config\n}\n\n// NewConnector creates a new connector with the given SnowflakeDriver and Config.\nfunc NewConnector(driver InternalSnowflakeDriver, config Config) driver.Connector {\n\treturn Connector{driver, config}\n}\n\n// Connect creates a new connection.\nfunc (t Connector) Connect(ctx context.Context) (driver.Conn, error) {\n\tcfg := t.cfg\n\terr := sfconfig.FillMissingConfigParameters(&cfg)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn t.driver.OpenWithConfig(ctx, cfg)\n}\n\n// Driver creates a new driver.\nfunc (t Connector) Driver() driver.Driver {\n\treturn t.driver\n}\n"
  },
  {
    "path": "connector_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"database/sql/driver\"\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"reflect\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n)\n\ntype noopTestDriver struct {\n\tconfig Config\n\tconn   *snowflakeConn\n}\n\nfunc (d *noopTestDriver) Open(_ string) (driver.Conn, error) {\n\treturn nil, nil\n}\n\nfunc (d *noopTestDriver) OpenWithConfig(_ context.Context, config Config) (driver.Conn, error) {\n\td.config = config\n\treturn d.conn, nil\n}\n\nfunc TestConnector(t *testing.T) {\n\tconn := snowflakeConn{}\n\tmock := noopTestDriver{conn: &conn}\n\n\t// Use fake DSN for unit test - should not make real connections\n\tfakeDSN := \"testuser:testpass@testaccount.snowflakecomputing.com:443/testdb/testschema?warehouse=testwh&role=testrole\"\n\tconfig, err := ParseDSN(fakeDSN)\n\tassertNilF(t, err, \"Failed to parse dsn\")\n\tconfig.Authenticator = AuthTypeSnowflake\n\tconfig.PrivateKey = nil\n\tconnector := NewConnector(&mock, *config)\n\tconnection, err := connector.Connect(context.Background())\n\tassertNilF(t, err, \"Connect error\")\n\tassertTrueF(t, connection == &conn, \"Connection mismatch\")\n\tassertNilF(t, sfconfig.FillMissingConfigParameters(config))\n\tassertTrueF(t, reflect.DeepEqual(*config, mock.config), \"Config should be equal\")\n\tassertNotNilF(t, connector.Driver(), \"Missing driver\")\n}\n\nfunc TestConnectorWithMissingConfig(t *testing.T) {\n\tconn := snowflakeConn{}\n\tmock := noopTestDriver{conn: &conn}\n\tconfig := Config{\n\t\tUser:     \"u\",\n\t\tPassword: \"p\",\n\t\tAccount:  \"\",\n\t}\n\texpectedErr := errors.ErrEmptyAccount()\n\n\tconnector := NewConnector(&mock, config)\n\t_, err := connector.Connect(context.Background())\n\tassertNotNilF(t, err, \"the 
connection should have failed due to empty account.\")\n\n\tdriverErr, ok := err.(*SnowflakeError)\n\tassertTrueF(t, ok, \"should be a SnowflakeError\")\n\tassertEqualE(t, driverErr.Number, expectedErr.Number)\n\tassertEqualE(t, driverErr.Message, expectedErr.Message)\n}\n\nfunc TestConnectorCancelContext(t *testing.T) {\n\tctx, cancel := context.WithCancel(context.Background())\n\torigLogger := GetLogger()\n\n\t// Create a test logger with buffer for capturing log output\n\ttestLogger := CreateDefaultLogger()\n\n\t// Create a buffer for capturing log output\n\tvar buf bytes.Buffer\n\ttestLogger.SetOutput(&buf)\n\tSetLogger(testLogger)\n\n\t// Restore default logger after the test completes\n\tdefer func() {\n\t\t// Recreate the default logger instead of trying to restore a proxy\n\t\tSetLogger(origLogger)\n\t}()\n\n\t// pass in our context which should only be used for establishing the initial connection; not persisted.\n\tsfConn, err := buildSnowflakeConn(ctx, Config{\n\t\tParams:        make(map[string]*string),\n\t\tAuthenticator: AuthTypeSnowflake, // Force password authentication\n\t\tPrivateKey:    nil,               // Ensure no private key\n\t})\n\tassertNilF(t, err)\n\n\t// patch close handler\n\tsfConn.rest = &snowflakeRestful{\n\t\tFuncCloseSession: func(ctx context.Context, sr *snowflakeRestful, d time.Duration) error {\n\t\t\treturn ctx.Err()\n\t\t},\n\t}\n\n\t// cancel context BEFORE closing the connection.\n\t// this may occur if the *snowflakeConn was spawned by a QueryContext(), and the query has completed.\n\tcancel()\n\tassertNilF(t, sfConn.Close())\n\n\t// if the following log is emitted, the connection is holding onto context that it shouldn't be.\n\tassertFalseF(t, strings.Contains(buf.String(), \"context canceled\"))\n}\n"
  },
  {
    "path": "converter.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"encoding/hex\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\terrors2 \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/query\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/types\"\n\t\"math\"\n\t\"math/big\"\n\t\"reflect\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/apache/arrow-go/v18/arrow\"\n\t\"github.com/apache/arrow-go/v18/arrow/array\"\n\t\"github.com/apache/arrow-go/v18/arrow/decimal128\"\n\tia \"github.com/snowflakedb/gosnowflake/v2/internal/arrow\"\n)\n\nconst format = \"2006-01-02 15:04:05.999999999\"\nconst numberDefaultPrecision = 38\nconst jsonFormatStr = \"json\"\n\nconst numberMaxPrecisionInBits = 127\n\n// 38 (max precision) + 1 (for possible '-') + 1 (for possible '.')\nconst decfloatPrintingPrec = 40\n\ntype timezoneType int\n\nvar errUnsupportedTimeArrayBind = errors.New(\"unsupported time array bind. 
Set the type to TimestampNTZType, TimestampLTZType, TimestampTZType, DateType or TimeType\")\nvar errNativeArrowWithoutProperContext = errors.New(\"structured types must be enabled to use with native arrow\")\n\nconst (\n\t// TimestampNTZType denotes a NTZ timezoneType for array binds\n\tTimestampNTZType timezoneType = iota\n\t// TimestampLTZType denotes a LTZ timezoneType for array binds\n\tTimestampLTZType\n\t// TimestampTZType denotes a TZ timezoneType for array binds\n\tTimestampTZType\n\t// DateType denotes a date type for array binds\n\tDateType\n\t// TimeType denotes a time type for array binds\n\tTimeType\n)\n\ntype interfaceArrayBinding struct {\n\thasTimezone       bool\n\ttzType            timezoneType\n\ttimezoneTypeArray any\n}\n\nfunc isInterfaceArrayBinding(t any) bool {\n\tswitch t.(type) {\n\tcase interfaceArrayBinding:\n\t\treturn true\n\tcase *interfaceArrayBinding:\n\t\treturn true\n\tdefault:\n\t\treturn false\n\t}\n}\n\nfunc isJSONFormatType(tsmode types.SnowflakeType) bool {\n\treturn tsmode == types.ObjectType || tsmode == types.ArrayType || tsmode == types.SliceType\n}\n\n// goTypeToSnowflake translates Go data type to Snowflake data type.\nfunc goTypeToSnowflake(v driver.Value, tsmode types.SnowflakeType) types.SnowflakeType {\n\tif isJSONFormatType(tsmode) {\n\t\treturn tsmode\n\t}\n\tif v == nil {\n\t\treturn types.NullType\n\t}\n\tswitch t := v.(type) {\n\tcase int64, sql.NullInt64:\n\t\treturn types.FixedType\n\tcase float64, sql.NullFloat64:\n\t\treturn types.RealType\n\tcase bool, sql.NullBool:\n\t\treturn types.BooleanType\n\tcase string, sql.NullString:\n\t\treturn types.TextType\n\tcase []byte:\n\t\tif tsmode == types.BinaryType {\n\t\t\treturn types.BinaryType // may be redundant but ensures BINARY type\n\t\t}\n\t\tif t == nil {\n\t\t\treturn types.NullType // invalid byte array. 
won't take as BINARY\n\t\t}\n\t\tif len(t) != 1 {\n\t\t\treturn types.ArrayType\n\t\t}\n\t\tif _, err := dataTypeMode(t); err != nil {\n\t\t\treturn types.UnSupportedType\n\t\t}\n\t\treturn types.ChangeType\n\tcase time.Time, sql.NullTime:\n\t\treturn tsmode\n\t}\n\tif supportedArrayBind(&driver.NamedValue{Value: v}) {\n\t\treturn types.SliceType\n\t}\n\t// structured objects\n\tif _, ok := v.(StructuredObjectWriter); ok {\n\t\treturn types.ObjectType\n\t} else if _, ok := v.(reflect.Type); ok && tsmode == types.NilObjectType {\n\t\treturn types.NilObjectType\n\t}\n\t// structured arrays\n\tif reflect.TypeOf(v).Kind() == reflect.Slice || (reflect.TypeOf(v).Kind() == reflect.Pointer && reflect.ValueOf(v).Elem().Kind() == reflect.Slice) {\n\t\treturn types.ArrayType\n\t} else if tsmode == types.NilArrayType {\n\t\treturn types.NilArrayType\n\t} else if tsmode == types.NilMapType {\n\t\treturn types.NilMapType\n\t} else if reflect.TypeOf(v).Kind() == reflect.Map || (reflect.TypeOf(v).Kind() == reflect.Pointer && reflect.ValueOf(v).Elem().Kind() == reflect.Map) {\n\t\treturn types.MapType\n\t}\n\treturn types.UnSupportedType\n}\n\n// snowflakeTypeToGo translates Snowflake data type to Go data type.\nfunc snowflakeTypeToGo(ctx context.Context, dbtype types.SnowflakeType, precision int64, scale int64, fields []query.FieldMetadata) reflect.Type {\n\tstructuredTypesEnabled := structuredTypesEnabled(ctx)\n\tswitch dbtype {\n\tcase types.FixedType:\n\t\tif higherPrecisionEnabled(ctx) {\n\t\t\tif scale == 0 {\n\t\t\t\tif precision >= 19 {\n\t\t\t\t\treturn reflect.TypeFor[*big.Int]()\n\t\t\t\t}\n\t\t\t\treturn reflect.TypeFor[int64]()\n\t\t\t}\n\t\t\treturn reflect.TypeFor[*big.Float]()\n\t\t}\n\t\tif scale == 0 {\n\t\t\tif precision >= 19 {\n\t\t\t\treturn reflect.TypeFor[string]()\n\t\t\t}\n\t\t\treturn reflect.TypeFor[int64]()\n\t\t}\n\t\treturn reflect.TypeFor[float64]()\n\tcase types.RealType:\n\t\treturn reflect.TypeFor[float64]()\n\tcase types.DecfloatType:\n\t\tif 
!decfloatMappingEnabled(ctx) {\n\t\t\treturn reflect.TypeFor[string]()\n\t\t}\n\t\tif higherPrecisionEnabled(ctx) {\n\t\t\treturn reflect.TypeFor[*big.Float]()\n\t\t}\n\t\treturn reflect.TypeFor[float64]()\n\tcase types.TextType, types.VariantType:\n\t\treturn reflect.TypeFor[string]()\n\tcase types.DateType, types.TimeType, types.TimestampLtzType, types.TimestampNtzType, types.TimestampTzType:\n\t\treturn reflect.TypeOf(time.Now())\n\tcase types.BinaryType:\n\t\treturn reflect.TypeFor[[]byte]()\n\tcase types.BooleanType:\n\t\treturn reflect.TypeFor[bool]()\n\tcase types.ObjectType:\n\t\tif len(fields) > 0 && structuredTypesEnabled {\n\t\t\treturn reflect.TypeFor[ObjectType]()\n\t\t}\n\t\treturn reflect.TypeFor[string]()\n\tcase types.ArrayType:\n\t\tif len(fields) == 0 || !structuredTypesEnabled {\n\t\t\treturn reflect.TypeFor[string]()\n\t\t}\n\t\tif len(fields) != 1 {\n\t\t\tlogger.WithContext(ctx).Warn(\"Unexpected fields number: \" + strconv.Itoa(len(fields)))\n\t\t\treturn reflect.TypeFor[string]()\n\t\t}\n\t\tswitch types.GetSnowflakeType(fields[0].Type) {\n\t\tcase types.FixedType:\n\t\t\tif fields[0].Scale == 0 && higherPrecisionEnabled(ctx) {\n\t\t\t\treturn reflect.TypeFor[[]*big.Int]()\n\t\t\t} else if fields[0].Scale == 0 && !higherPrecisionEnabled(ctx) {\n\t\t\t\treturn reflect.TypeFor[[]int64]()\n\t\t\t} else if fields[0].Scale != 0 && higherPrecisionEnabled(ctx) {\n\t\t\t\treturn reflect.TypeFor[[]*big.Float]()\n\t\t\t}\n\t\t\treturn reflect.TypeFor[[]float64]()\n\t\tcase types.RealType:\n\t\t\treturn reflect.TypeFor[[]float64]()\n\t\tcase types.TextType:\n\t\t\treturn reflect.TypeFor[[]string]()\n\t\tcase types.DateType, types.TimeType, types.TimestampLtzType, types.TimestampNtzType, types.TimestampTzType:\n\t\t\treturn reflect.TypeFor[[]time.Time]()\n\t\tcase types.BooleanType:\n\t\t\treturn reflect.TypeFor[[]bool]()\n\t\tcase types.BinaryType:\n\t\t\treturn reflect.TypeFor[[][]byte]()\n\t\tcase types.ObjectType:\n\t\t\treturn 
reflect.TypeFor[[]ObjectType]()\n\t\t}\n\t\treturn nil\n\tcase types.MapType:\n\t\tif !structuredTypesEnabled {\n\t\t\treturn reflect.TypeFor[string]()\n\t\t}\n\t\tswitch types.GetSnowflakeType(fields[0].Type) {\n\t\tcase types.TextType:\n\t\t\treturn snowflakeTypeToGoForMaps[string](ctx, fields[1])\n\t\tcase types.FixedType:\n\t\t\treturn snowflakeTypeToGoForMaps[int64](ctx, fields[1])\n\t\t}\n\t\treturn reflect.TypeFor[map[any]any]()\n\t}\n\tlogger.WithContext(ctx).Errorf(\"unsupported dbtype is specified. %v\", dbtype)\n\treturn reflect.TypeFor[string]()\n}\n\nfunc snowflakeTypeToGoForMaps[K comparable](ctx context.Context, valueMetadata query.FieldMetadata) reflect.Type {\n\tswitch types.GetSnowflakeType(valueMetadata.Type) {\n\tcase types.TextType:\n\t\treturn reflect.TypeFor[map[K]string]()\n\tcase types.FixedType:\n\t\tif higherPrecisionEnabled(ctx) && valueMetadata.Scale == 0 {\n\t\t\treturn reflect.TypeFor[map[K]*big.Int]()\n\t\t} else if higherPrecisionEnabled(ctx) && valueMetadata.Scale != 0 {\n\t\t\treturn reflect.TypeFor[map[K]*big.Float]()\n\t\t} else if !higherPrecisionEnabled(ctx) && valueMetadata.Scale == 0 {\n\t\t\treturn reflect.TypeFor[map[K]int64]()\n\t\t} else {\n\t\t\treturn reflect.TypeFor[map[K]float64]()\n\t\t}\n\tcase types.RealType:\n\t\treturn reflect.TypeFor[map[K]float64]()\n\tcase types.BooleanType:\n\t\treturn reflect.TypeFor[map[K]bool]()\n\tcase types.BinaryType:\n\t\treturn reflect.TypeFor[map[K][]byte]()\n\tcase types.TimeType, types.DateType, types.TimestampTzType, types.TimestampNtzType, types.TimestampLtzType:\n\t\treturn reflect.TypeFor[map[K]time.Time]()\n\t}\n\tlogger.WithContext(ctx).Errorf(\"unsupported dbtype is specified for map value\")\n\treturn reflect.TypeFor[string]()\n}\n\n// valueToString converts arbitrary golang type to a string. 
This is mainly used in binding data with placeholders\n// in queries.\nfunc valueToString(v driver.Value, tsmode types.SnowflakeType, params *syncParams) (bindingValue, error) {\n\tisJSONFormat := isJSONFormatType(tsmode)\n\tif v == nil {\n\t\tif isJSONFormat {\n\t\t\treturn bindingValue{nil, jsonFormatStr, nil}, nil\n\t\t}\n\t\treturn bindingValue{nil, \"\", nil}, nil\n\t}\n\tv1 := reflect.Indirect(reflect.ValueOf(v))\n\n\tif valuer, ok := v.(driver.Valuer); ok { // check for driver.Valuer satisfaction and honor that first\n\t\tif value, err := valuer.Value(); err == nil && value != nil {\n\t\t\t// if the output value is a valid string, return that\n\t\t\tif strVal, ok := value.(string); ok {\n\t\t\t\tif isJSONFormat {\n\t\t\t\t\treturn bindingValue{&strVal, jsonFormatStr, nil}, nil\n\t\t\t\t}\n\t\t\t\treturn bindingValue{&strVal, \"\", nil}, nil\n\t\t\t}\n\t\t}\n\t}\n\n\tif tsmode == types.DecfloatType && v1.Type() == reflect.TypeFor[big.Float]() {\n\t\ts := v.(*big.Float).Text('g', decfloatPrintingPrec)\n\t\treturn bindingValue{&s, \"\", nil}, nil\n\t}\n\n\tswitch v1.Kind() {\n\tcase reflect.Bool:\n\t\ts := strconv.FormatBool(v1.Bool())\n\t\treturn bindingValue{&s, \"\", nil}, nil\n\tcase reflect.Int64:\n\t\ts := strconv.FormatInt(v1.Int(), 10)\n\t\treturn bindingValue{&s, \"\", nil}, nil\n\tcase reflect.Float64:\n\t\ts := strconv.FormatFloat(v1.Float(), 'g', -1, 32)\n\t\treturn bindingValue{&s, \"\", nil}, nil\n\tcase reflect.String:\n\t\ts := v1.String()\n\t\tif isJSONFormat {\n\t\t\treturn bindingValue{&s, jsonFormatStr, nil}, nil\n\t\t}\n\t\treturn bindingValue{&s, \"\", nil}, nil\n\tcase reflect.Slice, reflect.Array:\n\t\treturn arrayToString(v, tsmode, params)\n\tcase reflect.Map:\n\t\treturn mapToString(v, tsmode, params)\n\tcase reflect.Struct:\n\t\treturn structValueToString(v, tsmode, params)\n\t}\n\n\treturn bindingValue{}, fmt.Errorf(\"unsupported type: %v\", v1.Kind())\n}\n\n// isUUIDImplementer checks if a value is a UUID that satisfies RFC 
4122\nfunc isUUIDImplementer(v reflect.Value) bool {\n\trt := v.Type()\n\n\t// Check if the type is an array of 16 bytes\n\tif v.Kind() == reflect.Array && rt.Elem().Kind() == reflect.Uint8 && rt.Len() == 16 {\n\t\t// Check if the type implements fmt.Stringer\n\t\tvInt := v.Interface()\n\t\tif stringer, ok := vInt.(fmt.Stringer); ok {\n\t\t\tuuidStr := stringer.String()\n\n\t\t\trfc4122Regex := `^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$`\n\t\t\tmatched, err := regexp.MatchString(rfc4122Regex, uuidStr)\n\t\t\tif err != nil {\n\t\t\t\treturn false\n\t\t\t}\n\n\t\t\tif matched {\n\t\t\t\t// parse the UUID and ensure it is the same as the original string\n\t\t\t\tu := ParseUUID(uuidStr)\n\t\t\t\treturn u.String() == uuidStr\n\t\t\t}\n\t\t}\n\t}\n\treturn false\n}\n\nfunc arrayToString(v driver.Value, tsmode types.SnowflakeType, params *syncParams) (bindingValue, error) {\n\tv1 := reflect.Indirect(reflect.ValueOf(v))\n\tif v1.Kind() == reflect.Slice && v1.IsNil() {\n\t\treturn bindingValue{nil, jsonFormatStr, nil}, nil\n\t}\n\tif bd, ok := v.([][]byte); ok && tsmode == types.BinaryType {\n\t\tschema := bindingSchema{\n\t\t\tTyp:      \"array\",\n\t\t\tNullable: true,\n\t\t\tFields: []query.FieldMetadata{\n\t\t\t\t{\n\t\t\t\t\tType:     \"binary\",\n\t\t\t\t\tNullable: true,\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t\tif len(bd) == 0 {\n\t\t\tres := \"[]\"\n\t\t\treturn bindingValue{value: &res, format: jsonFormatStr, schema: &schema}, nil\n\t\t}\n\t\ts := \"\"\n\t\tfor _, b := range bd {\n\t\t\ts += \"\\\"\" + hex.EncodeToString(b) + \"\\\",\"\n\t\t}\n\t\ts = \"[\" + s[:len(s)-1] + \"]\"\n\t\treturn bindingValue{&s, jsonFormatStr, &schema}, nil\n\t} else if times, ok := v.([]time.Time); ok {\n\t\ttyp := types.DriverTypeToSnowflake[tsmode]\n\t\tsfFormat, err := dateTimeInputFormatByType(typ, params)\n\t\tif err != nil {\n\t\t\treturn bindingValue{nil, \"\", nil}, err\n\t\t}\n\t\tgoFormat, err := snowflakeFormatToGoFormat(sfFormat)\n\t\tif err != 
nil {\n\t\t\treturn bindingValue{nil, \"\", nil}, err\n\t\t}\n\t\tarr := make([]string, len(times))\n\t\tfor idx, t := range times {\n\t\t\tarr[idx] = t.Format(goFormat)\n\t\t}\n\t\tres, err := json.Marshal(arr)\n\t\tif err != nil {\n\t\t\treturn bindingValue{nil, jsonFormatStr, &bindingSchema{\n\t\t\t\tTyp:      \"array\",\n\t\t\t\tNullable: true,\n\t\t\t\tFields: []query.FieldMetadata{\n\t\t\t\t\t{\n\t\t\t\t\t\tType:     typ,\n\t\t\t\t\t\tNullable: true,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t}}, err\n\t\t}\n\t\tresString := string(res)\n\t\treturn bindingValue{&resString, jsonFormatStr, nil}, nil\n\t} else if isArrayOfStructs(v) {\n\t\tstringEntries := make([]string, v1.Len())\n\t\tsowcForSingleElement, err := buildSowcFromType(params, reflect.TypeOf(v).Elem())\n\t\tif err != nil {\n\t\t\treturn bindingValue{}, err\n\t\t}\n\t\tfor i := 0; i < v1.Len(); i++ {\n\t\t\tpotentialSow := v1.Index(i)\n\t\t\tif sow, ok := potentialSow.Interface().(StructuredObjectWriter); ok {\n\t\t\t\tbv, err := structValueToString(sow, tsmode, params)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn bindingValue{nil, jsonFormatStr, nil}, err\n\t\t\t\t}\n\t\t\t\tstringEntries[i] = *bv.value\n\t\t\t}\n\t\t}\n\t\tvalue := \"[\" + strings.Join(stringEntries, \",\") + \"]\"\n\t\tarraySchema := &bindingSchema{\n\t\t\tTyp:      \"array\",\n\t\t\tNullable: true,\n\t\t\tFields: []query.FieldMetadata{\n\t\t\t\t{\n\t\t\t\t\tType:     \"OBJECT\",\n\t\t\t\t\tNullable: true,\n\t\t\t\t\tFields:   sowcForSingleElement.toFields(),\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t\treturn bindingValue{&value, jsonFormatStr, arraySchema}, nil\n\t} else if reflect.ValueOf(v).Len() == 0 {\n\t\tvalue := \"[]\"\n\t\treturn bindingValue{&value, jsonFormatStr, nil}, nil\n\t} else if barr, ok := v.([]byte); ok {\n\t\tif tsmode == types.BinaryType {\n\t\t\tres := hex.EncodeToString(barr)\n\t\t\treturn bindingValue{&res, jsonFormatStr, nil}, nil\n\t\t}\n\t\tschemaForBytes := bindingSchema{\n\t\t\tTyp:      \"array\",\n\t\t\tNullable: 
true,\n\t\t\tFields: []query.FieldMetadata{\n\t\t\t\t{\n\t\t\t\t\tType:     \"FIXED\",\n\t\t\t\t\tNullable: true,\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t\tif len(barr) == 0 {\n\t\t\tres := \"[]\"\n\t\t\treturn bindingValue{&res, jsonFormatStr, &schemaForBytes}, nil\n\t\t}\n\t\tres := \"[\"\n\t\tfor _, b := range barr {\n\t\t\tres += fmt.Sprint(b) + \",\"\n\t\t}\n\t\tres = res[0:len(res)-1] + \"]\"\n\t\treturn bindingValue{&res, jsonFormatStr, &schemaForBytes}, nil\n\t} else if isUUIDImplementer(v1) { // special case for UUIDs (snowflake type and other implementers)\n\t\tstringer := v.(fmt.Stringer) // we don't need to validate if it's a fmt.Stringer because we already checked if it's a UUID type with a stringer\n\t\tvalue := stringer.String()\n\t\treturn bindingValue{&value, \"\", nil}, nil\n\t} else if isSliceOfSlices(v) {\n\t\treturn bindingValue{}, errors.New(\"array of arrays is not supported\")\n\t}\n\tres, err := json.Marshal(v)\n\tif err != nil {\n\t\treturn bindingValue{nil, jsonFormatStr, nil}, err\n\t}\n\tresString := string(res)\n\treturn bindingValue{&resString, jsonFormatStr, nil}, nil\n}\n\nfunc mapToString(v driver.Value, tsmode types.SnowflakeType, params *syncParams) (bindingValue, error) {\n\tvar err error\n\tvalOf := reflect.Indirect(reflect.ValueOf(v))\n\tif valOf.IsNil() {\n\t\treturn bindingValue{nil, \"\", nil}, nil\n\t}\n\ttypOf := reflect.TypeOf(v)\n\tvar jsonBytes []byte\n\tif tsmode == types.BinaryType {\n\t\tm := make(map[string]*string, valOf.Len())\n\t\titer := valOf.MapRange()\n\t\tfor iter.Next() {\n\t\t\tval := iter.Value().Interface().([]byte)\n\t\t\tif val != nil {\n\t\t\t\ts := hex.EncodeToString(val)\n\t\t\t\tm[stringOrIntToString(iter.Key())] = &s\n\t\t\t} else {\n\t\t\t\tm[stringOrIntToString(iter.Key())] = nil\n\t\t\t}\n\t\t}\n\t\tjsonBytes, err = json.Marshal(m)\n\t\tif err != nil {\n\t\t\treturn bindingValue{}, err\n\t\t}\n\t} else if typOf.Elem().AssignableTo(reflect.TypeFor[time.Time]()) || 
typOf.Elem().AssignableTo(reflect.TypeFor[sql.NullTime]()) {\n\t\tm := make(map[string]*string, valOf.Len())\n\t\titer := valOf.MapRange()\n\t\tfor iter.Next() {\n\t\t\tval, valid, err := toNullableTime(iter.Value().Interface())\n\t\t\tif err != nil {\n\t\t\t\treturn bindingValue{}, err\n\t\t\t}\n\t\t\tif !valid {\n\t\t\t\tm[stringOrIntToString(iter.Key())] = nil\n\t\t\t} else {\n\t\t\t\ttyp := types.DriverTypeToSnowflake[tsmode]\n\t\t\t\ts, err := timeToString(val, typ, params)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn bindingValue{}, err\n\t\t\t\t}\n\t\t\t\tm[stringOrIntToString(iter.Key())] = &s\n\t\t\t}\n\t\t}\n\t\tjsonBytes, err = json.Marshal(m)\n\t\tif err != nil {\n\t\t\treturn bindingValue{}, err\n\t\t}\n\t} else if typOf.Elem().AssignableTo(reflect.TypeFor[sql.NullString]()) {\n\t\tm := make(map[string]*string, valOf.Len())\n\t\titer := valOf.MapRange()\n\t\tfor iter.Next() {\n\t\t\tval := iter.Value().Interface().(sql.NullString)\n\t\t\tif val.Valid {\n\t\t\t\tm[stringOrIntToString(iter.Key())] = &val.String\n\t\t\t} else {\n\t\t\t\tm[stringOrIntToString(iter.Key())] = nil\n\t\t\t}\n\t\t}\n\t\tjsonBytes, err = json.Marshal(m)\n\t\tif err != nil {\n\t\t\treturn bindingValue{}, err\n\t\t}\n\t} else if typOf.Elem().AssignableTo(reflect.TypeFor[sql.NullByte]()) || typOf.Elem().AssignableTo(reflect.TypeFor[sql.NullInt16]()) || typOf.Elem().AssignableTo(reflect.TypeFor[sql.NullInt32]()) || typOf.Elem().AssignableTo(reflect.TypeFor[sql.NullInt64]()) {\n\t\tm := make(map[string]*int64, valOf.Len())\n\t\titer := valOf.MapRange()\n\t\tfor iter.Next() {\n\t\t\tval, valid := toNullableInt64(iter.Value().Interface())\n\t\t\tif valid {\n\t\t\t\tm[stringOrIntToString(iter.Key())] = &val\n\t\t\t} else {\n\t\t\t\tm[stringOrIntToString(iter.Key())] = nil\n\t\t\t}\n\t\t}\n\t\tjsonBytes, err = json.Marshal(m)\n\t\tif err != nil {\n\t\t\treturn bindingValue{}, err\n\t\t}\n\t} else if typOf.Elem().AssignableTo(reflect.TypeFor[sql.NullFloat64]()) {\n\t\tm := 
make(map[string]*float64, valOf.Len())\n\t\titer := valOf.MapRange()\n\t\tfor iter.Next() {\n\t\t\tval := iter.Value().Interface().(sql.NullFloat64)\n\t\t\tif val.Valid {\n\t\t\t\tm[stringOrIntToString(iter.Key())] = &val.Float64\n\t\t\t} else {\n\t\t\t\tm[stringOrIntToString(iter.Key())] = nil\n\t\t\t}\n\t\t}\n\t\tjsonBytes, err = json.Marshal(m)\n\t\tif err != nil {\n\t\t\treturn bindingValue{}, err\n\t\t}\n\t} else if typOf.Elem().AssignableTo(reflect.TypeFor[sql.NullBool]()) {\n\t\tm := make(map[string]*bool, valOf.Len())\n\t\titer := valOf.MapRange()\n\t\tfor iter.Next() {\n\t\t\tval := iter.Value().Interface().(sql.NullBool)\n\t\t\tif val.Valid {\n\t\t\t\tm[stringOrIntToString(iter.Key())] = &val.Bool\n\t\t\t} else {\n\t\t\t\tm[stringOrIntToString(iter.Key())] = nil\n\t\t\t}\n\t\t}\n\t\tjsonBytes, err = json.Marshal(m)\n\t\tif err != nil {\n\t\t\treturn bindingValue{}, err\n\t\t}\n\t} else if typOf.Elem().AssignableTo(structuredObjectWriterType) {\n\t\tm := make(map[string]map[string]any, valOf.Len())\n\t\titer := valOf.MapRange()\n\t\tvar valueMetadata *query.FieldMetadata\n\t\tfor iter.Next() {\n\t\t\tsowc := structuredObjectWriterContext{}\n\t\t\tsowc.init(params)\n\t\t\tif iter.Value().IsNil() {\n\t\t\t\tm[stringOrIntToString(iter.Key())] = nil\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\terr = iter.Value().Interface().(StructuredObjectWriter).Write(&sowc)\n\t\t\tif err != nil {\n\t\t\t\treturn bindingValue{}, err\n\t\t\t}\n\t\t\tm[stringOrIntToString(iter.Key())] = sowc.values\n\t\t\tif valueMetadata == nil {\n\t\t\t\tvalueMetadata = &query.FieldMetadata{\n\t\t\t\t\tType:     \"OBJECT\",\n\t\t\t\t\tNullable: true,\n\t\t\t\t\tFields:   sowc.toFields(),\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif valueMetadata == nil {\n\t\t\tsowcFromValueType, err := buildSowcFromType(params, typOf.Elem())\n\t\t\tif err != nil {\n\t\t\t\treturn bindingValue{}, err\n\t\t\t}\n\t\t\tvalueMetadata = &query.FieldMetadata{\n\t\t\t\tType:     \"OBJECT\",\n\t\t\t\tNullable: true,\n\t\t\t\tFields:   
sowcFromValueType.toFields(),\n\t\t\t}\n\t\t}\n\t\tjsonBytes, err = json.Marshal(m)\n\t\tif err != nil {\n\t\t\treturn bindingValue{}, err\n\t\t}\n\t\tjsonString := string(jsonBytes)\n\t\tkeyMetadata, err := goTypeToFieldMetadata(typOf.Key(), types.TextType, params)\n\t\tif err != nil {\n\t\t\treturn bindingValue{}, err\n\t\t}\n\t\tschema := bindingSchema{\n\t\t\tTyp:    \"MAP\",\n\t\t\tFields: []query.FieldMetadata{keyMetadata, *valueMetadata},\n\t\t}\n\t\treturn bindingValue{&jsonString, jsonFormatStr, &schema}, nil\n\t} else {\n\t\tjsonBytes, err = json.Marshal(v)\n\t\tif err != nil {\n\t\t\treturn bindingValue{}, err\n\t\t}\n\t}\n\tjsonString := string(jsonBytes)\n\tkeyMetadata, err := goTypeToFieldMetadata(typOf.Key(), types.TextType, params)\n\tif err != nil {\n\t\treturn bindingValue{}, err\n\t}\n\tvalueMetadata, err := goTypeToFieldMetadata(typOf.Elem(), tsmode, params)\n\tif err != nil {\n\t\treturn bindingValue{}, err\n\t}\n\tschema := bindingSchema{\n\t\tTyp:    \"MAP\",\n\t\tFields: []query.FieldMetadata{keyMetadata, valueMetadata},\n\t}\n\treturn bindingValue{&jsonString, jsonFormatStr, &schema}, nil\n}\n\nfunc toNullableInt64(val any) (int64, bool) {\n\tswitch v := val.(type) {\n\tcase sql.NullByte:\n\t\treturn int64(v.Byte), v.Valid\n\tcase sql.NullInt16:\n\t\treturn int64(v.Int16), v.Valid\n\tcase sql.NullInt32:\n\t\treturn int64(v.Int32), v.Valid\n\tcase sql.NullInt64:\n\t\treturn v.Int64, v.Valid\n\t}\n\t// should never happen, the list above is exhaustive\n\tpanic(\"Only byte, int16, int32 or int64 are supported\")\n}\n\nfunc toNullableTime(val any) (time.Time, bool, error) {\n\tswitch v := val.(type) {\n\tcase time.Time:\n\t\treturn v, true, nil\n\tcase sql.NullTime:\n\t\treturn v.Time, v.Valid, nil\n\t}\n\treturn time.Now(), false, fmt.Errorf(\"cannot use %T as time\", val)\n}\n\nfunc stringOrIntToString(v reflect.Value) string {\n\tif v.CanInt() {\n\t\treturn strconv.Itoa(int(v.Int()))\n\t}\n\treturn v.String()\n}\n\nfunc 
goTypeToFieldMetadata(typ reflect.Type, tsmode types.SnowflakeType, params *syncParams) (query.FieldMetadata, error) {\n\tif tsmode == types.BinaryType {\n\t\treturn query.FieldMetadata{\n\t\t\tType:     \"BINARY\",\n\t\t\tNullable: true,\n\t\t}, nil\n\t}\n\tif typ.Kind() == reflect.Pointer {\n\t\ttyp = typ.Elem()\n\t}\n\tswitch typ.Kind() {\n\tcase reflect.String:\n\t\treturn query.FieldMetadata{\n\t\t\tType:     \"TEXT\",\n\t\t\tNullable: true,\n\t\t}, nil\n\tcase reflect.Bool:\n\t\treturn query.FieldMetadata{\n\t\t\tType:     \"BOOLEAN\",\n\t\t\tNullable: true,\n\t\t}, nil\n\tcase reflect.Int, reflect.Int8, reflect.Uint8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\treturn query.FieldMetadata{\n\t\t\tType:      \"FIXED\",\n\t\t\tPrecision: numberDefaultPrecision,\n\t\t\tNullable:  true,\n\t\t}, nil\n\tcase reflect.Float32, reflect.Float64:\n\t\treturn query.FieldMetadata{\n\t\t\tType:     \"REAL\",\n\t\t\tNullable: true,\n\t\t}, nil\n\tcase reflect.Struct:\n\t\tif typ.AssignableTo(reflect.TypeFor[sql.NullString]()) {\n\t\t\treturn query.FieldMetadata{\n\t\t\t\tType:     \"TEXT\",\n\t\t\t\tNullable: true,\n\t\t\t}, nil\n\t\t} else if typ.AssignableTo(reflect.TypeFor[sql.NullBool]()) {\n\t\t\treturn query.FieldMetadata{\n\t\t\t\tType:     \"BOOLEAN\",\n\t\t\t\tNullable: true,\n\t\t\t}, nil\n\t\t} else if typ.AssignableTo(reflect.TypeFor[sql.NullByte]()) || typ.AssignableTo(reflect.TypeFor[sql.NullInt16]()) || typ.AssignableTo(reflect.TypeFor[sql.NullInt32]()) || typ.AssignableTo(reflect.TypeFor[sql.NullInt64]()) {\n\t\t\treturn query.FieldMetadata{\n\t\t\t\tType:      \"FIXED\",\n\t\t\t\tPrecision: numberDefaultPrecision,\n\t\t\t\tNullable:  true,\n\t\t\t}, nil\n\t\t} else if typ.AssignableTo(reflect.TypeFor[sql.NullFloat64]()) {\n\t\t\treturn query.FieldMetadata{\n\t\t\t\tType:     \"REAL\",\n\t\t\t\tNullable: true,\n\t\t\t}, nil\n\t\t} else if tsmode == types.DateType {\n\t\t\treturn query.FieldMetadata{\n\t\t\t\tType:     \"DATE\",\n\t\t\t\tNullable: 
true,\n\t\t\t}, nil\n\t\t} else if tsmode == types.TimeType {\n\t\t\treturn query.FieldMetadata{\n\t\t\t\tType:     \"TIME\",\n\t\t\t\tNullable: true,\n\t\t\t}, nil\n\t\t} else if tsmode == types.TimestampTzType {\n\t\t\treturn query.FieldMetadata{\n\t\t\t\tType:     \"TIMESTAMP_TZ\",\n\t\t\t\tNullable: true,\n\t\t\t}, nil\n\t\t} else if tsmode == types.TimestampNtzType {\n\t\t\treturn query.FieldMetadata{\n\t\t\t\tType:     \"TIMESTAMP_NTZ\",\n\t\t\t\tNullable: true,\n\t\t\t}, nil\n\t\t} else if tsmode == types.TimestampLtzType {\n\t\t\treturn query.FieldMetadata{\n\t\t\t\tType:     \"TIMESTAMP_LTZ\",\n\t\t\t\tNullable: true,\n\t\t\t}, nil\n\t\t} else if typ.AssignableTo(structuredObjectWriterType) || tsmode == types.NilObjectType {\n\t\t\tsowc, err := buildSowcFromType(params, typ)\n\t\t\tif err != nil {\n\t\t\t\treturn query.FieldMetadata{}, err\n\t\t\t}\n\t\t\treturn query.FieldMetadata{\n\t\t\t\tType:     \"OBJECT\",\n\t\t\t\tNullable: true,\n\t\t\t\tFields:   sowc.toFields(),\n\t\t\t}, nil\n\t\t} else if tsmode == types.NilArrayType || tsmode == types.NilMapType {\n\t\t\tsowc, err := buildSowcFromType(params, typ)\n\t\t\tif err != nil {\n\t\t\t\treturn query.FieldMetadata{}, err\n\t\t\t}\n\t\t\treturn query.FieldMetadata{\n\t\t\t\tType:     \"OBJECT\",\n\t\t\t\tNullable: true,\n\t\t\t\tFields:   sowc.toFields(),\n\t\t\t}, nil\n\t\t}\n\tcase reflect.Slice:\n\t\tmetadata, err := goTypeToFieldMetadata(typ.Elem(), tsmode, params)\n\t\tif err != nil {\n\t\t\treturn query.FieldMetadata{}, err\n\t\t}\n\t\treturn query.FieldMetadata{\n\t\t\tType:     \"ARRAY\",\n\t\t\tNullable: true,\n\t\t\tFields:   []query.FieldMetadata{metadata},\n\t\t}, nil\n\tcase reflect.Map:\n\t\tkeyMetadata, err := goTypeToFieldMetadata(typ.Key(), tsmode, params)\n\t\tif err != nil {\n\t\t\treturn query.FieldMetadata{}, err\n\t\t}\n\t\tvalueMetadata, err := goTypeToFieldMetadata(typ.Elem(), tsmode, params)\n\t\tif err != nil {\n\t\t\treturn query.FieldMetadata{}, err\n\t\t}\n\t\treturn 
query.FieldMetadata{\n\t\t\tType:     \"MAP\",\n\t\t\tNullable: true,\n\t\t\tFields:   []query.FieldMetadata{keyMetadata, valueMetadata},\n\t\t}, nil\n\t}\n\treturn query.FieldMetadata{}, fmt.Errorf(\"cannot build field metadata for %v (mode %v)\", typ.Kind().String(), tsmode.String())\n}\n\nfunc isSliceOfSlices(v any) bool {\n\ttyp := reflect.TypeOf(v)\n\treturn typ.Kind() == reflect.Slice && typ.Elem().Kind() == reflect.Slice\n}\n\nfunc isArrayOfStructs(v any) bool {\n\treturn reflect.TypeOf(v).Elem().Kind() == reflect.Struct || (reflect.TypeOf(v).Elem().Kind() == reflect.Pointer && reflect.TypeOf(v).Elem().Elem().Kind() == reflect.Struct)\n}\n\nfunc structValueToString(v driver.Value, tsmode types.SnowflakeType, params *syncParams) (bindingValue, error) {\n\tswitch typedVal := v.(type) {\n\tcase time.Time:\n\t\treturn timeTypeValueToString(typedVal, tsmode)\n\tcase sql.NullTime:\n\t\tif !typedVal.Valid {\n\t\t\treturn bindingValue{nil, \"\", nil}, nil\n\t\t}\n\t\treturn timeTypeValueToString(typedVal.Time, tsmode)\n\tcase sql.NullBool:\n\t\tif !typedVal.Valid {\n\t\t\treturn bindingValue{nil, \"\", nil}, nil\n\t\t}\n\t\ts := strconv.FormatBool(typedVal.Bool)\n\t\treturn bindingValue{&s, \"\", nil}, nil\n\tcase sql.NullInt64:\n\t\tif !typedVal.Valid {\n\t\t\treturn bindingValue{nil, \"\", nil}, nil\n\t\t}\n\t\ts := strconv.FormatInt(typedVal.Int64, 10)\n\t\treturn bindingValue{&s, \"\", nil}, nil\n\tcase sql.NullFloat64:\n\t\tif !typedVal.Valid {\n\t\t\treturn bindingValue{nil, \"\", nil}, nil\n\t\t}\n\t\t// bitSize 64 preserves full float64 precision\n\t\ts := strconv.FormatFloat(typedVal.Float64, 'g', -1, 64)\n\t\treturn bindingValue{&s, \"\", nil}, nil\n\tcase sql.NullString:\n\t\tformat := \"\"\n\t\tif isJSONFormatType(tsmode) {\n\t\t\tformat = jsonFormatStr\n\t\t}\n\t\tif !typedVal.Valid {\n\t\t\treturn bindingValue{nil, format, nil}, nil\n\t\t}\n\t\treturn bindingValue{&typedVal.String, format, nil}, nil\n\t}\n\tif sow, ok := v.(StructuredObjectWriter); ok {\n\t\tsowc := 
&structuredObjectWriterContext{}\n\t\tsowc.init(params)\n\t\terr := sow.Write(sowc)\n\t\tif err != nil {\n\t\t\treturn bindingValue{nil, \"\", nil}, err\n\t\t}\n\t\tjsonBytes, err := json.Marshal(sowc.values)\n\t\tif err != nil {\n\t\t\treturn bindingValue{nil, \"\", nil}, err\n\t\t}\n\t\tjsonString := string(jsonBytes)\n\t\tschema := bindingSchema{\n\t\t\tTyp:      \"object\",\n\t\t\tNullable: true,\n\t\t\tFields:   sowc.toFields(),\n\t\t}\n\t\treturn bindingValue{&jsonString, jsonFormatStr, &schema}, nil\n\t} else if typ, ok := v.(reflect.Type); ok && tsmode == types.NilArrayType {\n\t\tmetadata, err := goTypeToFieldMetadata(typ, tsmode, params)\n\t\tif err != nil {\n\t\t\treturn bindingValue{}, err\n\t\t}\n\t\tschema := bindingSchema{\n\t\t\tTyp:      \"ARRAY\",\n\t\t\tNullable: true,\n\t\t\tFields: []query.FieldMetadata{\n\t\t\t\tmetadata,\n\t\t\t},\n\t\t}\n\t\treturn bindingValue{nil, jsonFormatStr, &schema}, nil\n\t} else if t, ok := v.(NilMapTypes); ok && tsmode == types.NilMapType {\n\t\tkeyMetadata, err := goTypeToFieldMetadata(t.Key, tsmode, params)\n\t\tif err != nil {\n\t\t\treturn bindingValue{}, err\n\t\t}\n\t\tvalueMetadata, err := goTypeToFieldMetadata(t.Value, tsmode, params)\n\t\tif err != nil {\n\t\t\treturn bindingValue{}, err\n\t\t}\n\t\tschema := bindingSchema{\n\t\t\tTyp:      \"map\",\n\t\t\tNullable: true,\n\t\t\tFields:   []query.FieldMetadata{keyMetadata, valueMetadata},\n\t\t}\n\t\treturn bindingValue{nil, jsonFormatStr, &schema}, nil\n\t} else if typ, ok := v.(reflect.Type); ok && tsmode == types.NilObjectType {\n\t\tmetadata, err := goTypeToFieldMetadata(typ, tsmode, params)\n\t\tif err != nil {\n\t\t\treturn bindingValue{}, err\n\t\t}\n\t\tschema := bindingSchema{\n\t\t\tTyp:      \"object\",\n\t\t\tNullable: true,\n\t\t\tFields:   metadata.Fields,\n\t\t}\n\t\treturn bindingValue{nil, jsonFormatStr, &schema}, nil\n\t}\n\treturn bindingValue{}, fmt.Errorf(\"unknown binding for type %T and mode %v\", v, tsmode)\n}\n\nfunc 
timeTypeValueToString(tm time.Time, tsmode types.SnowflakeType) (bindingValue, error) {\n\tswitch tsmode {\n\tcase types.DateType:\n\t\t_, offset := tm.Zone()\n\t\ttm = tm.Add(time.Second * time.Duration(offset))\n\t\ts := strconv.FormatInt(tm.Unix()*1000, 10)\n\t\treturn bindingValue{&s, \"\", nil}, nil\n\tcase types.TimeType:\n\t\ts := fmt.Sprintf(\"%d\",\n\t\t\t(tm.Hour()*3600+tm.Minute()*60+tm.Second())*1e9+tm.Nanosecond())\n\t\treturn bindingValue{&s, \"\", nil}, nil\n\tcase types.TimestampNtzType, types.TimestampLtzType, types.TimestampTzType:\n\t\ts, err := convertTimeToTimeStamp(tm, tsmode)\n\t\tif err != nil {\n\t\t\treturn bindingValue{nil, \"\", nil}, err\n\t\t}\n\t\treturn bindingValue{&s, \"\", nil}, nil\n\t}\n\treturn bindingValue{nil, \"\", nil}, fmt.Errorf(\"unsupported time type: %v\", tsmode)\n}\n\n// extractTimestamp extracts the internal timestamp data to epoch time in seconds and nanoseconds\nfunc extractTimestamp(srcValue *string) (sec int64, nsec int64, err error) {\n\tlogger.Debugf(\"SRC: %v\", srcValue)\n\tvar i int\n\tfor i = 0; i < len(*srcValue); i++ {\n\t\tif (*srcValue)[i] == '.' 
{\n\t\t\tsec, err = strconv.ParseInt((*srcValue)[0:i], 10, 64)\n\t\t\tif err != nil {\n\t\t\t\treturn 0, 0, err\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\t}\n\tif i == len(*srcValue) {\n\t\t// no fraction\n\t\tsec, err = strconv.ParseInt(*srcValue, 10, 64)\n\t\tif err != nil {\n\t\t\treturn 0, 0, err\n\t\t}\n\t\tnsec = 0\n\t} else {\n\t\ts := (*srcValue)[i+1:]\n\t\tnsec, err = strconv.ParseInt(s+strings.Repeat(\"0\", 9-len(s)), 10, 64)\n\t\tif err != nil {\n\t\t\treturn 0, 0, err\n\t\t}\n\t}\n\tlogger.Infof(\"sec: %v, nsec: %v\", sec, nsec)\n\treturn sec, nsec, nil\n}\n\n// stringToValue converts a pointer of string data to an arbitrary golang variable\n// This is mainly used in fetching data.\nfunc stringToValue(ctx context.Context, dest *driver.Value, srcColumnMeta query.ExecResponseRowType, srcValue *string, loc *time.Location, params *syncParams) error {\n\tif srcValue == nil {\n\t\tlogger.Debugf(\"snowflake data type: %v, raw value: nil\", srcColumnMeta.Type)\n\t\t*dest = nil\n\t\treturn nil\n\t}\n\tstructuredTypesEnabled := structuredTypesEnabled(ctx)\n\n\t// Truncate large strings before logging to avoid secret masking performance issues\n\tvalueForLogging := *srcValue\n\tif len(valueForLogging) > 1024 {\n\t\tvalueForLogging = valueForLogging[:1024] + fmt.Sprintf(\"... 
(%d bytes total)\", len(*srcValue))\n\t}\n\tlogger.Debugf(\"snowflake data type: %v, raw value: %v\", srcColumnMeta.Type, valueForLogging)\n\tswitch srcColumnMeta.Type {\n\tcase \"object\":\n\t\tif len(srcColumnMeta.Fields) == 0 || !structuredTypesEnabled {\n\t\t\t// semistructured type without schema\n\t\t\t*dest = *srcValue\n\t\t\treturn nil\n\t\t}\n\t\tm := make(map[string]any)\n\t\tdecoder := decoderWithNumbersAsStrings(srcValue)\n\t\tif err := decoder.Decode(&m); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tv, err := buildStructuredTypeRecursive(ctx, m, srcColumnMeta.Fields, params)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t*dest = v\n\t\treturn nil\n\tcase \"text\", \"real\", \"variant\":\n\t\t*dest = *srcValue\n\t\treturn nil\n\tcase \"fixed\":\n\t\tif higherPrecisionEnabled(ctx) {\n\t\t\tif srcColumnMeta.Scale == 0 {\n\t\t\t\tif srcColumnMeta.Precision >= 19 {\n\t\t\t\t\tbigInt := big.NewInt(0)\n\t\t\t\t\tbigInt.SetString(*srcValue, 10)\n\t\t\t\t\t*dest = *bigInt\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\t\t\t\t*dest = *srcValue\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\tbigFloat, _, err := big.ParseFloat(*srcValue, 10, numberMaxPrecisionInBits, big.AwayFromZero)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\t*dest = *bigFloat\n\t\t\treturn nil\n\t\t}\n\t\t*dest = *srcValue\n\t\treturn nil\n\tcase \"decfloat\":\n\t\tif !decfloatMappingEnabled(ctx) {\n\t\t\t*dest = *srcValue\n\t\t\treturn nil\n\t\t}\n\t\tbf := new(big.Float).SetPrec(127)\n\t\tif _, ok := bf.SetString(*srcValue); !ok {\n\t\t\treturn fmt.Errorf(\"cannot convert %v to %T\", *srcValue, bf)\n\t\t}\n\t\tif higherPrecisionEnabled(ctx) {\n\t\t\t*dest = *bf\n\t\t} else {\n\t\t\t*dest, _ = bf.Float64()\n\t\t}\n\t\treturn nil\n\tcase \"date\":\n\t\tv, err := strconv.ParseInt(*srcValue, 10, 64)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t*dest = time.Unix(v*86400, 0).UTC()\n\t\treturn nil\n\tcase \"time\":\n\t\tsec, nsec, err := extractTimestamp(srcValue)\n\t\tif err != nil {\n\t\t\treturn 
err\n\t\t}\n\t\tt0 := time.Time{}\n\t\t*dest = t0.Add(time.Duration(sec*1e9 + nsec))\n\t\treturn nil\n\tcase \"timestamp_ntz\":\n\t\tsec, nsec, err := extractTimestamp(srcValue)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t*dest = time.Unix(sec, nsec).UTC()\n\t\treturn nil\n\tcase \"timestamp_ltz\":\n\t\tsec, nsec, err := extractTimestamp(srcValue)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif loc == nil {\n\t\t\tloc = time.Now().Location()\n\t\t}\n\t\t*dest = time.Unix(sec, nsec).In(loc)\n\t\treturn nil\n\tcase \"timestamp_tz\":\n\t\tlogger.Debugf(\"tz: %v\", *srcValue)\n\n\t\ttm := strings.Split(*srcValue, \" \")\n\t\tif len(tm) != 2 {\n\t\t\treturn &SnowflakeError{\n\t\t\t\tNumber:   ErrInvalidTimestampTz,\n\t\t\t\tSQLState: SQLStateInvalidDataTimeFormat,\n\t\t\t\tMessage:  fmt.Sprintf(\"invalid TIMESTAMP_TZ data. The value doesn't consist of two numeric values separated by a space: %v\", *srcValue),\n\t\t\t}\n\t\t}\n\t\tsec, nsec, err := extractTimestamp(&tm[0])\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\toffset, err := strconv.ParseInt(tm[1], 10, 64)\n\t\tif err != nil {\n\t\t\treturn &SnowflakeError{\n\t\t\t\tNumber:   ErrInvalidTimestampTz,\n\t\t\t\tSQLState: SQLStateInvalidDataTimeFormat,\n\t\t\t\tMessage:  fmt.Sprintf(\"invalid TIMESTAMP_TZ data. 
The offset value is not an integer: %v\", tm[1]),\n\t\t\t}\n\t\t}\n\t\tloc := Location(int(offset) - 1440)\n\t\ttt := time.Unix(sec, nsec)\n\t\t*dest = tt.In(loc)\n\t\treturn nil\n\tcase \"binary\":\n\t\tb, err := hex.DecodeString(*srcValue)\n\t\tif err != nil {\n\t\t\treturn &SnowflakeError{\n\t\t\t\tNumber:   ErrInvalidBinaryHexForm,\n\t\t\t\tSQLState: SQLStateNumericValueOutOfRange,\n\t\t\t\tMessage:  err.Error(),\n\t\t\t}\n\t\t}\n\t\t*dest = b\n\t\treturn nil\n\tcase \"array\":\n\t\tif len(srcColumnMeta.Fields) == 0 || !structuredTypesEnabled {\n\t\t\t*dest = *srcValue\n\t\t\treturn nil\n\t\t}\n\t\tif len(srcColumnMeta.Fields) > 1 {\n\t\t\treturn errors.New(\"got more than one field for array\")\n\t\t}\n\t\tvar arr []any\n\t\tdecoder := decoderWithNumbersAsStrings(srcValue)\n\t\tif err := decoder.Decode(&arr); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tv, err := buildStructuredArray(ctx, srcColumnMeta.Fields[0], arr, params)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t*dest = v\n\t\treturn nil\n\tcase \"map\":\n\t\tvar err error\n\t\t*dest, err = jsonToMap(ctx, srcColumnMeta.Fields[0], srcColumnMeta.Fields[1], *srcValue, params)\n\t\treturn err\n\t}\n\t*dest = *srcValue\n\treturn nil\n}\n\nfunc jsonToMap(ctx context.Context, keyMetadata, valueMetadata query.FieldMetadata, srcValue string, params *syncParams) (snowflakeValue, error) {\n\tstructuredTypesEnabled := structuredTypesEnabled(ctx)\n\tif !structuredTypesEnabled {\n\t\treturn srcValue, nil\n\t}\n\tswitch keyMetadata.Type {\n\tcase \"text\":\n\t\tvar m map[string]any\n\t\tdecoder := decoderWithNumbersAsStrings(&srcValue)\n\t\terr := decoder.Decode(&m)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\t// returning snowflakeValue of complex types does not work with generics\n\t\tif valueMetadata.Type == \"object\" {\n\t\t\tres := make(map[string]*structuredType)\n\t\t\tfor k, v := range m {\n\t\t\t\tif v == nil || reflect.ValueOf(v).IsNil() {\n\t\t\t\t\tres[k] = nil\n\t\t\t\t} else 
{\n\t\t\t\t\tres[k] = buildStructuredTypeFromMap(v.(map[string]any), valueMetadata.Fields, params)\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn res, nil\n\t\t}\n\t\treturn jsonToMapWithKeyType[string](ctx, valueMetadata, m, params)\n\tcase \"fixed\":\n\t\tvar m map[int64]any\n\t\tdecoder := decoderWithNumbersAsStrings(&srcValue)\n\t\terr := decoder.Decode(&m)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif valueMetadata.Type == \"object\" {\n\t\t\tres := make(map[int64]*structuredType)\n\t\t\tfor k, v := range m {\n\t\t\t\t// guard against JSON null entries, mirroring the text-key branch\n\t\t\t\tif v == nil {\n\t\t\t\t\tres[k] = nil\n\t\t\t\t} else {\n\t\t\t\t\tres[k] = buildStructuredTypeFromMap(v.(map[string]any), valueMetadata.Fields, params)\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn res, nil\n\t\t}\n\t\treturn jsonToMapWithKeyType[int64](ctx, valueMetadata, m, params)\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"unsupported map key type: %v\", keyMetadata.Type)\n\t}\n}\n\nfunc jsonToMapWithKeyType[K comparable](ctx context.Context, valueMetadata query.FieldMetadata, m map[K]any, params *syncParams) (snowflakeValue, error) {\n\tmapValuesNullableEnabled := embeddedValuesNullableEnabled(ctx)\n\tswitch valueMetadata.Type {\n\tcase \"text\":\n\t\treturn buildMapValues[K, sql.NullString, string](mapValuesNullableEnabled, m, func(v any) (string, error) {\n\t\t\treturn v.(string), nil\n\t\t}, func(v any) (sql.NullString, error) {\n\t\t\treturn sql.NullString{Valid: v != nil, String: ifNotNullOrDefault(v, \"\")}, nil\n\t\t}, false)\n\tcase \"boolean\":\n\t\treturn buildMapValues[K, sql.NullBool, bool](mapValuesNullableEnabled, m, func(v any) (bool, error) {\n\t\t\treturn v.(bool), nil\n\t\t}, func(v any) (sql.NullBool, error) {\n\t\t\treturn sql.NullBool{Valid: v != nil, Bool: ifNotNullOrDefault(v, false)}, nil\n\t\t}, false)\n\tcase \"fixed\":\n\t\tif valueMetadata.Scale == 0 {\n\t\t\treturn buildMapValues[K, sql.NullInt64, int64](mapValuesNullableEnabled, m, func(v any) (int64, error) {\n\t\t\t\treturn strconv.ParseInt(string(v.(json.Number)), 10, 64)\n\t\t\t}, func(v any) (sql.NullInt64, error) {\n\t\t\t\tif v != nil 
{\n\t\t\t\t\ti64, err := strconv.ParseInt(string(v.(json.Number)), 10, 64)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn sql.NullInt64{}, err\n\t\t\t\t\t}\n\t\t\t\t\treturn sql.NullInt64{Valid: true, Int64: i64}, nil\n\t\t\t\t}\n\t\t\t\treturn sql.NullInt64{Valid: false}, nil\n\t\t\t}, false)\n\t\t}\n\t\treturn buildMapValues[K, sql.NullFloat64, float64](mapValuesNullableEnabled, m, func(v any) (float64, error) {\n\t\t\treturn strconv.ParseFloat(string(v.(json.Number)), 64)\n\t\t}, func(v any) (sql.NullFloat64, error) {\n\t\t\tif v != nil {\n\t\t\t\tf64, err := strconv.ParseFloat(string(v.(json.Number)), 64)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn sql.NullFloat64{}, err\n\t\t\t\t}\n\t\t\t\treturn sql.NullFloat64{Valid: true, Float64: f64}, nil\n\t\t\t}\n\t\t\treturn sql.NullFloat64{Valid: false}, nil\n\t\t}, false)\n\tcase \"real\":\n\t\treturn buildMapValues[K, sql.NullFloat64, float64](mapValuesNullableEnabled, m, func(v any) (float64, error) {\n\t\t\treturn strconv.ParseFloat(string(v.(json.Number)), 64)\n\t\t}, func(v any) (sql.NullFloat64, error) {\n\t\t\tif v != nil {\n\t\t\t\tf64, err := strconv.ParseFloat(string(v.(json.Number)), 64)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn sql.NullFloat64{}, err\n\t\t\t\t}\n\t\t\t\treturn sql.NullFloat64{Valid: true, Float64: f64}, nil\n\t\t\t}\n\t\t\treturn sql.NullFloat64{Valid: false}, nil\n\t\t}, false)\n\tcase \"binary\":\n\t\treturn buildMapValues[K, []byte, []byte](mapValuesNullableEnabled, m, func(v any) ([]byte, error) {\n\t\t\tif v == nil {\n\t\t\t\treturn nil, nil\n\t\t\t}\n\t\t\treturn hex.DecodeString(v.(string))\n\t\t}, func(v any) ([]byte, error) {\n\t\t\tif v == nil {\n\t\t\t\treturn nil, nil\n\t\t\t}\n\t\t\treturn hex.DecodeString(v.(string))\n\t\t}, true)\n\tcase \"date\", \"time\", \"timestamp_tz\", \"timestamp_ltz\", \"timestamp_ntz\":\n\t\treturn buildMapValues[K, sql.NullTime, time.Time](mapValuesNullableEnabled, m, func(v any) (time.Time, error) {\n\t\t\tsfFormat, err := 
dateTimeOutputFormatByType(valueMetadata.Type, params)\n\t\t\tif err != nil {\n\t\t\t\treturn time.Time{}, err\n\t\t\t}\n\t\t\tgoFormat, err := snowflakeFormatToGoFormat(sfFormat)\n\t\t\tif err != nil {\n\t\t\t\treturn time.Time{}, err\n\t\t\t}\n\t\t\treturn time.Parse(goFormat, v.(string))\n\t\t}, func(v any) (sql.NullTime, error) {\n\t\t\tif v == nil {\n\t\t\t\treturn sql.NullTime{Valid: false}, nil\n\t\t\t}\n\t\t\tsfFormat, err := dateTimeOutputFormatByType(valueMetadata.Type, params)\n\t\t\tif err != nil {\n\t\t\t\treturn sql.NullTime{}, err\n\t\t\t}\n\t\t\tgoFormat, err := snowflakeFormatToGoFormat(sfFormat)\n\t\t\tif err != nil {\n\t\t\t\treturn sql.NullTime{}, err\n\t\t\t}\n\t\t\t// named parsed to avoid shadowing the time package\n\t\t\tparsed, err := time.Parse(goFormat, v.(string))\n\t\t\tif err != nil {\n\t\t\t\treturn sql.NullTime{}, err\n\t\t\t}\n\t\t\treturn sql.NullTime{Valid: true, Time: parsed}, nil\n\t\t}, false)\n\tcase \"array\":\n\t\tarrayMetadata := valueMetadata.Fields[0]\n\t\tswitch arrayMetadata.Type {\n\t\tcase \"text\":\n\t\t\treturn buildArrayFromMap[K, string](ctx, arrayMetadata, m, params)\n\t\tcase \"fixed\":\n\t\t\tif arrayMetadata.Scale == 0 {\n\t\t\t\treturn buildArrayFromMap[K, int64](ctx, arrayMetadata, m, params)\n\t\t\t}\n\t\t\treturn buildArrayFromMap[K, float64](ctx, arrayMetadata, m, params)\n\t\tcase \"real\":\n\t\t\treturn buildArrayFromMap[K, float64](ctx, arrayMetadata, m, params)\n\t\tcase \"binary\":\n\t\t\treturn buildArrayFromMap[K, []byte](ctx, arrayMetadata, m, params)\n\t\tcase \"boolean\":\n\t\t\treturn buildArrayFromMap[K, bool](ctx, arrayMetadata, m, params)\n\t\tcase \"date\", \"time\", \"timestamp_ltz\", \"timestamp_tz\", \"timestamp_ntz\":\n\t\t\treturn buildArrayFromMap[K, time.Time](ctx, arrayMetadata, m, params)\n\t\t}\n\t}\n\treturn nil, fmt.Errorf(\"unsupported map value type: %v\", valueMetadata.Type)\n}\n\nfunc buildArrayFromMap[K comparable, V any](ctx context.Context, valueMetadata query.FieldMetadata, m map[K]any, params *syncParams) (snowflakeValue, error) 
{\n\tres := make(map[K][]V)\n\tfor k, v := range m {\n\t\tif v == nil {\n\t\t\tres[k] = nil\n\t\t} else {\n\t\t\tstructuredArray, err := buildStructuredArray(ctx, valueMetadata, v.([]any), params)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tres[k] = structuredArray.([]V)\n\t\t}\n\t}\n\treturn res, nil\n}\n\nfunc buildStructuredTypeFromMap(values map[string]any, fieldMetadata []query.FieldMetadata, params *syncParams) *structuredType {\n\treturn &structuredType{\n\t\tvalues:        values,\n\t\tparams:        params,\n\t\tfieldMetadata: fieldMetadata,\n\t}\n}\n\nfunc ifNotNullOrDefault[T any](t any, def T) T {\n\tif t == nil {\n\t\treturn def\n\t}\n\treturn t.(T)\n}\n\nfunc buildMapValues[K comparable, Vnullable any, VnotNullable any](mapValuesNullableEnabled bool, m map[K]any, buildNotNullable func(v any) (VnotNullable, error), buildNullable func(v any) (Vnullable, error), nullableByDefault bool) (snowflakeValue, error) {\n\tvar err error\n\tif mapValuesNullableEnabled {\n\t\tresult := make(map[K]Vnullable, len(m))\n\t\tfor k, v := range m {\n\t\t\tif result[k], err = buildNullable(v); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\t\treturn result, nil\n\t}\n\tresult := make(map[K]VnotNullable, len(m))\n\tfor k, v := range m {\n\t\tif v == nil && !nullableByDefault {\n\t\t\treturn nil, errors2.ErrNullValueInMapError()\n\t\t}\n\t\tif result[k], err = buildNotNullable(v); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\treturn result, nil\n}\n\nfunc buildStructuredArray(ctx context.Context, fieldMetadata query.FieldMetadata, srcValue []any, params *syncParams) (any, error) {\n\tswitch fieldMetadata.Type {\n\tcase \"text\":\n\t\treturn copyArrayAndConvert[string](srcValue, func(input any) (string, error) {\n\t\t\treturn input.(string), nil\n\t\t})\n\tcase \"fixed\":\n\t\tif fieldMetadata.Scale == 0 {\n\t\t\treturn copyArrayAndConvert[int64](srcValue, func(input any) (int64, error) {\n\t\t\t\treturn 
strconv.ParseInt(string(input.(json.Number)), 10, 64)\n\t\t\t})\n\t\t}\n\t\treturn copyArrayAndConvert[float64](srcValue, func(input any) (float64, error) {\n\t\t\treturn strconv.ParseFloat(string(input.(json.Number)), 64)\n\t\t})\n\tcase \"real\":\n\t\treturn copyArrayAndConvert[float64](srcValue, func(input any) (float64, error) {\n\t\t\treturn strconv.ParseFloat(string(input.(json.Number)), 64)\n\t\t})\n\tcase \"time\", \"date\", \"timestamp_ltz\", \"timestamp_ntz\", \"timestamp_tz\":\n\t\treturn copyArrayAndConvert[time.Time](srcValue, func(input any) (time.Time, error) {\n\t\t\tsfFormat, err := dateTimeOutputFormatByType(fieldMetadata.Type, params)\n\t\t\tif err != nil {\n\t\t\t\treturn time.Time{}, err\n\t\t\t}\n\t\t\tgoFormat, err := snowflakeFormatToGoFormat(sfFormat)\n\t\t\tif err != nil {\n\t\t\t\treturn time.Time{}, err\n\t\t\t}\n\t\t\treturn time.Parse(goFormat, input.(string))\n\t\t})\n\tcase \"boolean\":\n\t\treturn copyArrayAndConvert[bool](srcValue, func(input any) (bool, error) {\n\t\t\treturn input.(bool), nil\n\t\t})\n\tcase \"binary\":\n\t\treturn copyArrayAndConvert[[]byte](srcValue, func(input any) ([]byte, error) {\n\t\t\treturn hex.DecodeString(input.(string))\n\t\t})\n\tcase \"object\":\n\t\treturn copyArrayAndConvert[*structuredType](srcValue, func(input any) (*structuredType, error) {\n\t\t\treturn buildStructuredTypeRecursive(ctx, input.(map[string]any), fieldMetadata.Fields, params)\n\t\t})\n\tcase \"array\":\n\t\tswitch fieldMetadata.Fields[0].Type {\n\t\tcase \"text\":\n\t\t\treturn buildStructuredArrayRecursive[string](ctx, fieldMetadata.Fields[0], srcValue, params)\n\t\tcase \"fixed\":\n\t\t\tif fieldMetadata.Fields[0].Scale == 0 {\n\t\t\t\treturn buildStructuredArrayRecursive[int64](ctx, fieldMetadata.Fields[0], srcValue, params)\n\t\t\t}\n\t\t\treturn buildStructuredArrayRecursive[float64](ctx, fieldMetadata.Fields[0], srcValue, params)\n\t\tcase \"real\":\n\t\t\treturn buildStructuredArrayRecursive[float64](ctx, 
fieldMetadata.Fields[0], srcValue, params)\n\t\tcase \"boolean\":\n\t\t\treturn buildStructuredArrayRecursive[bool](ctx, fieldMetadata.Fields[0], srcValue, params)\n\t\tcase \"binary\":\n\t\t\treturn buildStructuredArrayRecursive[[]byte](ctx, fieldMetadata.Fields[0], srcValue, params)\n\t\tcase \"date\", \"time\", \"timestamp_ltz\", \"timestamp_ntz\", \"timestamp_tz\":\n\t\t\treturn buildStructuredArrayRecursive[time.Time](ctx, fieldMetadata.Fields[0], srcValue, params)\n\t\t}\n\t}\n\treturn srcValue, nil\n}\n\nfunc buildStructuredArrayRecursive[T any](ctx context.Context, fieldMetadata query.FieldMetadata, srcValue []any, params *syncParams) ([][]T, error) {\n\tarr := make([][]T, len(srcValue))\n\tfor i, v := range srcValue {\n\t\tstructuredArray, err := buildStructuredArray(ctx, fieldMetadata, v.([]any), params)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tarr[i] = structuredArray.([]T)\n\t}\n\treturn arr, nil\n}\n\nfunc copyArrayAndConvert[T any](input []any, convertFunc func(input any) (T, error)) ([]T, error) {\n\tvar err error\n\toutput := make([]T, len(input))\n\tfor i, s := range input {\n\t\tif output[i], err = convertFunc(s); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\treturn output, nil\n}\n\nfunc buildStructuredTypeRecursive(ctx context.Context, m map[string]any, fields []query.FieldMetadata, params *syncParams) (*structuredType, error) {\n\tvar err error\n\tfor _, fm := range fields {\n\t\tif fm.Type == \"array\" && m[fm.Name] != nil {\n\t\t\tif m[fm.Name], err = buildStructuredArray(ctx, fm.Fields[0], m[fm.Name].([]any), params); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t} else if fm.Type == \"map\" && m[fm.Name] != nil {\n\t\t\tif m[fm.Name], err = jsonToMapWithKeyType(ctx, fm.Fields[1], m[fm.Name].(map[string]any), params); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t} else if fm.Type == \"object\" && m[fm.Name] != nil {\n\t\t\tif m[fm.Name], err = buildStructuredTypeRecursive(ctx, m[fm.Name].(map[string]any), 
fm.Fields, params); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\t}\n\treturn &structuredType{\n\t\tvalues:        m,\n\t\tfieldMetadata: fields,\n\t\tparams:        params,\n\t}, nil\n}\n\nvar decimalShift = new(big.Int).Exp(big.NewInt(2), big.NewInt(64), nil)\n\nfunc intToBigFloat(val int64, scale int64) *big.Float {\n\tf := new(big.Float).SetInt64(val)\n\ts := new(big.Float).SetInt(new(big.Int).Exp(big.NewInt(10), big.NewInt(scale), nil))\n\treturn new(big.Float).Quo(f, s)\n}\n\nfunc decimalToBigInt(num decimal128.Num) *big.Int {\n\thigh := new(big.Int).SetInt64(num.HighBits())\n\tlow := new(big.Int).SetUint64(num.LowBits())\n\treturn new(big.Int).Add(new(big.Int).Mul(high, decimalShift), low)\n}\n\nfunc decimalToBigFloat(num decimal128.Num, scale int64) *big.Float {\n\tf := new(big.Float).SetInt(decimalToBigInt(num))\n\ts := new(big.Float).SetInt(new(big.Int).Exp(big.NewInt(10), big.NewInt(scale), nil))\n\treturn new(big.Float).Quo(f, s)\n}\n\nfunc arrowSnowflakeTimestampToTime(\n\tcolumn arrow.Array,\n\tsfType types.SnowflakeType,\n\tscale int,\n\trecIdx int,\n\tloc *time.Location) *time.Time {\n\n\tif column.IsNull(recIdx) {\n\t\treturn nil\n\t}\n\tvar ret time.Time\n\tswitch sfType {\n\tcase types.TimestampNtzType:\n\t\tif column.DataType().ID() == arrow.STRUCT {\n\t\t\tstructData := column.(*array.Struct)\n\t\t\tepoch := structData.Field(0).(*array.Int64).Int64Values()\n\t\t\tfraction := structData.Field(1).(*array.Int32).Int32Values()\n\t\t\tret = time.Unix(epoch[recIdx], int64(fraction[recIdx])).UTC()\n\t\t} else {\n\t\t\tintData := column.(*array.Int64)\n\t\t\tvalue := intData.Value(recIdx)\n\t\t\tepoch := extractEpoch(value, scale)\n\t\t\tfraction := extractFraction(value, scale)\n\t\t\tret = time.Unix(epoch, fraction).UTC()\n\t\t}\n\tcase types.TimestampLtzType:\n\t\tif column.DataType().ID() == arrow.STRUCT {\n\t\t\tstructData := column.(*array.Struct)\n\t\t\tepoch := structData.Field(0).(*array.Int64).Int64Values()\n\t\t\tfraction := 
structData.Field(1).(*array.Int32).Int32Values()\n\t\t\tret = time.Unix(epoch[recIdx], int64(fraction[recIdx])).In(loc)\n\t\t} else {\n\t\t\tintData := column.(*array.Int64)\n\t\t\tvalue := intData.Value(recIdx)\n\t\t\tepoch := extractEpoch(value, scale)\n\t\t\tfraction := extractFraction(value, scale)\n\t\t\tret = time.Unix(epoch, fraction).In(loc)\n\t\t}\n\tcase types.TimestampTzType:\n\t\tstructData := column.(*array.Struct)\n\t\tif structData.NumField() == 2 {\n\t\t\tvalue := structData.Field(0).(*array.Int64).Int64Values()\n\t\t\ttimezone := structData.Field(1).(*array.Int32).Int32Values()\n\t\t\tepoch := extractEpoch(value[recIdx], scale)\n\t\t\tfraction := extractFraction(value[recIdx], scale)\n\t\t\tlocTz := Location(int(timezone[recIdx]) - 1440)\n\t\t\tret = time.Unix(epoch, fraction).In(locTz)\n\t\t} else {\n\t\t\tepoch := structData.Field(0).(*array.Int64).Int64Values()\n\t\t\tfraction := structData.Field(1).(*array.Int32).Int32Values()\n\t\t\ttimezone := structData.Field(2).(*array.Int32).Int32Values()\n\t\t\tlocTz := Location(int(timezone[recIdx]) - 1440)\n\t\t\tret = time.Unix(epoch[recIdx], int64(fraction[recIdx])).In(locTz)\n\t\t}\n\t}\n\treturn &ret\n}\n\nfunc extractEpoch(value int64, scale int) int64 {\n\treturn value / int64(math.Pow10(scale))\n}\n\nfunc extractFraction(value int64, scale int) int64 {\n\treturn (value % int64(math.Pow10(scale))) * int64(math.Pow10(9-scale))\n}\n\n// Arrow Interface (Column) converter. 
This is called when Arrow chunks are\n// downloaded to convert to the corresponding row type.\nfunc arrowToValues(\n\tctx context.Context,\n\tdestcol []snowflakeValue,\n\tsrcColumnMeta query.ExecResponseRowType,\n\tsrcValue arrow.Array,\n\tloc *time.Location,\n\thigherPrecision bool,\n\tparams *syncParams) error {\n\n\tif len(destcol) != srcValue.Len() {\n\t\treturn fmt.Errorf(\"array interface length mismatch: destination has %d entries, source has %d\", len(destcol), srcValue.Len())\n\t}\n\tlogger.Debugf(\"snowflake data type: %v, arrow data type: %v\", srcColumnMeta.Type, srcValue.DataType())\n\n\tvar err error\n\tsnowflakeType := types.GetSnowflakeType(srcColumnMeta.Type)\n\tfor i := range destcol {\n\t\tif destcol[i], err = arrowToValue(ctx, i, srcColumnMeta.ToFieldMetadata(), srcValue, loc, higherPrecision, params, snowflakeType); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc arrowToValue(ctx context.Context, rowIdx int, srcColumnMeta query.FieldMetadata, srcValue arrow.Array, loc *time.Location, higherPrecision bool, params *syncParams, snowflakeType types.SnowflakeType) (snowflakeValue, error) {\n\tstructuredTypesEnabled := structuredTypesEnabled(ctx)\n\tswitch snowflakeType {\n\tcase types.FixedType:\n\t\t// Snowflake data types that are fixed-point numbers will fall into this category\n\t\t// e.g. 
NUMBER, DECIMAL/NUMERIC, INT/INTEGER\n\t\tswitch numericValue := srcValue.(type) {\n\t\tcase *array.Decimal128:\n\t\t\treturn arrowDecimal128ToValue(numericValue, rowIdx, higherPrecision, srcColumnMeta), nil\n\t\tcase *array.Int64:\n\t\t\treturn arrowInt64ToValue(numericValue, rowIdx, higherPrecision, srcColumnMeta), nil\n\t\tcase *array.Int32:\n\t\t\treturn arrowInt32ToValue(numericValue, rowIdx, higherPrecision, srcColumnMeta), nil\n\t\tcase *array.Int16:\n\t\t\treturn arrowInt16ToValue(numericValue, rowIdx, higherPrecision, srcColumnMeta), nil\n\t\tcase *array.Int8:\n\t\t\treturn arrowInt8ToValue(numericValue, rowIdx, higherPrecision, srcColumnMeta), nil\n\t\t}\n\t\treturn nil, fmt.Errorf(\"unsupported data type: %T\", srcValue)\n\tcase types.RealType:\n\t\t// Snowflake data types that are floating-point numbers will fall into this category\n\t\t// e.g. FLOAT/REAL/DOUBLE\n\t\treturn arrowRealToValue(srcValue.(*array.Float64), rowIdx), nil\n\tcase types.DecfloatType:\n\t\treturn arrowDecFloatToValue(ctx, srcValue.(*array.Struct), rowIdx)\n\tcase types.BooleanType:\n\t\treturn arrowBoolToValue(srcValue.(*array.Boolean), rowIdx), nil\n\tcase types.TextType, types.VariantType:\n\t\tstrings := srcValue.(*array.String)\n\t\tif !srcValue.IsNull(rowIdx) {\n\t\t\treturn strings.Value(rowIdx), nil\n\t\t}\n\t\treturn nil, nil\n\tcase types.ArrayType:\n\t\tif len(srcColumnMeta.Fields) == 0 || !structuredTypesEnabled {\n\t\t\t// semistructured type without schema\n\t\t\tstrings := srcValue.(*array.String)\n\t\t\tif !srcValue.IsNull(rowIdx) {\n\t\t\t\treturn strings.Value(rowIdx), nil\n\t\t\t}\n\t\t\treturn nil, nil\n\t\t}\n\t\tstrings, ok := srcValue.(*array.String)\n\t\tif ok {\n\t\t\t// structured array as json\n\t\t\tif !srcValue.IsNull(rowIdx) {\n\t\t\t\tval := strings.Value(rowIdx)\n\t\t\t\tvar arr []any\n\t\t\t\tdecoder := decoderWithNumbersAsStrings(&val)\n\t\t\t\tif err := decoder.Decode(&arr); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\treturn 
buildStructuredArray(ctx, srcColumnMeta.Fields[0], arr, params)\n\t\t\t}\n\t\t\treturn nil, nil\n\t\t}\n\t\tif !structuredTypesEnabled {\n\t\t\treturn nil, errNativeArrowWithoutProperContext\n\t\t}\n\t\treturn buildListFromNativeArrow(ctx, rowIdx, srcColumnMeta.Fields[0], srcValue, loc, higherPrecision, params)\n\tcase types.ObjectType:\n\t\tif len(srcColumnMeta.Fields) == 0 || !structuredTypesEnabled {\n\t\t\t// semistructured type without schema\n\t\t\tstrings := srcValue.(*array.String)\n\t\t\tif !srcValue.IsNull(rowIdx) {\n\t\t\t\treturn strings.Value(rowIdx), nil\n\t\t\t}\n\t\t\treturn nil, nil\n\t\t}\n\t\tstrings, ok := srcValue.(*array.String)\n\t\tif ok {\n\t\t\t// structured objects as json\n\t\t\tif !srcValue.IsNull(rowIdx) {\n\t\t\t\tm := make(map[string]any)\n\t\t\t\tvalue := strings.Value(rowIdx)\n\t\t\t\tdecoder := decoderWithNumbersAsStrings(&value)\n\t\t\t\tif err := decoder.Decode(&m); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\treturn buildStructuredTypeRecursive(ctx, m, srcColumnMeta.Fields, params)\n\t\t\t}\n\t\t\treturn nil, nil\n\t\t}\n\t\t// structured objects as native arrow\n\t\tif !structuredTypesEnabled {\n\t\t\treturn nil, errNativeArrowWithoutProperContext\n\t\t}\n\t\tif srcValue.IsNull(rowIdx) {\n\t\t\treturn nil, nil\n\t\t}\n\t\tstructs := srcValue.(*array.Struct)\n\t\treturn arrowToStructuredType(ctx, structs, srcColumnMeta.Fields, loc, rowIdx, higherPrecision, params)\n\tcase types.MapType:\n\t\tif srcValue.IsNull(rowIdx) {\n\t\t\treturn nil, nil\n\t\t}\n\t\tstrings, ok := srcValue.(*array.String)\n\t\tif ok {\n\t\t\t// structured map as json\n\t\t\tif !srcValue.IsNull(rowIdx) {\n\t\t\t\treturn jsonToMap(ctx, srcColumnMeta.Fields[0], srcColumnMeta.Fields[1], strings.Value(rowIdx), params)\n\t\t\t}\n\t\t} else {\n\t\t\t// structured map as native arrow\n\t\t\tif !structuredTypesEnabled {\n\t\t\t\treturn nil, errNativeArrowWithoutProperContext\n\t\t\t}\n\t\t\treturn buildMapFromNativeArrow(ctx, rowIdx, 
srcColumnMeta.Fields[0], srcColumnMeta.Fields[1], srcValue, loc, higherPrecision, params)\n\t\t}\n\tcase types.BinaryType:\n\t\treturn arrowBinaryToValue(srcValue.(*array.Binary), rowIdx), nil\n\tcase types.DateType:\n\t\treturn arrowDateToValue(srcValue.(*array.Date32), rowIdx), nil\n\tcase types.TimeType:\n\t\treturn arrowTimeToValue(srcValue, rowIdx, int(srcColumnMeta.Scale)), nil\n\tcase types.TimestampNtzType, types.TimestampLtzType, types.TimestampTzType:\n\t\tv := arrowSnowflakeTimestampToTime(srcValue, snowflakeType, int(srcColumnMeta.Scale), rowIdx, loc)\n\t\tif v != nil {\n\t\t\treturn *v, nil\n\t\t}\n\t\treturn nil, nil\n\t}\n\n\treturn nil, fmt.Errorf(\"unsupported data type\")\n}\n\nfunc buildMapFromNativeArrow(ctx context.Context, rowIdx int, keyMetadata, valueMetadata query.FieldMetadata, srcValue arrow.Array, loc *time.Location, higherPrecision bool, params *syncParams) (snowflakeValue, error) {\n\tarrowMap := srcValue.(*array.Map)\n\tif arrowMap.IsNull(rowIdx) {\n\t\treturn nil, nil\n\t}\n\tkeys := arrowMap.Keys()\n\titems := arrowMap.Items()\n\toffsets := arrowMap.Offsets()\n\tswitch keyMetadata.Type {\n\tcase \"text\":\n\t\tkeyFunc := func(j int) (string, error) {\n\t\t\treturn keys.(*array.String).Value(j), nil\n\t\t}\n\t\treturn buildStructuredMapFromArrow(ctx, rowIdx, valueMetadata, offsets, keyFunc, items, higherPrecision, loc, params)\n\tcase \"fixed\":\n\t\tkeyFunc := func(j int) (int64, error) {\n\t\t\tk, err := extractInt64(keys, int(j))\n\t\t\tif err != nil {\n\t\t\t\treturn 0, err\n\t\t\t}\n\t\t\treturn k, nil\n\t\t}\n\t\treturn buildStructuredMapFromArrow(ctx, rowIdx, valueMetadata, offsets, keyFunc, items, higherPrecision, loc, params)\n\t}\n\treturn nil, nil\n}\n\nfunc buildListFromNativeArrow(ctx context.Context, rowIdx int, fieldMetadata query.FieldMetadata, srcValue arrow.Array, loc *time.Location, higherPrecision bool, params *syncParams) (snowflakeValue, error) {\n\tlist := srcValue.(*array.List)\n\tif list.IsNull(rowIdx) 
{\n\t\treturn nil, nil\n\t}\n\tvalues := list.ListValues()\n\toffsets := list.Offsets()\n\tsnowflakeType := types.GetSnowflakeType(fieldMetadata.Type)\n\tswitch snowflakeType {\n\tcase types.FixedType:\n\t\tswitch typedValues := values.(type) {\n\t\tcase *array.Decimal128:\n\t\t\tif higherPrecision && fieldMetadata.Scale == 0 {\n\t\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (*big.Int, error) {\n\t\t\t\t\tbigInt := arrowDecimal128ToValue(typedValues, j, higherPrecision, fieldMetadata)\n\t\t\t\t\tif bigInt == nil {\n\t\t\t\t\t\treturn nil, nil\n\t\t\t\t\t}\n\t\t\t\t\treturn bigInt.(*big.Int), nil\n\n\t\t\t\t})\n\t\t\t} else if higherPrecision && fieldMetadata.Scale != 0 {\n\t\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (*big.Float, error) {\n\t\t\t\t\tbigFloat := arrowDecimal128ToValue(typedValues, j, higherPrecision, fieldMetadata)\n\t\t\t\t\tif bigFloat == nil {\n\t\t\t\t\t\treturn nil, nil\n\t\t\t\t\t}\n\t\t\t\t\treturn bigFloat.(*big.Float), nil\n\n\t\t\t\t})\n\n\t\t\t} else if !higherPrecision && fieldMetadata.Scale == 0 {\n\t\t\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (sql.NullInt64, error) {\n\t\t\t\t\t\tv := arrowDecimal128ToValue(typedValues, j, higherPrecision, fieldMetadata)\n\t\t\t\t\t\tif v == nil {\n\t\t\t\t\t\t\treturn sql.NullInt64{Valid: false}, nil\n\t\t\t\t\t\t}\n\t\t\t\t\t\tval, err := strconv.ParseInt(v.(string), 10, 64)\n\t\t\t\t\t\tif err != nil {\n\t\t\t\t\t\t\treturn sql.NullInt64{Valid: false}, err\n\t\t\t\t\t\t}\n\t\t\t\t\t\treturn sql.NullInt64{Valid: true, Int64: val}, nil\n\n\t\t\t\t\t})\n\t\t\t\t}\n\t\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (int64, error) {\n\t\t\t\t\tv := arrowDecimal128ToValue(typedValues, j, higherPrecision, fieldMetadata)\n\t\t\t\t\tif v == nil {\n\t\t\t\t\t\treturn 0, errors2.ErrNullValueInArrayError()\n\t\t\t\t\t}\n\t\t\t\t\treturn 
strconv.ParseInt(v.(string), 10, 64)\n\t\t\t\t})\n\t\t\t} else {\n\t\t\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (sql.NullFloat64, error) {\n\t\t\t\t\t\tv := arrowDecimal128ToValue(typedValues, j, higherPrecision, fieldMetadata)\n\t\t\t\t\t\tif v == nil {\n\t\t\t\t\t\t\treturn sql.NullFloat64{Valid: false}, nil\n\t\t\t\t\t\t}\n\t\t\t\t\t\tval, err := strconv.ParseFloat(v.(string), 64)\n\t\t\t\t\t\tif err != nil {\n\t\t\t\t\t\t\treturn sql.NullFloat64{Valid: false}, err\n\t\t\t\t\t\t}\n\t\t\t\t\t\treturn sql.NullFloat64{Valid: true, Float64: val}, nil\n\n\t\t\t\t\t})\n\t\t\t\t}\n\t\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (float64, error) {\n\t\t\t\t\tv := arrowDecimal128ToValue(typedValues, j, higherPrecision, fieldMetadata)\n\t\t\t\t\tif v == nil {\n\t\t\t\t\t\treturn 0, errors2.ErrNullValueInArrayError()\n\t\t\t\t\t}\n\t\t\t\t\treturn strconv.ParseFloat(v.(string), 64)\n\t\t\t\t})\n\n\t\t\t}\n\t\tcase *array.Int64:\n\t\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (sql.NullInt64, error) {\n\t\t\t\t\tresInt := arrowInt64ToValue(typedValues, j, higherPrecision, fieldMetadata)\n\t\t\t\t\tif resInt == nil {\n\t\t\t\t\t\treturn sql.NullInt64{Valid: false}, nil\n\t\t\t\t\t}\n\t\t\t\t\treturn sql.NullInt64{Valid: true, Int64: resInt.(int64)}, nil\n\t\t\t\t})\n\t\t\t}\n\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (int64, error) {\n\t\t\t\tresInt := arrowInt64ToValue(typedValues, j, higherPrecision, fieldMetadata)\n\t\t\t\tif resInt == nil {\n\t\t\t\t\treturn 0, errors2.ErrNullValueInArrayError()\n\t\t\t\t}\n\t\t\t\treturn resInt.(int64), nil\n\t\t\t})\n\n\t\tcase *array.Int32:\n\t\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (sql.NullInt32, error) {\n\t\t\t\t\tresInt := 
arrowInt32ToValue(typedValues, j, higherPrecision, fieldMetadata)\n\t\t\t\t\tif resInt == nil {\n\t\t\t\t\t\treturn sql.NullInt32{Valid: false}, nil\n\t\t\t\t\t}\n\t\t\t\t\treturn sql.NullInt32{Valid: true, Int32: resInt.(int32)}, nil\n\n\t\t\t\t})\n\t\t\t}\n\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (int32, error) {\n\t\t\t\tresInt := arrowInt32ToValue(typedValues, j, higherPrecision, fieldMetadata)\n\t\t\t\tif resInt == nil {\n\t\t\t\t\treturn 0, errors2.ErrNullValueInArrayError()\n\t\t\t\t}\n\t\t\t\treturn resInt.(int32), nil\n\t\t\t})\n\t\tcase *array.Int16:\n\t\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (sql.NullInt16, error) {\n\t\t\t\t\tresInt := arrowInt16ToValue(typedValues, j, higherPrecision, fieldMetadata)\n\t\t\t\t\tif resInt == nil {\n\t\t\t\t\t\treturn sql.NullInt16{Valid: false}, nil\n\t\t\t\t\t}\n\t\t\t\t\treturn sql.NullInt16{Valid: true, Int16: resInt.(int16)}, nil\n\n\t\t\t\t})\n\t\t\t}\n\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (int16, error) {\n\t\t\t\tresInt := arrowInt16ToValue(typedValues, j, higherPrecision, fieldMetadata)\n\t\t\t\tif resInt == nil {\n\t\t\t\t\treturn 0, errors2.ErrNullValueInArrayError()\n\t\t\t\t}\n\t\t\t\treturn resInt.(int16), nil\n\t\t\t})\n\n\t\tcase *array.Int8:\n\t\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (sql.NullByte, error) {\n\t\t\t\t\tresInt := arrowInt8ToValue(typedValues, j, higherPrecision, fieldMetadata)\n\t\t\t\t\tif resInt == nil {\n\t\t\t\t\t\treturn sql.NullByte{Valid: false}, nil\n\t\t\t\t\t}\n\t\t\t\t\treturn sql.NullByte{Valid: true, Byte: resInt.(byte)}, nil\n\t\t\t\t})\n\t\t\t}\n\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (int8, error) {\n\t\t\t\tresInt := arrowInt8ToValue(typedValues, j, higherPrecision, fieldMetadata)\n\t\t\t\tif resInt == 
nil {\n\t\t\t\t\treturn 0, errors2.ErrNullValueInArrayError()\n\t\t\t\t}\n\t\t\t\treturn resInt.(int8), nil\n\t\t\t})\n\t\t}\n\tcase types.RealType:\n\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (sql.NullFloat64, error) {\n\t\t\t\tresFloat := arrowRealToValue(values.(*array.Float64), j)\n\t\t\t\tif resFloat == nil {\n\t\t\t\t\treturn sql.NullFloat64{Valid: false}, nil\n\t\t\t\t}\n\t\t\t\treturn sql.NullFloat64{Valid: true, Float64: resFloat.(float64)}, nil\n\t\t\t})\n\t\t}\n\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (float64, error) {\n\t\t\tresFloat := arrowRealToValue(values.(*array.Float64), j)\n\t\t\tif resFloat == nil {\n\t\t\t\treturn 0, errors2.ErrNullValueInArrayError()\n\t\t\t}\n\t\t\treturn resFloat.(float64), nil\n\t\t})\n\tcase types.TextType:\n\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (sql.NullString, error) {\n\t\t\t\tresString := arrowStringToValue(values.(*array.String), j)\n\t\t\t\tif resString == nil {\n\t\t\t\t\treturn sql.NullString{Valid: false}, nil\n\t\t\t\t}\n\t\t\t\treturn sql.NullString{Valid: true, String: resString.(string)}, nil\n\t\t\t})\n\t\t}\n\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (string, error) {\n\t\t\tresString := arrowStringToValue(values.(*array.String), j)\n\t\t\tif resString == nil {\n\t\t\t\treturn \"\", errors2.ErrNullValueInArrayError()\n\t\t\t}\n\t\t\treturn resString.(string), nil\n\t\t})\n\tcase types.BooleanType:\n\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (sql.NullBool, error) {\n\t\t\t\tresBool := arrowBoolToValue(values.(*array.Boolean), j)\n\t\t\t\tif resBool == nil {\n\t\t\t\t\treturn sql.NullBool{Valid: false}, nil\n\t\t\t\t}\n\t\t\t\treturn sql.NullBool{Valid: true, Bool: resBool.(bool)}, 
nil\n\t\t\t})\n\t\t}\n\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (bool, error) {\n\t\t\tresBool := arrowBoolToValue(values.(*array.Boolean), j)\n\t\t\tif resBool == nil {\n\t\t\t\treturn false, errors2.ErrNullValueInArrayError()\n\t\t\t}\n\t\t\treturn resBool.(bool), nil\n\n\t\t})\n\n\tcase types.BinaryType:\n\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) ([]byte, error) {\n\t\t\tres := arrowBinaryToValue(values.(*array.Binary), j)\n\t\t\tif res == nil {\n\t\t\t\treturn nil, nil\n\t\t\t}\n\t\t\treturn res.([]byte), nil\n\n\t\t})\n\tcase types.DateType:\n\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (sql.NullTime, error) {\n\t\t\t\tv := arrowDateToValue(values.(*array.Date32), j)\n\t\t\t\tif v == nil {\n\t\t\t\t\treturn sql.NullTime{Valid: false}, nil\n\t\t\t\t}\n\t\t\t\treturn sql.NullTime{Valid: true, Time: v.(time.Time)}, nil\n\n\t\t\t})\n\t\t}\n\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (time.Time, error) {\n\t\t\tv := arrowDateToValue(values.(*array.Date32), j)\n\t\t\tif v == nil {\n\t\t\t\treturn time.Time{}, errors2.ErrNullValueInArrayError()\n\t\t\t}\n\t\t\treturn v.(time.Time), nil\n\n\t\t})\n\n\tcase types.TimeType:\n\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (sql.NullTime, error) {\n\t\t\t\tv := arrowTimeToValue(values, j, fieldMetadata.Scale)\n\t\t\t\tif v == nil {\n\t\t\t\t\treturn sql.NullTime{Valid: false}, nil\n\t\t\t\t}\n\t\t\t\treturn sql.NullTime{Valid: true, Time: v.(time.Time)}, nil\n\n\t\t\t})\n\t\t}\n\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (time.Time, error) {\n\t\t\tv := arrowTimeToValue(values, j, fieldMetadata.Scale)\n\t\t\tif v == nil {\n\t\t\t\treturn time.Time{}, errors2.ErrNullValueInArrayError()\n\t\t\t}\n\t\t\treturn v.(time.Time), nil\n\n\t\t})\n\n\tcase 
types.TimestampNtzType, types.TimestampLtzType, types.TimestampTzType:\n\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (sql.NullTime, error) {\n\t\t\t\tptr := arrowSnowflakeTimestampToTime(values, snowflakeType, fieldMetadata.Scale, j, loc)\n\t\t\t\tif ptr != nil {\n\t\t\t\t\treturn sql.NullTime{Valid: true, Time: *ptr}, nil\n\t\t\t\t}\n\t\t\t\treturn sql.NullTime{Valid: false}, nil\n\t\t\t})\n\t\t}\n\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (time.Time, error) {\n\t\t\tptr := arrowSnowflakeTimestampToTime(values, snowflakeType, fieldMetadata.Scale, j, loc)\n\t\t\tif ptr != nil {\n\t\t\t\treturn *ptr, nil\n\t\t\t}\n\t\t\treturn time.Time{}, errors2.ErrNullValueInArrayError()\n\t\t})\n\tcase types.ObjectType:\n\t\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) (*structuredType, error) {\n\t\t\tif values.IsNull(j) {\n\t\t\t\treturn nil, nil\n\t\t\t}\n\t\t\tm := make(map[string]any, len(fieldMetadata.Fields))\n\t\t\tfor fieldIdx, field := range fieldMetadata.Fields {\n\t\t\t\tm[field.Name] = values.(*array.Struct).Field(fieldIdx).ValueStr(j)\n\t\t\t}\n\t\t\treturn buildStructuredTypeRecursive(ctx, m, fieldMetadata.Fields, params)\n\t\t})\n\tcase types.ArrayType:\n\t\tswitch fieldMetadata.Fields[0].Type {\n\t\tcase \"text\":\n\t\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\t\treturn buildArrowListRecursive[sql.NullString](ctx, rowIdx, fieldMetadata, offsets, values, loc, higherPrecision, params)\n\t\t\t}\n\t\t\treturn buildArrowListRecursive[string](ctx, rowIdx, fieldMetadata, offsets, values, loc, higherPrecision, params)\n\t\tcase \"fixed\":\n\t\t\tif fieldMetadata.Fields[0].Scale == 0 {\n\t\t\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\t\t\treturn buildArrowListRecursive[sql.NullInt64](ctx, rowIdx, fieldMetadata, offsets, values, loc, higherPrecision, params)\n\t\t\t\t}\n\t\t\t\treturn buildArrowListRecursive[int64](ctx, rowIdx, 
fieldMetadata, offsets, values, loc, higherPrecision, params)\n\t\t\t}\n\t\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\t\treturn buildArrowListRecursive[sql.NullFloat64](ctx, rowIdx, fieldMetadata, offsets, values, loc, higherPrecision, params)\n\t\t\t}\n\t\t\treturn buildArrowListRecursive[float64](ctx, rowIdx, fieldMetadata, offsets, values, loc, higherPrecision, params)\n\t\tcase \"real\":\n\t\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\t\treturn buildArrowListRecursive[sql.NullFloat64](ctx, rowIdx, fieldMetadata, offsets, values, loc, higherPrecision, params)\n\t\t\t}\n\t\t\treturn buildArrowListRecursive[float64](ctx, rowIdx, fieldMetadata, offsets, values, loc, higherPrecision, params)\n\t\tcase \"boolean\":\n\t\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\t\treturn buildArrowListRecursive[sql.NullBool](ctx, rowIdx, fieldMetadata, offsets, values, loc, higherPrecision, params)\n\t\t\t}\n\t\t\treturn buildArrowListRecursive[bool](ctx, rowIdx, fieldMetadata, offsets, values, loc, higherPrecision, params)\n\t\tcase \"binary\":\n\t\t\treturn buildArrowListRecursive[[]byte](ctx, rowIdx, fieldMetadata, offsets, values, loc, higherPrecision, params)\n\t\tcase \"date\", \"time\", \"timestamp_ltz\", \"timestamp_ntz\", \"timestamp_tz\":\n\t\t\tif embeddedValuesNullableEnabled(ctx) {\n\t\t\t\treturn buildArrowListRecursive[sql.NullTime](ctx, rowIdx, fieldMetadata, offsets, values, loc, higherPrecision, params)\n\t\t\t}\n\t\t\treturn buildArrowListRecursive[time.Time](ctx, rowIdx, fieldMetadata, offsets, values, loc, higherPrecision, params)\n\t\t}\n\t}\n\treturn nil, nil\n}\n\nfunc buildArrowListRecursive[T any](ctx context.Context, rowIdx int, fieldMetadata query.FieldMetadata, offsets []int32, values arrow.Array, loc *time.Location, higherPrecision bool, params *syncParams) (snowflakeValue, error) {\n\treturn mapStructuredArrayNativeArrowRows(offsets, rowIdx, func(j int) ([]T, error) {\n\t\tarrowList, err := buildListFromNativeArrow(ctx, j, 
fieldMetadata.Fields[0], values, loc, higherPrecision, params)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif arrowList == nil {\n\t\t\treturn nil, nil\n\t\t}\n\t\treturn arrowList.([]T), nil\n\n\t})\n}\n\nfunc mapStructuredArrayNativeArrowRows[T any](offsets []int32, rowIdx int, createValueFunc func(j int) (T, error)) (snowflakeValue, error) {\n\tarr := make([]T, offsets[rowIdx+1]-offsets[rowIdx])\n\tfor j := offsets[rowIdx]; j < offsets[rowIdx+1]; j++ {\n\t\tv, err := createValueFunc(int(j))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tarr[j-offsets[rowIdx]] = v\n\t}\n\treturn arr, nil\n}\n\nfunc extractInt64(values arrow.Array, j int) (int64, error) {\n\tswitch typedValues := values.(type) {\n\tcase *array.Decimal128:\n\t\treturn int64(typedValues.Value(j).LowBits()), nil\n\tcase *array.Int64:\n\t\treturn typedValues.Value(j), nil\n\tcase *array.Int32:\n\t\treturn int64(typedValues.Value(j)), nil\n\tcase *array.Int16:\n\t\treturn int64(typedValues.Value(j)), nil\n\tcase *array.Int8:\n\t\treturn int64(typedValues.Value(j)), nil\n\t}\n\treturn 0, fmt.Errorf(\"unsupported map type: %T\", values.DataType().Name())\n}\n\nfunc buildStructuredMapFromArrow[K comparable](ctx context.Context, rowIdx int, valueMetadata query.FieldMetadata, offsets []int32, keyFunc func(j int) (K, error), items arrow.Array, higherPrecision bool, loc *time.Location, params *syncParams) (snowflakeValue, error) {\n\tmapNullValuesEnabled := embeddedValuesNullableEnabled(ctx)\n\tswitch valueMetadata.Type {\n\tcase \"text\":\n\t\tif mapNullValuesEnabled {\n\t\t\treturn mapStructuredMapNativeArrowRows(make(map[K]sql.NullString), offsets, rowIdx, keyFunc, func(j int) (sql.NullString, error) {\n\t\t\t\tif items.IsNull(j) {\n\t\t\t\t\treturn sql.NullString{Valid: false}, nil\n\t\t\t\t}\n\t\t\t\treturn sql.NullString{Valid: true, String: items.(*array.String).Value(j)}, nil\n\t\t\t})\n\t\t}\n\t\treturn mapStructuredMapNativeArrowRows(make(map[K]string), offsets, rowIdx, keyFunc, 
func(j int) (string, error) {\n\t\t\tif items.IsNull(j) {\n\t\t\t\treturn \"\", errors2.ErrNullValueInMapError()\n\t\t\t}\n\t\t\treturn items.(*array.String).Value(j), nil\n\t\t})\n\tcase \"boolean\":\n\t\tif mapNullValuesEnabled {\n\t\t\treturn mapStructuredMapNativeArrowRows(make(map[K]sql.NullBool), offsets, rowIdx, keyFunc, func(j int) (sql.NullBool, error) {\n\t\t\t\tif items.IsNull(j) {\n\t\t\t\t\treturn sql.NullBool{Valid: false}, nil\n\t\t\t\t}\n\t\t\t\treturn sql.NullBool{Valid: true, Bool: items.(*array.Boolean).Value(j)}, nil\n\t\t\t})\n\t\t}\n\t\treturn mapStructuredMapNativeArrowRows(make(map[K]bool), offsets, rowIdx, keyFunc, func(j int) (bool, error) {\n\t\t\tif items.IsNull(j) {\n\t\t\t\treturn false, errors2.ErrNullValueInMapError()\n\t\t\t}\n\t\t\treturn items.(*array.Boolean).Value(j), nil\n\t\t})\n\tcase \"fixed\":\n\t\tif higherPrecision && valueMetadata.Scale == 0 {\n\t\t\treturn mapStructuredMapNativeArrowRows(make(map[K]*big.Int), offsets, rowIdx, keyFunc, func(j int) (*big.Int, error) {\n\t\t\t\tif items.IsNull(j) {\n\t\t\t\t\treturn nil, nil\n\t\t\t\t}\n\t\t\t\treturn mapStructuredMapNativeArrowFixedValue[*big.Int](valueMetadata, j, items, higherPrecision, nil)\n\t\t\t})\n\t\t} else if higherPrecision && valueMetadata.Scale != 0 {\n\t\t\treturn mapStructuredMapNativeArrowRows(make(map[K]*big.Float), offsets, rowIdx, keyFunc, func(j int) (*big.Float, error) {\n\t\t\t\tif items.IsNull(j) {\n\t\t\t\t\treturn nil, nil\n\t\t\t\t}\n\t\t\t\treturn mapStructuredMapNativeArrowFixedValue[*big.Float](valueMetadata, j, items, higherPrecision, nil)\n\t\t\t})\n\t\t} else if !higherPrecision && valueMetadata.Scale == 0 {\n\t\t\tif mapNullValuesEnabled {\n\t\t\t\treturn mapStructuredMapNativeArrowRows(make(map[K]sql.NullInt64), offsets, rowIdx, keyFunc, func(j int) (sql.NullInt64, error) {\n\t\t\t\t\tif items.IsNull(j) {\n\t\t\t\t\t\treturn sql.NullInt64{Valid: false}, nil\n\t\t\t\t\t}\n\t\t\t\t\ts, err := 
mapStructuredMapNativeArrowFixedValue[string](valueMetadata, j, items, higherPrecision, \"\")\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn sql.NullInt64{}, err\n\t\t\t\t\t}\n\t\t\t\t\ti64, err := strconv.ParseInt(s, 10, 64)\n\t\t\t\t\treturn sql.NullInt64{Valid: true, Int64: i64}, err\n\t\t\t\t})\n\t\t\t}\n\t\t\treturn mapStructuredMapNativeArrowRows(make(map[K]int64), offsets, rowIdx, keyFunc, func(j int) (int64, error) {\n\t\t\t\tif items.IsNull(j) {\n\t\t\t\t\treturn 0, errors2.ErrNullValueInMapError()\n\t\t\t\t}\n\t\t\t\ts, err := mapStructuredMapNativeArrowFixedValue[string](valueMetadata, j, items, higherPrecision, \"\")\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn 0, err\n\t\t\t\t}\n\t\t\t\treturn strconv.ParseInt(s, 10, 64)\n\t\t\t})\n\t\t} else {\n\t\t\tif mapNullValuesEnabled {\n\t\t\t\treturn mapStructuredMapNativeArrowRows(make(map[K]sql.NullFloat64), offsets, rowIdx, keyFunc, func(j int) (sql.NullFloat64, error) {\n\t\t\t\t\tif items.IsNull(j) {\n\t\t\t\t\t\treturn sql.NullFloat64{Valid: false}, nil\n\t\t\t\t\t}\n\t\t\t\t\ts, err := mapStructuredMapNativeArrowFixedValue[string](valueMetadata, j, items, higherPrecision, \"\")\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn sql.NullFloat64{}, err\n\t\t\t\t\t}\n\t\t\t\t\tf64, err := strconv.ParseFloat(s, 64)\n\t\t\t\t\treturn sql.NullFloat64{Valid: true, Float64: f64}, err\n\t\t\t\t})\n\t\t\t}\n\t\t\treturn mapStructuredMapNativeArrowRows(make(map[K]float64), offsets, rowIdx, keyFunc, func(j int) (float64, error) {\n\t\t\t\tif items.IsNull(j) {\n\t\t\t\t\treturn 0, errors2.ErrNullValueInMapError()\n\t\t\t\t}\n\t\t\t\ts, err := mapStructuredMapNativeArrowFixedValue[string](valueMetadata, j, items, higherPrecision, \"\")\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn 0, err\n\t\t\t\t}\n\t\t\t\treturn strconv.ParseFloat(s, 64)\n\t\t\t})\n\t\t}\n\tcase \"real\":\n\t\tif mapNullValuesEnabled {\n\t\t\treturn mapStructuredMapNativeArrowRows(make(map[K]sql.NullFloat64), offsets, rowIdx, keyFunc, func(j int) 
(sql.NullFloat64, error) {\n\t\t\t\tif items.IsNull(j) {\n\t\t\t\t\treturn sql.NullFloat64{Valid: false}, nil\n\t\t\t\t}\n\t\t\t\tf64 := items.(*array.Float64).Value(j)\n\t\t\t\treturn sql.NullFloat64{Valid: true, Float64: f64}, nil\n\t\t\t})\n\t\t}\n\t\treturn mapStructuredMapNativeArrowRows(make(map[K]float64), offsets, rowIdx, keyFunc, func(j int) (float64, error) {\n\t\t\tif items.IsNull(j) {\n\t\t\t\treturn 0, errors2.ErrNullValueInMapError()\n\t\t\t}\n\t\t\treturn arrowRealToValue(items.(*array.Float64), j).(float64), nil\n\t\t})\n\tcase \"binary\":\n\t\treturn mapStructuredMapNativeArrowRows(make(map[K][]byte), offsets, rowIdx, keyFunc, func(j int) ([]byte, error) {\n\t\t\tif items.IsNull(j) {\n\t\t\t\treturn nil, nil\n\t\t\t}\n\t\t\treturn arrowBinaryToValue(items.(*array.Binary), j).([]byte), nil\n\t\t})\n\tcase \"date\":\n\t\treturn buildTimeFromNativeArrowArray(mapNullValuesEnabled, offsets, rowIdx, keyFunc, items, func(j int) time.Time {\n\t\t\treturn arrowDateToValue(items.(*array.Date32), j).(time.Time)\n\t\t})\n\tcase \"time\":\n\t\treturn buildTimeFromNativeArrowArray(mapNullValuesEnabled, offsets, rowIdx, keyFunc, items, func(j int) time.Time {\n\t\t\treturn arrowTimeToValue(items, j, valueMetadata.Scale).(time.Time)\n\t\t})\n\tcase \"timestamp_ltz\", \"timestamp_ntz\", \"timestamp_tz\":\n\t\treturn buildTimeFromNativeArrowArray(mapNullValuesEnabled, offsets, rowIdx, keyFunc, items, func(j int) time.Time {\n\t\t\treturn *arrowSnowflakeTimestampToTime(items, types.GetSnowflakeType(valueMetadata.Type), valueMetadata.Scale, j, loc)\n\t\t})\n\tcase \"object\":\n\t\treturn mapStructuredMapNativeArrowRows(make(map[K]*structuredType), offsets, rowIdx, keyFunc, func(j int) (*structuredType, error) {\n\t\t\tif items.IsNull(j) {\n\t\t\t\treturn nil, nil\n\t\t\t}\n\t\t\tvar err error\n\t\t\tm := make(map[string]any)\n\t\t\tfor fieldIdx, field := range valueMetadata.Fields {\n\t\t\t\tsnowflakeType := types.GetSnowflakeType(field.Type)\n\t\t\t\tm[field.Name], 
err = arrowToValue(ctx, j, field, items.(*array.Struct).Field(fieldIdx), loc, higherPrecision, params, snowflakeType)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn &structuredType{\n\t\t\t\tvalues:        m,\n\t\t\t\tfieldMetadata: valueMetadata.Fields,\n\t\t\t\tparams:        params,\n\t\t\t}, nil\n\t\t})\n\tcase \"array\":\n\t\tswitch valueMetadata.Fields[0].Type {\n\t\tcase \"text\":\n\t\t\treturn buildListFromNativeArrowMap[K, string](ctx, rowIdx, valueMetadata, offsets, keyFunc, items, higherPrecision, loc, params)\n\t\tcase \"fixed\":\n\t\t\tif valueMetadata.Fields[0].Scale == 0 {\n\t\t\t\treturn buildListFromNativeArrowMap[K, int64](ctx, rowIdx, valueMetadata, offsets, keyFunc, items, higherPrecision, loc, params)\n\t\t\t}\n\t\t\treturn buildListFromNativeArrowMap[K, float64](ctx, rowIdx, valueMetadata, offsets, keyFunc, items, higherPrecision, loc, params)\n\t\tcase \"real\":\n\t\t\treturn buildListFromNativeArrowMap[K, float64](ctx, rowIdx, valueMetadata, offsets, keyFunc, items, higherPrecision, loc, params)\n\t\tcase \"binary\":\n\t\t\treturn buildListFromNativeArrowMap[K, []byte](ctx, rowIdx, valueMetadata, offsets, keyFunc, items, higherPrecision, loc, params)\n\t\tcase \"boolean\":\n\t\t\treturn buildListFromNativeArrowMap[K, bool](ctx, rowIdx, valueMetadata, offsets, keyFunc, items, higherPrecision, loc, params)\n\t\tcase \"date\", \"time\", \"timestamp_ltz\", \"timestamp_ntz\", \"timestamp_tz\":\n\t\t\treturn buildListFromNativeArrowMap[K, time.Time](ctx, rowIdx, valueMetadata, offsets, keyFunc, items, higherPrecision, loc, params)\n\t\t}\n\t}\n\treturn nil, errors.New(\"Unsupported map value: \" + valueMetadata.Type)\n}\n\nfunc buildListFromNativeArrowMap[K comparable, V any](ctx context.Context, rowIdx int, valueMetadata query.FieldMetadata, offsets []int32, keyFunc func(j int) (K, error), items arrow.Array, higherPrecision bool, loc *time.Location, params *syncParams) (snowflakeValue, error) {\n\treturn 
mapStructuredMapNativeArrowRows(make(map[K][]V), offsets, rowIdx, keyFunc, func(j int) ([]V, error) {\n\t\tif items.IsNull(j) {\n\t\t\treturn nil, nil\n\t\t}\n\t\tlist, err := buildListFromNativeArrow(ctx, j, valueMetadata.Fields[0], items, loc, higherPrecision, params)\n\t\treturn list.([]V), err\n\t})\n}\n\nfunc buildTimeFromNativeArrowArray[K comparable](mapNullValuesEnabled bool, offsets []int32, rowIdx int, keyFunc func(j int) (K, error), items arrow.Array, buildTime func(j int) time.Time) (snowflakeValue, error) {\n\tif mapNullValuesEnabled {\n\t\treturn mapStructuredMapNativeArrowRows(make(map[K]sql.NullTime), offsets, rowIdx, keyFunc, func(j int) (sql.NullTime, error) {\n\t\t\tif items.IsNull(j) {\n\t\t\t\treturn sql.NullTime{Valid: false}, nil\n\t\t\t}\n\t\t\treturn sql.NullTime{Valid: true, Time: buildTime(j)}, nil\n\t\t})\n\t}\n\treturn mapStructuredMapNativeArrowRows(make(map[K]time.Time), offsets, rowIdx, keyFunc, func(j int) (time.Time, error) {\n\t\tif items.IsNull(j) {\n\t\t\treturn time.Time{}, errors2.ErrNullValueInMapError()\n\t\t}\n\t\treturn buildTime(j), nil\n\t})\n}\n\nfunc mapStructuredMapNativeArrowFixedValue[V any](valueMetadata query.FieldMetadata, j int, items arrow.Array, higherPrecision bool, defaultValue V) (V, error) {\n\tv, err := extractNumberFromArrow(&items, j, higherPrecision, valueMetadata)\n\tif err != nil {\n\t\treturn defaultValue, err\n\t}\n\treturn v.(V), nil\n}\n\nfunc extractNumberFromArrow(values *arrow.Array, j int, higherPrecision bool, srcColumnMeta query.FieldMetadata) (snowflakeValue, error) {\n\tswitch typedValues := (*values).(type) {\n\tcase *array.Decimal128:\n\t\treturn arrowDecimal128ToValue(typedValues, j, higherPrecision, srcColumnMeta), nil\n\tcase *array.Int64:\n\t\treturn arrowInt64ToValue(typedValues, j, higherPrecision, srcColumnMeta), nil\n\tcase *array.Int32:\n\t\treturn arrowInt32ToValue(typedValues, j, higherPrecision, srcColumnMeta), nil\n\tcase *array.Int16:\n\t\treturn 
arrowInt16ToValue(typedValues, j, higherPrecision, srcColumnMeta), nil\n\tcase *array.Int8:\n\t\treturn arrowInt8ToValue(typedValues, j, higherPrecision, srcColumnMeta), nil\n\t}\n\treturn 0, fmt.Errorf(\"unknown number type: %T\", values)\n}\n\nfunc mapStructuredMapNativeArrowRows[K comparable, V any](m map[K]V, offsets []int32, rowIdx int, keyFunc func(j int) (K, error), itemFunc func(j int) (V, error)) (map[K]V, error) {\n\tfor j := offsets[rowIdx]; j < offsets[rowIdx+1]; j++ {\n\t\tk, err := keyFunc(int(j))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif m[k], err = itemFunc(int(j)); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\treturn m, nil\n}\n\nfunc arrowToStructuredType(ctx context.Context, structs *array.Struct, fieldMetadata []query.FieldMetadata, loc *time.Location, rowIdx int, higherPrecision bool, params *syncParams) (*structuredType, error) {\n\tvar err error\n\tm := make(map[string]any)\n\tfor colIdx := 0; colIdx < structs.NumField(); colIdx++ {\n\t\tvar v any\n\t\tswitch types.GetSnowflakeType(fieldMetadata[colIdx].Type) {\n\t\tcase types.FixedType:\n\t\t\tv = structs.Field(colIdx).ValueStr(rowIdx)\n\t\t\tswitch typedValues := structs.Field(colIdx).(type) {\n\t\t\tcase *array.Decimal128:\n\t\t\t\tv = arrowDecimal128ToValue(typedValues, rowIdx, higherPrecision, fieldMetadata[colIdx])\n\t\t\tcase *array.Int64:\n\t\t\t\tv = arrowInt64ToValue(typedValues, rowIdx, higherPrecision, fieldMetadata[colIdx])\n\t\t\tcase *array.Int32:\n\t\t\t\tv = arrowInt32ToValue(typedValues, rowIdx, higherPrecision, fieldMetadata[colIdx])\n\t\t\tcase *array.Int16:\n\t\t\t\tv = arrowInt16ToValue(typedValues, rowIdx, higherPrecision, fieldMetadata[colIdx])\n\t\t\tcase *array.Int8:\n\t\t\t\tv = arrowInt8ToValue(typedValues, rowIdx, higherPrecision, fieldMetadata[colIdx])\n\t\t\t}\n\t\tcase types.BooleanType:\n\t\t\tv = arrowBoolToValue(structs.Field(colIdx).(*array.Boolean), rowIdx)\n\t\tcase types.RealType:\n\t\t\tv = 
arrowRealToValue(structs.Field(colIdx).(*array.Float64), rowIdx)\n\t\tcase types.BinaryType:\n\t\t\tv = arrowBinaryToValue(structs.Field(colIdx).(*array.Binary), rowIdx)\n\t\tcase types.DateType:\n\t\t\tv = arrowDateToValue(structs.Field(colIdx).(*array.Date32), rowIdx)\n\t\tcase types.TimeType:\n\t\t\tv = arrowTimeToValue(structs.Field(colIdx), rowIdx, fieldMetadata[colIdx].Scale)\n\t\tcase types.TextType:\n\t\t\tv = arrowStringToValue(structs.Field(colIdx).(*array.String), rowIdx)\n\t\tcase types.TimestampLtzType, types.TimestampTzType, types.TimestampNtzType:\n\t\t\tptr := arrowSnowflakeTimestampToTime(structs.Field(colIdx), types.GetSnowflakeType(fieldMetadata[colIdx].Type), fieldMetadata[colIdx].Scale, rowIdx, loc)\n\t\t\tif ptr != nil {\n\t\t\t\tv = *ptr\n\t\t\t}\n\t\tcase types.ObjectType:\n\t\t\tif !structs.Field(colIdx).IsNull(rowIdx) {\n\t\t\t\tif v, err = arrowToStructuredType(ctx, structs.Field(colIdx).(*array.Struct), fieldMetadata[colIdx].Fields, loc, rowIdx, higherPrecision, params); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t}\n\t\tcase types.ArrayType:\n\t\t\tif !structs.Field(colIdx).IsNull(rowIdx) {\n\t\t\t\tvar err error\n\t\t\t\tif v, err = buildListFromNativeArrow(ctx, rowIdx, fieldMetadata[colIdx].Fields[0], structs.Field(colIdx), loc, higherPrecision, params); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t}\n\t\tcase types.MapType:\n\t\t\tif !structs.Field(colIdx).IsNull(rowIdx) {\n\t\t\t\tvar err error\n\t\t\t\tif v, err = buildMapFromNativeArrow(ctx, rowIdx, fieldMetadata[colIdx].Fields[0], fieldMetadata[colIdx].Fields[1], structs.Field(colIdx), loc, higherPrecision, params); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tm[fieldMetadata[colIdx].Name] = v\n\t}\n\treturn &structuredType{\n\t\tvalues:        m,\n\t\tfieldMetadata: fieldMetadata,\n\t\tparams:        params,\n\t}, nil\n}\n\nfunc arrowStringToValue(srcValue *array.String, rowIdx int) snowflakeValue {\n\tif 
srcValue.IsNull(rowIdx) {\n\t\treturn nil\n\t}\n\treturn srcValue.Value(rowIdx)\n}\n\nfunc arrowDecimal128ToValue(srcValue *array.Decimal128, rowIdx int, higherPrecision bool, srcColumnMeta query.FieldMetadata) snowflakeValue {\n\tif !srcValue.IsNull(rowIdx) {\n\t\tnum := srcValue.Value(rowIdx)\n\t\tif srcColumnMeta.Scale == 0 {\n\t\t\tif higherPrecision {\n\t\t\t\treturn num.BigInt()\n\t\t\t}\n\t\t\treturn num.ToString(0)\n\t\t}\n\t\tf := decimalToBigFloat(num, int64(srcColumnMeta.Scale))\n\t\tif higherPrecision {\n\t\t\treturn f\n\t\t}\n\t\treturn fmt.Sprintf(\"%.*f\", srcColumnMeta.Scale, f)\n\t}\n\treturn nil\n}\n\nfunc arrowInt64ToValue(srcValue *array.Int64, rowIdx int, higherPrecision bool, srcColumnMeta query.FieldMetadata) snowflakeValue {\n\tif !srcValue.IsNull(rowIdx) {\n\t\tval := srcValue.Value(rowIdx)\n\t\treturn arrowIntToValue(srcColumnMeta, higherPrecision, val)\n\t}\n\treturn nil\n}\n\nfunc arrowInt32ToValue(srcValue *array.Int32, rowIdx int, higherPrecision bool, srcColumnMeta query.FieldMetadata) snowflakeValue {\n\tif !srcValue.IsNull(rowIdx) {\n\t\tval := srcValue.Value(rowIdx)\n\t\treturn arrowIntToValue(srcColumnMeta, higherPrecision, int64(val))\n\t}\n\treturn nil\n}\n\nfunc arrowInt16ToValue(srcValue *array.Int16, rowIdx int, higherPrecision bool, srcColumnMeta query.FieldMetadata) snowflakeValue {\n\tif !srcValue.IsNull(rowIdx) {\n\t\tval := srcValue.Value(rowIdx)\n\t\treturn arrowIntToValue(srcColumnMeta, higherPrecision, int64(val))\n\t}\n\treturn nil\n}\n\nfunc arrowInt8ToValue(srcValue *array.Int8, rowIdx int, higherPrecision bool, srcColumnMeta query.FieldMetadata) snowflakeValue {\n\tif !srcValue.IsNull(rowIdx) {\n\t\tval := srcValue.Value(rowIdx)\n\t\treturn arrowIntToValue(srcColumnMeta, higherPrecision, int64(val))\n\t}\n\treturn nil\n}\n\nfunc arrowIntToValue(srcColumnMeta query.FieldMetadata, higherPrecision bool, val int64) snowflakeValue {\n\tif srcColumnMeta.Scale == 0 {\n\t\tif higherPrecision {\n\t\t\tif 
srcColumnMeta.Precision >= 19 {\n\t\t\t\treturn big.NewInt(val)\n\t\t\t}\n\t\t\treturn val\n\t\t}\n\t\treturn fmt.Sprintf(\"%d\", val)\n\t}\n\tif higherPrecision {\n\t\tf := intToBigFloat(val, int64(srcColumnMeta.Scale))\n\t\treturn f\n\t}\n\treturn fmt.Sprintf(\"%.*f\", srcColumnMeta.Scale, float64(val)/math.Pow10(srcColumnMeta.Scale))\n}\n\nfunc arrowRealToValue(srcValue *array.Float64, rowIdx int) snowflakeValue {\n\tif !srcValue.IsNull(rowIdx) {\n\t\treturn srcValue.Value(rowIdx)\n\t}\n\treturn nil\n}\n\nfunc arrowDecFloatToValue(ctx context.Context, srcValue *array.Struct, rowIdx int) (snowflakeValue, error) {\n\tif !srcValue.IsNull(rowIdx) {\n\t\texponent := srcValue.Field(0).(*array.Int16).Value(rowIdx)\n\t\tmantissaBytes := srcValue.Field(1).(*array.Binary).Value(rowIdx)\n\t\tmantissaInt, err := parseTwosComplementBigEndian(mantissaBytes)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"failed to parse mantissa bytes: %s, error: %v\", hex.EncodeToString(mantissaBytes), err)\n\t\t}\n\t\tif decfloatMappingEnabled(ctx) {\n\t\t\tmantissa := new(big.Float).SetPrec(127).SetInt(mantissaInt)\n\t\t\tif result, ok := new(big.Float).SetPrec(127).SetString(fmt.Sprintf(\"%ve%v\", mantissa.Text('G', 38), exponent)); ok {\n\t\t\t\treturn result, nil\n\t\t\t}\n\t\t\treturn nil, fmt.Errorf(\"failed to create decfloat from mantissa %s and exponent %d\", mantissa.Text('G', 38), exponent)\n\t\t}\n\t\tmantissaStr := mantissaInt.String()\n\t\tif mantissaStr == \"0\" {\n\t\t\treturn \"0\", nil\n\t\t}\n\t\tnegative := mantissaStr[0] == '-'\n\t\tmantissaUnsigned := strings.TrimLeft(mantissaStr, \"-\")\n\t\tmantissaLen := len(mantissaUnsigned)\n\t\tif mantissaLen > 1 {\n\t\t\tmantissaUnsigned = mantissaUnsigned[0:1] + \".\" + mantissaUnsigned[1:]\n\t\t}\n\t\tif negative {\n\t\t\tmantissaStr = \"-\" + mantissaUnsigned\n\t\t} else {\n\t\t\tmantissaStr = mantissaUnsigned\n\t\t}\n\t\texponent = exponent + int16(mantissaLen) - 1\n\t\tresult := mantissaStr\n\t\tif exponent != 0 
{\n\t\t\tresult = mantissaStr + \"e\" + strconv.Itoa(int(exponent))\n\t\t}\n\t\treturn result, nil\n\t}\n\treturn nil, nil\n}\n\nfunc parseTwosComplementBigEndian(b []byte) (*big.Int, error) {\n\tif len(b) == 0 {\n\t\treturn nil, fmt.Errorf(\"input byte slice is empty\")\n\t}\n\tif len(b) > 16 {\n\t\treturn nil, fmt.Errorf(\"input byte slice is too long (max 16 bytes)\")\n\t}\n\n\tval := new(big.Int)\n\tval.SetBytes(b) // big.Int.SetBytes treats the bytes as an unsigned magnitude\n\n\t// If the sign bit is 1, the number is negative.\n\tif b[0]&0x80 != 0 {\n\t\t// Calculate 2^(bit length) for subtraction\n\t\tbitLength := uint(len(b) * 8)\n\t\tpowerOfTwo := new(big.Int).Exp(big.NewInt(2), big.NewInt(int64(bitLength)), nil)\n\n\t\t// Subtract 2^(bit length) from the unsigned value to get the signed value.\n\t\tval.Sub(val, powerOfTwo)\n\t}\n\n\treturn val, nil\n}\n\nfunc arrowBoolToValue(srcValue *array.Boolean, rowIdx int) snowflakeValue {\n\tif !srcValue.IsNull(rowIdx) {\n\t\treturn srcValue.Value(rowIdx)\n\t}\n\treturn nil\n}\n\nfunc arrowBinaryToValue(srcValue *array.Binary, rowIdx int) snowflakeValue {\n\tif !srcValue.IsNull(rowIdx) {\n\t\treturn srcValue.Value(rowIdx)\n\t}\n\treturn nil\n}\n\nfunc arrowDateToValue(srcValue *array.Date32, rowIdx int) snowflakeValue {\n\tif !srcValue.IsNull(rowIdx) {\n\t\treturn time.Unix(int64(srcValue.Value(rowIdx))*86400, 0).UTC()\n\t}\n\treturn nil\n}\n\nfunc arrowTimeToValue(srcValue arrow.Array, rowIdx int, scale int) snowflakeValue {\n\tif srcValue.IsNull(rowIdx) {\n\t\treturn nil\n\t}\n\tt0 := time.Time{}\n\tif srcValue.DataType().ID() == arrow.INT64 {\n\t\treturn t0.Add(time.Duration(srcValue.(*array.Int64).Value(rowIdx) * int64(math.Pow10(9-scale))))\n\t}\n\treturn t0.Add(time.Duration(int64(srcValue.(*array.Int32).Value(rowIdx)) * int64(math.Pow10(9-scale))))\n}\n\ntype (\n\tintArray          []int\n\tint32Array        []int32\n\tint64Array        []int64\n\tfloat64Array      []float64\n\tfloat32Array      []float32\n\tdecfloatArray     
[]*big.Float\n\tboolArray         []bool\n\tstringArray       []string\n\tbyteArray         [][]byte\n\ttimestampNtzArray []time.Time\n\ttimestampLtzArray []time.Time\n\ttimestampTzArray  []time.Time\n\tdateArray         []time.Time\n\ttimeArray         []time.Time\n)\n\n// Array takes in a column of a row to be inserted via array binding, bulk or\n// otherwise, and converts it into a native snowflake type for binding\nfunc Array(a any, typ ...any) (any, error) {\n\n\tswitch t := a.(type) {\n\tcase []int:\n\t\treturn (*intArray)(&t), nil\n\tcase []int32:\n\t\treturn (*int32Array)(&t), nil\n\tcase []int64:\n\t\treturn (*int64Array)(&t), nil\n\tcase []float64:\n\t\treturn (*float64Array)(&t), nil\n\tcase []float32:\n\t\treturn (*float32Array)(&t), nil\n\tcase []*big.Float:\n\t\tif len(typ) == 1 {\n\t\t\tif b, ok := typ[0].([]byte); ok && bytes.Equal(b, DataTypeDecfloat) {\n\t\t\t\treturn (*decfloatArray)(&t), nil\n\t\t\t}\n\t\t}\n\t\treturn nil, errors.New(\"unsupported *big.Float array bind. 
Set the type to DataTypeDecfloat to use decfloatArray\")\n\tcase []bool:\n\t\treturn (*boolArray)(&t), nil\n\tcase []string:\n\t\treturn (*stringArray)(&t), nil\n\tcase [][]byte:\n\t\treturn (*byteArray)(&t), nil\n\tcase []time.Time:\n\t\tif len(typ) < 1 {\n\t\t\treturn nil, errUnsupportedTimeArrayBind\n\t\t}\n\t\tswitch typ[0] {\n\t\tcase TimestampNTZType:\n\t\t\treturn (*timestampNtzArray)(&t), nil\n\t\tcase TimestampLTZType:\n\t\t\treturn (*timestampLtzArray)(&t), nil\n\t\tcase TimestampTZType:\n\t\t\treturn (*timestampTzArray)(&t), nil\n\t\tcase DateType:\n\t\t\treturn (*dateArray)(&t), nil\n\t\tcase TimeType:\n\t\t\treturn (*timeArray)(&t), nil\n\t\tdefault:\n\t\t\treturn nil, errUnsupportedTimeArrayBind\n\t\t}\n\tcase *[]int:\n\t\treturn (*intArray)(t), nil\n\tcase *[]int32:\n\t\treturn (*int32Array)(t), nil\n\tcase *[]int64:\n\t\treturn (*int64Array)(t), nil\n\tcase *[]float64:\n\t\treturn (*float64Array)(t), nil\n\tcase *[]float32:\n\t\treturn (*float32Array)(t), nil\n\tcase *[]*big.Float:\n\t\tif len(typ) == 1 {\n\t\t\tif b, ok := typ[0].([]byte); ok && bytes.Equal(b, DataTypeDecfloat) {\n\t\t\t\treturn (*decfloatArray)(t), nil\n\t\t\t}\n\t\t}\n\t\treturn nil, errors.New(\"unsupported *big.Float array bind. 
Set the type to DataTypeDecfloat to use decfloatArray\")\n\tcase *[]bool:\n\t\treturn (*boolArray)(t), nil\n\tcase *[]string:\n\t\treturn (*stringArray)(t), nil\n\tcase *[][]byte:\n\t\treturn (*byteArray)(t), nil\n\tcase *[]time.Time:\n\t\tif len(typ) < 1 {\n\t\t\treturn nil, errUnsupportedTimeArrayBind\n\t\t}\n\t\tswitch typ[0] {\n\t\tcase TimestampNTZType:\n\t\t\treturn (*timestampNtzArray)(t), nil\n\t\tcase TimestampLTZType:\n\t\t\treturn (*timestampLtzArray)(t), nil\n\t\tcase TimestampTZType:\n\t\t\treturn (*timestampTzArray)(t), nil\n\t\tcase DateType:\n\t\t\treturn (*dateArray)(t), nil\n\t\tcase TimeType:\n\t\t\treturn (*timeArray)(t), nil\n\t\tdefault:\n\t\t\treturn nil, errUnsupportedTimeArrayBind\n\t\t}\n\tcase []any, *[]any:\n\t\t// Support for bulk array binding insertion using []any / *[]any\n\t\tif len(typ) < 1 {\n\t\t\treturn interfaceArrayBinding{\n\t\t\t\thasTimezone:       false,\n\t\t\t\ttimezoneTypeArray: a,\n\t\t\t}, nil\n\t\t}\n\t\treturn interfaceArrayBinding{\n\t\t\thasTimezone:       true,\n\t\t\ttzType:            typ[0].(timezoneType),\n\t\t\ttimezoneTypeArray: a,\n\t\t}, nil\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"unknown array type for binding: %T\", a)\n\t}\n}\n\n// snowflakeArrayToString converts the array binding to snowflake's native\n// string type. 
The string value differs depending on whether it's bound directly or\n// uploaded via stream.\nfunc snowflakeArrayToString(nv *driver.NamedValue, stream bool) (types.SnowflakeType, []*string, error) {\n\tvar t types.SnowflakeType\n\tvar arr []*string\n\tswitch reflect.TypeOf(nv.Value) {\n\tcase reflect.TypeFor[*intArray]():\n\t\tt = types.FixedType\n\t\ta := nv.Value.(*intArray)\n\t\tfor _, x := range *a {\n\t\t\tv := strconv.Itoa(x)\n\t\t\tarr = append(arr, &v)\n\t\t}\n\tcase reflect.TypeFor[*int64Array]():\n\t\tt = types.FixedType\n\t\ta := nv.Value.(*int64Array)\n\t\tfor _, x := range *a {\n\t\t\tv := strconv.FormatInt(x, 10)\n\t\t\tarr = append(arr, &v)\n\t\t}\n\tcase reflect.TypeFor[*int32Array]():\n\t\tt = types.FixedType\n\t\ta := nv.Value.(*int32Array)\n\t\tfor _, x := range *a {\n\t\t\tv := strconv.Itoa(int(x))\n\t\t\tarr = append(arr, &v)\n\t\t}\n\tcase reflect.TypeFor[*float64Array]():\n\t\tt = types.RealType\n\t\ta := nv.Value.(*float64Array)\n\t\tfor _, x := range *a {\n\t\t\tv := fmt.Sprintf(\"%g\", x)\n\t\t\tarr = append(arr, &v)\n\t\t}\n\tcase reflect.TypeFor[*float32Array]():\n\t\tt = types.RealType\n\t\ta := nv.Value.(*float32Array)\n\t\tfor _, x := range *a {\n\t\t\tv := fmt.Sprintf(\"%g\", x)\n\t\t\tarr = append(arr, &v)\n\t\t}\n\tcase reflect.TypeFor[*decfloatArray]():\n\t\tt = types.TextType\n\t\ta := nv.Value.(*decfloatArray)\n\t\tfor _, x := range *a {\n\t\t\tv := x.Text('g', decfloatPrintingPrec)\n\t\t\tarr = append(arr, &v)\n\t\t}\n\tcase reflect.TypeFor[*boolArray]():\n\t\tt = types.BooleanType\n\t\ta := nv.Value.(*boolArray)\n\t\tfor _, x := range *a {\n\t\t\tv := strconv.FormatBool(x)\n\t\t\tarr = append(arr, &v)\n\t\t}\n\tcase reflect.TypeFor[*stringArray]():\n\t\tt = types.TextType\n\t\ta := nv.Value.(*stringArray)\n\t\tfor _, x := range *a {\n\t\t\tv := x // copy so the appended pointer gets its own address\n\t\t\tarr = append(arr, &v)\n\t\t}\n\tcase reflect.TypeFor[*byteArray]():\n\t\tt = types.BinaryType\n\t\ta := nv.Value.(*byteArray)\n\t\tfor _, x 
:= range *a {\n\t\t\tv := hex.EncodeToString(x)\n\t\t\tarr = append(arr, &v)\n\t\t}\n\tcase reflect.TypeFor[*timestampNtzArray]():\n\t\tt = types.TimestampNtzType\n\t\ta := nv.Value.(*timestampNtzArray)\n\t\tfor _, x := range *a {\n\t\t\tv, err := getTimestampBindValue(x, stream, t)\n\t\t\tif err != nil {\n\t\t\t\treturn types.UnSupportedType, nil, err\n\t\t\t}\n\t\t\tarr = append(arr, &v)\n\t\t}\n\tcase reflect.TypeFor[*timestampLtzArray]():\n\t\tt = types.TimestampLtzType\n\t\ta := nv.Value.(*timestampLtzArray)\n\n\t\tfor _, x := range *a {\n\t\t\tv, err := getTimestampBindValue(x, stream, t)\n\t\t\tif err != nil {\n\t\t\t\treturn types.UnSupportedType, nil, err\n\t\t\t}\n\t\t\tarr = append(arr, &v)\n\t\t}\n\tcase reflect.TypeFor[*timestampTzArray]():\n\t\tt = types.TimestampTzType\n\t\ta := nv.Value.(*timestampTzArray)\n\t\tfor _, x := range *a {\n\t\t\tv, err := getTimestampBindValue(x, stream, t)\n\t\t\tif err != nil {\n\t\t\t\treturn types.UnSupportedType, nil, err\n\t\t\t}\n\t\t\tarr = append(arr, &v)\n\t\t}\n\tcase reflect.TypeFor[*dateArray]():\n\t\tt = types.DateType\n\t\ta := nv.Value.(*dateArray)\n\t\tfor _, x := range *a {\n\t\t\tvar v string\n\t\t\tif stream {\n\t\t\t\tv = x.Format(\"2006-01-02\")\n\t\t\t} else {\n\t\t\t\t_, offset := x.Zone()\n\t\t\t\tx = x.Add(time.Second * time.Duration(offset))\n\t\t\t\tv = fmt.Sprintf(\"%d\", x.Unix()*1000)\n\t\t\t}\n\t\t\tarr = append(arr, &v)\n\t\t}\n\tcase reflect.TypeFor[*timeArray]():\n\t\tt = types.TimeType\n\t\ta := nv.Value.(*timeArray)\n\t\tfor _, x := range *a {\n\t\t\tvar v string\n\t\t\tif stream {\n\t\t\t\tv = fmt.Sprintf(\"%02d:%02d:%02d.%09d\", x.Hour(), x.Minute(), x.Second(), x.Nanosecond())\n\t\t\t} else {\n\t\t\t\th, m, s := x.Clock()\n\t\t\t\ttm := int64(h)*int64(time.Hour) + int64(m)*int64(time.Minute) + int64(s)*int64(time.Second) + int64(x.Nanosecond())\n\t\t\t\tv = strconv.FormatInt(tm, 10)\n\t\t\t}\n\t\t\tarr = append(arr, &v)\n\t\t}\n\tdefault:\n\t\t// Support for bulk array binding 
insertion using []any / *[]any\n\t\tnvValue := reflect.ValueOf(nv)\n\t\tif nvValue.Kind() == reflect.Pointer {\n\t\t\tvalue := reflect.Indirect(reflect.ValueOf(nv.Value))\n\t\t\tif isInterfaceArrayBinding(value.Interface()) {\n\t\t\t\ttimeStruct, ok := value.Interface().(interfaceArrayBinding)\n\t\t\t\tif ok {\n\t\t\t\t\ttimeInterfaceSlice := reflect.Indirect(reflect.ValueOf(timeStruct.timezoneTypeArray))\n\t\t\t\t\tif timeStruct.hasTimezone {\n\t\t\t\t\t\treturn interfaceSliceToString(timeInterfaceSlice, stream, timeStruct.tzType)\n\t\t\t\t\t}\n\t\t\t\t\treturn interfaceSliceToString(timeInterfaceSlice, stream)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\treturn types.UnSupportedType, nil, nil\n\t}\n\treturn t, arr, nil\n}\n\nfunc interfaceSliceToString(interfaceSlice reflect.Value, stream bool, tzType ...timezoneType) (types.SnowflakeType, []*string, error) {\n\tvar t types.SnowflakeType\n\tvar arr []*string\n\n\tfor i := 0; i < interfaceSlice.Len(); i++ {\n\t\tval := interfaceSlice.Index(i)\n\t\tif val.CanInterface() {\n\t\t\tv := val.Interface()\n\n\t\t\tswitch x := v.(type) {\n\t\t\tcase int:\n\t\t\t\tt = types.FixedType\n\t\t\t\tv := strconv.Itoa(x)\n\t\t\t\tarr = append(arr, &v)\n\t\t\tcase int32:\n\t\t\t\tt = types.FixedType\n\t\t\t\tv := strconv.Itoa(int(x))\n\t\t\t\tarr = append(arr, &v)\n\t\t\tcase int64:\n\t\t\t\tt = types.FixedType\n\t\t\t\tv := strconv.FormatInt(x, 10)\n\t\t\t\tarr = append(arr, &v)\n\t\t\tcase float32:\n\t\t\t\tt = types.RealType\n\t\t\t\tv := fmt.Sprintf(\"%g\", x)\n\t\t\t\tarr = append(arr, &v)\n\t\t\tcase float64:\n\t\t\t\tt = types.RealType\n\t\t\t\tv := fmt.Sprintf(\"%g\", x)\n\t\t\t\tarr = append(arr, &v)\n\t\t\tcase bool:\n\t\t\t\tt = types.BooleanType\n\t\t\t\tv := strconv.FormatBool(x)\n\t\t\t\tarr = append(arr, &v)\n\t\t\tcase string:\n\t\t\t\tt = types.TextType\n\t\t\t\tarr = append(arr, &x)\n\t\t\tcase []byte:\n\t\t\t\tt = types.BinaryType\n\t\t\t\tv := hex.EncodeToString(x)\n\t\t\t\tarr = append(arr, &v)\n\t\t\tcase 
time.Time:\n\t\t\t\tif len(tzType) < 1 {\n\t\t\t\t\treturn types.UnSupportedType, nil, nil\n\t\t\t\t}\n\n\t\t\t\tswitch tzType[0] {\n\t\t\t\tcase TimestampNTZType:\n\t\t\t\t\tt = types.TimestampNtzType\n\t\t\t\t\tv, err := getTimestampBindValue(x, stream, t)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn types.UnSupportedType, nil, err\n\t\t\t\t\t}\n\t\t\t\t\tarr = append(arr, &v)\n\t\t\t\tcase TimestampLTZType:\n\t\t\t\t\tt = types.TimestampLtzType\n\t\t\t\t\tv, err := getTimestampBindValue(x, stream, t)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn types.UnSupportedType, nil, err\n\t\t\t\t\t}\n\t\t\t\t\tarr = append(arr, &v)\n\t\t\t\tcase TimestampTZType:\n\t\t\t\t\tt = types.TimestampTzType\n\t\t\t\t\tv, err := getTimestampBindValue(x, stream, t)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn types.UnSupportedType, nil, err\n\t\t\t\t\t}\n\t\t\t\t\tarr = append(arr, &v)\n\t\t\t\tcase DateType:\n\t\t\t\t\tt = types.DateType\n\t\t\t\t\t_, offset := x.Zone()\n\t\t\t\t\tx = x.Add(time.Second * time.Duration(offset))\n\t\t\t\t\tv := fmt.Sprintf(\"%d\", x.Unix()*1000)\n\t\t\t\t\tarr = append(arr, &v)\n\t\t\t\tcase TimeType:\n\t\t\t\t\tt = types.TimeType\n\t\t\t\t\tvar v string\n\t\t\t\t\tif stream {\n\t\t\t\t\t\tv = x.Format(format[11:19])\n\t\t\t\t\t} else {\n\t\t\t\t\t\th, m, s := x.Clock()\n\t\t\t\t\t\ttm := int64(h)*int64(time.Hour) + int64(m)*int64(time.Minute) + int64(s)*int64(time.Second) + int64(x.Nanosecond())\n\t\t\t\t\t\tv = strconv.FormatInt(tm, 10)\n\t\t\t\t\t}\n\t\t\t\t\tarr = append(arr, &v)\n\t\t\t\tdefault:\n\t\t\t\t\treturn types.UnSupportedType, nil, nil\n\t\t\t\t}\n\t\t\tcase driver.Valuer: // honor each driver's Valuer interface\n\t\t\t\tif value, err := x.Value(); err == nil && value != nil {\n\t\t\t\t\t// if the output value is a valid string, return that\n\t\t\t\t\tif strVal, ok := value.(string); ok {\n\t\t\t\t\t\tt = types.TextType\n\t\t\t\t\t\tarr = append(arr, &strVal)\n\t\t\t\t\t}\n\t\t\t\t} else if v != nil {\n\t\t\t\t\treturn 
types.UnSupportedType, nil, nil\n\t\t\t\t} else {\n\t\t\t\t\tarr = append(arr, nil)\n\t\t\t\t}\n\t\t\tdefault:\n\t\t\t\tif val.Interface() != nil {\n\t\t\t\t\tif isUUIDImplementer(val) {\n\t\t\t\t\t\tt = types.TextType\n\t\t\t\t\t\tx := v.(fmt.Stringer).String()\n\t\t\t\t\t\tarr = append(arr, &x)\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\t\t\t\t\treturn types.UnSupportedType, nil, nil\n\t\t\t\t}\n\n\t\t\t\tarr = append(arr, nil)\n\t\t\t}\n\t\t}\n\t}\n\treturn t, arr, nil\n}\n\nfunc higherPrecisionEnabled(ctx context.Context) bool {\n\treturn ia.HigherPrecisionEnabled(ctx)\n}\n\nfunc decfloatMappingEnabled(ctx context.Context) bool {\n\tv := ctx.Value(enableDecfloat)\n\tif v == nil {\n\t\treturn false\n\t}\n\td, ok := v.(bool)\n\treturn ok && d\n}\n\n// TypedNullTime is required to properly bind the null value with the snowflakeType as the Snowflake functions\n// require the type of the field to be provided explicitly for the null values\ntype TypedNullTime struct {\n\tTime   sql.NullTime\n\tTzType timezoneType\n}\n\nfunc convertTzTypeToSnowflakeType(tzType timezoneType) types.SnowflakeType {\n\tswitch tzType {\n\tcase TimestampNTZType:\n\t\treturn types.TimestampNtzType\n\tcase TimestampLTZType:\n\t\treturn types.TimestampLtzType\n\tcase TimestampTZType:\n\t\treturn types.TimestampTzType\n\tcase DateType:\n\t\treturn types.DateType\n\tcase TimeType:\n\t\treturn types.TimeType\n\t}\n\treturn types.UnSupportedType\n}\n\nfunc getTimestampBindValue(x time.Time, stream bool, t types.SnowflakeType) (string, error) {\n\tif stream {\n\t\treturn x.Format(format), nil\n\t}\n\treturn convertTimeToTimeStamp(x, t)\n}\n\nfunc convertTimeToTimeStamp(x time.Time, t types.SnowflakeType) (string, error) {\n\tunixTime, _ := new(big.Int).SetString(fmt.Sprintf(\"%d\", x.Unix()), 10)\n\tm, ok := new(big.Int).SetString(strconv.FormatInt(1e9, 10), 10)\n\tif !ok {\n\t\treturn \"\", errors.New(\"failed to parse big int from string: invalid format or unsupported 
characters\")\n\t}\n\n\tunixTime.Mul(unixTime, m)\n\ttmNanos, _ := new(big.Int).SetString(fmt.Sprintf(\"%d\", x.Nanosecond()), 10)\n\tif t == types.TimestampTzType {\n\t\t_, offset := x.Zone()\n\t\treturn fmt.Sprintf(\"%v %v\", unixTime.Add(unixTime, tmNanos), offset/60+1440), nil\n\t}\n\treturn unixTime.Add(unixTime, tmNanos).String(), nil\n}\n\nfunc decoderWithNumbersAsStrings(srcValue *string) *json.Decoder {\n\tdecoder := json.NewDecoder(bytes.NewBufferString(*srcValue))\n\tdecoder.UseNumber()\n\treturn decoder\n}\n"
  },
  {
    "path": "converter_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"fmt\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/query\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/types\"\n\t\"io\"\n\t\"math\"\n\t\"math/big\"\n\t\"math/cmplx\"\n\t\"reflect\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/apache/arrow-go/v18/arrow\"\n\t\"github.com/apache/arrow-go/v18/arrow/array\"\n\t\"github.com/apache/arrow-go/v18/arrow/decimal128\"\n\t\"github.com/apache/arrow-go/v18/arrow/memory\"\n)\n\nfunc stringIntToDecimal(src string) (decimal128.Num, bool) {\n\tb, ok := new(big.Int).SetString(src, 10)\n\tif !ok {\n\t\treturn decimal128.Num{}, ok\n\t}\n\tvar high, low big.Int\n\thigh.QuoRem(b, decimalShift, &low)\n\treturn decimal128.New(high.Int64(), low.Uint64()), ok\n}\n\nfunc stringFloatToDecimal(src string, scale int64) (decimal128.Num, bool) {\n\tb, ok := new(big.Float).SetString(src)\n\tif !ok {\n\t\treturn decimal128.Num{}, ok\n\t}\n\ts := new(big.Float).SetInt(new(big.Int).Exp(big.NewInt(10), big.NewInt(scale), nil))\n\tn := new(big.Float).Mul(b, s)\n\tif !n.IsInt() {\n\t\treturn decimal128.Num{}, false\n\t}\n\tvar high, low, z big.Int\n\tn.Int(&z)\n\thigh.QuoRem(&z, decimalShift, &low)\n\treturn decimal128.New(high.Int64(), low.Uint64()), ok\n}\n\nfunc stringFloatToInt(src string, scale int64) (int64, bool) {\n\tb, ok := new(big.Float).SetString(src)\n\tif !ok {\n\t\treturn 0, ok\n\t}\n\ts := new(big.Float).SetInt(new(big.Int).Exp(big.NewInt(10), big.NewInt(scale), nil))\n\tn := new(big.Float).Mul(b, s)\n\tvar z big.Int\n\tn.Int(&z)\n\tif !z.IsInt64() {\n\t\treturn 0, false\n\t}\n\treturn z.Int64(), true\n}\n\ntype testValueToStringStructuredObject struct {\n\ts    string\n\ti    int32\n\tdate time.Time\n}\n\nfunc (o *testValueToStringStructuredObject) Write(sowc StructuredObjectWriterContext) error {\n\tif err := sowc.WriteString(\"s\", o.s); err != nil {\n\t\treturn err\n\t}\n\tif err := 
sowc.WriteInt32(\"i\", o.i); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteTime(\"date\", o.date, DataTypeDate); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc TestValueToString(t *testing.T) {\n\tv := cmplx.Sqrt(-5 + 12i) // should never happen as Go sql package must have already validated.\n\t_, err := valueToString(v, types.NullType, nil)\n\tif err == nil {\n\t\tt.Errorf(\"should raise error: %v\", v)\n\t}\n\tparams := newSyncParams(make(map[string]*string))\n\tdateFormat := \"YYYY-MM-DD\"\n\tparams.set(\"date_output_format\", &dateFormat)\n\n\t// both localTime and utcTime should yield the same unix timestamp\n\tlocalTime := time.Date(2019, 2, 6, 14, 17, 31, 123456789, time.FixedZone(\"-08:00\", -8*3600))\n\tutcTime := time.Date(2019, 2, 6, 22, 17, 31, 123456789, time.UTC)\n\texpectedUnixTime := \"1549491451123456789\" // time.Unix(1549491451, 123456789).Format(time.RFC3339) == \"2019-02-06T14:17:31-08:00\"\n\texpectedBool := \"true\"\n\texpectedInt64 := \"1\"\n\texpectedFloat64 := \"1.1\"\n\texpectedString := \"teststring\"\n\n\tbv, err := valueToString(localTime, types.TimestampLtzType, nil)\n\tassertNilF(t, err)\n\tassertEmptyStringE(t, bv.format)\n\tassertNilE(t, bv.schema)\n\tassertEqualE(t, *bv.value, expectedUnixTime)\n\n\tbv, err = valueToString(utcTime, types.TimestampLtzType, nil)\n\tassertNilF(t, err)\n\tassertEmptyStringE(t, bv.format)\n\tassertNilE(t, bv.schema)\n\tassertEqualE(t, *bv.value, expectedUnixTime)\n\n\tbv, err = valueToString(sql.NullBool{Bool: true, Valid: true}, types.TimestampLtzType, nil)\n\tassertNilF(t, err)\n\tassertEmptyStringE(t, bv.format)\n\tassertNilE(t, bv.schema)\n\tassertEqualE(t, *bv.value, expectedBool)\n\n\tbv, err = valueToString(sql.NullInt64{Int64: 1, Valid: true}, types.TimestampLtzType, nil)\n\tassertNilF(t, err)\n\tassertEmptyStringE(t, bv.format)\n\tassertNilE(t, bv.schema)\n\tassertEqualE(t, *bv.value, expectedInt64)\n\n\tbv, err = valueToString(sql.NullFloat64{Float64: 1.1, Valid: true}, 
types.TimestampLtzType, nil)\n\tassertNilF(t, err)\n\tassertEmptyStringE(t, bv.format)\n\tassertNilE(t, bv.schema)\n\tassertEqualE(t, *bv.value, expectedFloat64)\n\n\tbv, err = valueToString(sql.NullString{String: \"teststring\", Valid: true}, types.TimestampLtzType, nil)\n\tassertNilF(t, err)\n\tassertEmptyStringE(t, bv.format)\n\tassertNilE(t, bv.schema)\n\tassertEqualE(t, *bv.value, expectedString)\n\n\tt.Run(\"SQL Time\", func(t *testing.T) {\n\t\tbv, err := valueToString(sql.NullTime{Time: localTime, Valid: true}, types.TimestampLtzType, nil)\n\t\tassertNilF(t, err)\n\t\tassertEmptyStringE(t, bv.format)\n\t\tassertNilE(t, bv.schema)\n\t\tassertEqualE(t, *bv.value, expectedUnixTime)\n\t})\n\n\tt.Run(\"arrays\", func(t *testing.T) {\n\t\tbv, err := valueToString([2]int{1, 2}, types.ObjectType, nil)\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, bv.format, jsonFormatStr)\n\t\tassertEqualE(t, *bv.value, \"[1,2]\")\n\t})\n\tt.Run(\"slices\", func(t *testing.T) {\n\t\tbv, err := valueToString([]int{1, 2}, types.ObjectType, nil)\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, bv.format, jsonFormatStr)\n\t\tassertEqualE(t, *bv.value, \"[1,2]\")\n\t})\n\n\tt.Run(\"UUID - should return string\", func(t *testing.T) {\n\t\tu := NewUUID()\n\t\tbv, err := valueToString(u, types.TextType, nil)\n\t\tassertNilF(t, err)\n\t\tassertEmptyStringE(t, bv.format)\n\t\tassertEqualE(t, *bv.value, u.String())\n\t})\n\n\tt.Run(\"database/sql/driver - Valuer interface\", func(t *testing.T) {\n\t\tu := newTestUUID()\n\t\tbv, err := valueToString(u, types.TextType, nil)\n\t\tassertNilF(t, err)\n\t\tassertEmptyStringE(t, bv.format)\n\t\tassertEqualE(t, *bv.value, u.String())\n\t})\n\n\tt.Run(\"testUUID\", func(t *testing.T) {\n\t\tu := newTestUUID()\n\t\tassertEqualE(t, u.String(), parseTestUUID(u.String()).String())\n\n\t\tbv, err := valueToString(u, types.TextType, nil)\n\t\tassertNilF(t, err)\n\t\tassertEmptyStringE(t, bv.format)\n\t\tassertEqualE(t, *bv.value, u.String())\n\t})\n\n\tbv, err = 
valueToString(&testValueToStringStructuredObject{s: \"some string\", i: 123, date: time.Date(2024, time.May, 24, 0, 0, 0, 0, time.UTC)}, types.TimestampLtzType, &params)\n\tassertNilF(t, err)\n\tassertEqualE(t, bv.format, jsonFormatStr)\n\tassertDeepEqualE(t, *bv.schema, bindingSchema{\n\t\tTyp:      \"object\",\n\t\tNullable: true,\n\t\tFields: []query.FieldMetadata{\n\t\t\t{\n\t\t\t\tName:     \"s\",\n\t\t\t\tType:     \"text\",\n\t\t\t\tNullable: true,\n\t\t\t\tLength:   134217728,\n\t\t\t},\n\t\t\t{\n\t\t\t\tName:      \"i\",\n\t\t\t\tType:      \"fixed\",\n\t\t\t\tNullable:  true,\n\t\t\t\tPrecision: 38,\n\t\t\t\tScale:     0,\n\t\t\t},\n\t\t\t{\n\t\t\t\tName:     \"date\",\n\t\t\t\tType:     \"date\",\n\t\t\t\tNullable: true,\n\t\t\t\tScale:    9,\n\t\t\t},\n\t\t},\n\t})\n\tassertEqualIgnoringWhitespaceE(t, *bv.value, `{\"date\": \"2024-05-24\", \"i\": 123, \"s\": \"some string\"}`)\n}\n\nfunc TestExtractTimestamp(t *testing.T) {\n\ts := \"1234abcdef\" // pragma: allowlist secret\n\t_, _, err := extractTimestamp(&s)\n\tif err == nil {\n\t\tt.Errorf(\"should raise error: %v\", s)\n\t}\n\ts = \"1234abc.def\"\n\t_, _, err = extractTimestamp(&s)\n\tif err == nil {\n\t\tt.Errorf(\"should raise error: %v\", s)\n\t}\n\ts = \"1234.def\"\n\t_, _, err = extractTimestamp(&s)\n\tif err == nil {\n\t\tt.Errorf(\"should raise error: %v\", s)\n\t}\n}\n\nfunc TestStringToValue(t *testing.T) {\n\tvar source string\n\tvar dest driver.Value\n\tvar err error\n\tvar rowType *query.ExecResponseRowType\n\tsource = \"abcdefg\"\n\n\ttypes := []string{\n\t\t\"date\", \"time\", \"timestamp_ntz\", \"timestamp_ltz\", \"timestamp_tz\", \"binary\",\n\t}\n\n\tfor _, tt := range types {\n\t\tt.Run(tt, func(t *testing.T) {\n\t\t\trowType = &query.ExecResponseRowType{\n\t\t\t\tType: tt,\n\t\t\t}\n\t\t\tif err = stringToValue(context.Background(), &dest, *rowType, &source, nil, nil); err == nil {\n\t\t\t\tt.Errorf(\"should raise error. 
type: %v, value:%v\", tt, source)\n\t\t\t}\n\t\t})\n\t}\n\n\tsources := []string{\n\t\t\"12345K78 2020\",\n\t\t\"12345678 20T0\",\n\t}\n\n\ttypes = []string{\n\t\t\"timestamp_tz\",\n\t}\n\n\tfor _, ss := range sources {\n\t\tfor _, tt := range types {\n\t\t\tt.Run(ss+tt, func(t *testing.T) {\n\t\t\t\trowType = &query.ExecResponseRowType{\n\t\t\t\t\tType: tt,\n\t\t\t\t}\n\t\t\t\tif err = stringToValue(context.Background(), &dest, *rowType, &ss, nil, nil); err == nil {\n\t\t\t\t\tt.Errorf(\"should raise error. type: %v, value:%v\", tt, source)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t}\n\n\tsrc := \"1549491451.123456789\"\n\tif err = stringToValue(context.Background(), &dest, query.ExecResponseRowType{Type: \"timestamp_ltz\"}, &src, nil, nil); err != nil {\n\t\tt.Errorf(\"unexpected error: %v\", err)\n\t} else if ts, ok := dest.(time.Time); !ok {\n\t\tt.Errorf(\"expected type: 'time.Time', got '%v'\", reflect.TypeOf(dest))\n\t} else if ts.UnixNano() != 1549491451123456789 {\n\t\tt.Errorf(\"expected unix timestamp: 1549491451123456789, got %v\", ts.UnixNano())\n\t}\n}\n\ntype tcArrayToString struct {\n\tin  driver.NamedValue\n\ttyp types.SnowflakeType\n\tout []string\n}\n\nfunc TestArrayToString(t *testing.T) {\n\ttestcases := []tcArrayToString{\n\t\t{in: driver.NamedValue{Value: &intArray{1, 2}}, typ: types.FixedType, out: []string{\"1\", \"2\"}},\n\t\t{in: driver.NamedValue{Value: &int32Array{1, 2}}, typ: types.FixedType, out: []string{\"1\", \"2\"}},\n\t\t{in: driver.NamedValue{Value: &int64Array{3, 4, 5}}, typ: types.FixedType, out: []string{\"3\", \"4\", \"5\"}},\n\t\t{in: driver.NamedValue{Value: &float64Array{6.7}}, typ: types.RealType, out: []string{\"6.7\"}},\n\t\t{in: driver.NamedValue{Value: &float32Array{1.5}}, typ: types.RealType, out: []string{\"1.5\"}},\n\t\t{in: driver.NamedValue{Value: &boolArray{true, false}}, typ: types.BooleanType, out: []string{\"true\", \"false\"}},\n\t\t{in: driver.NamedValue{Value: &stringArray{\"foo\", \"bar\", \"baz\"}}, typ: 
types.TextType, out: []string{\"foo\", \"bar\", \"baz\"}},\n\t}\n\tfor _, test := range testcases {\n\t\tt.Run(strings.Join(test.out, \"_\"), func(t *testing.T) {\n\t\t\ts, a, err := snowflakeArrayToString(&test.in, false)\n\t\t\tassertNilF(t, err)\n\t\t\tif s != test.typ {\n\t\t\t\tt.Errorf(\"failed. in: %v, expected: %v, got: %v\", test.in, test.typ, s)\n\t\t\t}\n\t\t\tfor i, v := range a {\n\t\t\t\tif *v != test.out[i] {\n\t\t\t\t\tt.Errorf(\"failed. in: %v, expected: %v, got: %v\", test.in, test.out[i], a)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestArrowToValues(t *testing.T) {\n\tdest := make([]snowflakeValue, 2)\n\n\tpool := memory.NewCheckedAllocator(memory.NewGoAllocator())\n\tdefer pool.AssertSize(t, 0)\n\tvar valids []bool // AppendValues() with an empty valid array adds every value by default\n\n\tlocalTime := time.Date(2019, 2, 6, 14, 17, 31, 123456789, time.FixedZone(\"-08:00\", -8*3600))\n\n\tfield1 := arrow.Field{Name: \"epoch\", Type: &arrow.Int64Type{}}\n\tfield2 := arrow.Field{Name: \"timezone\", Type: &arrow.Int32Type{}}\n\ttzStruct := arrow.StructOf(field1, field2)\n\n\ttype testObj struct {\n\t\tfield1 int\n\t\tfield2 string\n\t}\n\n\tfor _, tc := range []struct {\n\t\tlogical         string\n\t\tphysical        string\n\t\trowType         query.ExecResponseRowType\n\t\tvalues          any\n\t\tbuilder         array.Builder\n\t\tappend          func(b array.Builder, vs any)\n\t\tcompare         func(src any, dst []snowflakeValue) int\n\t\thigherPrecision bool\n\t}{\n\t\t{\n\t\t\tlogical:         \"fixed\",\n\t\t\tphysical:        \"number\", // default: number(38, 0)\n\t\t\tvalues:          []int64{1, 2},\n\t\t\tbuilder:         array.NewInt64Builder(pool),\n\t\t\tappend:          func(b array.Builder, vs any) { b.(*array.Int64Builder).AppendValues(vs.([]int64), valids) },\n\t\t\thigherPrecision: true,\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"number(38,5)\",\n\t\t\trowType:  query.ExecResponseRowType{Scale: 
5},\n\t\t\tvalues:   []string{\"1.05430\", \"2.08983\"},\n\t\t\tbuilder:  array.NewInt64Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, s := range vs.([]string) {\n\t\t\t\t\tnum, ok := stringFloatToInt(s, 5)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\tt.Fatalf(\"failed to convert to int\")\n\t\t\t\t\t}\n\t\t\t\t\tb.(*array.Int64Builder).Append(num)\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([]string)\n\t\t\t\tfor i := range srcvs {\n\t\t\t\t\tnum, ok := stringFloatToInt(srcvs[i], 5)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t\tsrcDec := intToBigFloat(num, 5)\n\t\t\t\t\tdstDec := dst[i].(*big.Float)\n\t\t\t\t\tif srcDec.Cmp(dstDec) != 0 {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t\thigherPrecision: true,\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"number(38,5)\",\n\t\t\trowType:  query.ExecResponseRowType{Scale: 5},\n\t\t\tvalues:   []string{\"1.05430\", \"2.08983\"},\n\t\t\tbuilder:  array.NewInt64Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, s := range vs.([]string) {\n\t\t\t\t\tnum, ok := stringFloatToInt(s, 5)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\tt.Fatalf(\"failed to convert to int\")\n\t\t\t\t\t}\n\t\t\t\t\tb.(*array.Int64Builder).Append(num)\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([]string)\n\t\t\t\tfor i := range srcvs {\n\t\t\t\t\tnum, ok := stringFloatToInt(srcvs[i], 5)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t\tsrcDec := fmt.Sprintf(\"%.*f\", 5, float64(num)/math.Pow10(int(5)))\n\t\t\t\t\tdstDec := dst[i]\n\t\t\t\t\tif srcDec != dstDec {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t\thigherPrecision: false,\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"number(38,0)\",\n\t\t\tvalues:   []string{\"10000000000000000000000000000000000000\", 
\"-12345678901234567890123456789012345678\"},\n\t\t\tbuilder:  array.NewDecimal128Builder(pool, &arrow.Decimal128Type{Precision: 30, Scale: 2}),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, s := range vs.([]string) {\n\t\t\t\t\tnum, ok := stringIntToDecimal(s)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\tt.Fatalf(\"failed to convert to big.Int\")\n\t\t\t\t\t}\n\t\t\t\t\tb.(*array.Decimal128Builder).Append(num)\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([]string)\n\t\t\t\tfor i := range srcvs {\n\t\t\t\t\tnum, ok := stringIntToDecimal(srcvs[i])\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t\tsrcDec := decimalToBigInt(num)\n\t\t\t\t\tdstDec := dst[i].(*big.Int)\n\t\t\t\t\tif srcDec.Cmp(dstDec) != 0 {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t\thigherPrecision: true,\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"number(38,37)\",\n\t\t\trowType:  query.ExecResponseRowType{Scale: 37},\n\t\t\tvalues:   []string{\"1.2345678901234567890123456789012345678\", \"-9.9999999999999999999999999999999999999\"},\n\t\t\tbuilder:  array.NewDecimal128Builder(pool, &arrow.Decimal128Type{Precision: 38, Scale: 37}),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, s := range vs.([]string) {\n\t\t\t\t\tnum, ok := stringFloatToDecimal(s, 37)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\tt.Fatalf(\"failed to convert to big.Rat\")\n\t\t\t\t\t}\n\t\t\t\t\tb.(*array.Decimal128Builder).Append(num)\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([]string)\n\t\t\t\tfor i := range srcvs {\n\t\t\t\t\tnum, ok := stringFloatToDecimal(srcvs[i], 37)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t\tsrcDec := decimalToBigFloat(num, 37)\n\t\t\t\t\tdstDec := dst[i].(*big.Float)\n\t\t\t\t\tif srcDec.Cmp(dstDec) != 0 {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn 
-1\n\t\t\t},\n\t\t\thigherPrecision: true,\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"int8\",\n\t\t\tvalues:   []int8{1, 2},\n\t\t\tbuilder:  array.NewInt8Builder(pool),\n\t\t\tappend:   func(b array.Builder, vs any) { b.(*array.Int8Builder).AppendValues(vs.([]int8), valids) },\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([]int8)\n\t\t\t\tfor i := range srcvs {\n\t\t\t\t\tif int64(srcvs[i]) != dst[i].(int64) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t\thigherPrecision: true,\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"int16\",\n\t\t\tvalues:   []int16{1, 2},\n\t\t\tbuilder:  array.NewInt16Builder(pool),\n\t\t\tappend:   func(b array.Builder, vs any) { b.(*array.Int16Builder).AppendValues(vs.([]int16), valids) },\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([]int16)\n\t\t\t\tfor i := range srcvs {\n\t\t\t\t\tif int64(srcvs[i]) != dst[i].(int64) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t\thigherPrecision: true,\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"int16\",\n\t\t\tvalues:   []string{\"1.2345\", \"2.3456\"},\n\t\t\trowType:  query.ExecResponseRowType{Scale: 4},\n\t\t\tbuilder:  array.NewInt16Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, s := range vs.([]string) {\n\t\t\t\t\tnum, ok := stringFloatToInt(s, 4)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\tt.Fatalf(\"failed to convert to int\")\n\t\t\t\t\t}\n\t\t\t\t\tb.(*array.Int16Builder).Append(int16(num))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([]string)\n\t\t\t\tfor i := range srcvs {\n\t\t\t\t\tnum, ok := stringFloatToInt(srcvs[i], 4)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t\tsrcDec := intToBigFloat(num, 4)\n\t\t\t\t\tdstDec := dst[i].(*big.Float)\n\t\t\t\t\tif srcDec.Cmp(dstDec) != 0 
{\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t\thigherPrecision: true,\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"int16\",\n\t\t\tvalues:   []string{\"1.2345\", \"2.3456\"},\n\t\t\trowType:  query.ExecResponseRowType{Scale: 4},\n\t\t\tbuilder:  array.NewInt16Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, s := range vs.([]string) {\n\t\t\t\t\tnum, ok := stringFloatToInt(s, 4)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\tt.Fatalf(\"failed to convert to int\")\n\t\t\t\t\t}\n\t\t\t\t\tb.(*array.Int16Builder).Append(int16(num))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([]string)\n\t\t\t\tfor i := range srcvs {\n\t\t\t\t\tnum, ok := stringFloatToInt(srcvs[i], 4)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t\tsrcDec := fmt.Sprintf(\"%.*f\", 4, float64(num)/math.Pow10(int(4)))\n\t\t\t\t\tdstDec := dst[i]\n\t\t\t\t\tif srcDec != dstDec {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t\thigherPrecision: false,\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"int32\",\n\t\t\tvalues:   []int32{1, 2},\n\t\t\tbuilder:  array.NewInt32Builder(pool),\n\t\t\tappend:   func(b array.Builder, vs any) { b.(*array.Int32Builder).AppendValues(vs.([]int32), valids) },\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([]int32)\n\t\t\t\tfor i := range srcvs {\n\t\t\t\t\tif int64(srcvs[i]) != dst[i] {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t\thigherPrecision: true,\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"int32\",\n\t\t\tvalues:   []string{\"1.23456\", \"2.34567\"},\n\t\t\trowType:  query.ExecResponseRowType{Scale: 5},\n\t\t\tbuilder:  array.NewInt32Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, s := range vs.([]string) {\n\t\t\t\t\tnum, ok := stringFloatToInt(s, 
5)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\tt.Fatalf(\"failed to convert to int\")\n\t\t\t\t\t}\n\t\t\t\t\tb.(*array.Int32Builder).Append(int32(num))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([]string)\n\t\t\t\tfor i := range srcvs {\n\t\t\t\t\tnum, ok := stringFloatToInt(srcvs[i], 5)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t\tsrcDec := intToBigFloat(num, 5)\n\t\t\t\t\tdstDec := dst[i].(*big.Float)\n\t\t\t\t\tif srcDec.Cmp(dstDec) != 0 {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t\thigherPrecision: true,\n\t\t},\n\t\t{\n\t\t\tlogical:  \"fixed\",\n\t\t\tphysical: \"int32\",\n\t\t\tvalues:   []string{\"1.23456\", \"2.34567\"},\n\t\t\trowType:  query.ExecResponseRowType{Scale: 5},\n\t\t\tbuilder:  array.NewInt32Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, s := range vs.([]string) {\n\t\t\t\t\tnum, ok := stringFloatToInt(s, 5)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\tt.Fatalf(\"failed to convert to int\")\n\t\t\t\t\t}\n\t\t\t\t\tb.(*array.Int32Builder).Append(int32(num))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([]string)\n\t\t\t\tfor i := range srcvs {\n\t\t\t\t\tnum, ok := stringFloatToInt(srcvs[i], 5)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t\tsrcDec := fmt.Sprintf(\"%.*f\", 5, float64(num)/math.Pow10(int(5)))\n\t\t\t\t\tdstDec := dst[i]\n\t\t\t\t\tif srcDec != dstDec {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t\thigherPrecision: false,\n\t\t},\n\t\t{\n\t\t\tlogical:         \"fixed\",\n\t\t\tphysical:        \"int64\",\n\t\t\tvalues:          []int64{1, 2},\n\t\t\tbuilder:         array.NewInt64Builder(pool),\n\t\t\tappend:          func(b array.Builder, vs any) { b.(*array.Int64Builder).AppendValues(vs.([]int64), valids) },\n\t\t\thigherPrecision: true,\n\t\t},\n\t\t{\n\t\t\tlogical: 
\"boolean\",\n\t\t\tvalues:  []bool{true, false},\n\t\t\tbuilder: array.NewBooleanBuilder(pool),\n\t\t\tappend:  func(b array.Builder, vs any) { b.(*array.BooleanBuilder).AppendValues(vs.([]bool), valids) },\n\t\t},\n\t\t{\n\t\t\tlogical:  \"real\",\n\t\t\tphysical: \"float\",\n\t\t\tvalues:   []float64{1, 2},\n\t\t\tbuilder:  array.NewFloat64Builder(pool),\n\t\t\tappend:   func(b array.Builder, vs any) { b.(*array.Float64Builder).AppendValues(vs.([]float64), valids) },\n\t\t},\n\t\t{\n\t\t\tlogical:  \"text\",\n\t\t\tphysical: \"string\",\n\t\t\tvalues:   []string{\"foo\", \"bar\"},\n\t\t\tbuilder:  array.NewStringBuilder(pool),\n\t\t\tappend:   func(b array.Builder, vs any) { b.(*array.StringBuilder).AppendValues(vs.([]string), valids) },\n\t\t},\n\t\t{\n\t\t\tlogical: \"binary\",\n\t\t\tvalues:  [][]byte{[]byte(\"foo\"), []byte(\"bar\")},\n\t\t\tbuilder: array.NewBinaryBuilder(pool, arrow.BinaryTypes.Binary),\n\t\t\tappend:  func(b array.Builder, vs any) { b.(*array.BinaryBuilder).AppendValues(vs.([][]byte), valids) },\n\t\t},\n\t\t{\n\t\t\tlogical: \"date\",\n\t\t\tvalues:  []time.Time{time.Now(), localTime},\n\t\t\tbuilder: array.NewDate32Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, d := range vs.([]time.Time) {\n\t\t\t\t\tb.(*array.Date32Builder).Append(arrow.Date32(d.Unix()))\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical: \"time\",\n\t\t\tvalues:  []time.Time{time.Now(), time.Now()},\n\t\t\trowType: query.ExecResponseRowType{Scale: 9},\n\t\t\tbuilder: array.NewInt64Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tb.(*array.Int64Builder).Append(t.UnixNano())\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i := range srcvs {\n\t\t\t\t\tif srcvs[i].Nanosecond() != dst[i].(time.Time).Nanosecond() {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn 
-1\n\t\t\t},\n\t\t\thigherPrecision: true,\n\t\t},\n\t\t{\n\t\t\tlogical: \"timestamp_ntz\",\n\t\t\tvalues:  []time.Time{time.Now(), localTime},\n\t\t\trowType: query.ExecResponseRowType{Scale: 9},\n\t\t\tbuilder: array.NewInt64Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tb.(*array.Int64Builder).Append(t.UnixNano())\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i := range srcvs {\n\t\t\t\t\tif srcvs[i].UnixNano() != dst[i].(time.Time).UnixNano() {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical: \"timestamp_ltz\",\n\t\t\tvalues:  []time.Time{time.Now(), localTime},\n\t\t\trowType: query.ExecResponseRowType{Scale: 9},\n\t\t\tbuilder: array.NewInt64Builder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tb.(*array.Int64Builder).Append(t.UnixNano())\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i := range srcvs {\n\t\t\t\t\tif srcvs[i].UnixNano() != dst[i].(time.Time).UnixNano() {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical: \"timestamp_tz\",\n\t\t\tvalues:  []time.Time{time.Now(), localTime},\n\t\t\tbuilder: array.NewStructBuilder(pool, tzStruct),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tsb := b.(*array.StructBuilder)\n\t\t\t\tvalids = []bool{true, true}\n\t\t\t\tsb.AppendValues(valids)\n\t\t\t\tfor _, t := range vs.([]time.Time) {\n\t\t\t\t\tsb.FieldBuilder(0).(*array.Int64Builder).Append(t.Unix())\n\t\t\t\t\tsb.FieldBuilder(1).(*array.Int32Builder).Append(int32(t.UnixNano()))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([]time.Time)\n\t\t\t\tfor i := range srcvs {\n\t\t\t\t\tif 
srcvs[i].Unix() != dst[i].(time.Time).Unix() {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical: \"array\",\n\t\t\tvalues:  [][]string{{\"foo\", \"bar\"}, {\"baz\", \"quz\", \"quux\"}},\n\t\t\tbuilder: array.NewStringBuilder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, a := range vs.([][]string) {\n\t\t\t\t\tb.(*array.StringBuilder).Append(fmt.Sprint(a))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([][]string)\n\t\t\t\tfor i, o := range srcvs {\n\t\t\t\t\tif fmt.Sprint(o) != dst[i].(string) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tlogical: \"object\",\n\t\t\tvalues:  []testObj{{0, \"foo\"}, {1, \"bar\"}},\n\t\t\tbuilder: array.NewStringBuilder(pool),\n\t\t\tappend: func(b array.Builder, vs any) {\n\t\t\t\tfor _, o := range vs.([]testObj) {\n\t\t\t\t\tb.(*array.StringBuilder).Append(fmt.Sprint(o))\n\t\t\t\t}\n\t\t\t},\n\t\t\tcompare: func(src any, dst []snowflakeValue) int {\n\t\t\t\tsrcvs := src.([]testObj)\n\t\t\t\tfor i, o := range srcvs {\n\t\t\t\t\tif fmt.Sprint(o) != dst[i].(string) {\n\t\t\t\t\t\treturn i\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn -1\n\t\t\t},\n\t\t},\n\t} {\n\t\ttestName := tc.logical\n\t\tif tc.physical != \"\" {\n\t\t\ttestName += \" \" + tc.physical\n\t\t}\n\t\tt.Run(testName, func(t *testing.T) {\n\t\t\tb := tc.builder\n\t\t\ttc.append(b, tc.values)\n\t\t\tarr := b.NewArray()\n\t\t\tdefer arr.Release()\n\n\t\t\tmeta := tc.rowType\n\t\t\tmeta.Type = tc.logical\n\n\t\t\twithHigherPrecision := tc.higherPrecision\n\n\t\t\tif err := arrowToValues(context.Background(), dest, meta, arr, localTime.Location(), withHigherPrecision, nil); err != nil { // TODO\n\t\t\t\tt.Fatalf(\"error: %s\", err)\n\t\t\t}\n\n\t\t\telemType := reflect.TypeOf(tc.values).Elem()\n\t\t\tif tc.compare != nil {\n\t\t\t\tidx := tc.compare(tc.values, dest)\n\t\t\t\tif idx != -1 
{\n\t\t\t\t\tt.Fatalf(\"error: column array value mismatch at index %v\", idx)\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tfor _, d := range dest {\n\t\t\t\t\tif reflect.TypeOf(d) != elemType {\n\t\t\t\t\t\tt.Fatalf(\"error: expected type %s, got type %s\", elemType, reflect.TypeOf(d))\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\n\t}\n}\n\n// TestArrowToRecord has been moved to arrowbatches/converter_test.go\n// (all test case data removed from this file)\n\nfunc TestTimestampLTZLocation(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsrc := \"1549491451.123456789\"\n\t\tvar dest driver.Value\n\t\tloc, err := time.LoadLocation(PSTLocation)\n\t\tassertNilF(t, err, \"failed to load location\")\n\t\tassertNilF(t, stringToValue(context.Background(), &dest, query.ExecResponseRowType{Type: \"timestamp_ltz\"}, &src, loc, nil), \"unexpected error\")\n\t\tts, ok := dest.(time.Time)\n\t\tassertTrueF(t, ok, \"expected type time.Time\")\n\t\tassertEqualE(t, ts.Location(), loc, \"location should match\")\n\n\t\tassertNilF(t, stringToValue(context.Background(), &dest, query.ExecResponseRowType{Type: \"timestamp_ltz\"}, &src, nil, nil), \"unexpected error\")\n\t\tts, ok = dest.(time.Time)\n\t\tassertTrueF(t, ok, \"expected type time.Time\")\n\t\tassertEqualE(t, ts.Location(), time.Local, \"expected location to be local\")\n\t})\n}\n\nfunc TestSmallTimestampBinding(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tctx := context.Background()\n\t\ttimeValue, err := time.Parse(\"2006-01-02 15:04:05\", \"1600-10-10 10:10:10\")\n\t\tassertNilF(t, err, \"failed to parse time\")\n\t\tparameters := []driver.NamedValue{\n\t\t\t{Ordinal: 1, Value: DataTypeTimestampNtz},\n\t\t\t{Ordinal: 2, Value: timeValue},\n\t\t}\n\n\t\trows 
:= sct.mustQueryContext(ctx, \"SELECT ?\", parameters)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\n\t\tscanValues := make([]driver.Value, 1)\n\t\tfor {\n\t\t\tif err := rows.Next(scanValues); err == io.EOF {\n\t\t\t\tbreak\n\t\t\t} else if err != nil {\n\t\t\t\tt.Fatalf(\"failed to run query: %v\", err)\n\t\t\t}\n\t\t\tif scanValues[0] != timeValue {\n\t\t\t\tt.Fatalf(\"unexpected result. expected: %v, got: %v\", timeValue, scanValues[0])\n\t\t\t}\n\t\t}\n\t})\n}\n\n// TestTimestampConversionWithoutArrowBatches tests all 10 timestamp scales\n// (0-9) because each scale exercises a mathematically distinct code path in\n// the timestamp conversion logic. See TestTimestampConversionDistantDates in\n// arrowbatches/batches_test.go for rationale on why the full scale range is\n// required.\nfunc TestTimestampConversionWithoutArrowBatches(t *testing.T) {\n\ttimestamps := [3]string{\n\t\t\"2000-10-10 10:10:10.123456789\", // neutral\n\t\t\"9999-12-12 23:59:59.999999999\", // max\n\t\t\"0001-01-01 00:00:00.000000000\"} // min\n\ttypes := [3]string{\"TIMESTAMP_NTZ\", \"TIMESTAMP_LTZ\", \"TIMESTAMP_TZ\"}\n\n\trunDBTest(t, func(sct *DBTest) {\n\t\tctx := context.Background()\n\n\t\tfor _, tsStr := range timestamps {\n\t\t\tts, err := time.Parse(\"2006-01-02 15:04:05\", tsStr)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatalf(\"failed to parse time: %v\", err)\n\t\t\t}\n\t\t\tfor _, tp := range types {\n\t\t\t\tt.Run(tp+\"_\"+tsStr, func(t *testing.T) {\n\t\t\t\t\t// Batch all 10 scales into a single multi-column query to reduce round trips.\n\t\t\t\t\tvar cols []string\n\t\t\t\t\tfor scale := 0; scale <= 9; scale++ {\n\t\t\t\t\t\tcols = append(cols, fmt.Sprintf(\"'%s'::%s(%v)\", tsStr, tp, scale))\n\t\t\t\t\t}\n\t\t\t\t\tquery := \"SELECT \" + strings.Join(cols, \", \")\n\t\t\t\t\trows := sct.mustQueryContext(ctx, query, nil)\n\t\t\t\t\tdefer func() {\n\t\t\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t\t\t}()\n\n\t\t\t\t\tif !rows.Next() 
{\n\t\t\t\t\t\tt.Fatalf(\"failed to run query: %v\", query)\n\t\t\t\t\t}\n\n\t\t\t\t\tscanVals := make([]time.Time, 10)\n\t\t\t\t\tscanPtrs := make([]any, 10)\n\t\t\t\t\tfor i := range scanVals {\n\t\t\t\t\t\tscanPtrs[i] = &scanVals[i]\n\t\t\t\t\t}\n\t\t\t\t\tassertNilF(t, rows.Scan(scanPtrs...))\n\n\t\t\t\t\tfor scale := 0; scale <= 9; scale++ {\n\t\t\t\t\t\texp := ts.Truncate(time.Duration(math.Pow10(9 - scale)))\n\t\t\t\t\t\tact := scanVals[scale]\n\t\t\t\t\t\tif !exp.Equal(act) {\n\t\t\t\t\t\t\tt.Fatalf(\"scale %d: unexpected result. expected: %v, got: %v\", scale, exp, act)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t}\n\t\t}\n\t})\n}\n\nfunc TestTimeTypeValueToString(t *testing.T) {\n\ttimeValue, err := time.Parse(\"2006-01-02 15:04:05\", \"2020-01-02 10:11:12\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\toffsetTimeValue, err := time.ParseInLocation(\"2006-01-02 15:04:05\", \"2020-01-02 10:11:12\", Location(6*60))\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\ttestcases := []struct {\n\t\tin     time.Time\n\t\ttsmode types.SnowflakeType\n\t\tout    string\n\t}{\n\t\t{timeValue, types.DateType, \"1577959872000\"},\n\t\t{timeValue, types.TimeType, \"36672000000000\"},\n\t\t{timeValue, types.TimestampNtzType, \"1577959872000000000\"},\n\t\t{timeValue, types.TimestampLtzType, \"1577959872000000000\"},\n\t\t{timeValue, types.TimestampTzType, \"1577959872000000000 1440\"},\n\t\t{offsetTimeValue, types.TimestampTzType, \"1577938272000000000 1800\"},\n\t}\n\n\tfor _, tc := range testcases {\n\t\tt.Run(tc.out, func(t *testing.T) {\n\t\t\tbv, err := timeTypeValueToString(tc.in, tc.tsmode)\n\t\t\tassertNilF(t, err)\n\t\t\tassertEmptyStringE(t, bv.format)\n\t\t\tassertNilE(t, bv.schema)\n\t\t\tassertEqualE(t, tc.out, *bv.value)\n\t\t})\n\t}\n}\n\nfunc TestIsArrayOfStructs(t *testing.T) {\n\ttestcases := []struct {\n\t\tvalue    any\n\t\texpected bool\n\t}{\n\t\t{[]simpleObject{}, true},\n\t\t{[]*simpleObject{}, true},\n\t\t{[]int{1}, false},\n\t\t{[]string{\"abc\"}, 
false},\n\t\t{&[]bool{true}, false},\n\t}\n\tfor _, tc := range testcases {\n\t\tt.Run(fmt.Sprintf(\"%v\", tc.value), func(t *testing.T) {\n\t\t\tres := isArrayOfStructs(tc.value)\n\t\t\tif res != tc.expected {\n\t\t\t\tt.Errorf(\"expected %v to result in %v\", tc.value, tc.expected)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestSqlNull(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trows := dbt.mustQuery(\"SELECT 1, NULL UNION SELECT 2, 'test' ORDER BY 1\")\n\t\tdefer rows.Close()\n\t\tvar rowID int\n\t\tvar nullStr sql.Null[string]\n\t\tassertTrueF(t, rows.Next())\n\t\tassertNilF(t, rows.Scan(&rowID, &nullStr))\n\t\tassertEqualE(t, nullStr, sql.Null[string]{Valid: false})\n\t\tassertTrueF(t, rows.Next())\n\t\tassertNilF(t, rows.Scan(&rowID, &nullStr))\n\t\tassertEqualE(t, nullStr, sql.Null[string]{Valid: true, V: \"test\"})\n\t})\n}\n\nfunc TestNumbersScanType(t *testing.T) {\n\tfor _, forceFormat := range []string{forceJSON, forceARROW} {\n\t\tt.Run(forceFormat, func(t *testing.T) {\n\t\t\trunDBTest(t, func(dbt *DBTest) {\n\t\t\t\tdbt.mustExecT(t, forceFormat)\n\n\t\t\t\tt.Run(\"scale == 0\", func(t *testing.T) {\n\t\t\t\t\tt.Run(\"without higher precision\", func(t *testing.T) {\n\t\t\t\t\t\trows := dbt.mustQueryContext(context.Background(), \"SELECT 1, 300::NUMBER(15, 0), 600::NUMBER(18, 0), 700::NUMBER(19, 0), 900::NUMBER(38, 0), 123456789012345678901234567890\")\n\t\t\t\t\t\tdefer rows.Close()\n\t\t\t\t\t\trows.mustNext()\n\t\t\t\t\t\tvar i1, i2, i3 int64\n\t\t\t\t\t\tvar i4, i5, i6 string\n\t\t\t\t\t\trows.mustScan(&i1, &i2, &i3, &i4, &i5, &i6)\n\t\t\t\t\t\tassertEqualE(t, i1, int64(1))\n\t\t\t\t\t\tassertEqualE(t, i2, int64(300))\n\t\t\t\t\t\tassertEqualE(t, i3, int64(600))\n\t\t\t\t\t\tassertEqualE(t, i4, \"700\")\n\t\t\t\t\t\tassertEqualE(t, i5, \"900\")\n\t\t\t\t\t\tassertEqualE(t, i6, \"123456789012345678901234567890\") // pragma: allowlist secret\n\n\t\t\t\t\t\ttypes, err := rows.ColumnTypes()\n\t\t\t\t\t\tassertNilF(t, 
err)\n\t\t\t\t\t\tassertEqualE(t, types[0].ScanType(), reflect.TypeFor[int64]())\n\t\t\t\t\t\tassertEqualE(t, types[1].ScanType(), reflect.TypeFor[int64]())\n\t\t\t\t\t\tassertEqualE(t, types[2].ScanType(), reflect.TypeFor[int64]())\n\t\t\t\t\t\tassertEqualE(t, types[3].ScanType(), reflect.TypeFor[string]())\n\t\t\t\t\t\tassertEqualE(t, types[4].ScanType(), reflect.TypeFor[string]())\n\t\t\t\t\t\tassertEqualE(t, types[5].ScanType(), reflect.TypeFor[string]())\n\t\t\t\t\t})\n\n\t\t\t\t\tt.Run(\"without higher precision - regardless of scan type, int parsing should still work\", func(t *testing.T) {\n\t\t\t\t\t\trows := dbt.mustQueryContext(context.Background(), \"SELECT 1, 300::NUMBER(15, 0), 600::NUMBER(18, 0), 700::NUMBER(19, 0), 900::NUMBER(38, 0), 123456789012345678901234567890\")\n\t\t\t\t\t\tdefer rows.Close()\n\t\t\t\t\t\trows.mustNext()\n\t\t\t\t\t\tvar i1, i2, i3, i4, i5 int64\n\t\t\t\t\t\tvar i6 string\n\t\t\t\t\t\trows.mustScan(&i1, &i2, &i3, &i4, &i5, &i6)\n\t\t\t\t\t\tassertEqualE(t, i1, int64(1))\n\t\t\t\t\t\tassertEqualE(t, i2, int64(300))\n\t\t\t\t\t\tassertEqualE(t, i3, int64(600))\n\t\t\t\t\t\tassertEqualE(t, i4, int64(700))\n\t\t\t\t\t\tassertEqualE(t, i5, int64(900))\n\t\t\t\t\t\tassertEqualE(t, i6, \"123456789012345678901234567890\") // pragma: allowlist secret\n\n\t\t\t\t\t\ttypes, err := rows.ColumnTypes()\n\t\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\t\tassertEqualE(t, types[0].ScanType(), reflect.TypeFor[int64]())\n\t\t\t\t\t\tassertEqualE(t, types[1].ScanType(), reflect.TypeFor[int64]())\n\t\t\t\t\t\tassertEqualE(t, types[2].ScanType(), reflect.TypeFor[int64]())\n\t\t\t\t\t\tassertEqualE(t, types[3].ScanType(), reflect.TypeFor[string]())\n\t\t\t\t\t\tassertEqualE(t, types[4].ScanType(), reflect.TypeFor[string]())\n\t\t\t\t\t\tassertEqualE(t, types[5].ScanType(), reflect.TypeFor[string]())\n\t\t\t\t\t})\n\n\t\t\t\t\tt.Run(\"with higher precision\", func(t *testing.T) {\n\t\t\t\t\t\trows := 
dbt.mustQueryContext(WithHigherPrecision(context.Background()), \"SELECT 1::NUMBER(1, 0), 300::NUMBER(15, 0), 600::NUMBER(19, 0), 700::NUMBER(20, 0), 900::NUMBER(38, 0), 123456789012345678901234567890\")\n\t\t\t\t\t\tdefer rows.Close()\n\t\t\t\t\t\trows.mustNext()\n\t\t\t\t\t\tvar i1, i2 int64\n\t\t\t\t\t\tvar i3, i4, i5, i6 *big.Int\n\t\t\t\t\t\trows.mustScan(&i1, &i2, &i3, &i4, &i5, &i6)\n\t\t\t\t\t\tassertEqualE(t, i1, int64(1))\n\t\t\t\t\t\tassertEqualE(t, i2, int64(300))\n\t\t\t\t\t\tassertEqualE(t, i3.Cmp(big.NewInt(600)), 0)\n\t\t\t\t\t\tassertEqualE(t, i4.Cmp(big.NewInt(700)), 0)\n\t\t\t\t\t\tassertEqualE(t, i5.Cmp(big.NewInt(900)), 0)\n\t\t\t\t\t\tbigInt123456789012345678901234567890 := &big.Int{}\n\t\t\t\t\t\tbigInt123456789012345678901234567890.SetString(\"123456789012345678901234567890\", 10) // pragma: allowlist secret\n\t\t\t\t\t\tassertEqualE(t, i6.Cmp(bigInt123456789012345678901234567890), 0)\n\n\t\t\t\t\t\ttypes, err := rows.ColumnTypes()\n\t\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\t\tassertEqualE(t, types[0].ScanType(), reflect.TypeFor[int64]())\n\t\t\t\t\t\tassertEqualE(t, types[1].ScanType(), reflect.TypeFor[int64]())\n\t\t\t\t\t\tassertEqualE(t, types[2].ScanType(), reflect.TypeFor[*big.Int]())\n\t\t\t\t\t\tassertEqualE(t, types[3].ScanType(), reflect.TypeFor[*big.Int]())\n\t\t\t\t\t\tassertEqualE(t, types[4].ScanType(), reflect.TypeFor[*big.Int]())\n\t\t\t\t\t\tassertEqualE(t, types[5].ScanType(), reflect.TypeFor[*big.Int]())\n\t\t\t\t\t})\n\t\t\t\t})\n\n\t\t\t\tt.Run(\"scale != 0\", func(t *testing.T) {\n\t\t\t\t\tt.Run(\"without higher precision\", func(t *testing.T) {\n\t\t\t\t\t\trows := dbt.mustQueryContext(context.Background(), \"SELECT 1.5, 300.5::NUMBER(15, 1), 600.5::NUMBER(18, 1), 700.5::NUMBER(19, 1), 900.5::NUMBER(38, 1), 123456789012345678901234567890.5\")\n\t\t\t\t\t\tdefer rows.Close()\n\t\t\t\t\t\trows.mustNext()\n\t\t\t\t\t\tvar i1, i2, i3, i4, i5, i6 float64\n\t\t\t\t\t\trows.mustScan(&i1, &i2, &i3, &i4, &i5, 
&i6)\n\t\t\t\t\t\tassertEqualE(t, i1, 1.5)\n\t\t\t\t\t\tassertEqualE(t, i2, 300.5)\n\t\t\t\t\t\tassertEqualE(t, i3, 600.5)\n\t\t\t\t\t\tassertEqualE(t, i4, 700.5)\n\t\t\t\t\t\tassertEqualE(t, i5, 900.5)\n\t\t\t\t\t\tassertEqualE(t, i6, 123456789012345678901234567890.5)\n\n\t\t\t\t\t\ttypes, err := rows.ColumnTypes()\n\t\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\t\tassertEqualE(t, types[0].ScanType(), reflect.TypeFor[float64]())\n\t\t\t\t\t\tassertEqualE(t, types[1].ScanType(), reflect.TypeFor[float64]())\n\t\t\t\t\t\tassertEqualE(t, types[2].ScanType(), reflect.TypeFor[float64]())\n\t\t\t\t\t\tassertEqualE(t, types[3].ScanType(), reflect.TypeFor[float64]())\n\t\t\t\t\t\tassertEqualE(t, types[4].ScanType(), reflect.TypeFor[float64]())\n\t\t\t\t\t\tassertEqualE(t, types[5].ScanType(), reflect.TypeFor[float64]())\n\t\t\t\t\t})\n\n\t\t\t\t\tt.Run(\"with higher precision\", func(t *testing.T) {\n\t\t\t\t\t\trows := dbt.mustQueryContext(WithHigherPrecision(context.Background()), \"SELECT 1.5, 300.5::NUMBER(15, 1), 600.5::NUMBER(18, 1), 700.5::NUMBER(19, 1), 900.5::NUMBER(38, 1), 123456789012345678901234567890.5\")\n\t\t\t\t\t\tdefer rows.Close()\n\t\t\t\t\t\trows.mustNext()\n\t\t\t\t\t\tvar i1, i2, i3, i4, i5, i6 *big.Float\n\t\t\t\t\t\trows.mustScan(&i1, &i2, &i3, &i4, &i5, &i6)\n\t\t\t\t\t\tassertEqualE(t, i1.Cmp(big.NewFloat(1.5)), 0)\n\t\t\t\t\t\tassertEqualE(t, i2.Cmp(big.NewFloat(300.5)), 0)\n\t\t\t\t\t\tassertEqualE(t, i3.Cmp(big.NewFloat(600.5)), 0)\n\t\t\t\t\t\tassertEqualE(t, i4.Cmp(big.NewFloat(700.5)), 0)\n\t\t\t\t\t\tassertEqualE(t, i5.Cmp(big.NewFloat(900.5)), 0)\n\t\t\t\t\t\tbigInt123456789012345678901234567890, _, err := big.ParseFloat(\"123456789012345678901234567890.5\", 10, numberMaxPrecisionInBits, big.AwayFromZero)\n\t\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\t\tassertEqualE(t, i6.Cmp(bigInt123456789012345678901234567890), 0)\n\n\t\t\t\t\t\ttypes, err := rows.ColumnTypes()\n\t\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\t\tassertEqualE(t, types[0].ScanType(), 
reflect.TypeFor[*big.Float]())\n\t\t\t\t\t\tassertEqualE(t, types[1].ScanType(), reflect.TypeFor[*big.Float]())\n\t\t\t\t\t\tassertEqualE(t, types[2].ScanType(), reflect.TypeFor[*big.Float]())\n\t\t\t\t\t\tassertEqualE(t, types[3].ScanType(), reflect.TypeFor[*big.Float]())\n\t\t\t\t\t\tassertEqualE(t, types[4].ScanType(), reflect.TypeFor[*big.Float]())\n\t\t\t\t\t\tassertEqualE(t, types[5].ScanType(), reflect.TypeFor[*big.Float]())\n\t\t\t\t\t})\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t}\n}\n\nfunc mustArray(v any, typ ...any) driver.Value {\n\tarray, err := Array(v, typ...)\n\tif err != nil {\n\t\tpanic(fmt.Sprintf(\"failed to convert to array: %v\", err))\n\t}\n\treturn array\n}\n"
  },
  {
    "path": "crl.go",
    "content": "package gosnowflake\n\nimport (\n\t\"crypto/x509\"\n\t\"encoding/asn1\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"slices\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n)\n\nconst snowflakeCrlCacheValidityTimeEnv = \"SNOWFLAKE_CRL_CACHE_VALIDITY_TIME\"\n\nvar idpOID = asn1.ObjectIdentifier{2, 5, 29, 28}\n\ntype distributionPointName struct {\n\tFullName []asn1.RawValue `asn1:\"optional,tag:0\"`\n}\n\ntype issuingDistributionPoint struct {\n\tDistributionPoint distributionPointName `asn1:\"optional,tag:0\"`\n}\n\ntype crlValidator struct {\n\tcertRevocationCheckMode        CertRevocationCheckMode\n\tallowCertificatesWithoutCrlURL bool\n\tinMemoryCacheDisabled          bool\n\tonDiskCacheDisabled            bool\n\tcrlDownloadMaxSize             int\n\thttpClient                     *http.Client\n\ttelemetry                      *snowflakeTelemetry\n}\n\ntype crlCacheCleanerType struct {\n\tmu                      sync.Mutex\n\tcacheValidityTime       time.Duration\n\tonDiskCacheRemovalDelay time.Duration\n\tonDiskCacheDir          string\n\tcleanupStopChan         chan struct{}\n\tcleanupDoneChan         chan struct{}\n}\n\ntype crlInMemoryCacheValueType struct {\n\tcrl          *x509.RevocationList\n\tdownloadTime *time.Time\n}\n\nvar (\n\tcrlCacheCleanerTickRate = time.Hour\n\tcrlInMemoryCache        = make(map[string]*crlInMemoryCacheValueType)\n\tcrlInMemoryCacheMutex   = &sync.Mutex{}\n\tcrlURLMutexes           = make(map[string]*sync.Mutex)\n\tcrlCacheCleanerMu       = &sync.Mutex{}\n\tcrlCacheCleaner         *crlCacheCleanerType\n)\n\nfunc newCrlValidator(certRevocationCheckMode CertRevocationCheckMode, allowCertificatesWithoutCrlURL bool, inMemoryCacheDisabled, onDiskCacheDisabled bool, crlDownloadMaxSize int, httpClient *http.Client, telemetry *snowflakeTelemetry) (*crlValidator, error) 
{\n\tinitCrlCacheCleaner()\n\tcv := &crlValidator{\n\t\tcertRevocationCheckMode:        certRevocationCheckMode,\n\t\tallowCertificatesWithoutCrlURL: allowCertificatesWithoutCrlURL,\n\t\tinMemoryCacheDisabled:          inMemoryCacheDisabled,\n\t\tonDiskCacheDisabled:            onDiskCacheDisabled,\n\t\tcrlDownloadMaxSize:             crlDownloadMaxSize,\n\t\thttpClient:                     httpClient,\n\t\ttelemetry:                      telemetry,\n\t}\n\treturn cv, nil\n}\n\nfunc initCrlCacheCleaner() {\n\tcrlCacheCleanerMu.Lock()\n\tdefer crlCacheCleanerMu.Unlock()\n\tif crlCacheCleaner != nil {\n\t\treturn\n\t}\n\tvar err error\n\tvalidityTime := defaultCrlCacheValidityTime\n\tif validityTimeStr := os.Getenv(snowflakeCrlCacheValidityTimeEnv); validityTimeStr != \"\" {\n\t\tif validityTime, err = time.ParseDuration(validityTimeStr); err != nil {\n\t\t\tlogger.Infof(\"failed to parse %v: %v, using default value %v\", snowflakeCrlCacheValidityTimeEnv, err, defaultCrlCacheValidityTime)\n\t\t\tvalidityTime = defaultCrlCacheValidityTime\n\t\t}\n\t}\n\n\tonDiskCacheRemovalDelay := defaultCrlOnDiskCacheRemovalDelay\n\tif onDiskCacheRemovalDelayStr := os.Getenv(\"SNOWFLAKE_CRL_ON_DISK_CACHE_REMOVAL_DELAY\"); onDiskCacheRemovalDelayStr != \"\" {\n\t\tif onDiskCacheRemovalDelay, err = time.ParseDuration(onDiskCacheRemovalDelayStr); err != nil {\n\t\t\tlogger.Infof(\"failed to parse SNOWFLAKE_CRL_ON_DISK_CACHE_REMOVAL_DELAY: %v, using default value %v\", err, defaultCrlOnDiskCacheRemovalDelay)\n\t\t\tonDiskCacheRemovalDelay = defaultCrlOnDiskCacheRemovalDelay\n\t\t}\n\t}\n\n\tonDiskCacheDir := os.Getenv(\"SNOWFLAKE_CRL_ON_DISK_CACHE_DIR\")\n\tif onDiskCacheDir == \"\" {\n\t\tif onDiskCacheDir, err = defaultCrlOnDiskCacheDir(); err != nil {\n\t\t\tlogger.Infof(\"failed to get default CRL on-disk cache directory: %v\", err)\n\t\t\tonDiskCacheDir = \"\" // it will work only if on-disk cache is disabled\n\t\t}\n\t}\n\tif onDiskCacheDir != \"\" 
{\n\t\tif err = os.MkdirAll(onDiskCacheDir, 0755); err != nil {\n\t\t\tlogger.Errorf(\"error while preparing cache dir for CRLs: %v\", err)\n\t\t}\n\t}\n\n\tcrlCacheCleaner = &crlCacheCleanerType{\n\t\tcacheValidityTime:       validityTime,\n\t\tonDiskCacheRemovalDelay: onDiskCacheRemovalDelay,\n\t\tonDiskCacheDir:          onDiskCacheDir,\n\t\tcleanupStopChan:         nil,\n\t\tcleanupDoneChan:         nil,\n\t}\n}\n\n// CertRevocationCheckMode defines the modes for certificate revocation checks.\ntype CertRevocationCheckMode = sfconfig.CertRevocationCheckMode\n\nconst (\n\t// CertRevocationCheckDisabled means that certificate revocation checks are disabled.\n\tCertRevocationCheckDisabled = sfconfig.CertRevocationCheckDisabled\n\t// CertRevocationCheckAdvisory means that certificate revocation checks are advisory, and the driver will not fail if the checks end with an error (cannot verify revocation status).\n\t// The driver will fail only if a certificate is revoked.\n\tCertRevocationCheckAdvisory = sfconfig.CertRevocationCheckAdvisory\n\t// CertRevocationCheckEnabled means that every certificate revocation check must pass, otherwise the driver will fail.\n\tCertRevocationCheckEnabled = sfconfig.CertRevocationCheckEnabled\n)\n\ntype crlValidationResult int\n\nconst (\n\tcrlRevoked crlValidationResult = iota\n\tcrlUnrevoked\n\tcrlError\n)\n\ntype certValidationResult int\n\nconst (\n\tcertRevoked certValidationResult = iota\n\tcertUnrevoked\n\tcertError\n)\n\nconst (\n\tdefaultCrlHTTPClientTimeout       = 10 * time.Second\n\tdefaultCrlCacheValidityTime       = 24 * time.Hour\n\tdefaultCrlOnDiskCacheRemovalDelay = 7 * time.Hour\n\tdefaultCrlDownloadMaxSize         = 20 * 1024 * 1024 // 20 MB\n)\n\nfunc (cv *crlValidator) verifyPeerCertificates(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {\n\tif cv.certRevocationCheckMode == CertRevocationCheckDisabled {\n\t\tlogger.Debug(\"certificate revocation check is disabled, skipping CRL validation\")\n\t\treturn 
nil\n\t}\n\tcrlValidationResults := cv.validateChains(verifiedChains)\n\n\tallRevoked := true\n\tfor _, result := range crlValidationResults {\n\t\tif result == crlUnrevoked {\n\t\t\tlogger.Debug(\"found certificate chain with no revoked certificates\")\n\t\t\treturn nil\n\t\t}\n\t\tif result != crlRevoked {\n\t\t\tallRevoked = false\n\t\t}\n\t}\n\n\tif allRevoked {\n\t\treturn fmt.Errorf(\"every verified certificate chain contained revoked certificates\")\n\t}\n\n\tlogger.Warn(\"some certificate chains didn't pass the checks, or the driver wasn't able to perform them\")\n\tif cv.certRevocationCheckMode == CertRevocationCheckAdvisory {\n\t\tlogger.Warn(\"certificate revocation check is set to CERT_REVOCATION_CHECK_ADVISORY, so assuming that certificates are not revoked\")\n\t\treturn nil\n\t}\n\treturn fmt.Errorf(\"certificate revocation check failed\")\n}\n\nfunc (cv *crlValidator) validateChains(chains [][]*x509.Certificate) []crlValidationResult {\n\tcrlValidationResults := make([]crlValidationResult, len(chains))\n\tfor i, chain := range chains {\n\t\tcrlValidationResults[i] = crlUnrevoked\n\t\tvar chainStr strings.Builder\n\t\tfor _, cert := range chain {\n\t\t\tfmt.Fprintf(&chainStr, \"%v -> \", cert.Subject)\n\t\t}\n\t\tlogger.Debugf(\"validating certificate chain %d: %s\", i, chainStr.String())\n\t\tfor j, cert := range chain {\n\t\t\tif j == len(chain)-1 {\n\t\t\t\tlogger.Debugf(\"skipping root certificate %v for CRL validation\", cert.Subject)\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tif isShortLivedCertificate(cert) {\n\t\t\t\tlogger.Debugf(\"certificate %v is short-lived, skipping CRL validation\", cert.Subject)\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tif len(cert.CRLDistributionPoints) == 0 {\n\t\t\t\tif cv.allowCertificatesWithoutCrlURL {\n\t\t\t\t\tlogger.Debugf(\"certificate %v has no CRL distribution points, skipping CRL validation\", cert.Subject)\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tlogger.Warnf(\"certificate %v has no CRL distribution points, skipping CRL 
validation, but marking as error\", cert.Subject)\n\t\t\t\tcrlValidationResults[i] = crlError\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tcertStatus := cv.validateCertificate(cert, chain[j+1])\n\t\t\tif certStatus == certRevoked {\n\t\t\t\tcrlValidationResults[i] = crlRevoked\n\t\t\t\tbreak\n\t\t\t}\n\n\t\t\tif certStatus == certError {\n\t\t\t\tcrlValidationResults[i] = crlError\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\n\t\tif crlValidationResults[i] == crlUnrevoked {\n\t\t\tlogger.Debugf(\"certificate chain %d is unrevoked, skipping remaining chains\", i)\n\t\t\tbreak\n\t\t}\n\t}\n\n\treturn crlValidationResults\n}\n\nfunc (cv *crlValidator) validateCertificate(cert *x509.Certificate, parent *x509.Certificate) certValidationResult {\n\tvar results []certValidationResult\n\tfor _, crlURL := range cert.CRLDistributionPoints {\n\t\tresult := cv.validateCrlAgainstCrlURL(cert, crlURL, parent)\n\t\tif result == certRevoked {\n\t\t\treturn result\n\t\t}\n\t\tresults = append(results, result)\n\t}\n\tif slices.Contains(results, certError) {\n\t\treturn certError\n\t}\n\treturn certUnrevoked\n}\n\nfunc (cv *crlValidator) validateCrlAgainstCrlURL(cert *x509.Certificate, crlURL string, parent *x509.Certificate) certValidationResult {\n\tnow := time.Now()\n\n\tmu := cv.getOrCreateMutex(crlURL)\n\tmu.Lock()\n\tdefer mu.Unlock()\n\n\tcrl, downloadTime := cv.getFromCache(crlURL)\n\tneedsFreshCrl := crl == nil || crl.NextUpdate.Before(now) || downloadTime.Add(crlCacheCleaner.cacheValidityTime).Before(now)\n\tshouldUpdateCrl := false\n\n\tif needsFreshCrl {\n\t\tnewCrl, newDownloadTime, err := cv.downloadCrl(crlURL)\n\t\tif err != nil {\n\t\t\tlogger.Warnf(\"failed to download CRL from %v: %v\", crlURL, err)\n\t\t}\n\t\tif newCrl != nil && newCrl.NextUpdate.Before(now) {\n\t\t\tlogger.Warnf(\"downloaded CRL from %v is already expired (next update at %v)\", crlURL, newCrl.NextUpdate)\n\t\t\tnewCrl = nil\n\t\t\tif crl == nil {\n\t\t\t\treturn certError\n\t\t\t}\n\t\t}\n\t\tshouldUpdateCrl = 
newCrl != nil && (crl == nil || newCrl.ThisUpdate.After(crl.ThisUpdate))\n\t\tif shouldUpdateCrl {\n\t\t\tlogger.Debugf(\"Found updated CRL for %v\", crlURL)\n\t\t\tcrl = newCrl\n\t\t\tdownloadTime = newDownloadTime\n\t\t} else {\n\t\t\tif crl != nil && crl.NextUpdate.After(now) {\n\t\t\t\tlogger.Debugf(\"CRL for %v is up-to-date, using cached version\", crlURL)\n\t\t\t} else {\n\t\t\t\tlogger.Warnf(\"CRL for %v is not available or outdated\", crlURL)\n\t\t\t\treturn certError\n\t\t\t}\n\t\t}\n\t}\n\n\tlogger.Debugf(\"CRL has %v entries, next update at %v\", len(crl.RevokedCertificateEntries), crl.NextUpdate)\n\tif err := cv.validateCrl(crl, parent, crlURL); err != nil {\n\t\treturn certError\n\t}\n\n\tif shouldUpdateCrl {\n\t\tlogger.Debugf(\"CRL for %v is valid, updating cache\", crlURL)\n\t\tcv.updateCache(crlURL, crl, downloadTime)\n\t}\n\n\tfor _, rce := range crl.RevokedCertificateEntries {\n\t\tif cert.SerialNumber.Cmp(rce.SerialNumber) == 0 {\n\t\t\tlogger.Warnf(\"certificate for %v (serial number %v) has been revoked at %v, reason: %v\", cert.Subject, rce.SerialNumber, rce.RevocationTime, rce.ReasonCode)\n\t\t\treturn certRevoked\n\t\t}\n\t}\n\n\treturn certUnrevoked\n}\n\nfunc (cv *crlValidator) validateCrl(crl *x509.RevocationList, parent *x509.Certificate, crlURL string) error {\n\tif crl.Issuer.String() != parent.Subject.String() {\n\t\terr := fmt.Errorf(\"CRL issuer %v does not match parent certificate subject %v for %v\", crl.Issuer, parent.Subject, crlURL)\n\t\tlogger.Warn(err.Error())\n\t\treturn err\n\t}\n\tif err := crl.CheckSignatureFrom(parent); err != nil {\n\t\tlogger.Warnf(\"CRL signature verification failed for %v: %v\", crlURL, err)\n\t\treturn err\n\t}\n\tif err := cv.verifyAgainstIdpExtension(crl, crlURL); err != nil {\n\t\tlogger.Warnf(\"CRL IDP extension verification failed for %v: %v\", crlURL, err)\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc (cv *crlValidator) getFromCache(crlURL string) (*x509.RevocationList, *time.Time) {\n\tif 
cv.inMemoryCacheDisabled {\n\t\tlogger.Debugf(\"in-memory cache is disabled\")\n\t} else {\n\t\tcrlInMemoryCacheMutex.Lock()\n\t\tcacheValue, exists := crlInMemoryCache[crlURL]\n\t\tcrlInMemoryCacheMutex.Unlock()\n\t\tif exists {\n\t\t\tlogger.Debugf(\"found CRL in cache for %v\", crlURL)\n\t\t\treturn cacheValue.crl, cacheValue.downloadTime\n\t\t}\n\t}\n\tif cv.onDiskCacheDisabled {\n\t\tlogger.Debugf(\"CRL cache is disabled, not checking disk for %v\", crlURL)\n\t\treturn nil, nil\n\t}\n\tcrlFilePath := cv.crlURLToPath(crlURL)\n\tfileHandle, err := os.Open(crlFilePath)\n\tif err != nil {\n\t\tlogger.Debugf(\"cannot open CRL from disk for %v (%v): %v\", crlURL, crlFilePath, err)\n\t\treturn nil, nil\n\t}\n\tdefer func() {\n\t\tif err := fileHandle.Close(); err != nil {\n\t\t\tlogger.Warnf(\"failed to close CRL file handle for %v (%v): %v\", crlURL, crlFilePath, err)\n\t\t}\n\t}()\n\tstat, err := fileHandle.Stat()\n\tif err != nil {\n\t\tlogger.Debugf(\"cannot stat CRL file for %v (%v): %v\", crlURL, crlFilePath, err)\n\t\treturn nil, nil\n\t}\n\tcrlBytes, err := io.ReadAll(fileHandle)\n\tif err != nil {\n\t\tlogger.Debugf(\"cannot read CRL from disk for %v (%v): %v\", crlURL, crlFilePath, err)\n\t\treturn nil, nil\n\t}\n\tcrl, err := x509.ParseRevocationList(crlBytes)\n\tif err != nil {\n\t\tlogger.Warnf(\"cannot parse CRL from disk for %v (%v): %v\", crlURL, crlFilePath, err)\n\t\treturn nil, nil\n\t}\n\tmodTime := stat.ModTime()\n\n\tif !cv.inMemoryCacheDisabled {\n\t\t// promote CRL to in-memory cache\n\t\tcrlInMemoryCacheMutex.Lock()\n\t\tcrlInMemoryCache[crlURL] = &crlInMemoryCacheValueType{\n\t\t\tcrl: crl,\n\t\t\t// modTime is not the exact time the CRL was downloaded, but rather the last modification time of the file\n\t\t\t// still, it is good enough for our purposes\n\t\t\tdownloadTime: &modTime,\n\t\t}\n\t\tcrlInMemoryCacheMutex.Unlock()\n\t}\n\treturn crl, &modTime\n}\n\nfunc (cv *crlValidator) updateCache(crlURL string, crl *x509.RevocationList, 
downloadTime *time.Time) {\n\tif cv.inMemoryCacheDisabled {\n\t\tlogger.Debugf(\"in-memory cache is disabled, not updating\")\n\t} else {\n\t\tcrlInMemoryCacheMutex.Lock()\n\t\tcrlInMemoryCache[crlURL] = &crlInMemoryCacheValueType{\n\t\t\tcrl:          crl,\n\t\t\tdownloadTime: downloadTime,\n\t\t}\n\t\tcrlInMemoryCacheMutex.Unlock()\n\t}\n\tif cv.onDiskCacheDisabled {\n\t\tlogger.Debugf(\"CRL cache is disabled, not writing to disk for %v\", crlURL)\n\t\treturn\n\t}\n\tcrlFilePath := cv.crlURLToPath(crlURL)\n\tcrlDirPath := filepath.Dir(crlFilePath)\n\tcrlDirParentPath := filepath.Dir(crlDirPath)\n\tif err := os.MkdirAll(crlDirParentPath, 0755); err != nil {\n\t\tlogger.Warnf(\"failed to create directory for CRL file %v: %v\", crlFilePath, err)\n\t\treturn\n\t}\n\tif err := os.Mkdir(crlDirPath, 0700); err != nil {\n\t\tif !errors.Is(err, os.ErrExist) {\n\t\t\tlogger.Warnf(\"failed to create directory for CRL file %v: %v\", crlFilePath, err)\n\t\t\treturn\n\t\t}\n\t\tif err = os.Chmod(crlDirPath, 0700); err != nil {\n\t\t\tlogger.Warnf(\"failed to chmod existing directory for CRL file %v: %v\", crlFilePath, err)\n\t\t\treturn\n\t\t}\n\t}\n\tif err := os.WriteFile(crlFilePath, crl.Raw, 0600); err != nil {\n\t\tlogger.Warnf(\"failed to write CRL to disk for %v (%v): %v\", crlURL, crlFilePath, err)\n\t}\n}\n\nfunc (cv *crlValidator) downloadCrl(crlURL string) (*x509.RevocationList, *time.Time, error) {\n\ttelemetryEvent := &telemetryData{\n\t\tTimestamp: time.Now().UnixNano() / int64(time.Millisecond),\n\t\tMessage: map[string]string{\n\t\t\t\"type\":    \"client_crl_stats\",\n\t\t\t\"crl_url\": crlURL,\n\t\t},\n\t}\n\tdefer func() {\n\t\tif err := cv.telemetry.addLog(telemetryEvent); err != nil {\n\t\t\tlogger.Warnf(\"failed to add telemetry log for CRL download: %v\", err)\n\t\t}\n\t}()\n\tlogger.Debugf(\"downloading CRL from %v\", crlURL)\n\tnow := time.Now()\n\tresp, err := cv.httpClient.Get(crlURL)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\tdefer func() 
{\n\t\tif err = resp.Body.Close(); err != nil {\n\t\t\tlogger.Warnf(\"failed to close response body for CRL downloaded from %v: %v\", crlURL, err)\n\t\t}\n\t}()\n\tif resp.StatusCode >= 400 {\n\t\treturn nil, nil, fmt.Errorf(\"failed to download CRL from %v, status code: %v\", crlURL, resp.StatusCode)\n\t}\n\tmaxSize := resp.ContentLength\n\tif maxSize <= 0 || maxSize > int64(cv.crlDownloadMaxSize) {\n\t\tmaxSize = int64(cv.crlDownloadMaxSize)\n\t}\n\tcrlBytes, err := io.ReadAll(io.LimitReader(resp.Body, maxSize))\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\tif cv.crlDownloadMaxSize > 0 && len(crlBytes) >= cv.crlDownloadMaxSize {\n\t\treturn nil, nil, fmt.Errorf(\"CRL from %v exceeds maximum size of %d bytes\", crlURL, cv.crlDownloadMaxSize)\n\t}\n\ttelemetryEvent.Message[\"crl_bytes\"] = fmt.Sprintf(\"%d\", len(crlBytes))\n\tdownloadTime := time.Since(now)\n\ttelemetryEvent.Message[\"crl_download_time_ms\"] = fmt.Sprintf(\"%d\", downloadTime.Milliseconds())\n\tlogger.Debugf(\"downloaded %v bytes for CRL %v\", len(crlBytes), crlURL)\n\ttimeBeforeParsing := time.Now()\n\tcrl, err := x509.ParseRevocationList(crlBytes)\n\tlogger.Debugf(\"parsed CRL from %v, error: %v\", crlURL, err)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\tlogger.Debugf(\"parsed CRL from %v, next update at %v\", crlURL, crl.NextUpdate)\n\ttelemetryEvent.Message[\"crl_parse_time_ms\"] = fmt.Sprintf(\"%d\", time.Since(timeBeforeParsing).Milliseconds())\n\ttelemetryEvent.Message[\"crl_revoked_certificates\"] = fmt.Sprintf(\"%d\", len(crl.RevokedCertificateEntries))\n\treturn crl, &now, err\n}\n\nfunc (cv *crlValidator) crlURLToPath(crlURL string) string {\n\t// Convert the CRL URL into a single filesystem-safe file name under the cache dir by query-escaping it\n\treturn filepath.Join(crlCacheCleaner.onDiskCacheDir, url.QueryEscape(crlURL))\n}\n\nfunc (cv *crlValidator) verifyAgainstIdpExtension(crl *x509.RevocationList, distributionPoint string) error {\n\tfor _, ext := range append(crl.Extensions, 
crl.ExtraExtensions...) {\n\t\tif ext.Id.Equal(idpOID) {\n\t\t\tvar idp issuingDistributionPoint\n\t\t\t_, err := asn1.Unmarshal(ext.Value, &idp)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"failed to unmarshal IDP extension: %w\", err)\n\t\t\t}\n\t\t\tfor _, dp := range idp.DistributionPoint.FullName {\n\t\t\t\tif string(dp.Bytes) == distributionPoint {\n\t\t\t\t\tlogger.Debugf(\"distribution point %v matches CRL IDP extension\", distributionPoint)\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn fmt.Errorf(\"distribution point %v not found in CRL IDP extension\", distributionPoint)\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (cv *crlValidator) getOrCreateMutex(crlURL string) *sync.Mutex {\n\tcrlInMemoryCacheMutex.Lock()\n\tmu, ok := crlURLMutexes[crlURL]\n\tif !ok {\n\t\tmu = &sync.Mutex{}\n\t\tcrlURLMutexes[crlURL] = mu\n\t}\n\tcrlInMemoryCacheMutex.Unlock()\n\treturn mu\n}\n\nfunc isShortLivedCertificate(cert *x509.Certificate) bool {\n\t// https://cabforum.org/working-groups/server/baseline-requirements/requirements/\n\t// See Short-lived Subscriber Certificate section\n\tif cert.NotBefore.Before(time.Date(2024, time.March, 15, 0, 0, 0, 0, time.UTC)) {\n\t\t// Certificates issued before March 15, 2024 are not considered short-lived\n\t\treturn false\n\t}\n\tmaximumValidityPeriod := 7 * 24 * time.Hour\n\tif cert.NotBefore.Before(time.Date(2026, time.March, 15, 0, 0, 0, 0, time.UTC)) {\n\t\tmaximumValidityPeriod = 10 * 24 * time.Hour\n\t}\n\tmaximumValidityPeriod += time.Minute // Fix inclusion start and end time\n\tcertValidityPeriod := cert.NotAfter.Sub(cert.NotBefore)\n\treturn maximumValidityPeriod > certValidityPeriod\n}\n\nfunc (ccc *crlCacheCleanerType) startPeriodicCacheCleanup() {\n\tccc.mu.Lock()\n\tdefer ccc.mu.Unlock()\n\tif ccc.cleanupStopChan != nil {\n\t\tlogger.Debug(\"CRL cache cleaner is already running, not starting again\")\n\t\treturn\n\t}\n\tlogger.Debugf(\"starting periodic CRL cache cleanup with tick rate %v\", 
crlCacheCleanerTickRate)\n\tccc.cleanupStopChan = make(chan struct{})\n\tccc.cleanupDoneChan = make(chan struct{})\n\tgo func() {\n\t\tticker := time.NewTicker(crlCacheCleanerTickRate)\n\t\tdefer ticker.Stop()\n\t\tfor {\n\t\t\tselect {\n\t\t\tcase <-ticker.C:\n\t\t\t\tccc.cleanupInMemoryCache()\n\t\t\t\tccc.cleanupOnDiskCache()\n\t\t\tcase <-ccc.cleanupStopChan:\n\t\t\t\tclose(ccc.cleanupDoneChan)\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t}()\n}\n\nfunc (ccc *crlCacheCleanerType) stopPeriodicCacheCleanup() {\n\tccc.mu.Lock()\n\tdefer ccc.mu.Unlock()\n\tlogger.Debug(\"stopping periodic CRL cache cleanup\")\n\tif ccc.cleanupStopChan != nil {\n\t\tclose(ccc.cleanupStopChan)\n\t\t<-ccc.cleanupDoneChan\n\t\tccc.cleanupStopChan = nil\n\t\tccc.cleanupDoneChan = nil\n\t} else {\n\t\tlogger.Debugf(\"CRL cache cleaner was not running, nothing to stop\")\n\t}\n}\n\nfunc (ccc *crlCacheCleanerType) cleanupInMemoryCache() {\n\tnow := time.Now()\n\tlogger.Debugf(\"cleaning up in-memory CRL cache at %v\", now)\n\tcrlInMemoryCacheMutex.Lock()\n\tfor k, v := range crlInMemoryCache {\n\t\texpired := v.crl.NextUpdate.Before(now)\n\t\tevicted := v.downloadTime.Add(ccc.cacheValidityTime).Before(now)\n\t\tlogger.Debugf(\"testing CRL for %v (nextUpdate=%v, downloadTime=%v) from in-memory cache (expired: %v, evicted: %v)\", k, v.crl.NextUpdate, v.downloadTime, expired, evicted)\n\t\tif expired || evicted {\n\t\t\tdelete(crlInMemoryCache, k)\n\t\t}\n\t}\n\tcrlInMemoryCacheMutex.Unlock()\n}\n\nfunc (ccc *crlCacheCleanerType) cleanupOnDiskCache() {\n\tnow := time.Now()\n\tlogger.Debugf(\"cleaning up on-disk CRL cache at %v\", now)\n\tentries, err := os.ReadDir(ccc.onDiskCacheDir)\n\tif err != nil {\n\t\tlogger.Warnf(\"failed to read CRL cache dir: %v\", err)\n\t\treturn\n\t}\n\tfor _, entry := range entries {\n\t\tif !entry.Type().IsRegular() {\n\t\t\tcontinue\n\t\t}\n\t\tpath := filepath.Join(ccc.onDiskCacheDir, entry.Name())\n\t\tcrlBytes, err := os.ReadFile(path)\n\t\tif err != nil 
{\n\t\t\tlogger.Warnf(\"failed to read CRL file %v: %v\", path, err)\n\t\t\tcontinue\n\t\t}\n\t\tcrl, err := x509.ParseRevocationList(crlBytes)\n\t\tif err != nil {\n\t\t\tlogger.Warnf(\"failed to parse CRL file %v: %v\", path, err)\n\t\t\tcontinue\n\t\t}\n\t\tif crl.NextUpdate.Add(ccc.onDiskCacheRemovalDelay).Before(now) {\n\t\t\tlogger.Debugf(\"CRL file %v is expired, removing\", path)\n\t\t\tif err := os.Remove(path); err != nil {\n\t\t\t\tlogger.Warnf(\"failed to remove expired CRL file %v: %v\", path, err)\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc defaultCrlOnDiskCacheDir() (string, error) {\n\tswitch runtime.GOOS {\n\tcase \"windows\":\n\t\treturn filepath.Join(os.Getenv(\"USERPROFILE\"), \"AppData\", \"Local\", \"Snowflake\", \"Caches\", \"crls\"), nil\n\tcase \"darwin\":\n\t\thome := os.Getenv(\"HOME\")\n\t\tif home == \"\" {\n\t\t\treturn \"\", errors.New(\"HOME is blank\")\n\t\t}\n\t\treturn filepath.Join(home, \"Library\", \"Caches\", \"Snowflake\", \"crls\"), nil\n\tdefault:\n\t\thome := os.Getenv(\"HOME\")\n\t\tif home == \"\" {\n\t\t\treturn \"\", errors.New(\"HOME is blank\")\n\t\t}\n\t\treturn filepath.Join(home, \".cache\", \"snowflake\", \"crls\"), nil\n\t}\n}\n"
  },
  {
    "path": "crl_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"cmp\"\n\t\"context\"\n\t\"crypto/rand\"\n\t\"crypto/rsa\"\n\t\"crypto/sha256\"\n\t\"crypto/x509\"\n\t\"crypto/x509/pkix\"\n\t\"database/sql\"\n\t\"encoding/asn1\"\n\t\"encoding/base64\"\n\t\"fmt\"\n\t\"math/big\"\n\t\"net\"\n\t\"net/http\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n)\n\nvar serialNumber = int64(0) // to be incremented\n\ntype allowCertificatesWithoutCrlURLType bool\ntype inMemoryCacheDisabledType bool\ntype onDiskCacheDisabledType bool\ntype downloadMaxSizeType int\n\ntype notAfterType time.Time\ntype crlEndpointType string\n\ntype revokedCert *x509.Certificate\n\ntype thisUpdateType time.Time\ntype nextUpdateType time.Time\n\nfunc newTestCrlValidator(t *testing.T, checkMode CertRevocationCheckMode, args ...any) *crlValidator {\n\thttpClient := &http.Client{}\n\tallowCertificatesWithoutCrlURL := false\n\tinMemoryCacheDisabled := false\n\tonDiskCacheDisabled := false\n\tdownloadMaxSize := defaultCrlDownloadMaxSize\n\ttelemetry := &snowflakeTelemetry{}\n\tfor _, arg := range args {\n\t\tswitch v := arg.(type) {\n\t\tcase *http.Client:\n\t\t\thttpClient = v\n\t\tcase allowCertificatesWithoutCrlURLType:\n\t\t\tallowCertificatesWithoutCrlURL = bool(v)\n\t\tcase inMemoryCacheDisabledType:\n\t\t\tinMemoryCacheDisabled = bool(v)\n\t\tcase onDiskCacheDisabledType:\n\t\t\tonDiskCacheDisabled = bool(v)\n\t\tcase downloadMaxSizeType:\n\t\t\tdownloadMaxSize = int(v)\n\t\tcase *snowflakeTelemetry:\n\t\t\ttelemetry = v\n\t\tdefault:\n\t\t\tt.Fatalf(\"unexpected argument type %T\", v)\n\t\t}\n\t}\n\tcv, err := newCrlValidator(checkMode, allowCertificatesWithoutCrlURL, inMemoryCacheDisabled, onDiskCacheDisabled, downloadMaxSize, httpClient, telemetry)\n\tassertNilF(t, err)\n\treturn cv\n}\n\nfunc TestCrlCheckModeDisabledNoHttpCall(t *testing.T) {\n\tcaKey, caCert := createCa(t, nil, nil, \"root CA\", 0)\n\t_, leafCert := createLeafCert(t, caCert, caKey, 0, 
crlEndpointType(\"/rootCrl\"))\n\tcrt := &countingRoundTripper{}\n\tcv := newTestCrlValidator(t, CertRevocationCheckDisabled, &http.Client{Transport: crt})\n\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\tassertNilE(t, err)\n\tassertEqualE(t, crt.totalRequests(), 0, \"no HTTP request should be made when check mode is disabled\")\n}\n\nfunc TestCrlModes(t *testing.T) {\n\tfor _, checkMode := range []CertRevocationCheckMode{CertRevocationCheckEnabled, CertRevocationCheckAdvisory} {\n\t\tt.Run(fmt.Sprintf(\"checkMode=%v\", checkMode), func(t *testing.T) {\n\t\t\tt.Run(\"ShortLivedCertDoesNotNeedCRL\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode, allowCertificatesWithoutCrlURLType(false))\n\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", 0, \"\")\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, 0, \"\", notAfterType(time.Now().Add(4*24*time.Hour)))\n\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tassertNilE(t, err)\n\t\t\t})\n\n\t\t\tt.Run(\"LeafCertNotRevoked\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\tcrl := createCrl(t, caCert, caPrivateKey)\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tassertNilE(t, err)\n\t\t\t})\n\n\t\t\tt.Run(\"LeafCertRevoked\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, 
server)\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\tcrl := createCrl(t, caCert, caPrivateKey, revokedCert(leafCert))\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tassertNotNilF(t, err)\n\t\t\t\tassertEqualE(t, err.Error(), \"every verified certificate chain contained revoked certificates\")\n\t\t\t})\n\n\t\t\tt.Run(\"LeafOneCrlErrorAndOneNotRevoked\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/404\"), crlEndpointType(\"/rootCrl\"))\n\t\t\t\tcrl := createCrl(t, caCert, caPrivateKey)\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tswitch checkMode {\n\t\t\t\tcase CertRevocationCheckEnabled:\n\t\t\t\t\tassertNotNilF(t, err)\n\t\t\t\t\tassertEqualE(t, err.Error(), \"certificate revocation check failed\")\n\t\t\t\tcase CertRevocationCheckAdvisory:\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t}\n\t\t\t})\n\n\t\t\tt.Run(\"LeafOneCrlErrorAndOneRevoked\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/404\"), crlEndpointType(\"/rootCrl\"))\n\t\t\t\tcrl := createCrl(t, caCert, caPrivateKey, 
revokedCert(leafCert))\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tassertNotNilF(t, err)\n\t\t\t\tassertEqualE(t, err.Error(), \"every verified certificate chain contained revoked certificates\")\n\t\t\t})\n\n\t\t\tt.Run(\"LeafNotRevokedAndRootDoesNotProvideCrl\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\t\t\t\trootCaPrivateKey, rootCaCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\tintermediateCaKey, intermediateCaCert := createCa(t, rootCaCert, rootCaPrivateKey, \"intermediate CA\", port)\n\t\t\t\t_, leafCert := createLeafCert(t, intermediateCaCert, intermediateCaKey, port, crlEndpointType(\"/intermediateCrl\"))\n\t\t\t\tintermediateCrl := createCrl(t, intermediateCaCert, intermediateCaKey)\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/intermediateCrl\", intermediateCrl))\n\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, intermediateCaCert, rootCaCert}})\n\t\t\t\tif checkMode == CertRevocationCheckEnabled {\n\t\t\t\t\tassertEqualE(t, err.Error(), \"certificate revocation check failed\")\n\t\t\t\t} else {\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t}\n\t\t\t})\n\n\t\t\tt.Run(\"IntermediateRevokedAndLeafCrlUnavailable\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\t\t\t\trootCaPrivateKey, rootCaCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\tintermediateCaKey, intermediateCaCert := createCa(t, rootCaCert, rootCaPrivateKey, \"intermediate CA\", port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\t_, leafCert := createLeafCert(t, intermediateCaCert, intermediateCaKey, port, 
crlEndpointType(\"/intermediateCrl\"))\n\t\t\t\trootCrl := createCrl(t, rootCaCert, rootCaPrivateKey, revokedCert(intermediateCaCert))\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", rootCrl))\n\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, intermediateCaCert, rootCaCert}})\n\t\t\t\tassertEqualE(t, err.Error(), \"every verified certificate chain contained revoked certificates\")\n\t\t\t})\n\n\t\t\tt.Run(\"IntermediateRevokedAndLeafDoesNotProvideCrl\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\t\t\t\trootCaPrivateKey, rootCaCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\tintermediateCaKey, intermediateCaCert := createCa(t, rootCaCert, rootCaPrivateKey, \"intermediate CA\", port, \"/rootCrl\")\n\t\t\t\t_, leafCert := createLeafCert(t, intermediateCaCert, intermediateCaKey, port)\n\t\t\t\trootCrl := createCrl(t, rootCaCert, rootCaPrivateKey, revokedCert(intermediateCaCert))\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", rootCrl))\n\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, intermediateCaCert, rootCaCert}})\n\t\t\t\tassertEqualE(t, err.Error(), \"every verified certificate chain contained revoked certificates\")\n\t\t\t})\n\n\t\t\tt.Run(\"DownloadedCrlIsExpiredAndNoneValidExists\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\tcrl := createCrl(t, caCert, caPrivateKey, thisUpdateType(time.Now().Add(-2*time.Hour)), 
nextUpdateType(time.Now().Add(-1*time.Hour)))\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tif checkMode == CertRevocationCheckEnabled {\n\t\t\t\t\tassertNotNilF(t, err)\n\t\t\t\t\tassertStringContainsE(t, err.Error(), \"certificate revocation check failed\")\n\t\t\t\t} else {\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t}\n\t\t\t})\n\n\t\t\tt.Run(\"DownloadedCrlIsExpiredButTheValidExists\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\toldCrl := createCrl(t, caCert, caPrivateKey, thisUpdateType(time.Now().Add(-50*time.Hour)), nextUpdateType(time.Now().Add(48*time.Hour)))\n\t\t\t\tnewCrl := createCrl(t, caCert, caPrivateKey, thisUpdateType(time.Now().Add(-2*time.Hour)), nextUpdateType(time.Now().Add(-1*time.Hour)))\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", newCrl))\n\n\t\t\t\toldCrlDownloadTime := time.Now().Add(-48 * time.Hour)\n\t\t\t\tcrlInMemoryCache[fullCrlURL(port, \"/rootCrl\")] = &crlInMemoryCacheValueType{\n\t\t\t\t\tcrl:          oldCrl,\n\t\t\t\t\tdownloadTime: &oldCrlDownloadTime,\n\t\t\t\t}\n\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tassertNilE(t, err)\n\t\t\t})\n\n\t\t\tt.Run(\"CrlSignatureInvalid\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\totherCaPrivateKey, _ := createCa(t, nil, nil, \"other CA\", 
port)\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\tcrl := createCrl(t, caCert, otherCaPrivateKey) // signed with wrong key\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tif checkMode == CertRevocationCheckEnabled {\n\t\t\t\t\tassertStringContainsE(t, err.Error(), \"certificate revocation check failed\")\n\t\t\t\t} else {\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t}\n\t\t\t})\n\n\t\t\tt.Run(\"CrlIssuerMismatch\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\totherKey, otherCert := createCa(t, nil, nil, \"other CA\", port)\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\tcrl := createCrl(t, otherCert, otherKey) // issued by other CA\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tif checkMode == CertRevocationCheckEnabled {\n\t\t\t\t\tassertStringContainsE(t, err.Error(), \"certificate revocation check failed\")\n\t\t\t\t} else {\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t}\n\t\t\t})\n\n\t\t\tt.Run(\"CertWithNoCrlDistributionPoints\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port)\n\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tif checkMode == 
CertRevocationCheckEnabled {\n\t\t\t\t\tassertEqualE(t, err.Error(), \"certificate revocation check failed\")\n\t\t\t\t} else {\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t}\n\t\t\t})\n\n\t\t\tt.Run(\"CertWithNoCrlDistributionPointsAllowed\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", 0)\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, 0)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode, allowCertificatesWithoutCrlURLType(true))\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tassertNilE(t, err)\n\t\t\t})\n\n\t\t\tt.Run(\"DownloadCrlFailsOnUnparsableCrl\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode, &http.Client{\n\t\t\t\t\tTransport: &malformedCrlRoundTripper{},\n\t\t\t\t})\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tif checkMode == CertRevocationCheckEnabled {\n\t\t\t\t\tassertEqualE(t, err.Error(), \"certificate revocation check failed\")\n\t\t\t\t} else {\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t}\n\t\t\t})\n\n\t\t\tt.Run(\"DownloadCrlFailsOn404\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tif checkMode == CertRevocationCheckEnabled 
{\n\t\t\t\t\tassertEqualE(t, err.Error(), \"certificate revocation check failed\")\n\t\t\t\t} else {\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t}\n\t\t\t})\n\n\t\t\tt.Run(\"CrlFitsLimit\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode, downloadMaxSizeType(1024*1024))\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\tcrl := createCrl(t, caCert, caPrivateKey)\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tassertNilE(t, err)\n\t\t\t})\n\n\t\t\tt.Run(\"CrlTooLargeToDownload\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode, downloadMaxSizeType(10))\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\tcrl := createCrl(t, caCert, caPrivateKey)\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tif checkMode == CertRevocationCheckEnabled {\n\t\t\t\t\tassertEqualE(t, err.Error(), \"certificate revocation check failed\")\n\t\t\t\t} else {\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t}\n\t\t\t})\n\n\t\t\tt.Run(\"VerifyAgainstIdpExtensionWithDistributionPointMatch\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, 
nil, \"root CA\", port)\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\n\t\t\t\tidpValue, err := asn1.Marshal(issuingDistributionPoint{\n\t\t\t\t\tDistributionPoint: distributionPointName{\n\t\t\t\t\t\tFullName: []asn1.RawValue{\n\t\t\t\t\t\t\t{Bytes: fmt.Appendf(nil, \"http://localhost:%v/rootCrl\", port)},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t})\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tidpExtension := &pkix.Extension{\n\t\t\t\t\tId:    idpOID,\n\t\t\t\t\tValue: idpValue,\n\t\t\t\t}\n\t\t\t\tcrl := createCrl(t, caCert, caPrivateKey, idpExtension)\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\terr = cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tassertNilE(t, err)\n\t\t\t})\n\n\t\t\tt.Run(\"VerifyAgainstIdpExtensionWithDistributionPointMismatch\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\n\t\t\t\tidpValue, err := asn1.Marshal(issuingDistributionPoint{\n\t\t\t\t\tDistributionPoint: distributionPointName{\n\t\t\t\t\t\tFullName: []asn1.RawValue{\n\t\t\t\t\t\t\t{Bytes: fmt.Appendf(nil, \"http://localhost:%v/otherCrl\", port)},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t})\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tidpExtension := &pkix.Extension{\n\t\t\t\t\tId:    idpOID,\n\t\t\t\t\tValue: idpValue,\n\t\t\t\t}\n\n\t\t\t\tcrl := createCrl(t, caCert, caPrivateKey, idpExtension)\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\terr = cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\tif checkMode == CertRevocationCheckEnabled {\n\t\t\t\t\tassertNotNilF(t, 
err)\n\t\t\t\t\tassertEqualE(t, err.Error(), \"certificate revocation check failed\")\n\t\t\t\t} else {\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t}\n\t\t\t})\n\n\t\t\tt.Run(\"AnyValidChainCausesSuccess\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\t\t\t\tcaKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t_, revokedLeaf := createLeafCert(t, caCert, caKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\t_, validLeaf := createLeafCert(t, caCert, caKey, port, crlEndpointType(\"/rootCrl\"))\n\n\t\t\t\t// CRL revokes only the first leaf\n\t\t\t\tcrl := createCrl(t, caCert, caKey, revokedCert(revokedLeaf))\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\t// First chain: revoked, second chain: valid\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{\n\t\t\t\t\t{revokedLeaf, caCert},\n\t\t\t\t\t{validLeaf, caCert},\n\t\t\t\t})\n\t\t\t\tassertNilE(t, err)\n\t\t\t})\n\n\t\t\tt.Run(\"OneChainIsRevokedAndOtherIsError\", func(t *testing.T) {\n\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\tdefer closeServer(t, server)\n\t\t\t\tcaKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t_, revokedLeaf := createLeafCert(t, caCert, caKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\t_, errorLeaf := createLeafCert(t, caCert, caKey, port, crlEndpointType(\"/missingCrl\"))\n\n\t\t\t\t// CRL revokes only the first leaf\n\t\t\t\tcrl := createCrl(t, caCert, caKey, revokedCert(revokedLeaf))\n\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\t// First chain: revoked, second chain: CRL download fails\n\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{\n\t\t\t\t\t{revokedLeaf, caCert},\n\t\t\t\t\t{errorLeaf, caCert},\n\t\t\t\t})\n\t\t\t\tif 
checkMode == CertRevocationCheckEnabled {\n\t\t\t\t\tassertNotNilF(t, err)\n\t\t\t\t\tassertEqualE(t, err.Error(), \"certificate revocation check failed\")\n\t\t\t\t} else {\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t}\n\t\t\t})\n\n\t\t\tt.Run(\"CacheTests\", func(t *testing.T) {\n\t\t\t\tt.Run(\"should use in-memory cache\", func(t *testing.T) {\n\t\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\t\tcrt := newCountingRoundTripper(createTestNoRevocationTransport())\n\t\t\t\t\tcv := newTestCrlValidator(t, checkMode, &http.Client{\n\t\t\t\t\t\tTransport: crt,\n\t\t\t\t\t})\n\n\t\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\t\tdefer closeServer(t, server)\n\t\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\t\tcrl := createCrl(t, caCert, caPrivateKey)\n\n\t\t\t\t\tdownloadTime := time.Now().Add(-1 * time.Minute)\n\t\t\t\t\tcrlInMemoryCache[fullCrlURL(port, \"/rootCrl\")] = &crlInMemoryCacheValueType{\n\t\t\t\t\t\tcrl:          crl,\n\t\t\t\t\t\tdownloadTime: &downloadTime,\n\t\t\t\t\t}\n\t\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t\tassertEqualE(t, crt.totalRequests(), 0)\n\t\t\t\t\t_, err = os.Open(cv.crlURLToPath(\"/rootCrl\"))\n\t\t\t\t\tassertErrIsE(t, err, os.ErrNotExist, \"CRL file should not be created in the cache directory\")\n\t\t\t\t})\n\n\t\t\t\tt.Run(\"should promote on-disk cache to memory and not modify on-disk entry\", func(t *testing.T) {\n\t\t\t\t\tskipOnMissingHome(t)\n\t\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\t\tcrt := newCountingRoundTripper(createTestNoRevocationTransport())\n\t\t\t\t\tcv := newTestCrlValidator(t, checkMode, &http.Client{\n\t\t\t\t\t\tTransport: crt,\n\t\t\t\t\t})\n\n\t\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\t\tdefer closeServer(t, server)\n\t\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", 
port)\n\t\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\t\tcrl := createCrl(t, caCert, caPrivateKey)\n\n\t\t\t\t\tassertNilF(t, os.WriteFile(cv.crlURLToPath(fullCrlURL(port, \"/rootCrl\")), crl.Raw, 0600)) // simulate a cached CRL\n\t\t\t\t\tstatBefore, err := os.Stat(cv.crlURLToPath(fullCrlURL(port, \"/rootCrl\")))\n\t\t\t\t\tassertNilF(t, err)\n\n\t\t\t\t\terr = cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t\tassertEqualE(t, crt.totalRequests(), 0)\n\t\t\t\t\tstatAfter, err := os.Stat(cv.crlURLToPath(fullCrlURL(port, \"/rootCrl\")))\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tassertTrueE(t, statBefore.ModTime().Equal(statAfter.ModTime()), \"CRL file should not be modified in the cache directory\")\n\t\t\t\t})\n\n\t\t\t\tt.Run(\"should redownload when nextUpdate is reached\", func(t *testing.T) {\n\t\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\t\tcrt := newCountingRoundTripper(createTestNoRevocationTransport())\n\t\t\t\t\tcv := newTestCrlValidator(t, checkMode, &http.Client{\n\t\t\t\t\t\tTransport: crt,\n\t\t\t\t\t})\n\n\t\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\t\tdefer closeServer(t, server)\n\t\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\t\toldCrl := createCrl(t, caCert, caPrivateKey, thisUpdateType(time.Now().Add(-2*time.Minute)), nextUpdateType(time.Now().Add(-1*time.Minute)))\n\t\t\t\t\tnewCrl := createCrl(t, caCert, caPrivateKey, thisUpdateType(time.Now()), nextUpdateType(time.Now().Add(time.Hour)))\n\n\t\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", newCrl))\n\n\t\t\t\t\tpreviousDownloadTime := time.Now().Add(-1 * time.Minute)\n\t\t\t\t\tcrlInMemoryCache[fullCrlURL(port, \"/rootCrl\")] = &crlInMemoryCacheValueType{\n\t\t\t\t\t\tcrl:          
oldCrl,\n\t\t\t\t\t\tdownloadTime: &previousDownloadTime,\n\t\t\t\t\t}\n\n\t\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\t\tassertNilE(t, err)\n\n\t\t\t\t\tassertEqualE(t, crt.totalRequests(), 1)\n\t\t\t\t\tfd, err := os.Open(cv.crlURLToPath(fullCrlURL(port, \"/rootCrl\")))\n\t\t\t\t\tassertNilE(t, err, \"CRL file should be created in the cache directory\")\n\t\t\t\t\tdefer fd.Close()\n\t\t\t\t\tassertTrueE(t, crlInMemoryCache[fullCrlURL(port, \"/rootCrl\")].downloadTime.After(previousDownloadTime))\n\t\t\t\t\tassertTrueE(t, crlInMemoryCache[fullCrlURL(port, \"/rootCrl\")].crl.NextUpdate.Equal(newCrl.NextUpdate))\n\t\t\t\t})\n\n\t\t\t\tt.Run(\"should redownload when evicted in cache\", func(t *testing.T) {\n\t\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\t\tcrt := newCountingRoundTripper(createTestNoRevocationTransport())\n\t\t\t\t\tcv := newTestCrlValidator(t, checkMode, &http.Client{\n\t\t\t\t\t\tTransport: crt,\n\t\t\t\t\t})\n\n\t\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\t\tdefer closeServer(t, server)\n\t\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\t\toldCrl := createCrl(t, caCert, caPrivateKey, thisUpdateType(time.Now().Add(-2*time.Hour)), nextUpdateType(time.Now().Add(time.Hour)))\n\t\t\t\t\tnewCrl := createCrl(t, caCert, caPrivateKey, thisUpdateType(time.Now()), nextUpdateType(time.Now().Add(4*time.Hour)))\n\t\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", newCrl))\n\n\t\t\t\t\tpreviousValidityTime := crlCacheCleaner.cacheValidityTime\n\t\t\t\t\tdefer func() {\n\t\t\t\t\t\tcrlCacheCleaner.cacheValidityTime = previousValidityTime\n\t\t\t\t\t}()\n\t\t\t\t\tcrlCacheCleaner.cacheValidityTime = 10 * time.Minute\n\n\t\t\t\t\tpreviousDownloadTime := time.Now().Add(-1 * time.Hour)\n\t\t\t\t\tcrlInMemoryCache[fullCrlURL(port, \"/rootCrl\")] = 
&crlInMemoryCacheValueType{\n\t\t\t\t\t\tcrl:          oldCrl,\n\t\t\t\t\t\tdownloadTime: &previousDownloadTime,\n\t\t\t\t\t}\n\n\t\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\t\tassertNilE(t, err)\n\n\t\t\t\t\tassertEqualE(t, crt.totalRequests(), 1)\n\t\t\t\t\tfd, err := os.Open(cv.crlURLToPath(fullCrlURL(port, \"/rootCrl\")))\n\t\t\t\t\tassertNilE(t, err, \"CRL file should be created in the cache directory\")\n\t\t\t\t\tdefer fd.Close()\n\t\t\t\t\tassertTrueE(t, crlInMemoryCache[fullCrlURL(port, \"/rootCrl\")].downloadTime.After(previousDownloadTime))\n\t\t\t\t\tassertTrueE(t, crlInMemoryCache[fullCrlURL(port, \"/rootCrl\")].crl.NextUpdate.Equal(newCrl.NextUpdate))\n\t\t\t\t\tif !isWindows {\n\t\t\t\t\t\tstat, err := os.Stat(filepath.Dir(cv.crlURLToPath(fullCrlURL(port, \"/rootCrl\"))))\n\t\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\t\tassertEqualE(t, stat.Mode().Perm(), os.FileMode(0700), \"cache directory permissions should be 0700\")\n\t\t\t\t\t}\n\t\t\t\t})\n\n\t\t\t\tt.Run(\"should not save to on-disk cache when disabled\", func(t *testing.T) {\n\t\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\t\tcv := newTestCrlValidator(t, checkMode, onDiskCacheDisabledType(true))\n\n\t\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\t\tdefer closeServer(t, server)\n\t\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\t\tcrl := createCrl(t, caCert, caPrivateKey)\n\t\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t\t_, err = os.Open(cv.crlURLToPath(fullCrlURL(port, \"/rootCrl\")))\n\t\t\t\t\tassertErrIsE(t, err, os.ErrNotExist, \"CRL file should not be created in the cache directory when on-disk cache is disabled\")\n\t\t\t\t\tassertNotNilE(t, 
crlInMemoryCache[fullCrlURL(port, \"/rootCrl\")]) // in-memory cache should still be used\n\t\t\t\t})\n\n\t\t\t\tt.Run(\"should not read from on-disk cache when disabled\", func(t *testing.T) {\n\t\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\t\tcrt := newCountingRoundTripper(createTestNoRevocationTransport())\n\t\t\t\t\tcv := newTestCrlValidator(t, checkMode, onDiskCacheDisabledType(true), &http.Client{\n\t\t\t\t\t\tTransport: crt,\n\t\t\t\t\t})\n\n\t\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\t\tdefer closeServer(t, server)\n\t\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\t\toldCrl := createCrl(t, caCert, caPrivateKey, nextUpdateType(time.Now()))\n\t\t\t\t\tnewCrl := createCrl(t, caCert, caPrivateKey)\n\t\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", newCrl))\n\n\t\t\t\t\tassertNilF(t, os.WriteFile(cv.crlURLToPath(fullCrlURL(port, \"/rootCrl\")), oldCrl.Raw, 0600)) // simulate a cached CRL\n\t\t\t\t\tstatBefore, err := os.Stat(cv.crlURLToPath(fullCrlURL(port, \"/rootCrl\")))\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\terr = cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t\tassertEqualE(t, crt.totalRequests(), 1, \"CRL should be downloaded from the server\")\n\t\t\t\t\tassertNotNilE(t, crlInMemoryCache[fullCrlURL(port, \"/rootCrl\")]) // in-memory cache should still be used\n\t\t\t\t\tstatAfter, err := os.Stat(cv.crlURLToPath(fullCrlURL(port, \"/rootCrl\")))\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tassertTrueE(t, statBefore.ModTime().Equal(statAfter.ModTime()), \"CRL file should not be modified in the cache directory\")\n\t\t\t\t})\n\n\t\t\t\tt.Run(\"should not use in-memory cache when disabled\", func(t *testing.T) {\n\t\t\t\t\tskipOnMissingHome(t)\n\t\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\t\tcv := newTestCrlValidator(t, checkMode, 
inMemoryCacheDisabledType(true))\n\n\t\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\t\tdefer closeServer(t, server)\n\t\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\t\tcrl := createCrl(t, caCert, caPrivateKey)\n\t\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t\tassertEqualE(t, len(crlInMemoryCache), 0, \"in-memory cache should not be used when disabled\")\n\t\t\t\t\tfd, err := os.Open(cv.crlURLToPath(fullCrlURL(port, \"/rootCrl\")))\n\t\t\t\t\tassertNilE(t, err) // on-disk cache should still be used\n\t\t\t\t\tdefer fd.Close()\n\t\t\t\t})\n\n\t\t\t\tt.Run(\"should not use on disk cache when disabled\", func(t *testing.T) {\n\t\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\t\tcv := newTestCrlValidator(t, checkMode, inMemoryCacheDisabledType(true), onDiskCacheDisabledType(true))\n\n\t\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\t\tdefer closeServer(t, server)\n\t\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\t\tcrl := createCrl(t, caCert, caPrivateKey)\n\t\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t\tassertNilE(t, crlInMemoryCache[fullCrlURL(port, \"/rootCrl\")], \"in-memory cache should not be used when disabled\")\n\t\t\t\t\t_, err = os.Open(cv.crlURLToPath(fullCrlURL(port, \"/rootCrl\")))\n\t\t\t\t\tassertErrIsE(t, err, os.ErrNotExist, \"CRL file should not be created in the cache directory when on-disk cache is disabled\")\n\t\t\t\t})\n\n\t\t\t\tt.Run(\"should clean 
up cache\", func(t *testing.T) {\n\t\t\t\t\tskipOnMissingHome(t)\n\t\t\t\t\tcleanupCrlCache(t)\n\n\t\t\t\t\tcv := newTestCrlValidator(t, checkMode)\n\n\t\t\t\t\tserver, port := createCrlServer(t)\n\t\t\t\t\tdefer closeServer(t, server)\n\t\t\t\t\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t\t\t\t\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\t\t\t\t\tcrl := createCrl(t, caCert, caPrivateKey, nextUpdateType(time.Now().Add(3000*time.Millisecond)))\n\t\t\t\t\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\t\t\t\t\tpreviousValidityTime := crlCacheCleaner.cacheValidityTime\n\t\t\t\t\tpreviousOnDiskCacheRemovalDelay := crlCacheCleaner.onDiskCacheRemovalDelay\n\t\t\t\t\tdefer func() {\n\t\t\t\t\t\tcrlCacheCleaner.cacheValidityTime = previousValidityTime\n\t\t\t\t\t\tcrlCacheCleaner.onDiskCacheRemovalDelay = previousOnDiskCacheRemovalDelay\n\t\t\t\t\t}()\n\t\t\t\t\tcrlCacheCleaner.cacheValidityTime = 1000 * time.Millisecond\n\t\t\t\t\tcrlCacheCleaner.onDiskCacheRemovalDelay = 2000 * time.Millisecond\n\n\t\t\t\t\tcrlCacheCleaner.stopPeriodicCacheCleanup()\n\t\t\t\t\tpreviousCacheCleanerTickRate := crlCacheCleanerTickRate\n\t\t\t\t\tdefer func() {\n\t\t\t\t\t\tcrlCacheCleanerTickRate = previousCacheCleanerTickRate\n\t\t\t\t\t}()\n\t\t\t\t\tcrlCacheCleanerTickRate = 500 * time.Millisecond\n\t\t\t\t\tcrlCacheCleaner.startPeriodicCacheCleanup()\n\t\t\t\t\tdefer crlCacheCleaner.stopPeriodicCacheCleanup()\n\n\t\t\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t\tcrlInMemoryCacheMutex.Lock()\n\t\t\t\t\tassertNotNilE(t, crlInMemoryCache[fullCrlURL(port, \"/rootCrl\")], \"in-memory cache should be populated\")\n\t\t\t\t\tcrlInMemoryCacheMutex.Unlock()\n\t\t\t\t\tfd, err := os.Open(cv.crlURLToPath(fullCrlURL(port, \"/rootCrl\")))\n\t\t\t\t\tassertNilE(t, err, \"CRL file should be created in the cache 
directory\")\n\t\t\t\t\tfd.Close()\n\n\t\t\t\t\ttime.Sleep(3000 * time.Millisecond) // wait for cleanup to happen\n\n\t\t\t\t\tcrlInMemoryCacheMutex.Lock()\n\t\t\t\t\tassertNilE(t, crlInMemoryCache[fullCrlURL(port, \"/rootCrl\")], \"in-memory cache should be cleaned up\")\n\t\t\t\t\tcrlInMemoryCacheMutex.Unlock()\n\t\t\t\t\tfd, err = os.Open(cv.crlURLToPath(fullCrlURL(port, \"/rootCrl\")))\n\t\t\t\t\tassertNilE(t, err, \"CRL file should still be present in the cache directory\")\n\t\t\t\t\tfd.Close()\n\n\t\t\t\t\ttime.Sleep(4000 * time.Millisecond) // wait for removal delay to pass\n\t\t\t\t\t_, err = os.Open(cv.crlURLToPath(fullCrlURL(port, \"/rootCrl\")))\n\t\t\t\t\tassertErrIsE(t, err, os.ErrNotExist, \"CRL file should be removed from the cache directory after removal delay\")\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t}\n}\n\nfunc cleanupCrlCache(t *testing.T) {\n\tcrlCacheCleanerMu.Lock()\n\tif crlCacheCleaner != nil {\n\t\tcrlCacheCleaner.stopPeriodicCacheCleanup()\n\t\terr := os.RemoveAll(crlCacheCleaner.onDiskCacheDir)\n\t\tassertNilF(t, err)\n\t\tcrlCacheCleaner = nil\n\t}\n\tcrlCacheCleanerMu.Unlock()\n\tcrlInMemoryCache = make(map[string]*crlInMemoryCacheValueType)\n}\n\nfunc TestRealCrlWithIdpExtension(t *testing.T) {\n\tcrlBytes, err := 
base64.StdEncoding.DecodeString(`MIIWCzCCFbECAQEwCgYIKoZIzj0EAwIwOzELMAkGA1UEBhMCVVMxHjAcBgNVBAoTFUdvb2dsZSBUcnVzdCBTZXJ2aWNlczEMMAoGA1UEAxMDV0UyFw0yNTA2MDMwNTE0MjZaFw0yNTA2MTMwNDE0MjVaMIIU1TAiAhEA+GNmsfmkiSYS3So6PtM4YRcNMjUwNTMwMDgzMDU0WjAiAhEAjnadf1gDhyYKPKaa/12+7xcNMjUwNTMwMDgzNDMyWjAhAhBE9QlX3xRpuxJ814WV+K/1Fw0yNTA1MzAxMTA0MzNaMCICEQCqN2nq4YSOEwkyJCn6HYQlFw0yNTA1MzAxMTM0MzNaMCECEDBfFh8CphcdEJF+zBTMw74XDTI1MDUzMDEyMDA1M1owIQIQalbjU7py90YQObvUekSOhBcNMjUwNTMwMTIwNDMzWjAiAhEAr2k4vZwyJnISwutcyf2nyRcNMjUwNTMwMTMwNDMzWjAhAhB35TMXvzwpYwooflxIqWDEFw0yNTA1MzAxMzMwNTNaMCECEAGHFbYpRjuyEmwHBjVy54gXDTI1MDUzMDEzMzQzMlowIgIRAId502qqmD3KEDgIHLdDwZYXDTI1MDUzMDE0MTg1MFowIgIRAJEe803uv+NQEJUBE5Q6P0kXDTI1MDUzMDE0MTg1MFowIgIRAOLFs7G+1xolCsv2TgVXc0AXDTI1MDUzMDE4MDQzMlowIQIQUsjln6aQLBgQRpsXpimESRcNMjUwNTMwMTgzNDMyWjAiAhEA62yPgGbg8uAKRBAp3N7zjRcNMjUwNTMwMjAwNDMyWjAiAhEAsjA4b2hRSeQJ3HSOmSCsfxcNMjUwNTMwMjAzNDMzWjAiAhEA5vGSk0V5AiQSSlJJgHBO/RcNMjUwNTMwMjEwMDUzWjAhAhBC5Bb9vfzyyQkPGoyM+1y3Fw0yNTA1MzEwMDA0MzNaMCICEQCk2xXPFJlcFAq8gAoYZcWKFw0yNTA1MzEwMDM0MzJaMCICEQDoXOJPuECUGwpzgim5mc9mFw0yNTA1MzEwMTAwNTNaMCECEHgn0iqA3FOqEGZkc3nMlQsXDTI1MDUzMTAyMzA1NFowIQIQdnsVe7yop/YSZC36hn8k0hcNMjUwNTMxMDUwMDUzWjAiAhEA988MkvjARu0K+NJ1aVwOIRcNMjUwNTMxMDcwMDUzWjAiAhEAwFdObfm70cMSBKAflw/KCxcNMjUwNTMxMDczMDUzWjAiAhEAqX2jbkbYhlwKl2fgguEfdRcNMjUwNTMxMDgzMDUzWjAhAhAcfL0AhaLI2xAfTjDas2e4Fw0yNTA1MzEwODM0MzNaMCECEHcuTXPmmCULECe4qj6t/woXDTI1MDUzMTA5MDQzMlowIgIRAL0tNF+V7aarEjS5X52ozVwXDTI1MDUzMTA5MzA1M1owIQIQEWjKzEnAuZAQOdBZQMCcLRcNMjUwNTMxMTAzMDUzWjAhAhA2l4kUNXKzpwoDbrMlYN65Fw0yNTA1MzExMTA0MzJaMCICEQDQMi07YAslxglpYDrFllr0Fw0yNTA1MzExMTMwNTNaMCECEEfIJzk/qTOVEDehcdaIr3YXDTI1MDUzMTEyMzQzMlowIgIRAPs9bOlpEQZzEL71JmOr4gMXDTI1MDUzMTE0MzA1NFowIgIRAKA4/laWgpf+CX5Xqdui57sXDTI1MDUzMTE0MzQzMlowIQIQIJL+kywlXcIQoNk1IR4hABcNMjUwNTMxMTcwMDUzWjAiAhEA10YhoTDr3JIJdDwoUvU7PBcNMjUwNTMxMTgzNDMyWjAhAhBjqqc9j1zo+grP13nPYjlrFw0yNTA1MzExOTMwNTNaMCECEFvJXOjJWg4XCg9lgBLgFCUXDTI1MDUzMTIxMDA1NFowIQIQHjWkZX62R5gKS9bus/vO3hcNMjUwNTMxMjIwNDMzWjAiAhEArzR
Oq2M27voKXANmOzjg4BcNMjUwNTMxMjIzMDU0WjAhAhBGoxuPheM5twmSM9LO0NZuFw0yNTA1MzEyMzAwNTNaMCICEQClgDoqCxhihxDvXApTEN/QFw0yNTA1MzEyMzM0MzJaMCICEQCjffeJqicvMxCaQlnCRp1kFw0yNTA2MDEwMDM0MzNaMCECEB3bMsobz0qRCdm+plUwrNUXDTI1MDYwMTAxMDA1NFowIgIRANusCipK0XOVEC0+C1Ce+bsXDTI1MDYwMTAyMDA1NFowIgIRANsRDccCPVBrEGplnFXS3y0XDTI1MDYwMTAyMzQzMlowIQIQZBPFmHRcxzESJeZSri7+fBcNMjUwNjAxMDMwMDUyWjAhAhBUeunArcVjrApcJ9uR1v0cFw0yNTA2MDEwMzA0MzJaMCECEH7M2GgoJPa3Ccjz9nx1FmwXDTI1MDYwMTAzMzA1M1owIgIRAKwbWa1xrjjgCvB5I6ICstAXDTI1MDYwMTA0MzA1M1owIgIRAKRJvSq/BfQqEPgYyqN/lkwXDTI1MDYwMTA1MDA1NFowIQIQPJxOkr7drV4Qjxa9rYfUwhcNMjUwNjAxMDUwNDMzWjAiAhEA8lQTTLlsfBoJlrx6CydL7hcNMjUwNjAxMDUzNDMzWjAiAhEAluoSt/87SbUKN6WD8WO/uBcNMjUwNjAxMDYzMDUzWjAiAhEAi1z9zzq3ecYQYbpyjZcV0BcNMjUwNjAxMDcwMDU0WjAhAhBDYZctZbp9NQkS+H75yhEmFw0yNTA2MDExMDM0MzJaMCICEQDhKSZ6X/VHjQpM79Em7auJFw0yNTA2MDExMTAwNTNaMCICEQCzngaFAi5rTBJBHMJnGgjCFw0yNTA2MDExMTMwNTRaMCECEAi0b7W58XDnEHtR8u+d+TwXDTI1MDMwNDEyMTIyNVowIgIRANw8VR+umOAsEpehwNHqCWkXDTI1MDYwMTEyMzQzMlowIQIQVDJ7+F+QyfQSUexffugxPBcNMjUwNjAxMTMwMDUzWjAiAhEA3kZX5ACREf4Ql7R88uTRiBcNMjUwMzA0MTQ1MTU2WjAhAhBAmF4m8TDJfxCB93DGRJ5SFw0yNTA2MDExNTMwNTRaMCECED2nNXiAdcbkCorz/3SaOXkXDTI1MDMwNDE2MDY0MFowIgIRAJPjTBx12IeKCsZC+WsYtqwXDTI1MDYwMTE4MzA1M1owIQIQH89eMYtFX+ESUBJx9drNdxcNMjUwNjAxMTkwMDUzWjAiAhEA9h1UKrkPonEJ3oHf6DAdeRcNMjUwNjAxMTkzMDUzWjAiAhEAx7HcWI25jVsJzEFAa8H6hhcNMjUwNjAxMTkzNDMyWjAiAhEA2xt7Vz1eC9US2Lx9U7IdQxcNMjUwNjAyMDEzMDUzWjAhAhBLBChzFL7nMBKrkgfIqmL4Fw0yNTA2MDIwMjA0MzJaMCICEQCoWrPIkhkCEwoZoBW8Wi7iFw0yNTA2MDIwMjMwNTNaMCICEQCW9nREFwgFExAhQPkEcX1GFw0yNTA2MDIwNzA0MzJaMCECEFJpjh2fOfnwEPYEmgM4vAsXDTI1MDYwMjEwMzQzMlowIgIRAMARWx58ovYeCYlv9x/+dXUXDTI1MDYwMjExMzQzMlowIgIRANGVJSxAtM0+CmvyDk5yemEXDTI1MDYwMjEyMzQzMlowIQIQLuR16MKk7VIJsPZDdxmxjBcNMjUwMzA1MTMxNTQ2WjAhAhBgWj2KpFDd1hLS8czTxP9WFw0yNTA2MDIxMzMwNTNaMCICEQDpBAXC4tks2RA3PmivojEYFw0yNTA2MDIxMzM0MzJaMCICEQDPAqlDrpaIZRLOv4dkWD9YFw0yNTA2MDIxNDM0MzJaMCECECHJcaelQHswEjWQOK4shmQXDTI1MDYwMjE1MDA1M1owIgIRAKSC4iHRwdOXEI4MVwjYASMXDTI1MDYwMjE4MDQzMlowIgIRAPhnb/McQol
NCT5KPL9WBy0XDTI1MDYwMjE5MDA1M1owIQIQA/fNWPLbkQ8SJc6T1ykDtxcNMjUwNjAyMTkwNDMyWjAhAhBp1e5W8/pEFgoVhg1GywuhFw0yNTA2MDIxOTM0MzJaMCECEDa16LoaHM7jEBLVfZOw+2EXDTI1MDYwMjIwMDA1NFowIgIRANhoeJQh/bgAChCj0tjaOhoXDTI1MDYwMjIwMzQzMlowIgIRAPGCJfkpjnA0Ep42ikTZTDQXDTI1MDYwMjIxMDQzMlowIgIRANBfcQ5tm+jQEIrc4G9uz30XDTI1MDYwMjIyMDQzM1owIQIQDvMAXxXjJV0Q07lbQyqRlRcNMjUwNjAzMDIzMDUzWjAhAhABUapKRf9bwxJ9pM421HlyFw0yNTA2MDMwMjM0MzNaMCICEQDE7QlV4jWoawmVVFlPlN5ZFw0yNTA2MDMwMzAwNTNaMCECEDrfc2dpmptdEOBKNuW5dN0XDTI1MDMwNzE2MDY0NVowIgIRAO08CoY80ZYZCnASAJsibosXDTI1MDMwOTE2MDYzOVowIgIRAO3z/WMJKFPwEqGv+wIQqVUXDTI1MDMxMTE2MDYzOVowIgIRAOGk/CY9/86iEkStcRIR74oXDTI1MDMxMTE3MzMzNVowIgIRALmyt1+31WZtCrklPUahHsoXDTI1MDMxMTIxMjcyM1owIgIRAN0K49cWZ5XVCRUwnqkyzAcXDTI1MDMxNTE3MzM0MFowIgIRAKHAD2cxPWesCiXtOaFLRMwXDTI1MDMxNTE4MzcxM1owIQIQerJr0+WomOYQqOCLMwwQQhcNMjUwMzE5MDYxNDA5WjAhAhAX1xTDBKnX9RBHto7Yo8lVFw0yNTAzMjAwOTM4NTdaMCICEQDrpjOSW5W9fgqtI2heAOexFw0yNTAzMjMwNjE0MDlaMCICEQCdIwrsmoZRIhIDnY2gQhZZFw0yNTAzMjYwODI1MDRaMCICEQCc3wlTpAB6ZxJB5SLJ1cGFFw0yNTAzMzExMDQ0NDVaMCECEDvSrWlzrD2bEHLHvZ+Ak9sXDTI1MDQwMjA3NTk0NFowIgIRAMJ2ztUSpiKpCqYpTx6GEWwXDTI1MDQwNjA4MDA0OFowIQIQed72ikZNyBISyOL/lLPDIxcNMjUwNDA4MjA1NjM5WjAiAhEA+fjeN7n4PugS5Mh4kSSUhhcNMjUwNDA5MDQ1ODM4WjAhAhA2Gg3BxIzzaAqR0K/EYS9uFw0yNTA0MDkwNTU4MzBaMCECEA6iX6ZA2cvtCvqLywYZkGEXDTI1MDQwOTA4MDIwMFowIQIQaajjpNdTR+MSotZQd0le4BcNMjUwNDExMDk0ODQxWjAiAhEA+Z7TKxQHRP8KXarTEkKl/xcNMjUwNDExMTY0MTUxWjAhAhAYS5W1oCus3gqsNhnA9lgNFw0yNTA0MTExODUyMDBaMCECEB9WtUrjbzKNCcLJuZELbPIXDTI1MDQxMTE4NTIwMFowIQIQJuqczPhm8x4JCjjS5UEV4hcNMjUwNDEzMjExMzQ4WjAiAhEA8pC1AgBcHQMK98lYehVRqBcNMjUwNDEzMjIxNzE3WjAhAhBEe078o0AX4hCPOfwW08DgFw0yNTA0MTUyMTEzNDhaMCECEFtBlrwO2/yCEI5FaTjhEMUXDTI1MDQxNTIzNDgxMVowIgIRAOAhdu/DwnQZEGh9ABuntsEXDTI1MDQxNzE5MjA0NlowIgIRAKblmThTrKCLCaAfU80cgHUXDTI1MDQxNzIwMjU0N1owIQIQJ+PW+89xTOgJv3sKUFzpFRcNMjUwNDE4MjEzOTI0WjAhAhBreCVIZnxIxQkm0n/lw8XuFw0yNTA0MjAxODAxMzJaMCICEQDLHBY49bRaWxAUwMRRaYGkFw0yNTA0MjExNjU3NDlaMCECEBKDWcexQm8uCQPht1B2WCMXDTI1MDQyMjE2NTc0OVowIgIRANEuLddZ+6e/Cinj83AK2TI
XDTI1MDQyMzE2NTc0OFowIQIQQRs5pdt3rw0Kj3yAi9nB8BcNMjUwNDI0MTgwMTMxWjAiAhEA2++UC5BwrkkSDLuijbOlhxcNMjUwNDI0MTgxNTI0WjAiAhEAso/DvQaXc8cQJQzH3vT39xcNMjUwNDI1MTgwOTAxWjAhAhA6Wxu2SrTNQAqGYEIlmug6Fw0yNTA0MzAxNTE2NTZaMCECECrQTDxnQf4UCjmTomNx6uoXDTI1MDQzMDE2MTU1MVowIgIRAOzK9hrrhUpREDNdMK+UhKYXDTI1MDUwMjIyNDgzNFowIQIQJywBgwts3CYJlswBuEfC4BcNMjUwNTAyMjM1MTQ1WjAhAhAv+aqUHySyHQnqo/kXTj07Fw0yNTA1MDMyMjQ4MzVaMCICEQDlfhMr/mGCeQrUug4RBfCwFw0yNTA1MDQyMzUxNDVaMCICEQC12JkjoHkyGQrXnfDh1Ak3Fw0yNTA1MDgxNjM3MjJaMCICEQD+ChJzg9zffhJvICXO5egWFw0yNTA1MDkxNDQ0MjNaMCICEQDINngvxFORLgmtenUC0eReFw0yNTA1MTAxNTQ4MTFaMCICEQDCRQG/17P8RgkCuuqVCqOEFw0yNTA1MTIxNDQ0MjRaMCECEG/pHThaOXIYEA6gUwBN2AAXDTI1MDUxMjE1NDgxMFowIQIQKRDCPxMlRDkQtVuZlc1y/BcNMjUwNTE1MTYzMjQ4WjAhAhAQJithNwlgHhBJtOo4cr7PFw0yNTA1MTgxNjMyNDhaMCICEQDuzLB0Dym1dAopKKRwqg+FFw0yNTA1MTkxNjMyNDhaMCICEQCic+mqwTKh2wlW/M9hFsKUFw0yNTA1MjExNDE5NTJaMCICEQCkcISpajRR8gloWttjVtWYFw0yNTA1MjIxNDE5NTFaMCECEC+QfsXidSEECVCY2XJcobsXDTI1MDUyNTEzMTU1NFowIQIQGaVPji8ez7sQc2BEKZ6zQRcNMjUwNTI2MTQxOTUxWjAhAhAWzpGux+VcMBLCf/uAu+UHFw0yNTA1MjgxNjAzMDdaMCICEQCVcpW8k5oxiwkAGBCtQXleFw0yNTA1MjkxNTMwNTRaMCICEQDCbweEznzXHxLmEoYkMXAXFw0yNTA1MjkxNjQyMjZaMCICEQC0/LnZiZ/wlhAYZ7QNFoMOFw0yNTA1MzAxNDE4NTBaMCICEQDNuyNRBRFsWhC2IgBtBr4jFw0yNTA1MzAxNDE4NTBaMCICEQCqTNQ5/wthcQoKTERGUrPiFw0yNTA2MDExNTMwNTNaoGwwajAfBgNVHSMEGDAWgBR1vsR3ron2RDd9z7FoHx0a69w0WTALBgNVHRQEBAICCwswOgYDVR0cAQH/BDAwLqApoCeGJWh0dHA6Ly9jLnBraS5nb29nL3dlMi95SzVuUGh0SEtRcy5jcmyBAf8wCgYIKoZIzj0EAwIDSAAwRQIhANnRHxa67XPmeX/SrH7l5sMJxA+OLg6eAjiUCBHW7NeKAiBZTWzYLK9IDgfUffYcRLtITegsRIjm02lrBd1I1I+QbQ==`)\n\tassertNilF(t, err)\n\tcrl, err := x509.ParseRevocationList(crlBytes)\n\tassertNilF(t, err)\n\tcv := newTestCrlValidator(t, CertRevocationCheckEnabled)\n\terr = cv.verifyAgainstIdpExtension(crl, \"http://c.pki.goog/we2/yK5nPhtHKQs.crl\")\n\tassertNilE(t, err)\n\terr = cv.verifyAgainstIdpExtension(crl, \"http://c.pki.goog/we2/other.crl\")\n\tassertNotNilF(t, err)\n\tassertStringContainsE(t, err.Error(), \"distribution point 
http://c.pki.goog/we2/other.crl not found in CRL IDP extension\")\n}\n\nfunc TestParallelRequestToTheSameCrl(t *testing.T) {\n\tcleanupCrlCache(t)\n\tserver, port := createCrlServer(t)\n\tdefer closeServer(t, server)\n\tcaPrivateKey, caCert := createCa(t, nil, nil, \"root CA\", port)\n\t_, leafCert := createLeafCert(t, caCert, caPrivateKey, port, crlEndpointType(\"/rootCrl\"))\n\tcrl := createCrl(t, caCert, caPrivateKey)\n\tregisterCrlEndpoints(t, server, newCrlEndpointDef(\"/rootCrl\", crl))\n\n\tbrt := newBlockingRoundTripper(createTestNoRevocationTransport(), 100*time.Millisecond)\n\tcrt := newCountingRoundTripper(brt)\n\tcv := newTestCrlValidator(t, CertRevocationCheckEnabled, &http.Client{\n\t\tTransport: crt,\n\t})\n\n\tvar wg sync.WaitGroup\n\tfor range 10 {\n\t\twg.Add(1)\n\t\tgo func() {\n\t\t\tdefer wg.Done()\n\t\t\terr := cv.verifyPeerCertificates(nil, [][]*x509.Certificate{{leafCert, caCert}})\n\t\t\tassertNilE(t, err)\n\t\t}()\n\t}\n\twg.Wait()\n\n\tassertEqualE(t, crt.totalRequests(), 1)\n}\n\nfunc TestIsShortLivedCertificate(t *testing.T) {\n\ttests := []struct {\n\t\tname     string\n\t\tcert     *x509.Certificate\n\t\texpected bool\n\t}{\n\t\t{\n\t\t\tname: \"Issued before March 15, 2024 (not short-lived)\",\n\t\t\tcert: &x509.Certificate{\n\t\t\t\tNotBefore: time.Date(2024, time.March, 1, 0, 0, 0, 0, time.UTC),\n\t\t\t\tNotAfter:  time.Date(2024, time.March, 10, 0, 0, 0, 0, time.UTC),\n\t\t\t},\n\t\t\texpected: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Issued after March 15, 2024, validity less than 10, but more than 7 days (short-lived)\",\n\t\t\tcert: &x509.Certificate{\n\t\t\t\tNotBefore: time.Date(2024, time.March, 16, 0, 0, 0, 0, time.UTC),\n\t\t\t\tNotAfter:  time.Date(2024, time.March, 24, 0, 0, 0, 0, time.UTC),\n\t\t\t},\n\t\t\texpected: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Issued after March 15, 2024, validity less than 7 days (short-lived)\",\n\t\t\tcert: &x509.Certificate{\n\t\t\t\tNotBefore: time.Date(2024, time.March, 16, 0, 0, 0, 0, 
time.UTC),\n\t\t\t\tNotAfter:  time.Date(2024, time.March, 22, 0, 0, 0, 0, time.UTC),\n\t\t\t},\n\t\t\texpected: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Issued after March 15, 2024, validity exactly 10 days (short-lived)\",\n\t\t\tcert: &x509.Certificate{\n\t\t\t\tNotBefore: time.Date(2024, time.March, 16, 0, 0, 0, 0, time.UTC),\n\t\t\t\tNotAfter:  time.Date(2024, time.March, 26, 0, 0, 0, 0, time.UTC),\n\t\t\t},\n\t\t\texpected: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Issued after March 15, 2024, validity more than 10 days (not short-lived)\",\n\t\t\tcert: &x509.Certificate{\n\t\t\t\tNotBefore: time.Date(2024, time.March, 16, 0, 0, 0, 0, time.UTC),\n\t\t\t\tNotAfter:  time.Date(2024, time.March, 27, 0, 0, 0, 0, time.UTC),\n\t\t\t},\n\t\t\texpected: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Issued after March 15, 2026, validity less than 7 days (short-lived)\",\n\t\t\tcert: &x509.Certificate{\n\t\t\t\tNotBefore: time.Date(2026, time.March, 16, 0, 0, 0, 0, time.UTC),\n\t\t\t\tNotAfter:  time.Date(2026, time.March, 20, 0, 0, 0, 0, time.UTC),\n\t\t\t},\n\t\t\texpected: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Issued after March 15, 2026, validity exactly 7 days (short-lived)\",\n\t\t\tcert: &x509.Certificate{\n\t\t\t\tNotBefore: time.Date(2026, time.March, 16, 0, 0, 0, 0, time.UTC),\n\t\t\t\tNotAfter:  time.Date(2026, time.March, 23, 0, 0, 0, 0, time.UTC),\n\t\t\t},\n\t\t\texpected: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Issued after March 15, 2026, validity more than 7 days (not short-lived)\",\n\t\t\tcert: &x509.Certificate{\n\t\t\t\tNotBefore: time.Date(2026, time.March, 16, 0, 0, 0, 0, time.UTC),\n\t\t\t\tNotAfter:  time.Date(2026, time.March, 24, 0, 0, 0, 0, time.UTC),\n\t\t\t},\n\t\t\texpected: false,\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tassertEqualE(t, isShortLivedCertificate(tt.cert), tt.expected)\n\t\t})\n\t}\n}\n\ntype malformedCrlRoundTripper struct {\n}\n\nfunc (m *malformedCrlRoundTripper) RoundTrip(req *http.Request) 
(*http.Response, error) {\n\tresponse := http.Response{\n\t\tStatusCode: http.StatusOK,\n\t}\n\tresponse.Body = http.NoBody\n\treturn &response, nil\n}\n\nfunc createCa(t *testing.T, issuerCert *x509.Certificate, issuerPrivateKey *rsa.PrivateKey, cn string, port int, crlEndpoints ...crlEndpointType) (*rsa.PrivateKey, *x509.Certificate) {\n\tcaTemplate := &x509.Certificate{\n\t\tSerialNumber: big.NewInt(1),\n\t\tSubject: pkix.Name{\n\t\t\tOrganization:       []string{\"Snowflake\"},\n\t\t\tOrganizationalUnit: []string{\"Drivers\"},\n\t\t\tLocality:           []string{\"Warsaw\"},\n\t\t\tCommonName:         cn,\n\t\t},\n\t\tNotBefore:             time.Now(),\n\t\tNotAfter:              time.Now().AddDate(10, 0, 0),\n\t\tIsCA:                  true,\n\t\tKeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageCRLSign,\n\t\tBasicConstraintsValid: true,\n\t\tSignatureAlgorithm:    x509.SHA256WithRSA,\n\t}\n\treturn createCert(t, caTemplate, issuerCert, issuerPrivateKey, port, crlEndpoints)\n}\n\nfunc createLeafCert(t *testing.T, issuerCert *x509.Certificate, issuerPrivateKey *rsa.PrivateKey, port int, params ...any) (*rsa.PrivateKey, *x509.Certificate) {\n\tnotAfter := time.Now().AddDate(1, 0, 0)\n\tvar crlEndpoints []crlEndpointType\n\tfor _, param := range params {\n\t\tswitch v := param.(type) {\n\t\tcase notAfterType:\n\t\t\tnotAfter = time.Time(v)\n\t\tcase crlEndpointType:\n\t\t\tcrlEndpoints = append(crlEndpoints, v)\n\t\t}\n\t}\n\tserialNumber++\n\tcertTemplate := &x509.Certificate{\n\t\tSerialNumber: big.NewInt(serialNumber),\n\t\tSubject: pkix.Name{\n\t\t\tOrganization:       []string{\"Snowflake\"},\n\t\t\tOrganizationalUnit: []string{\"Drivers\"},\n\t\t\tLocality:           []string{\"Warsaw\"},\n\t\t\tCommonName:         \"localhost\",\n\t\t},\n\t\tNotBefore:          time.Now(),\n\t\tNotAfter:           notAfter,\n\t\tIsCA:               false,\n\t\tSignatureAlgorithm: x509.SHA256WithRSA,\n\t}\n\treturn createCert(t, certTemplate, issuerCert, 
issuerPrivateKey, port, crlEndpoints)\n}\n\nfunc createCert(t *testing.T, template, issuerCert *x509.Certificate, issuerPrivateKey *rsa.PrivateKey, port int, crlEndpoints []crlEndpointType) (*rsa.PrivateKey, *x509.Certificate) {\n\tvar distributionPoints []string\n\tfor _, crlEndpoint := range crlEndpoints {\n\t\tdistributionPoints = append(distributionPoints, fmt.Sprintf(\"http://localhost:%v%v\", port, crlEndpoint))\n\t\ttemplate.CRLDistributionPoints = distributionPoints\n\t}\n\tprivateKey, err := rsa.GenerateKey(rand.Reader, 2048)\n\tassertNilF(t, err)\n\ttemplate.SubjectKeyId = calculateKeyID(t, &privateKey.PublicKey)\n\tsignerPrivateKey := cmp.Or(issuerPrivateKey, privateKey)\n\tissuerCertOrSelfSigned := cmp.Or(issuerCert, template)\n\tcertBytes, err := x509.CreateCertificate(rand.Reader, template, issuerCertOrSelfSigned, &privateKey.PublicKey, signerPrivateKey)\n\tassertNilF(t, err)\n\tcert, err := x509.ParseCertificate(certBytes)\n\tassertNilF(t, err)\n\treturn privateKey, cert\n}\n\nfunc calculateKeyID(t *testing.T, pubKey any) []byte {\n\tpubBytes, err := x509.MarshalPKIXPublicKey(pubKey)\n\tassertNilF(t, err)\n\thash := sha256.Sum256(pubBytes)\n\treturn hash[:]\n}\n\nfunc createCrl(t *testing.T, issuerCert *x509.Certificate, issuerPrivateKey *rsa.PrivateKey, args ...any) *x509.RevocationList {\n\tvar revokedCertEntries []x509.RevocationListEntry\n\tvar extensions []pkix.Extension\n\tthisUpdate := time.Now().Add(-time.Hour)\n\tnextUpdate := time.Now().Add(time.Hour)\n\tfor _, arg := range args {\n\t\tswitch v := arg.(type) {\n\t\tcase revokedCert:\n\t\t\trevokedCertEntries = append(revokedCertEntries, x509.RevocationListEntry{\n\t\t\t\tSerialNumber:   v.SerialNumber,\n\t\t\t\tRevocationTime: time.Now().Add(-time.Hour * 24),\n\t\t\t})\n\t\tcase *pkix.Extension:\n\t\t\textensions = append(extensions, *v)\n\t\tcase thisUpdateType:\n\t\t\tthisUpdate = time.Time(v)\n\t\tcase nextUpdateType:\n\t\t\tnextUpdate = 
time.Time(v)\n\t\tdefault:\n\t\t\tt.Fatalf(\"unexpected argument type: %T\", arg)\n\t\t}\n\t}\n\tcrlTemplate := &x509.RevocationList{\n\t\tNumber:                    big.NewInt(1),\n\t\tRevokedCertificateEntries: revokedCertEntries,\n\t\tExtraExtensions:           extensions,\n\t\tThisUpdate:                thisUpdate,\n\t\tNextUpdate:                nextUpdate,\n\t}\n\tcrlBytes, err := x509.CreateRevocationList(rand.Reader, crlTemplate, issuerCert, issuerPrivateKey)\n\tassertNilF(t, err)\n\tcrl, err := x509.ParseRevocationList(crlBytes)\n\tassertNilF(t, err)\n\treturn crl\n}\n\ntype crlEndpointDef struct {\n\tendpoint string\n\tcrl      *x509.RevocationList\n}\n\nfunc newCrlEndpointDef(endpoint string, crl *x509.RevocationList) *crlEndpointDef {\n\treturn &crlEndpointDef{\n\t\tendpoint: endpoint,\n\t\tcrl:      crl,\n\t}\n}\n\nfunc createCrlServer(t *testing.T) (*http.Server, int) {\n\tlistener, err := net.Listen(\"tcp\", \":0\")\n\tassertNilF(t, err)\n\tport := listener.Addr().(*net.TCPAddr).Port\n\n\tserver := &http.Server{\n\t\tAddr:    fmt.Sprintf(\":%v\", port),\n\t\tHandler: http.NewServeMux(),\n\t}\n\tgo func() {\n\t\terr := server.Serve(listener)\n\t\tassertErrIsF(t, err, http.ErrServerClosed)\n\t}()\n\treturn server, port\n}\n\nfunc registerCrlEndpoints(t *testing.T, server *http.Server, endpointDefs ...*crlEndpointDef) {\n\tfor _, endpointDef := range endpointDefs {\n\t\tserver.Handler.(*http.ServeMux).HandleFunc(endpointDef.endpoint, func(responseWriter http.ResponseWriter, request *http.Request) {\n\t\t\tresponseWriter.WriteHeader(http.StatusOK)\n\t\t\t_, err := responseWriter.Write(endpointDef.crl.Raw)\n\t\t\tassertNilF(t, err)\n\t\t})\n\t}\n}\n\nfunc fullCrlURL(port int, endpoint string) string {\n\treturn fmt.Sprintf(\"http://localhost:%v%v\", port, endpoint)\n}\n\nfunc closeServer(t *testing.T, server *http.Server) {\n\terr := server.Shutdown(context.Background())\n\tassertNilF(t, err)\n}\n\nfunc TestCrlE2E(t *testing.T) {\n\tt.Run(\"Successful 
flow\", func(t *testing.T) {\n\t\tskipOnJenkins(t, \"Jenkins tests use HTTP connection to SF, so CRL is not used\")\n\t\tcleanupCrlCache(t)\n\t\tdefer cleanupCrlCache(t) // to reset cache cleaner after test\n\t\tcrlCacheCleanerTickRate = 1 * time.Second\n\t\tcacheValidityTimeOverride := overrideEnv(snowflakeCrlCacheValidityTimeEnv, \"15s\")\n\t\tdefer cacheValidityTimeOverride.rollback()\n\t\tcfg, err := ParseDSN(dsn)\n\t\tassertNilF(t, err, \"Failed to parse DSN\")\n\n\t\t// Add CRL-specific test parameters\n\t\tcfg.CertRevocationCheckMode = CertRevocationCheckEnabled\n\t\tcfg.CrlAllowCertificatesWithoutCrlURL = ConfigBoolTrue\n\t\tcfg.DisableOCSPChecks = true\n\t\tcfg.CrlOnDiskCacheDisabled = true\n\t\tdb := sql.OpenDB(NewConnector(SnowflakeDriver{}, *cfg))\n\t\tdefer db.Close()\n\t\trows, err := db.Query(\"SELECT 1\")\n\t\tassertNilF(t, err, \"CRL E2E test failed\")\n\t\tdefer rows.Close()\n\t\tcrlInMemoryCacheMutex.Lock()\n\t\tmemoryEntriesAfterSnowflakeConnection := len(crlInMemoryCache)\n\t\tcrlInMemoryCacheMutex.Unlock()\n\t\tlogger.Debugf(\"memory entries after Snowflake connection: %v\", memoryEntriesAfterSnowflakeConnection)\n\t\tassertTrueE(t, memoryEntriesAfterSnowflakeConnection > 0)\n\n\t\t// additional entries for connecting to cloud providers and checking their certs\n\t\tcwd, err := os.Getwd()\n\t\tassertNilF(t, err, \"Failed to get current working directory\")\n\t\t_, err = db.Exec(fmt.Sprintf(\"PUT file://%v @~/%v\", filepath.Join(cwd, \"test_data\", \"put_get_1.txt\"), \"put_get_1.txt\"))\n\t\tassertNilF(t, err, \"Failed to execute PUT file\")\n\t\tcrlInMemoryCacheMutex.Lock()\n\t\tmemoryEntriesAfterCSPConnection := len(crlInMemoryCache)\n\t\tcrlInMemoryCacheMutex.Unlock()\n\t\tlogger.Debugf(\"memory entries after CSP connection: %v\", memoryEntriesAfterCSPConnection)\n\t\tassertTrueE(t, memoryEntriesAfterCSPConnection > memoryEntriesAfterSnowflakeConnection)\n\n\t\ttime.Sleep(17 * time.Second) // wait for the cache cleaner to 
run\n\t\tcrlInMemoryCacheMutex.Lock()\n\t\tassertEqualE(t, len(crlInMemoryCache), 0)\n\t\tcrlInMemoryCacheMutex.Unlock()\n\t})\n\n\tt.Run(\"OCSP and CRL cannot be enabled at the same time\", func(t *testing.T) {\n\t\tcrlInMemoryCache = make(map[string]*crlInMemoryCacheValueType) // cleanup to ensure our test will fill it\n\t\tcfg := &Config{\n\t\t\tUser:                    username,\n\t\t\tPassword:                pass,\n\t\t\tAccount:                 account,\n\t\t\tDatabase:                dbname,\n\t\t\tSchema:                  schemaname,\n\t\t\tCertRevocationCheckMode: CertRevocationCheckEnabled,\n\t\t}\n\t\t_, err := buildSnowflakeConn(context.Background(), *cfg)\n\t\tassertStringContainsE(t, err.Error(), \"both OCSP and CRL cannot be enabled at the same time\")\n\t\tassertEqualE(t, len(crlInMemoryCache), 0)\n\t})\n}\n"
  },
  {
    "path": "ctx_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestCtxVal(t *testing.T) {\n\ttype favContextKey string\n\n\tf := func(ctx context.Context, k favContextKey) error {\n\t\tif v := ctx.Value(k); v != nil {\n\t\t\treturn nil\n\t\t}\n\t\treturn fmt.Errorf(\"key not found: %v\", k)\n\t}\n\n\tk := favContextKey(\"language\")\n\tctx := context.WithValue(context.Background(), k, \"Go\")\n\n\tk2 := favContextKey(\"data\")\n\tctx2 := context.WithValue(ctx, k2, \"Snowflake\")\n\tif err := f(ctx, k); err != nil {\n\t\tt.Error(err)\n\t}\n\tif err := f(ctx, \"color\"); err == nil {\n\t\tt.Error(\"should not have been found in context\")\n\t}\n\n\tif err := f(ctx2, k); err != nil {\n\t\tt.Error(err)\n\t}\n\tif err := f(ctx2, k2); err != nil {\n\t\tt.Error(err)\n\t}\n}\n\nfunc TestLogCtx(t *testing.T) {\n\tlog := CreateDefaultLogger()\n\tsessCtx := context.WithValue(context.Background(), SFSessionIDKey, \"sessID1\")\n\tctx := context.WithValue(sessCtx, SFSessionUserKey, \"admin\")\n\n\tvar b bytes.Buffer\n\tlog.SetOutput(&b)\n\tassertNilF(t, log.SetLogLevel(\"trace\"), \"could not set log level\")\n\tl := log.WithContext(ctx)\n\tl.Info(\"Hello 1\")\n\tl.Warn(\"Hello 2\")\n\ts := b.String()\n\tif len(s) <= 0 {\n\t\tt.Error(\"nothing written\")\n\t}\n\tif !strings.Contains(s, \"LOG_SESSION_ID=sessID1\") {\n\t\tt.Error(\"context ctx1 keys/values not logged\")\n\t}\n\tif !strings.Contains(s, \"LOG_USER=admin\") {\n\t\tt.Error(\"context ctx2 keys/values not logged\")\n\t}\n}\n"
  },
  {
    "path": "datatype.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"fmt\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/types\"\n)\n\nvar (\n\t// DataTypeFixed is a FIXED datatype.\n\tDataTypeFixed = []byte{types.FixedType.Byte()}\n\t// DataTypeReal is a REAL datatype.\n\tDataTypeReal = []byte{types.RealType.Byte()}\n\t// DataTypeDecfloat is a DECFLOAT datatype.\n\tDataTypeDecfloat = []byte{types.DecfloatType.Byte()}\n\t// DataTypeText is a TEXT datatype.\n\tDataTypeText = []byte{types.TextType.Byte()}\n\t// DataTypeDate is a Date datatype.\n\tDataTypeDate = []byte{types.DateType.Byte()}\n\t// DataTypeVariant is a TEXT datatype.\n\tDataTypeVariant = []byte{types.VariantType.Byte()}\n\t// DataTypeTimestampLtz is a TIMESTAMP_LTZ datatype.\n\tDataTypeTimestampLtz = []byte{types.TimestampLtzType.Byte()}\n\t// DataTypeTimestampNtz is a TIMESTAMP_NTZ datatype.\n\tDataTypeTimestampNtz = []byte{types.TimestampNtzType.Byte()}\n\t// DataTypeTimestampTz is a TIMESTAMP_TZ datatype.\n\tDataTypeTimestampTz = []byte{types.TimestampTzType.Byte()}\n\t// DataTypeObject is a OBJECT datatype.\n\tDataTypeObject = []byte{types.ObjectType.Byte()}\n\t// DataTypeArray is a ARRAY datatype.\n\tDataTypeArray = []byte{types.ArrayType.Byte()}\n\t// DataTypeBinary is a BINARY datatype.\n\tDataTypeBinary = []byte{types.BinaryType.Byte()}\n\t// DataTypeTime is a TIME datatype.\n\tDataTypeTime = []byte{types.TimeType.Byte()}\n\t// DataTypeBoolean is a BOOLEAN datatype.\n\tDataTypeBoolean = []byte{types.BooleanType.Byte()}\n\t// DataTypeNilObject represents a nil structured object.\n\tDataTypeNilObject = []byte{types.NilObjectType.Byte()}\n\t// DataTypeNilArray represents a nil structured array.\n\tDataTypeNilArray = []byte{types.NilArrayType.Byte()}\n\t// DataTypeNilMap represents a nil structured map.\n\tDataTypeNilMap = []byte{types.NilMapType.Byte()}\n)\n\n// dataTypeMode returns the 
subsequent data type in a string representation.\nfunc dataTypeMode(v driver.Value) (tsmode types.SnowflakeType, err error) {\n\tif bd, ok := v.([]byte); ok {\n\t\tswitch {\n\t\tcase bytes.Equal(bd, DataTypeDecfloat):\n\t\t\ttsmode = types.DecfloatType\n\t\tcase bytes.Equal(bd, DataTypeDate):\n\t\t\ttsmode = types.DateType\n\t\tcase bytes.Equal(bd, DataTypeTime):\n\t\t\ttsmode = types.TimeType\n\t\tcase bytes.Equal(bd, DataTypeTimestampLtz):\n\t\t\ttsmode = types.TimestampLtzType\n\t\tcase bytes.Equal(bd, DataTypeTimestampNtz):\n\t\t\ttsmode = types.TimestampNtzType\n\t\tcase bytes.Equal(bd, DataTypeTimestampTz):\n\t\t\ttsmode = types.TimestampTzType\n\t\tcase bytes.Equal(bd, DataTypeBinary):\n\t\t\ttsmode = types.BinaryType\n\t\tcase bytes.Equal(bd, DataTypeObject):\n\t\t\ttsmode = types.ObjectType\n\t\tcase bytes.Equal(bd, DataTypeArray):\n\t\t\ttsmode = types.ArrayType\n\t\tcase bytes.Equal(bd, DataTypeVariant):\n\t\t\ttsmode = types.VariantType\n\t\tcase bytes.Equal(bd, DataTypeNilObject):\n\t\t\ttsmode = types.NilObjectType\n\t\tcase bytes.Equal(bd, DataTypeNilArray):\n\t\t\ttsmode = types.NilArrayType\n\t\tcase bytes.Equal(bd, DataTypeNilMap):\n\t\t\ttsmode = types.NilMapType\n\t\tdefault:\n\t\t\treturn types.NullType, fmt.Errorf(errors.ErrMsgInvalidByteArray, v)\n\t\t}\n\t} else {\n\t\treturn types.NullType, fmt.Errorf(errors.ErrMsgInvalidByteArray, v)\n\t}\n\treturn tsmode, nil\n}\n\n// SnowflakeParameter includes the columns output from SHOW PARAMETER command.\ntype SnowflakeParameter struct {\n\tKey                       string\n\tValue                     string\n\tDefault                   string\n\tLevel                     string\n\tDescription               string\n\tSetByUser                 string\n\tSetInJob                  string\n\tSetOn                     string\n\tSetByThreadID             string\n\tSetByThreadName           string\n\tSetByClass                string\n\tParameterComment          string\n\tType                      
string\n\tIsExpired                 string\n\tExpiresAt                 string\n\tSetByControllingParameter string\n\tActivateVersion           string\n\tPartialRollout            string\n\tUnknown                   string // Reserve for added parameter\n}\n\nfunc populateSnowflakeParameter(colname string, p *SnowflakeParameter) any {\n\tswitch colname {\n\tcase \"key\":\n\t\treturn &p.Key\n\tcase \"value\":\n\t\treturn &p.Value\n\tcase \"default\":\n\t\treturn &p.Default\n\tcase \"level\":\n\t\treturn &p.Level\n\tcase \"description\":\n\t\treturn &p.Description\n\tcase \"set_by_user\":\n\t\treturn &p.SetByUser\n\tcase \"set_in_job\":\n\t\treturn &p.SetInJob\n\tcase \"set_on\":\n\t\treturn &p.SetOn\n\tcase \"set_by_thread_id\":\n\t\treturn &p.SetByThreadID\n\tcase \"set_by_thread_name\":\n\t\treturn &p.SetByThreadName\n\tcase \"set_by_class\":\n\t\treturn &p.SetByClass\n\tcase \"parameter_comment\":\n\t\treturn &p.ParameterComment\n\tcase \"type\":\n\t\treturn &p.Type\n\tcase \"is_expired\":\n\t\treturn &p.IsExpired\n\tcase \"expires_at\":\n\t\treturn &p.ExpiresAt\n\tcase \"set_by_controlling_parameter\":\n\t\treturn &p.SetByControllingParameter\n\tcase \"activate_version\":\n\t\treturn &p.ActivateVersion\n\tcase \"partial_rollout\":\n\t\treturn &p.PartialRollout\n\tdefault:\n\t\tlogger.Debugf(\"unknown type: %v\", colname)\n\t\treturn &p.Unknown\n\t}\n}\n\n// ScanSnowflakeParameter binds SnowflakeParameter variable with an array of column buffer.\nfunc ScanSnowflakeParameter(rows *sql.Rows) (*SnowflakeParameter, error) {\n\tvar err error\n\tvar columns []string\n\tcolumns, err = rows.Columns()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tcolNum := len(columns)\n\tp := SnowflakeParameter{}\n\tcols := make([]any, colNum)\n\tfor i := range colNum {\n\t\tcols[i] = populateSnowflakeParameter(columns[i], &p)\n\t}\n\terr = rows.Scan(cols...)\n\treturn &p, err\n}\n"
  },
  {
    "path": "datatype_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"database/sql/driver\"\n\t\"fmt\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/types\"\n\t\"testing\"\n)\n\nfunc TestDataTypeMode(t *testing.T) {\n\tvar testcases = []struct {\n\t\ttp    driver.Value\n\t\ttmode types.SnowflakeType\n\t\terr   error\n\t}{\n\t\t{tp: DataTypeTimestampLtz, tmode: types.TimestampLtzType, err: nil},\n\t\t{tp: DataTypeTimestampNtz, tmode: types.TimestampNtzType, err: nil},\n\t\t{tp: DataTypeTimestampTz, tmode: types.TimestampTzType, err: nil},\n\t\t{tp: DataTypeDate, tmode: types.DateType, err: nil},\n\t\t{tp: DataTypeTime, tmode: types.TimeType, err: nil},\n\t\t{tp: DataTypeBinary, tmode: types.BinaryType, err: nil},\n\t\t{tp: DataTypeObject, tmode: types.ObjectType, err: nil},\n\t\t{tp: DataTypeArray, tmode: types.ArrayType, err: nil},\n\t\t{tp: DataTypeVariant, tmode: types.VariantType, err: nil},\n\t\t{tp: DataTypeFixed, tmode: types.FixedType,\n\t\t\terr: fmt.Errorf(errors.ErrMsgInvalidByteArray, DataTypeFixed)},\n\t\t{tp: DataTypeReal, tmode: types.RealType,\n\t\t\terr: fmt.Errorf(errors.ErrMsgInvalidByteArray, DataTypeFixed)},\n\t\t{tp: 123, tmode: types.NullType,\n\t\t\terr: fmt.Errorf(errors.ErrMsgInvalidByteArray, 123)},\n\t}\n\tfor _, ts := range testcases {\n\t\tt.Run(fmt.Sprintf(\"%v_%v\", ts.tp, ts.tmode), func(t *testing.T) {\n\t\t\ttmode, err := dataTypeMode(ts.tp)\n\t\t\tif ts.err == nil {\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Errorf(\"failed to get datatype mode: %v\", err)\n\t\t\t\t}\n\t\t\t\tif tmode != ts.tmode {\n\t\t\t\t\tt.Errorf(\"wrong data type: %v\", tmode)\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif err == nil {\n\t\t\t\t\tt.Errorf(\"should raise an error: %v\", ts.err)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestPopulateSnowflakeParameter(t *testing.T) {\n\tcolumns := []string{\"key\", \"value\", \"default\", \"level\", \"description\", \"set_by_user\", \"set_in_job\", \"set_on\", 
\"set_by_thread_id\", \"set_by_thread_name\", \"set_by_class\", \"parameter_comment\", \"type\", \"is_expired\", \"expires_at\", \"set_by_controlling_parameter\", \"activate_version\", \"partial_rollout\"}\n\tp := SnowflakeParameter{}\n\tcols := make([]any, len(columns))\n\tfor i := range columns {\n\t\tcols[i] = populateSnowflakeParameter(columns[i], &p)\n\t}\n\tfor i := range cols {\n\t\tif cols[i] == nil {\n\t\t\tt.Fatal(\"failed to populate parameter\")\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "datetime.go",
    "content": "package gosnowflake\n\nimport (\n\t\"errors\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n)\n\nvar incorrectSecondsFractionRegex = regexp.MustCompile(`[^.,]FF`)\nvar correctSecondsFractionRegex = regexp.MustCompile(`FF(?P<fraction>\\d?)`)\n\ntype formatReplacement struct {\n\tinput  string\n\toutput string\n}\n\nvar formatReplacements = []formatReplacement{\n\t{input: \"YYYY\", output: \"2006\"},\n\t{input: \"YY\", output: \"06\"},\n\t{input: \"MMMM\", output: \"January\"},\n\t{input: \"MM\", output: \"01\"},\n\t{input: \"MON\", output: \"Jan\"},\n\t{input: \"DD\", output: \"02\"},\n\t{input: \"DY\", output: \"Mon\"},\n\t{input: \"HH24\", output: \"15\"},\n\t{input: \"HH12\", output: \"03\"},\n\t{input: \"AM\", output: \"PM\"},\n\t{input: \"MI\", output: \"04\"},\n\t{input: \"SS\", output: \"05\"},\n\t{input: \"TZH\", output: \"Z07\"},\n\t{input: \"TZM\", output: \"00\"},\n}\n\nfunc timeToString(t time.Time, dateTimeType string, sp *syncParams) (string, error) {\n\tsfFormat, err := dateTimeInputFormatByType(dateTimeType, sp)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tgoFormat, err := snowflakeFormatToGoFormat(sfFormat)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn t.Format(goFormat), nil\n}\n\nfunc snowflakeFormatToGoFormat(sfFormat string) (string, error) {\n\tres := sfFormat\n\tfor _, replacement := range formatReplacements {\n\t\tres = strings.ReplaceAll(res, replacement.input, replacement.output)\n\t}\n\n\tif incorrectSecondsFractionRegex.MatchString(res) {\n\t\treturn \"\", errors.New(\"incorrect second fraction - golang requires fraction to be preceded by comma or decimal point\")\n\t}\n\tfor {\n\t\tsubmatch := correctSecondsFractionRegex.FindStringSubmatch(res)\n\t\tif submatch == nil {\n\t\t\tbreak\n\t\t}\n\t\tfractionNumbers := 9\n\t\tif submatch[1] != \"\" {\n\t\t\tvar err error\n\t\t\tfractionNumbers, err = strconv.Atoi(submatch[1])\n\t\t\tif err != nil {\n\t\t\t\treturn \"\", err\n\t\t\t}\n\t\t}\n\t\tres = 
strings.ReplaceAll(res, submatch[0], strings.Repeat(\"0\", fractionNumbers))\n\t}\n\treturn res, nil\n}\n\nfunc dateTimeOutputFormatByType(dateTimeType string, sp *syncParams) (string, error) {\n\tvar format *string\n\tswitch strings.ToLower(dateTimeType) {\n\tcase \"date\":\n\t\tformat, _ = sp.get(\"date_output_format\")\n\tcase \"time\":\n\t\tformat, _ = sp.get(\"time_output_format\")\n\tcase \"timestamp_ltz\":\n\t\tformat, _ = sp.get(\"timestamp_ltz_output_format\")\n\t\tif format == nil || *format == \"\" {\n\t\t\tformat, _ = sp.get(\"timestamp_output_format\")\n\t\t}\n\tcase \"timestamp_tz\":\n\t\tformat, _ = sp.get(\"timestamp_tz_output_format\")\n\t\tif format == nil || *format == \"\" {\n\t\t\tformat, _ = sp.get(\"timestamp_output_format\")\n\t\t}\n\tcase \"timestamp_ntz\":\n\t\tformat, _ = sp.get(\"timestamp_ntz_output_format\")\n\t\tif format == nil || *format == \"\" {\n\t\t\tformat, _ = sp.get(\"timestamp_output_format\")\n\t\t}\n\t}\n\tif format != nil {\n\t\treturn *format, nil\n\t}\n\treturn \"\", errors.New(\"unknown output format parameter for \" + dateTimeType)\n}\n\nfunc dateTimeInputFormatByType(dateTimeType string, sp *syncParams) (string, error) {\n\tvar format *string\n\tvar ok bool\n\tswitch strings.ToLower(dateTimeType) {\n\tcase \"date\":\n\t\tif format, ok = sp.get(\"date_input_format\"); !ok || format == nil || *format == \"\" {\n\t\t\tformat, _ = sp.get(\"date_output_format\")\n\t\t}\n\tcase \"time\":\n\t\tif format, ok = sp.get(\"time_input_format\"); !ok || format == nil || *format == \"\" {\n\t\t\tformat, _ = sp.get(\"time_output_format\")\n\t\t}\n\tcase \"timestamp_ltz\":\n\t\tif format, ok = sp.get(\"timestamp_ltz_input_format\"); !ok || format == nil || *format == \"\" {\n\t\t\tif format, ok = sp.get(\"timestamp_input_format\"); !ok || format == nil || *format == \"\" {\n\t\t\t\tif format, ok = sp.get(\"timestamp_ltz_output_format\"); !ok || format == nil || *format == \"\" {\n\t\t\t\t\tformat, _ = 
sp.get(\"timestamp_output_format\")\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\tcase \"timestamp_tz\":\n\t\tif format, ok = sp.get(\"timestamp_tz_input_format\"); !ok || format == nil || *format == \"\" {\n\t\t\tif format, ok = sp.get(\"timestamp_input_format\"); !ok || format == nil || *format == \"\" {\n\t\t\t\tif format, ok = sp.get(\"timestamp_tz_output_format\"); !ok || format == nil || *format == \"\" {\n\t\t\t\t\tformat, _ = sp.get(\"timestamp_output_format\")\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\tcase \"timestamp_ntz\":\n\t\tif format, ok = sp.get(\"timestamp_ntz_input_format\"); !ok || format == nil || *format == \"\" {\n\t\t\tif format, ok = sp.get(\"timestamp_input_format\"); !ok || format == nil || *format == \"\" {\n\t\t\t\tif format, ok = sp.get(\"timestamp_ntz_output_format\"); !ok || format == nil || *format == \"\" {\n\t\t\t\t\tformat, _ = sp.get(\"timestamp_output_format\")\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tif format != nil {\n\t\treturn *format, nil\n\t}\n\treturn \"\", errors.New(\"not known input format parameter for \" + dateTimeType)\n}\n"
  },
  {
    "path": "datetime_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"testing\"\n\t\"time\"\n)\n\nfunc TestSnowflakeFormatToGoFormatUnitTest(t *testing.T) {\n\tlocation, err := time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\tsomeTime1 := time.Date(2024, time.January, 19, 3, 42, 33, 123456789, location)\n\tsomeTime2 := time.Date(1973, time.December, 5, 13, 5, 3, 987000000, location)\n\ttestcases := []struct {\n\t\tinputFormat string\n\t\toutput      string\n\t\tformatted1  string\n\t\tformatted2  string\n\t}{\n\t\t{\n\t\t\tinputFormat: \"YYYY-MM-DD HH24:MI:SS.FF TZH:TZM\",\n\t\t\toutput:      \"2006-01-02 15:04:05.000000000 Z07:00\",\n\t\t\tformatted1:  \"2024-01-19 03:42:33.123456789 +01:00\",\n\t\t\tformatted2:  \"1973-12-05 13:05:03.987000000 +01:00\",\n\t\t},\n\t\t{\n\t\t\tinputFormat: \"YY-MM-DD HH12:MI:SS,FF5AM TZHTZM\",\n\t\t\toutput:      \"06-01-02 03:04:05,00000PM Z0700\",\n\t\t\tformatted1:  \"24-01-19 03:42:33,12345AM +0100\",\n\t\t\tformatted2:  \"73-12-05 01:05:03,98700PM +0100\",\n\t\t},\n\t\t{\n\t\t\tinputFormat: \"MMMM DD, YYYY DY HH24:MI:SS.FF9 TZH:TZM\",\n\t\t\toutput:      \"January 02, 2006 Mon 15:04:05.000000000 Z07:00\",\n\t\t\tformatted1:  \"January 19, 2024 Fri 03:42:33.123456789 +01:00\",\n\t\t\tformatted2:  \"December 05, 1973 Wed 13:05:03.987000000 +01:00\",\n\t\t},\n\t\t{\n\t\t\tinputFormat: \"MON DD, YYYY HH12:MI:SS,FF9PM TZH:TZM\",\n\t\t\toutput:      \"Jan 02, 2006 03:04:05,000000000PM Z07:00\",\n\t\t\tformatted1:  \"Jan 19, 2024 03:42:33,123456789AM +01:00\",\n\t\t\tformatted2:  \"Dec 05, 1973 01:05:03,987000000PM +01:00\",\n\t\t},\n\t\t{\n\t\t\tinputFormat: \"HH24:MI:SS.FF3 HH12:MI:SS,FF9\",\n\t\t\toutput:      \"15:04:05.000 03:04:05,000000000\",\n\t\t\tformatted1:  \"03:42:33.123 03:42:33,123456789\",\n\t\t\tformatted2:  \"13:05:03.987 01:05:03,987000000\",\n\t\t},\n\t}\n\tfor _, tc := range testcases {\n\t\tt.Run(tc.inputFormat, func(t *testing.T) {\n\t\t\tgoFormat, err := 
snowflakeFormatToGoFormat(tc.inputFormat)\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, tc.output, goFormat)\n\t\t\tassertEqualE(t, tc.formatted1, someTime1.Format(goFormat))\n\t\t\tassertEqualE(t, tc.formatted2, someTime2.Format(goFormat))\n\t\t})\n\t}\n}\n\nfunc TestIncorrectSecondsFraction(t *testing.T) {\n\t_, err := snowflakeFormatToGoFormat(\"HH24 MI SS FF\")\n\tassertHasPrefixE(t, err.Error(), \"incorrect second fraction\")\n}\n\nfunc TestSnowflakeFormatToGoFormatIntegrationTest(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"ALTER SESSION SET TIME_OUTPUT_FORMAT = 'HH24:MI:SS.FF'\")\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMESTAMP_OUTPUT_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF3 TZHTZM'\")\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMESTAMP_NTZ_OUTPUT_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF3'\")\n\t\tfor _, forceFormat := range []string{forceJSON, forceARROW} {\n\t\t\tdbt.mustExec(forceFormat)\n\n\t\t\tfor _, tc := range []struct {\n\t\t\t\tsfType          string\n\t\t\t\tformatParamName string\n\t\t\t\tsfFunction      string\n\t\t\t}{\n\t\t\t\t{\n\t\t\t\t\tsfType:          \"TIMESTAMPLTZ\",\n\t\t\t\t\tformatParamName: \"TIMESTAMP_OUTPUT_FORMAT\",\n\t\t\t\t\tsfFunction:      \"CURRENT_TIMESTAMP\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tsfType:          \"TIMESTAMPTZ\",\n\t\t\t\t\tformatParamName: \"TIMESTAMP_OUTPUT_FORMAT\",\n\t\t\t\t\tsfFunction:      \"CURRENT_TIMESTAMP\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tsfType:          \"TIMESTAMPNTZ\",\n\t\t\t\t\tformatParamName: \"TIMESTAMP_NTZ_OUTPUT_FORMAT\",\n\t\t\t\t\tsfFunction:      \"CURRENT_TIMESTAMP\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tsfType:          \"DATE\",\n\t\t\t\t\tformatParamName: \"DATE_OUTPUT_FORMAT\",\n\t\t\t\t\tsfFunction:      \"CURRENT_DATE\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tsfType:          \"TIME\",\n\t\t\t\t\tformatParamName: \"TIME_OUTPUT_FORMAT\",\n\t\t\t\t\tsfFunction:      \"CURRENT_TIME\",\n\t\t\t\t},\n\t\t\t} {\n\t\t\t\tt.Run(tc.sfType+\"___\"+forceFormat, func(t *testing.T) 
{\n\t\t\t\t\tparams := dbt.mustQuery(\"show parameters like '\" + tc.formatParamName + \"'\")\n\t\t\t\t\tdefer params.Close()\n\t\t\t\t\tparams.Next()\n\t\t\t\t\tdefaultTimestampOutputFormat, err := ScanSnowflakeParameter(params.rows)\n\t\t\t\t\tassertNilF(t, err)\n\n\t\t\t\t\trows := dbt.mustQuery(\"SELECT \" + tc.sfFunction + \"()::\" + tc.sfType + \", \" + tc.sfFunction + \"()::\" + tc.sfType + \"::varchar\")\n\t\t\t\t\tdefer rows.Close()\n\t\t\t\t\tvar t1 time.Time\n\t\t\t\t\tvar t2 string\n\t\t\t\t\trows.Next()\n\t\t\t\t\terr = rows.Scan(&t1, &t2)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tgoFormat, err := snowflakeFormatToGoFormat(defaultTimestampOutputFormat.Value)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tassertEqualE(t, t1.Format(goFormat), t2)\n\t\t\t\t\tparseResult, err := time.Parse(goFormat, t2)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tif tc.sfType != \"TIME\" {\n\t\t\t\t\t\tassertEqualE(t, t1.UTC(), parseResult.UTC())\n\t\t\t\t\t} else {\n\t\t\t\t\t\tassertEqualE(t, t1.Hour(), parseResult.Hour())\n\t\t\t\t\t\tassertEqualE(t, t1.Minute(), parseResult.Minute())\n\t\t\t\t\t\tassertEqualE(t, t1.Second(), parseResult.Second())\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t}\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "doc.go",
    "content": "/*\nPackage gosnowflake is a pure Go Snowflake driver for the database/sql package.\n\nClients can use the database/sql package directly. For example:\n\n\timport (\n\t\t\"database/sql\"\n\n\t\t_ \"github.com/snowflakedb/gosnowflake/v2\"\n\n\t\t\"log\"\n\t)\n\n\tfunc main() {\n\t\tdb, err := sql.Open(\"snowflake\", \"user:password@my_organization-my_account/mydb\")\n\t\tif err != nil {\n\t\t\tlog.Fatal(err)\n\t\t}\n\t\tdefer db.Close()\n\t\t...\n\t}\n\n# Connection String\n\nUse the Open() function to create a database handle with connection parameters:\n\n\tdb, err := sql.Open(\"snowflake\", \"<connection string>\")\n\nThe Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats):\n\n  - username[:password]@<account_identifier>/dbname/schemaname[?param1=value&...&paramN=valueN]\n  - username[:password]@<account_identifier>/dbname[?param1=value&...&paramN=valueN]\n  - username[:password]@hostname:port/dbname/schemaname?account=<account_identifier>[&param1=value&...&paramN=valueN]\n\nwhere all parameters must be escaped or use Config and DSN to construct a DSN string.\n\nFor information about account identifiers, see the Snowflake documentation\n(https://docs.snowflake.com/en/user-guide/admin-account-identifier.html).\n\nThe following example opens a database handle with the Snowflake account\nnamed \"my_account\" under the organization named \"my_organization\",\nwhere the username is \"jsmith\", password is \"mypassword\", database is \"mydb\",\nschema is \"testschema\", and warehouse is \"mywh\":\n\n\tdb, err := sql.Open(\"snowflake\", \"jsmith:mypassword@my_organization-my_account/mydb/testschema?warehouse=mywh\")\n\n# Connection Parameters\n\nThe connection string (DSN) can contain both connection parameters (described below) and session parameters\n(https://docs.snowflake.com/en/sql-reference/parameters.html).\n\nThe following connection parameters are supported:\n\n  - account <string>: Specifies your 
Snowflake account, where \"<string>\" is the account\n    identifier assigned to your account by Snowflake.\n    For information about account identifiers, see the Snowflake documentation\n    (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html).\n\n    If you are using a global URL, then append the connection group and \".global\"\n    (e.g. \"<account_identifier>-<connection_group>.global\"). The account identifier and the\n    connection group are separated by a dash (\"-\"), as shown above.\n\n    This parameter is optional if your account identifier is specified after the \"@\" character\n    in the connection string.\n\n  - region <string>: DEPRECATED. You may specify a region, such as\n    \"eu-central-1\", with this parameter. However, since this parameter\n    is deprecated, it is best to specify the region as part of the\n    account parameter. For details, see the description of the account\n    parameter.\n\n  - Important note: for the database and other objects (schema, role, etc.), always adhere to the rules for Snowflake Object Identifiers, especially https://docs.snowflake.com/en/sql-reference/identifiers-syntax#double-quoted-identifiers.\n    As mentioned in the docs, if you have e.g. a database with mIxEDcAsE naming, which you had to create by enclosing it in double quotes, you will similarly need to reference it\n    with double quotes when specifying it in the connection string / DSN. In practice, this means you will need to escape the second pair of double quotes, which are part of the database name and not the string notation.\n\n  - database: Specifies the database to use by default in the client session\n    (can be changed after login).\n\n  - schema: Specifies the database schema to use by default in the client\n    session (can be changed after login).\n\n  - warehouse: Specifies the virtual warehouse to use by default for queries,\n    loading, etc. 
in the client session (can be changed after login).\n\n  - role: Specifies the role to use by default for accessing Snowflake\n    objects in the client session (can be changed after login).\n\n  - passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login.\n\n  - passcodeInPassword: false by default. Set to true if the MFA passcode is embedded\n    in the login password. Appends the MFA passcode to the end of the password.\n\n  - loginTimeout: Specifies the timeout, in seconds, for login. The default\n    is 60 seconds. If no successful HTTP response is received, the login request\n    gives up once the timeout elapses.\n\n  - requestTimeout: Specifies the timeout, in seconds, for a query to complete.\n    0 (zero) specifies that the driver should wait indefinitely. The default is 0 seconds.\n    If no successful HTTP response is received, the query request gives up once the timeout elapses.\n\n  - authenticator: Specifies the authenticator to use for authenticating user credentials.\n    See the \"Authenticator Values\" section below for supported values.\n\n  - singleAuthenticationPrompt: specifies whether only one authentication should be performed at a time for authentication methods that need human interaction (like MFA or OAuth authorization code).\n    By default it is true.\n\n  - application: Identifies your application to Snowflake Support.\n\n  - disableOCSPChecks: false by default. Set to true to bypass the Online\n    Certificate Status Protocol (OCSP) certificate revocation check.\n    The OCSP module caches responses internally. If your application is long-running, you can enable cache clearing by calling StartOCSPCacheClearer and disable it by calling StopOCSPCacheClearer.\n    IMPORTANT: Change the default value for testing or emergency situations only.\n\n  - token: a token that can be used to authenticate. 
Should be used in conjunction with the \"oauth\" authenticator.\n\n  - client_session_keep_alive: Set to true to send a heartbeat in the background, every hour by default or at the interval given by\n    client_session_keep_alive_heartbeat_frequency, if set, to keep the connection alive so that the connection session\n    never expires. Use this option with care, as it keeps the session open for as long as the process is alive.\n\n  - client_session_keep_alive_heartbeat_frequency: Number of seconds between client attempts to update the token for the session.\n    > The default is 3600 seconds.\n    > Minimum value is 900 seconds. A smaller value will be reset to 900 seconds.\n    > Maximum value is 3600 seconds. A larger value will be reset to 3600 seconds.\n    > This parameter is only valid if client_session_keep_alive is set to true.\n\n  - ocspFailOpen: true by default. Set to false to make the OCSP check run in fail-closed mode.\n\n  - certRevocationCheckMode (enabled, advisory, disabled): Specifies the certificate revocation check mode.\n    When enabled, the driver performs a certificate revocation check using CRLs.\n    When advisory, the driver performs a certificate revocation check using CRLs, but fails the connection only if the certificate is revoked.\n    If the status cannot be determined, the connection is established.\n    When disabled, the driver does not perform a certificate revocation check.\n    Keep in mind that the certificate revocation check with CRLs is a heavy task, both for memory and CPU.\n    The default is disabled.\n\n  - crlAllowCertificatesWithoutCrlURL: when set to true, the driver allows the connection to be established\n    even if a certificate does not have a CRL URL.\n    The default is false.\n\n  - SNOWFLAKE_CRL_CACHE_VALIDITY_TIME (environment variable): specifies the validity time of the CRL cache in seconds.\n\n  - crlInMemoryCacheDisabled: set to true to disable in-memory caching of CRLs.\n\n  - crlOnDiskCacheDisabled: set to true to disable on-disk 
caching of CRLs (the on-disk cache may help with cold starts).\n\n  - crlDownloadMaxSize: maximum size (in bytes) of a CRL to download. Default is 20MB.\n\n  - SNOWFLAKE_CRL_ON_DISK_CACHE_DIR (environment variable): set to customize the directory for on-disk caching of CRLs.\n\n  - SNOWFLAKE_CRL_ON_DISK_CACHE_REMOVAL_DELAY (environment variable): set the delay (in seconds) for removing the on-disk cache (for debuggability).\n\n  - crlHTTPClientTimeout: customize the HTTP client timeout for downloading CRLs.\n\n  - validateDefaultParameters: true by default. Set to false to disable existence and privilege checks for the\n    Database, Schema, Warehouse and Role when setting up the connection.\n\n    --> Important note: with the default value of true, the connection fails if you specify a non-existent database, schema, or other object name, because validation fails.\n    This is particularly important when you have a miXedCaSE-named object (e.g. a database) and forgot to double quote it properly.\n    This behaviour is still preferable, as it provides a very clear, fail-fast indication of the configuration error. If you would still like to forego this validation,\n    which ensures that the driver always connects with the proper database, schema, etc. and creates a proper context for them, you can set this parameter to false to allow connections with invalid object identifiers.\n\n    In that case (with the default validation deliberately turned off) the driver cannot guarantee that the actual behaviour inside the session matches the one you expect, i.e. it may not actually use the database you expect, and so on.\n\n  - tracing: Specifies the logging level to be used. Set to error by default.\n    Valid values are off, fatal, error, warn, info, debug, trace.\n\n  - logQueryText: when set to true, the full query text will be logged. Be aware that it may include sensitive information. 
Default value is false.\n\n  - logQueryParameters: when set to true, the query parameters will be logged. Requires logQueryText to be enabled first. Be aware that they may include sensitive information. Default value is false.\n\n  - disableQueryContextCache: disables parsing of the query context returned from the server, as well as resending it to the server.\n    Default value is false.\n\n  - clientConfigFile: specifies the location of the client configuration JSON file.\n    In this file you can configure the Easy Logging feature.\n\n  - disableSamlURLCheck: disables the SAML URL check. Default value is false.\n\nAll other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html).\nFor example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding:\n\n\t...&TIMESTAMP_OUTPUT_FORMAT=MM-DD-YYYY...\n\nA complete connection string looks similar to the following:\n\n\t\tmy_user_name:my_password@ac123456/my_database/my_schema?warehouse=inventory_warehouse&role=my_user_role&DATE_OUTPUT_FORMAT=YYYY-MM-DD\n\t                                                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\t                                                                      connection                     connection           session\n\t                                                                      parameter                      parameter            parameter\n\nSession-level parameters can also be set by using the SQL command \"ALTER SESSION\"\n(https://docs.snowflake.com/en/sql-reference/sql/alter-session.html).\n\nAlternatively, use the OpenWithConfig() function to create a database handle with the specified Config.\n\n# Authenticator values\n\n  - To use the internal Snowflake authenticator, specify snowflake (default).\n\n  - To use programmatic access tokens, specify programmatic_access_token.\n\n  - If you want to cache your MFA logins, specify username_password_mfa. 
You can pass the TOTP in a separate passcode parameter, or append it to the password setting, in which case you need to set passcodeInPassword = true.\n\n  - To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta).\n\n  - To authenticate using your IdP via a browser, specify externalbrowser.\n\n  - To authenticate via OAuth with a token, specify oauth and provide an OAuth Access Token (see the token parameter above).\n\n  - To authenticate via the full OAuth flow, specify oauth_authorization_code or oauth_client_credentials and fill in the relevant parameters (oauthClientId, oauthClientSecret, oauthAuthorizationUrl, oauthTokenRequestUrl, oauthRedirectUri, oauthScope).\n    Specify the URLs if you want to use an external OAuth2 IdP; otherwise Snowflake will be used as the default IdP.\n    If oauthScope is not configured, the role is used (giving the session:role:<roleName> scope).\n    For more information, please refer to the official Snowflake documentation.\n\n  - To authenticate via workload identity, specify workload_identity.\n\n    This option requires the workloadIdentityProvider option to be set (AWS, GCP, AZURE, OIDC).\n\n    When workloadIdentityProvider=AZURE, workloadIdentityEntraResource can optionally be set to customize the Entra resource used to fetch the JWT token.\n\n    When workloadIdentityProvider=GCP or AWS, workloadIdentityImpersonationPath can optionally be set to customize the impersonation path. This is a comma-separated list. For GCP, the last element is the target service account and the rest form the chained delegation. For AWS, this is the list of role ARNs to assume.\n\n    For more details, refer to the usage guide: https://docs.snowflake.com/en/user-guide/workload-identity-federation\n\n# Connection Config\n\nYou can also connect to your warehouse using the connection config. The database/sql package is appropriate when you want driver-specific connection features that aren’t\navailable in a connection string. 
Each driver supports its own set of connection properties, often providing ways to customize the connection request specific to the DBMS.\nFor example:\n\n\tc := &gosnowflake.Config{\n\t\t~your credentials go here~\n\t}\n\tconnector := gosnowflake.NewConnector(gosnowflake.SnowflakeDriver{}, *c)\n\tdb := sql.OpenDB(connector)\n\nWhen Host is a full Snowflake hostname (the host string contains \".snowflakecomputing.\", consistent with DSN-based URLs) and Account is left empty, the driver derives Account from the first DNS label of Host while completing configuration (for example, database/sql.Connector Connect invokes FillMissingConfigParameters). If Host does not contain that substring, you must set Account explicitly (for example, for private-link or custom endpoints).\n\nWhen Account is already non-empty, it is kept as provided. Truncating a dotted account value from DSN query parameters happens inside ParseDSN before FillMissingConfigParameters; that normalization does not apply to every programmatic Config.\n\nIf you are using this method, you don't need to pass a driver name to specify the driver type to which\nyou want to connect. Since the driver name is not needed, you can optionally bypass driver registration\non startup. To do this, set `GOSNOWFLAKE_SKIP_REGISTRATION` in your environment. This is useful if you wish to\nregister multiple versions of the driver.\n\nNote: `GOSNOWFLAKE_SKIP_REGISTRATION` should not be used if sql.Open() is used as the method\nto connect to the server, as sql.Open will require registration so it can map the driver name\nto the driver type, which in this case is \"snowflake\" and SnowflakeDriver{}.\n\nYou can load the connection configuration from a .toml file.\nWith two environment variables, `SNOWFLAKE_HOME` (the `connections.toml` file directory) and `SNOWFLAKE_DEFAULT_CONNECTION_NAME` (the DSN name),\nthe driver will search the config file and load the connection. 
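\n\nAs a sketch, a minimal connections.toml could look like the following (the section name and keys follow the connections.toml convention used by Snowflake tooling; all values here are placeholders):\n\n\t[default]\n\taccount = \"my_organization-my_account\"\n\tuser = \"jsmith\"\n\tpassword = \"mypassword\"\n\twarehouse = \"mywh\"\n\tdatabase = \"mydb\"\n\tschema = \"testschema\"\n\nWith SNOWFLAKE_DEFAULT_CONNECTION_NAME set to \"default\", the driver loads this section when the connection is created.\n\n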
You can find how to use this connection method at ./cmd/tomlfileconnection\nor in the Snowflake docs: https://docs.snowflake.com/en/developer-guide/snowflake-cli-v2/connecting/specify-credentials\n\nIf the connections.toml file is readable by others, a warning will be logged. To disable it, set the environment variable `SF_SKIP_WARNING_FOR_READ_PERMISSIONS_ON_CONFIG_FILE` to true.\n\nIf you wish to specify a custom transporter (e.g. to provide a custom TLS config to be used with your custom truststore), pass it through `NewConnector`. Example:\n\n\ttlsConfig := &tls.Config{\n\t\t// your custom fields here\n\t}\n\n\tconfig := Config{\n\t\tTransporter: &http.Transport{\n\t\t\tTLSClientConfig: tlsConfig,\n\t\t},\n\t}\n\n\tconnector := NewConnector(SnowflakeDriver{}, config)\n\tdb := sql.OpenDB(connector)\n\nAs an alternative, you can use the `RegisterTLSConfig` / `DeregisterTLSConfig` functions as seen in the unit tests: https://github.com/snowflakedb/gosnowflake/blob/v1.16.0/transport_test.go#L127\n\n# Proxy\n\nThe Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting.\n\nNO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy.\n\nNO_PROXY does not support wildcards. 
Each value specified should be one of the following:\n\n  - The end of a hostname (or a complete hostname), for example: \".amazonaws.com\" or \"xy12345.snowflakecomputing.com\".\n\n  - An IP address, for example \"192.196.1.15\".\n\nIf more than one value is specified, values should be separated by commas, for example:\n\n\tno_proxy=localhost,.my_company.com,xy12345.snowflakecomputing.com,192.168.1.15,192.168.1.16\n\nIn addition to environment variables, the Go Snowflake Driver also supports configuring the proxy via connection parameters.\nWhen these parameters are provided in the connection string or DSN, they take precedence and any environment proxy settings (HTTP_PROXY, HTTPS_PROXY, NO_PROXY) will be ignored.\n\n| Parameter       | Description                                                                 | Default |\n|-----------------|-----------------------------------------------------------------------------|---------|\n| `proxyHost`     | Hostname or IP address of the proxy server.                                 |         |\n| `proxyPort`     | Port number of the proxy server.                                            |         |\n| `proxyUser`     | Username for proxy authentication.                                          |         |\n| `proxyPassword` | Password for proxy authentication.                                          |         |\n| `proxyProtocol` | Protocol to use for proxy connection. Valid values: `http`, `https`.        | `http`  |\n| `noProxy`       | Comma-separated list of hosts that should bypass the proxy.                 |         |\n\nFor more details, please refer to the example in ./cmd/proxyconnection.\n\n# Logging\n\nBy default, the driver uses a built-in slog-based logger at ERROR level.\nThe driver automatically masks secrets in all log messages to prevent credential leakage.\n\nUsers can customize logging in two ways:\n\n1. 
Using a custom slog.Handler (if you want to use slog with custom formatting):\n\n\t\timport (\n\t\t\t\"log/slog\"\n\t\t\t\"os\"\n\n\t\t\tsf \"github.com/snowflakedb/gosnowflake/v2\"\n\t\t)\n\n\t\t// Create your custom handler\n\t\tcustomHandler := slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{\n\t\t\tLevel: slog.LevelDebug,\n\t\t})\n\n\t\t// Get the default logger and set your handler\n\t\tlogger := sf.GetLogger()\n\t\tif sl, ok := logger.(sf.SFSlogLogger); ok {\n\t\t\tsl.SetHandler(customHandler)\n\t\t}\n\n2. Using a complete custom logger implementation (if you want full control):\n\n\t// Implement the sf.SFLogger interface\n\ttype MyCustomLogger struct {\n\t\t// your implementation\n\t}\n\n\t// Set your custom logger\n\tcustomLogger := &MyCustomLogger{}\n\tsf.SetLogger(customLogger)\n\nImportant notes:\n\n  - Secret masking is automatically applied to all loggers (both custom and default)\n  - To change the log level: logger.SetLogLevel(\"debug\")\n  - To redirect output: logger.SetOutput(writer)\n  - For examples, see log_client_test.go\n\nIf you want to configure S3 client logging, override the S3LoggingMode variable using this configuration: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/aws#ClientLogMode\nExample:\n\n\timport (\n\t\tsf \"github.com/snowflakedb/gosnowflake/v2\"\n\n\t\t\"github.com/aws/aws-sdk-go-v2/aws\"\n\t)\n\n\t...\n\n\tsf.S3LoggingMode = aws.LogRequest | aws.LogResponseWithBody | aws.LogRetries\n\n# Query tag\n\nA custom query tag can be set in the context. Each query run with this context\nwill include the custom query tag as metadata that will appear in the Query Tag\ncolumn in the Query History log. For example:\n\n\tqueryTag := \"my custom query tag\"\n\tctxWithQueryTag := WithQueryTag(ctx, queryTag)\n\trows, err := db.QueryContext(ctxWithQueryTag, query)\n\n# Query request ID\n\nA specific query request ID can be set in the context and will be passed through\nin place of the default randomized request ID. 
For example:\n\n\trequestID := ParseUUID(\"6ba7b812-9dad-11d1-80b4-00c04fd430c8\")\n\tctxWithID := WithRequestID(ctx, requestID)\n\trows, err := db.QueryContext(ctxWithID, query)\n\n# Last query ID\n\nIf you need the query ID for your query, you have to use the raw connection.\n\nFor queries:\n```\n\n\terr := conn.Raw(func(x any) error {\n\t\tstmt, err := x.(driver.ConnPrepareContext).PrepareContext(ctx, \"SELECT 1\")\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\trows, err := stmt.(driver.StmtQueryContext).QueryContext(ctx, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\trows.(SnowflakeRows).GetQueryID()\n\t\tstmt.(SnowflakeStmt).GetQueryID()\n\t\treturn nil\n\t})\n\n```\n\nFor execs:\n```\n\n\terr := conn.Raw(func(x any) error {\n\t\tstmt, err := x.(driver.ConnPrepareContext).PrepareContext(ctx, \"INSERT INTO TestStatementQueryIdForExecs VALUES (1)\")\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tresult, err := stmt.(driver.StmtExecContext).ExecContext(ctx, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tresult.(SnowflakeResult).GetQueryID()\n\t\tstmt.(SnowflakeStmt).GetQueryID()\n\t\treturn nil\n\t})\n\n```\n\n# Fetch Results by Query ID\n\nThe result of your query can be retrieved by setting the query ID in the WithFetchResultByID context.\n```\n\n\t// Get the query ID using the raw connection as mentioned above:\n\terr := conn.Raw(func(x any) error {\n\t\trows1, err = x.(driver.QueryerContext).QueryContext(ctx, \"SELECT 1\", nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tqueryID = rows1.(sf.SnowflakeRows).GetQueryID()\n\t\treturn nil\n\t})\n\n\t// Update the Context object to specify the query ID\n\tfetchResultByIDCtx = sf.WithFetchResultByID(ctx, queryID)\n\n\t// Execute an empty string query\n\trows2, err := db.QueryContext(fetchResultByIDCtx, \"\")\n\n\t// Retrieve the results as usual\n\tfor rows2.Next() {\n\t\terr = rows2.Scan(...)\n\t\t...\n\t}\n\n```\n\n# Canceling Query by Ctrl+C\n\nSince 0.5.0, signal handling responsibility has moved to applications. If you want to cancel a\nquery/command with Ctrl+C, add an os.Interrupt trap to the context passed to methods that accept a context parameter\n(e.g. 
QueryContext, ExecContext).\n\n\t// handle interrupt signal\n\tctx, cancel := context.WithCancel(context.Background())\n\tc := make(chan os.Signal, 1)\n\tsignal.Notify(c, os.Interrupt)\n\tdefer func() {\n\t\tsignal.Stop(c)\n\t}()\n\tgo func() {\n\t\tselect {\n\t\tcase <-c:\n\t\t\tcancel()\n\t\tcase <-ctx.Done():\n\t\t}\n\t}()\n\t... (connection)\n\t// execute a query\n\trows, err := db.QueryContext(ctx, query)\n\t... (Ctrl+C to cancel the query)\n\nSee cmd/selectmany.go for the full example.\n\n# OpenTelemetry headers\n\nA context containing OpenTelemetry headers for distributed tracing can be\ncreated. Each query, both synchronous and asynchronous, run with this context\nwill include the Trace ID and Span ID as metadata. If you are instrumenting your\nprogram with OpenTelemetry and exporting telemetry data to Snowflake, then\nqueries run with this context will be properly nested under the appropriate\nparent span. This can be viewed in the Traces and Logs tab in Snowsight.\n\nFor example:\n\n\tctx, parent_span := tracer.Start(context.Background(), \"parent_span\")\n\tdefer parent_span.End()\n\trows, err := db.QueryContext(ctx, query)\n\n# Supported Data Types\n\nThe Go Snowflake Driver now supports the Arrow data format for data transfers\nbetween Snowflake and the Golang client. The Arrow data format avoids extra\nconversions between binary and textual representations of the data. The Arrow\ndata format can improve performance and reduce memory consumption in clients.\n\nSnowflake continues to support the JSON data format.\n\nThe data format is controlled by the session-level parameter\nGO_QUERY_RESULT_FORMAT. 
To use JSON format, execute:\n\n\tALTER SESSION SET GO_QUERY_RESULT_FORMAT = 'JSON';\n\nThe valid values for the parameter are:\n\n  - ARROW (default)\n  - JSON\n\nIf the user attempts to set the parameter to an invalid value, an error is\nreturned.\n\nThe parameter name and the parameter value are case-insensitive.\n\nThis parameter can be set only at the session level.\n\nUsage notes:\n\n  - The Arrow data format reduces rounding errors in floating point numbers. You might see slightly\n    different values for floating point numbers when using Arrow format than when using JSON format.\n    In order to take advantage of the increased precision, you must pass in the context.Context object\n    provided by the WithHigherPrecision function when querying.\n\n  - Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed\n    in. Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural,\n    expected data type as well.\n\n  - For some numeric data types, the driver can retrieve larger values when using the Arrow format than\n    when using the JSON format. For example, using Arrow format allows the full range of SQL NUMERIC(38,0)\n    values to be retrieved, while using JSON format allows only values in the range supported by the\n    Golang int64 data type.\n\n    Users should ensure that Golang variables are declared using the appropriate data type for the full\n    range of values contained in the column. For an example, see below.\n\nWhen using the Arrow format, the driver supports more Golang data types and\nmore ways to convert SQL values to those Golang data types. The table below\nlists the supported Snowflake SQL data types and the corresponding Golang\ndata types. The columns are:\n\n 1. The SQL data type.\n\n 2. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from\n    Arrow data format via an interface{}.\n\n 3. 
The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data\n    from Arrow data format directly.\n\n 4. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from\n    JSON data format via an interface{}. (All returned values are strings.)\n\n 5. The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from\n    JSON data format directly.\n\n    Go Data Types for Scan()\n    ===================================================================================================================\n    |                    ARROW                    |                    JSON\n    ===================================================================================================================\n    SQL Data Type          | Default Go Data Type   | Supported Go Data  | Default Go Data Type   | Supported Go Data\n    | for Scan() interface{} | Types for Scan()   | for Scan() interface{} | Types for Scan()\n    ===================================================================================================================\n    BOOLEAN              | bool                                        | string                 | bool\n    -------------------------------------------------------------------------------------------------------------------\n    VARCHAR              | string                                      | string\n    -------------------------------------------------------------------------------------------------------------------\n    DOUBLE               | float32, float64                  [1] , [2] | string                 | float32, float64\n    -------------------------------------------------------------------------------------------------------------------\n    INTEGER that         | int, int8, int16, int32, int64              | string                 | int, int8, int16,\n    fits in int64        |                                   [1] , [2] |   
                     | int32, int64\n    -------------------------------------------------------------------------------------------------------------------\n    INTEGER that doesn't | int, int8, int16, int32, int64,  *big.Int   | string                 | error\n    fit in int64         |                       [1] , [2] , [3] , [4] |\n    -------------------------------------------------------------------------------------------------------------------\n    NUMBER(P, S)         | float32, float64,  *big.Float               | string                 | float32, float64\n    where S > 0          |                       [1] , [2] , [3] , [5] |\n    -------------------------------------------------------------------------------------------------------------------\n    DATE                 | time.Time                                   | string                 | time.Time\n    -------------------------------------------------------------------------------------------------------------------\n    TIME                 | time.Time                                   | string                 | time.Time\n    -------------------------------------------------------------------------------------------------------------------\n    TIMESTAMP_LTZ        | time.Time                                   | string                 | time.Time\n    -------------------------------------------------------------------------------------------------------------------\n    TIMESTAMP_NTZ        | time.Time                                   | string                 | time.Time\n    -------------------------------------------------------------------------------------------------------------------\n    TIMESTAMP_TZ         | time.Time                                   | string                 | time.Time\n    -------------------------------------------------------------------------------------------------------------------\n    BINARY               | []byte                                      | string  
               | []byte\n    -------------------------------------------------------------------------------------------------------------------\n    ARRAY [6]            | string / array                              | string / array\n    -------------------------------------------------------------------------------------------------------------------\n    OBJECT [6]           | string / struct                             | string / struct\n    -------------------------------------------------------------------------------------------------------------------\n    VARIANT              | string                                      | string\n    -------------------------------------------------------------------------------------------------------------------\n    MAP                  | map                                         | map\n\n    [1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan()\n    method can lose low bits (lose precision), lose high bits (completely change the value), or result in error.\n\n    [2] Attempting to convert from a higher precision data type to a lower precision data type via interface{}\n    causes an error.\n\n    [3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context\n    returned by WithHigherPrecision().\n\n    [4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to\n    those data types by using .Int64()/.String()/.Uint64() methods. For an example, see below.\n\n    [5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to\n    those data types by using .Float32()/.String()/.Float64() methods. 
For an example, see below.\n\n    [6] Arrays and objects can be either semistructured or structured; see more info in the section below.\n\nNote: SQL NULL values are converted to Golang nil values, and vice-versa.\n\n# Semistructured and structured types\n\nSnowflake supports two flavours of \"structured data\" - semistructured and structured.\nSemistructured types are variants, objects and arrays without a schema.\nWhen data is fetched, it's represented as strings and the client is responsible for its interpretation.\nExample table definition:\n\n\tCREATE TABLE semistructured (v VARIANT, o OBJECT, a ARRAY)\n\nThe data does not have any corresponding schema, so values in the table may vary in structure.\n\nSemistructured variants, objects and arrays are always represented as strings for scanning:\n\n\trows, err := db.Query(\"SELECT {'a': 'b'}::OBJECT\")\n\t// handle error\n\tdefer rows.Close()\n\trows.Next()\n\tvar v string\n\terr = rows.Scan(&v)\n\nWhen inserting, a marker indicating the correct type must be used, for example:\n\n\tdb.Exec(\"CREATE TABLE test_object_binding (obj OBJECT)\")\n\tdb.Exec(\"INSERT INTO test_object_binding SELECT (?)\", DataTypeObject, \"{'s': 'some string'}\")\n\nStructured types differ from semistructured types by having a specific schema.\nIn all rows of the table, values must conform to this schema.\nExample table definition:\n\n\tCREATE TABLE structured (o OBJECT(s VARCHAR, i INTEGER), a ARRAY(INTEGER), m MAP(VARCHAR, BOOLEAN))\n\nTo retrieve structured objects, follow these steps:\n\n1. 
Create a struct implementing the sql.Scanner interface, for example:\n\na)\n\n\ttype simpleObject struct {\n\t\ts string\n\t\ti int32\n\t}\n\n\tfunc (so *simpleObject) Scan(val any) error {\n\t\tst := val.(StructuredObject)\n\t\tvar err error\n\t\tif so.s, err = st.GetString(\"s\"); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif so.i, err = st.GetInt32(\"i\"); err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t}\n\nb)\n\n\ttype simpleObject struct {\n\t\tS string `sf:\"otherName\"`\n\t\tI int32 `sf:\"i,ignore\"`\n\t}\n\n\tfunc (so *simpleObject) Scan(val any) error {\n\t\tst := val.(StructuredObject)\n\t\treturn st.ScanTo(so)\n\t}\n\nThe automatic scan goes through all fields in a struct and reads the corresponding object fields.\nStruct fields have to be exported.\nEmbedded structs have to be pointers.\nThe matching name is built from the struct field name with the first letter lowercased.\nAdditionally, an `sf` tag can be added:\n- the first value is always the name of the field in the SQL object\n- additionally, the `ignore` parameter can be passed to omit this field\n\n2. Use the WithStructuredTypesEnabled context while querying data.\n3. Use it in a regular scan:\n\n\tvar res simpleObject\n\terr := rows.Scan(&res)\n\nSee StructuredObject for all available operations, including null support, embedding nested structs, etc.\n\nRetrieving an array of simple types works exactly the same as for normal values - using the Scan function.\n\nYou can use the WithEmbeddedValuesNullable context to handle null values in maps\nand arrays of simple types in the database. 
In that case, sql null types will be used:\n\n\tctx := WithEmbeddedValuesNullable(WithStructuredTypesEnabled(context.Background()))\n\t...\n\tvar res []sql.NullBool\n\terr := rows.Scan(&res)\n\nIf you want to scan an array of structs, you have to use the helper function ScanArrayOfScanners:\n\n\tvar res []*simpleObject\n\terr := rows.Scan(ScanArrayOfScanners(&res))\n\nRetrieving structured maps is very similar to retrieving arrays:\n\n\tvar res map[string]*simpleObject\n\terr := rows.Scan(ScanMapOfScanners(&res))\n\nTo bind structured objects:\n\n1. Create a type which implements the StructuredObjectWriter interface, for example:\n\na)\n\n\ttype simpleObject struct {\n\t\ts string\n\t\ti int32\n\t}\n\n\tfunc (so *simpleObject) Write(sowc StructuredObjectWriterContext) error {\n\t\tif err := sowc.WriteString(\"s\", so.s); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif err := sowc.WriteInt32(\"i\", so.i); err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t}\n\nb)\n\n\ttype simpleObject struct {\n\t\tS string `sf:\"otherName\"`\n\t\tI int32 `sf:\"i,ignore\"`\n\t}\n\n\tfunc (so *simpleObject) Write(sowc StructuredObjectWriterContext) error {\n\t\treturn sowc.WriteAll(so)\n\t}\n\n2. Use an instance as a regular bind.\n3. If you need to bind a nil value, use this special syntax:\n\n\tdb.Exec(\"INSERT INTO some_table VALUES (?)\", sf.DataTypeNilObject, reflect.TypeOf(simpleObject{}))\n\nBinding structured arrays is like binding any other parameter.\nThe only difference is that if you want to insert an empty array (not nil but empty), you have to use:\n\n\tdb.Exec(\"INSERT INTO some_table VALUES (?)\", sf.DataTypeEmptyArray, reflect.TypeOf(simpleObject{}))\n\n# Using higher precision numbers\n\nThe following example shows how to retrieve very large values using the math/big\npackage. This example retrieves a large INTEGER value to an interface and then\nextracts a big.Int value from that interface. If the value fits into an int64,\nthen the code also copies the value to a variable of type int64. 
Note that a
context that enables higher precision must be passed in with the query.

	import "context"
	import "math/big"

	...

	var my_interface interface{}
	var my_big_int_pointer *big.Int
	var my_int64 int64
	var rows snowflakeRows

	...
	rows, _ = db.QueryContext(WithHigherPrecision(context.Background()), <query>)
	rows.Scan(&my_interface)
	my_big_int_pointer, ok = my_interface.(*big.Int)
	if ok && my_big_int_pointer.IsInt64() {
	    my_int64 = my_big_int_pointer.Int64()
	}

If the variable named "rows" is known to contain a big.Int, then you can use the following instead of scanning into an interface
and then converting to a big.Int:

	rows.Scan(&my_big_int_pointer)

If the variable named "rows" contains a big.Int, then each of the following fails:

	rows.Scan(&my_int64)

	my_int64, _ = my_interface.(int64)

Similar code and rules also apply to big.Float values.

If you are not sure what data type will be returned, you can use code similar to the following to check the data type
of the returned value:

	// Create variables into which you can scan the returned values.
	var i64 int64
	var bigIntPtr *big.Int

	for rows.Next() {
	    // Get the data type info.
	    column_types, err := rows.ColumnTypes()
	    if err != nil {
	        log.Fatalf("ERROR: ColumnTypes() failed. 
err: %v\", err)\n\t    }\n\t    // The data type of the zeroeth column in the row.\n\t    column_type := column_types[0].ScanType()\n\t    // Choose the appropriate variable based on the data type.\n\t    switch column_type {\n\t        case reflect.TypeOf(i64):\n\t            err = rows.Scan(&i64)\n\t            fmt.Println(\"INFO: retrieved int64 value:\")\n\t            fmt.Println(i64)\n\t        case reflect.TypeOf(bigIntPtr):\n\t            err = rows.Scan(&bigIntPtr)\n\t            fmt.Println(\"INFO: retrieved bigIntPtr value:\")\n\t            fmt.Println(bigIntPtr)\n\t    }\n\t}\n\n# Using decfloats\n\nBy default, DECFLOAT values are returned as string values.\nIf you want to retrieve them as numbers, you have to use the WithDecfloatMappingEnabled context.\nIf higher precision is enabled, the driver will return them as *big.Float values.\nOtherwise, they will be returned as float64 values.\nKeep in mind that both float64 and *big.Float are not able to precisely represent some DECFLOAT values.\nIf precision is important, you have to use string representation and use your own library to parse it.\n\n# Arrow batches\n\nYou can retrieve data in a columnar format similar to the format a server returns, without transposing them to rows.\nArrow Batches mode is available through the separate `arrowbatches` sub-package (`github.com/snowflakedb/gosnowflake/v2/arrowbatches`).\nThis sub-package provides access to Arrow columnar data using ArrowBatch structs, which correspond to data chunks\nreceived from the backend. They allow for access to specific arrow.Record structs.\n\nThe arrow-compute dependency (which significantly increases binary size) is only pulled in when you import the\narrowbatches sub-package. If you don't need Arrow batch support, simply don't import it.\n\nAn ArrowBatch can exist in a state where the underlying data has not yet been loaded. The data is downloaded and\ntranslated only on demand. 
Translation options are retrieved from a context.Context interface, which is either
passed from the query context or set by the user using the WithContext(ctx) method.

In order to access them you must use the `arrowbatches.WithArrowBatches` context, similar to the following:

		var rows driver.Rows
		err = conn.Raw(func(x interface{}) error {
			rows, err = x.(driver.QueryerContext).QueryContext(ctx, query, nil)
			return err
		})

		...

		batches, err := arrowbatches.GetArrowBatches(rows.(sf.SnowflakeRows))

		... // use Arrow records

This returns []*arrowbatches.ArrowBatch.

ArrowBatch functions:

GetRowCount():
Returns the number of rows in the ArrowBatch. Note that this returns 0 if the data has not yet been loaded,
irrespective of its actual size.

WithContext(ctx context.Context):
Sets the context of the ArrowBatch to the one provided. Note that the context will not retroactively apply to data
that has already been downloaded. For example:

	records1, _ := batch.Fetch()
	records2, _ := batch.WithContext(ctx).Fetch()

will produce the same result in records1 and records2, irrespective of the newly provided ctx. Contexts worth noting are:
- arrowbatches.WithTimestampOption
- WithHigherPrecision
- arrowbatches.WithUtf8Validation
These are described in more detail later.

Fetch():
Returns the underlying records as *[]arrow.Record. When this function is called, the ArrowBatch checks whether
the underlying data has already been loaded, and downloads it if not.

Limitations:

 1. For some queries Snowflake may decide to return data in JSON format (examples: `SHOW PARAMETERS` or `ls @stage`). You cannot use JSON with the Arrow batches context. See the alternative below.
 2. Snowflake handles timestamps in a range which is broader than the available space in the Arrow timestamp type. Because of that, special treatment is required (see below).
 3. When using numbers, Snowflake chooses the smallest type that covers all values in a batch. 
So even when your column is NUMBER(38, 0), if all values fit in 8 bits, array.Int8 is used.

How to handle timestamps in Arrow batches:

Snowflake returns timestamps natively (from backend to driver) in multiple formats.
The Arrow timestamp is an 8-byte data type, which is insufficient to handle the larger date and time ranges used by Snowflake.
Also, Snowflake supports 0-9 (nanosecond) digit precision for seconds, while Arrow supports only 3 (millisecond), 6 (microsecond), and 9 (nanosecond) precision.
Consequently, Snowflake uses a custom timestamp format in Arrow, which differs depending on timestamp type and precision.

If you want to use timestamps in Arrow batches, you have two options:

 1. The Go driver can reduce the timestamp struct to a simple Arrow Timestamp, if you set `arrowbatches.WithTimestampOption` to nanosecond, microsecond, millisecond or second.
    For nanosecond, some timestamp values might not fit into an Arrow timestamp, e.g. after the year 2262 or before 1677.
 2. You can use native Snowflake values. In that case, you will receive complex structs as described above. To transform Snowflake values into the Golang time.Time struct you can use `ArrowSnowflakeTimestampToTime`.
    To enable this feature, you must use the `arrowbatches.WithTimestampOption` context with the value set to `UseOriginalTimestamp`.

How to handle invalid UTF-8 characters in Arrow batches:

Snowflake previously allowed users to upload data with invalid UTF-8 characters. 
Consequently, Arrow records containing string columns in Snowflake could include these invalid UTF-8 characters.
However, according to the Arrow specifications (https://arrow.apache.org/docs/cpp/api/datatype.html
and https://github.com/apache/arrow/blob/a03d957b5b8d0425f9d5b6c98b6ee1efa56a1248/go/arrow/datatype.go#L73-L74),
Arrow string columns should only contain UTF-8 characters.

To address this issue and prevent potential downstream disruptions, the context arrowbatches.WithUtf8Validation is introduced.
When enabled, this feature iterates through all values in string columns, identifying and replacing any invalid characters with `�`.
This ensures that Arrow records conform to the UTF-8 standard, preventing validation failures in downstream services like the Rust Arrow library that impose strict validation checks.

How to handle higher precision in Arrow batches:

To preserve BigDecimal values within Arrow batches, use WithHigherPrecision.
This offers two main benefits: it helps avoid precision loss and defers the conversion to upstream services.
Alternatively, without this setting, all non-zero scale numbers will be converted to float64, potentially resulting in loss of precision.
Zero-scale numbers (DECIMAL256, DECIMAL128) will be converted to int64, which could lead to overflow.
When using NUMBERs with non-zero scale, the value is returned as an integer type and a scale is provided in the record metadata.
For example, a 123.45 value that comes from NUMBER(9, 4) will be represented as 1234500 with a scale equal to 4. 
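
To see what that interpretation looks like, here is a small standalone sketch using only math/big (decodeScaledInt is an illustrative helper for this document, not a driver API):

```go
package main

import (
	"fmt"
	"math/big"
)

// decodeScaledInt interprets a raw integer together with a decimal scale,
// i.e. value = raw / 10^scale, exactly and without floating-point loss.
func decodeScaledInt(raw int64, scale int) *big.Rat {
	denom := new(big.Int).Exp(big.NewInt(10), big.NewInt(int64(scale)), nil)
	return new(big.Rat).SetFrac(big.NewInt(raw), denom)
}

func main() {
	// The NUMBER(9, 4) example from this section: raw 1234500, scale 4.
	v := decodeScaledInt(1234500, 4)
	fmt.Println(v.FloatString(2)) // prints 123.45
}
```

Using big.Rat keeps the decoded value exact; rendering with FloatString only rounds at print time.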
It is the client's responsibility to interpret it correctly.
Also, see the limitations section above.

How to handle JSON responses in Arrow batches:

Due to technical limitations, the Snowflake backend may return JSON even if the client expects Arrow.
In that case, Arrow batches are not available and an error with the code ErrNonArrowResponseInArrowBatches is returned.
The response is parsed into regular rows.
You can read the rows in the way described in the transform_batches_to_rows.go example.
This has a strong limitation though - this is a very low-level API (the Go driver API), so no conversions are provided.
All values are returned as strings.
An alternative approach is to rerun the query without enabling Arrow batches and use the general Go SQL API instead of the driver API.
This can be optimized by using `WithRequestID`, so that the backend returns results from its cache.

# Binding Parameters

Binding allows a SQL statement to use a value that is stored in a Golang variable.

Without binding, a SQL statement specifies values by specifying literals inside the statement.
For example, the following statement uses the literal value “42“ in an UPDATE statement:

	_, err = db.Exec("UPDATE table1 SET integer_column = 42 WHERE ID = 1000")

With binding, you can execute a SQL statement that uses a value that is inside a variable. For example:

	var my_integer_variable int = 42
	_, err = db.Exec("UPDATE table1 SET integer_column = ? WHERE ID = 1000", my_integer_variable)

The “?“ inside the “SET“ clause specifies that the SQL statement uses the value from a variable.

Binding data that involves time zones can require special handling. 
For details, see the section
titled "Timestamps with Time Zones".

Version 1.6.23 (and later) of the driver takes advantage of sql.Null types, which enables the proper handling of null parameters inside function calls, for example:

	rows, err := db.Query("SELECT * FROM TABLE(SOMEFUNCTION(?))", sql.NullBool{})

Timestamp nullability is achieved by wrapping the sql.NullTime type, because Snowflake provides several date and time types
which are all mapped to the single Go time.Time type:

	rows, err := db.Query("SELECT * FROM TABLE(SOMEFUNCTION(?))", sf.TypedNullTime{sql.NullTime{}, sf.TimestampLTZType})

# Binding Parameters to Array Variables

Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL
INSERT statement. You can use this technique to insert multiple rows in a single batch.

As an example, the following code inserts rows into a table that contains integer, float, boolean, and string columns. 
The example
binds arrays to the parameters in the INSERT statement.

		// Create a table containing an integer, float, boolean, and string column.
		_, err = db.Exec("create or replace table my_table(c1 int, c2 float, c3 boolean, c4 string)")
		...
		// Define the arrays containing the data to insert.
		intArray := []int{1, 2, 3}
		fltArray := []float64{0.1, 2.34, 5.678}
		boolArray := []bool{true, false, true}
		strArray := []string{"test1", "test2", "test3"}
		...
		// Wrap the arrays in the Array() function and insert the data into the table.
		intArr, err := Array(&intArray)
		fltArr, err := Array(&fltArray)
		boolArr, err := Array(&boolArray)
		strArr, err := Array(&strArray)
		_, err = db.Exec("insert into my_table values (?, ?, ?, ?)", intArr, fltArr, boolArr, strArr)

If the array contains SQL NULL values, use a []interface{} slice, which allows Golang nil values.
This feature is available in version 1.6.12 (and later) of the driver. For example,

			// Define the arrays containing the data to insert.
			strArray := make([]interface{}, 3)
			strArray[0] = "test1"
			strArray[1] = "test2"
			strArray[2] = nil // This line is optional as nil is the default value.
			...
			// Create a table and insert the data from the array as shown above.
			strArr, err := Array(&strArray)
			_, err = db.Exec("create or replace table my_table(c1 string)")
			_, err = db.Exec("insert into my_table values (?)", strArr)
			...
			// Use sql.NullString to fetch the string column that contains NULL values.
			var s sql.NullString
			rows, _ := db.Query("select * from my_table")
			for rows.Next() {
				err := rows.Scan(&s)
				if err != nil {
					log.Fatalf("Failed to scan. 
err: %v\", err)\n\t\t\t\t}\n\t\t\t\tif s.Valid {\n\t\t\t\t\tfmt.Println(\"Retrieved value:\", s.String)\n\t\t\t\t} else {\n\t\t\t\t\tfmt.Println(\"Retrieved value: NULL\")\n\t\t\t\t}\n\t\t\t}\n\nFor slices []interface{} containing time.Time values, a binding parameter flag is required for the preceding array variable in the Array() function.\nThis feature is available in version 1.6.13 (and later) of the driver. For example,\n\n\t    ntzArr, err := Array(&ntzArray, sf.TimestampNTZType)\n\t\tltzArr, err := Array(&ltzArray, sf.TimestampLTZType)\n\t\t_, err = db.Exec(\"create or replace table my_table(c1 timestamp_ntz, c2 timestamp_ltz)\")\n\t\t_, err = db.Exec(\"insert into my_table values (?,?)\", ntzArr, ltzArr)\n\nNote: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see\nLoading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html).\n\n# Batch Inserts and Binding Parameters\n\nWhen you use array binding to insert a large number of values, the driver can\nimprove performance by streaming the data (without creating files on the local\nmachine) to a temporary stage for ingestion. The driver automatically does this\nwhen the number of values exceeds a threshold (no changes are needed to user code).\n\nIn order for the driver to send the data to a temporary stage, the user must have the following privilege on the schema:\n\n\tCREATE STAGE\n\nIf the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database.\n\nIn addition, the current database and schema for the session must be set. If these are not set,\nthe CREATE TEMPORARY STAGE command executed by the driver can fail with the following error:\n\n\tCREATE TEMPORARY STAGE SYSTEM$BIND file_format=(type=csv field_optionally_enclosed_by='\"')\n\tCannot perform CREATE STAGE. This session does not have a current schema. 
Call 'USE SCHEMA', or use a qualified name.

For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command),
see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html).

# Binding a Parameter to a Time Type

Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable.
However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data
type to associate with the binding parameter. For example:

	dbt.mustExec("CREATE OR REPLACE TABLE tztest (id int, ntz timestamp_ntz, ltz timestamp_ltz)")
	// ...
	stmt, err := dbt.db.Prepare("INSERT INTO tztest(id,ntz,ltz) VALUES(1, ?, ?)")
	// ...
	tmValue := time.Now()
	// ... Is tmValue a TIMESTAMP_NTZ or TIMESTAMP_LTZ?
	_, err = stmt.Exec(tmValue, tmValue)

To resolve this issue, a binding parameter flag is introduced that associates
any subsequent time.Time type with the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ
or BINARY data type. The above example could be rewritten as follows:

	import (
		sf "github.com/snowflakedb/gosnowflake/v2"
	)
	dbt.mustExec("CREATE OR REPLACE TABLE tztest (id int, ntz timestamp_ntz, ltz timestamp_ltz)")
	// ...
	stmt, err := dbt.db.Prepare("INSERT INTO tztest(id,ntz,ltz) VALUES(1, ?, ?)")
	// ...
	tmValue := time.Now()
	// ...
	_, err = stmt.Exec(sf.DataTypeTimestampNtz, tmValue, sf.DataTypeTimestampLtz, tmValue)

# Timestamps with Time Zones

The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the
offset-based Location types, which represent a collection of time offsets in
use in a geographical area, such as CET (Central European Time) or UTC
(Coordinated Universal Time). 
The offset-based Location data is generated and\ncached when a Go Snowflake Driver application starts, and if the given offset\nis not in the cache, it is generated dynamically.\n\nCurrently, Snowflake does not support the name-based Location types (e.g. \"America/Los_Angeles\").\n\nFor more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location.\n\n# Binary Data\n\nInternally, this feature leverages the []byte data type. As a result, BINARY\ndata cannot be bound without the binding parameter flag. In the following\nexample, sf is an alias for the gosnowflake package:\n\n\tvar b = []byte{0x01, 0x02, 0x03}\n\t_, err = stmt.Exec(sf.DataTypeBinary, b)\n\n# JWT authentication\n\nThe Go Snowflake Driver supports JWT (JSON Web Token) authentication.\n\nTo enable this feature, construct the DSN with fields \"authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>\",\nor using a Config structure specifying:\n\n\tconfig := &Config{\n\t\t...\n\t\tAuthenticator: AuthTypeJwt,\n\t\tPrivateKey:   \"<your_private_key_struct in *rsa.PrivateKey type>\",\n\t}\n\nThe <your_private_key> should be a base64 URL encoded PKCS8 rsa private key string. One way to encode a byte slice to URL\nbase 64 URL format is through the base64.URLEncoding.EncodeToString() function.\n\nOn the server side, you can alter the public key with the SQL command:\n\n\tALTER USER <your_user_name> SET RSA_PUBLIC_KEY='<your_public_key>';\n\nThe <your_public_key> should be a base64 Standard encoded PKI public key string. 
One way to encode a byte slice to base
64 Standard format is through the base64.StdEncoding.EncodeToString() function.

To generate a valid key pair, you can execute the following commands in the shell:

		# generate 2048-bit pkcs8 encoded RSA private key
		openssl genpkey -algorithm RSA \
			-pkeyopt rsa_keygen_bits:2048 \
			-pkeyopt rsa_keygen_pubexp:65537 | \
			openssl pkcs8 -topk8 -outform der > rsa-2048-private-key.p8

		# extract 2048-bit PKI encoded RSA public key from the private key
		openssl pkey -pubout -inform der -outform der \
			-in rsa-2048-private-key.p8 \
			-out rsa-2048-public-key.spki

Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys.
For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on disk and
decrypt the key in your application using a library you trust.

JWT tokens are recreated on each retry and they are valid (`exp` claim) for `jwtTimeout` seconds.
Each retry timeout is configured by `jwtClientTimeout`.
Retries are limited by the total time of `loginTimeout`.

# External browser authentication

The driver supports authentication via an external browser.

When a connection is created, the driver will open a browser window and ask the user to sign in.

To enable this feature, construct the DSN with the field "authenticator=EXTERNALBROWSER" or using a Config structure with the
following Authenticator specified:

	config := &Config{
		...
		Authenticator: AuthTypeExternalBrowser,
	}

The external browser authentication implements a timeout mechanism. 
This prevents the driver from hanging indefinitely when the
browser window is closed or not responding.

The timeout defaults to 120 seconds and can be changed by setting the DSN field "externalBrowserTimeout=240" (time in seconds)
or using a Config structure with the following ExternalBrowserTimeout specified:

	config := &Config{
		ExternalBrowserTimeout: 240 * time.Second, // Requires time.Duration
	}

# Executing Multiple Statements in One Call

This feature is available in version 1.3.8 or later of the driver.

By default, Snowflake returns an error for queries issued with multiple statements.
This restriction helps protect against SQL Injection attacks (https://en.wikipedia.org/wiki/SQL_injection).

The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a
single Golang function call. However, this opens up the possibility for SQL injection, so it should be used carefully.
The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to
inject a statement by appending it. More details are below.

The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call:

  - db.QueryContext(): This function is used to execute queries, such as SELECT statements, that return a result set.
  - db.ExecContext(): This function is used to execute statements that don't return a result set (i.e. most DML and DDL statements).

To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons,
in the order in which the statements should be executed.

To protect against SQL Injection attacks while using the multi-statement feature, pass a Context that specifies
the number of statements in the string. 
For example:\n\n\timport (\n\t\t\"context\"\n\t\t\"database/sql\"\n\t)\n\n\tvar multiStatementQuery = \"SELECT c1 FROM t1; SELECT c2 FROM t2\"\n\tvar number_of_statements = 2\n\tctx := WithMultiStatement(context.Background(), number_of_statements)\n\trows, err := db.QueryContext(ctx, multiStatementQuery)\n\nWhen multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After\nyou process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet().\n\nThe following pseudo-code shows how to process multiple result sets:\n\n\tExecute the statement and get the result set(s):\n\n\t\trows, err := db.QueryContext(ctx, multiStmtQuery)\n\n\tRetrieve the rows in the first query's result set:\n\n\t\twhile rows.Next() {\n\t\t\terr = rows.Scan(&variable_1)\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"failed to scan: %#v\", err)\n\t\t\t}\n\t\t\t...\n\t\t}\n\n\tRetrieve the remaining result sets and the rows in them:\n\n\t\twhile rows.NextResultSet()  {\n\n\t\t\twhile rows.Next() {\n\t\t\t\t...\n\t\t\t}\n\n\t\t}\n\nThe function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each\nindividual statement. For example, if your multi-statement query executed two UPDATE statements, each of which\nupdated 10 rows, then the result returned would be 20. Individual row counts for individual statements are not\navailable.\n\nThe following code shows how to retrieve the result of a multi-statement query executed through db.ExecContext():\n\n\tExecute the SQL statements:\n\n\t    res, err := db.ExecContext(ctx, multiStmtQuery)\n\n\tGet the summed result and store it in the variable named count:\n\n\t    count, err := res.RowsAffected()\n\nNote: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors.\nFor example, suppose you expected the return value to be 20 because you expected each UPDATE statement to\nupdate 10 rows. 
If one UPDATE statement updated 15 rows and the other UPDATE statement updated only 5
rows, the total would still be 20. You would see no indication that the UPDATEs had not functioned as
expected.

The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it
still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query
statements) is impractical.

The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function
returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the
result set contains a single row that contains a single column; the value is the number of rows changed by the
statement.

If you want to execute a mix of query and non-query statements (e.g. a mix of SELECT and DML statements) in a
multi-statement query, use QueryContext(). You can retrieve the result sets for the queries,
and you can retrieve or ignore the row counts for the non-query statements.

Note: PUT statements are not supported for multi-statement queries.

If a SQL statement passed to ExecContext() or QueryContext() fails to compile or execute, that statement is
aborted, and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected.

For example, if the statements below are run as one multi-statement query, the multi-statement query fails on the
third statement, and an exception is thrown.

	CREATE OR REPLACE TABLE test(n int);
	INSERT INTO TEST VALUES (1), (2);
	INSERT INTO TEST VALUES ('not_an_integer');  -- execution fails here
	INSERT INTO TEST VALUES (3);

If you then query the contents of the table named "test", the values 1 and 2 would be present.

When using the QueryContext() and ExecContext() functions, Golang code can check for errors the usual way. 
For
example:

	rows, err := db.QueryContext(ctx, multiStmtQuery)
	if err != nil {
		log.Fatalf("failed to query multiple statements: %v", err)
	}

Preparing statements and using bind variables are also not supported for multi-statement queries.

# Asynchronous Queries

The Go Snowflake Driver supports asynchronous execution of SQL statements.
Asynchronous execution allows you to start executing a statement and then
retrieve the result later without being blocked while waiting. While waiting
for the result of a SQL statement, you can perform other tasks, including
executing other SQL statements.

Most of the steps to execute an asynchronous query are the same as the
steps to execute a synchronous query. However, there is an additional step,
which is that you must call the WithAsyncMode() function to update
your Context object to specify that asynchronous mode is enabled.

In the code below, the call to "WithAsyncMode()" is specific
to asynchronous mode. The rest of the code is compatible with both
asynchronous mode and synchronous mode.

	...

	// Update your Context object to specify asynchronous mode:
	ctx := WithAsyncMode(context.Background())

	// Execute your query as usual by calling:
	rows, _ := db.QueryContext(ctx, query_string)

	// Retrieve the results as usual by calling:
	for rows.Next() {
		err := rows.Scan(...)
		...
	}

The function db.QueryContext() returns an object of type snowflakeRows
regardless of whether the query is synchronous or asynchronous. However:

  - If the query is synchronous, then db.QueryContext() does not return until
    the query has finished and the result set has been loaded into the
    snowflakeRows object.
  - If the query is asynchronous, then db.QueryContext() returns a
    potentially incomplete snowflakeRows object that is filled in later
    in the background.

The call to the Next() function of snowflakeRows is always synchronous (i.e. 
blocking).
If the query has not yet completed and the snowflakeRows object (named "rows" in this
example) has not been filled in yet, then rows.Next() waits until the result set has been filled in.

More generally, calls to any Golang SQL API function implemented in snowflakeRows or
snowflakeResult are blocking calls, and wait if results are not yet available.
(Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(),
snowflakeRows.ColumnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().)

Because the example code above executes only one query and no other activity, there is
no significant difference between asynchronous and synchronous behavior.
The differences become significant if, for example, you want to perform some other
activity after the query starts and before it completes. The example code below starts
a query, which runs in the background, and then retrieves the results later.

This example uses small SELECT statements that do not retrieve enough data to require
asynchronous handling. However, the technique works for larger data sets, and for
situations where the programmer might want to do other work after starting the queries
and before retrieving the results. 
For a more elaborate example, please see cmd/async/async.go

		package gosnowflake

		import (
			"context"
			"database/sql"
			"database/sql/driver"
			"fmt"
			"log"
			"os"
			sf "github.com/snowflakedb/gosnowflake/v2"
		)

		...

		func DemonstrateAsyncMode(db *sql.DB) {
			// Enable asynchronous mode
			ctx := sf.WithAsyncMode(context.Background())

			// Run the query with asynchronous context
			rows, err := db.QueryContext(ctx, "select 1")
			if err != nil {
				// handle error
			}

			// do something else as the workflow continues while the query is computing in the background
			...

			// Get the data when you are ready to handle it.
			// rows.Next() blocks until the result is available.
			var val int
			for rows.Next() {
				err = rows.Scan(&val)
				if err != nil {
					// handle error
				}
			}

			...
		}

==> Some considerations related to the ServerSessionKeepAlive configuration option in the context of asynchronous query execution

When a Go SQL connection is closed, it performs the following actions:

* stops the scheduled heartbeats (CLIENT_SESSION_KEEP_ALIVE), if it was enabled

* cleans up all the http connections which are already idle - it doesn't touch the ones which are currently in active use

* if Config.ServerSessionKeepAlive is false (default), then it actively logs out the current Snowflake session.

!! Caveat: If there are any queries which are currently executing in the same Snowflake session (e.g. 
async queries sent with WithAsyncMode()), then those queries are automatically cancelled from the client side a couple of minutes after the Close() call, because a Snowflake session which has been actively logged out cannot sustain any queries.

You can govern this behaviour by setting Config.ServerSessionKeepAlive to true, in which case the corresponding Snowflake session will be kept alive for a long time (determined by the Snowflake engine) even after an explicit Connection.Close() call, past the time when the last running query in the session finishes executing.

The behaviour is also dependent on the ABORT_DETACHED_QUERY parameter; please see the detailed explanation in the parameter description at https://docs.snowflake.com/en/sql-reference/parameters#abort-detached-query.

As a consequence, the best practice is to isolate all long-running async tasks (especially ones supposed to continue after the connection is closed) into a separate connection.

# Support For PUT and GET

The Go Snowflake Driver supports the PUT and GET commands.

The PUT command copies a file from a local computer (the computer where the
Golang client is running) to a stage on the cloud platform. The GET command
copies data files from a stage on the cloud platform to a local computer.

See the following for information on the syntax and supported parameters:

  - PUT: https://docs.snowflake.com/en/sql-reference/sql/put.html
  - GET: https://docs.snowflake.com/en/sql-reference/sql/get.html

Using PUT:

The following example shows how to run a PUT command by passing a string to the
db.Query() function:

	db.Query("PUT file://<local_file> <stage_identifier> <optional_parameters>")

"<local_file>" should include the file path as well as the name. Snowflake recommends
using an absolute path rather than a relative path. For example:

	db.Query("PUT file:///tmp/my_data_file @~ auto_compress=false overwrite=false")

Different client platforms (e.g. 
Linux, Windows) have different path name\nconventions. Ensure that you specify path names appropriately. This is\nparticularly important on Windows, which uses the backslash character as\nboth an escape character and as a separator in path names.\n\nTo send information from a stream (rather than a file), use code similar to the code below.\n(The ReplaceAll() function is needed on Windows to handle backslashes in the path to the file.)\n\n\tfileStream, err := os.Open(fname)\n\tif err != nil {\n\t\t// handle error\n\t}\n\tdefer func() {\n\t\tif fileStream != nil {\n\t\t\tfileStream.Close()\n\t\t}\n\t}()\n\n\tsql := \"put 'file://%v' @%%%v auto_compress=true parallel=30\"\n\tsqlText := fmt.Sprintf(sql,\n\t\tstrings.ReplaceAll(fname, \"\\\\\", \"\\\\\\\\\"),\n\t\ttableName)\n\tdbt.mustExecContext(WithFilePutStream(context.Background(), fileStream),\n\t\tsqlText)\n\nNote: PUT statements are not supported for multi-statement queries.\n\nUsing GET:\n\nThe following example shows how to run a GET command by passing a string to the\ndb.Query() function:\n\n\tdb.Query(\"GET <internal_stage_identifier> file://<local_file> <optional_parameters>\")\n\n\"<local_file>\" should include the file path as well as the name. Snowflake recommends using\nan absolute path rather than a relative path. For example:\n\n\tdb.Query(\"GET @~ file:///tmp/my_data_file auto_compress=false overwrite=false\")\n\nTo download a file into an in-memory stream (rather than a file), use code similar to the code below.\n\n\tvar streamBuf bytes.Buffer\n\tctx := WithFileGetStream(context.Background(), &streamBuf)\n\n\tsql := \"get @~/data1.txt.gz file:///tmp/testData\"\n\tdbt.mustExecContext(ctx, sql)\n\t// streamBuf is now filled with the stream. 
Use bytes.NewReader(streamBuf.Bytes()) to read the uncompressed stream or\n\t// gzip.NewReader(&streamBuf) to read the compressed stream.\n\nNote: GET statements are not supported for multi-statement queries.\n\nSpecifying temporary directory for encryption and compression:\n\nPutting and getting requires compression and/or encryption, which is done in the OS temporary directory.\nIf you cannot use the default temporary directory for your OS, or you want to specify it yourself, you can use the \"tmpDirPath\" DSN parameter.\nRemember to URL-encode slashes.\nExample:\n\n\tu:p@a.r.c.snowflakecomputing.com/db/s?account=a.r.c&tmpDirPath=%2Fother%2Ftmp\n\nUsing custom configuration for PUT/GET:\n\nIf you want to override some default configuration options, you can use the `WithFileTransferOptions` context.\nThere are multiple config parameters, including ones controlling progress bars and compression.\n\n# Minicore (Native Library)\n\nThe Go Snowflake Driver includes an embedded native library called \"minicore\" that verifies loading of native Rust extensions on various platforms. By default, minicore is enabled and loaded dynamically at runtime.\n\n## Disabling Minicore\n\nThere are two ways to disable minicore:\n\n1. **At runtime using environment variable:**\n\n\tSet the SF_DISABLE_MINICORE environment variable to \"true\" to disable minicore loading:\n\n\t  export SF_DISABLE_MINICORE=true\n\n\tThis is useful when you want to disable minicore for a specific run without recompiling.\n\n2. 
**At compile time using build tags:**\n\n\tBuild with the -tags minicore_disabled flag to completely exclude minicore from the binary:\n\n\t  go build -tags minicore_disabled ./...\n\n\tThis is required for static linking (e.g., CGO_ENABLED=0) because minicore relies on\n\tdynamic library loading (dlopen) which is incompatible with static binaries.\n\n\tBenefits of compile-time disable:\n\t  - Smaller binary size (no embedded native libraries)\n\t  - No CGO dependency for POSIX systems\n\t  - Compatible with static linking\n\n\tExample for fully static build:\n\n\t  CGO_ENABLED=0 go build -tags minicore_disabled ./...\n\n## Static Linking\n\nOn Linux, if the binary is fully statically linked (e.g., built with\n-linkmode external -extldflags '-static'), the driver automatically detects this\nand skips minicore loading. Calling dlopen from a statically linked glibc binary\nwould crash with SIGFPE, so the driver inspects the ELF header for a dynamic\nlinker (PT_INTERP) and gracefully skips minicore if none is found.\n\nWhen minicore is disabled (either at runtime, at compile time, or automatically\ndue to static linking), the driver continues to work normally but without the\nadditional functionality provided by the native library.\n\n# FIPS forcing\n\nIf you force FIPS mode using the fips140 GODEBUG option, the driver will switch OCSP requests from SHA-1 to SHA-256.\nBe aware that the Snowflake OCSP cache server doesn't support OCSP requests signed with SHA-256, so the driver may work more slowly; in case of OCSP cache server unavailability, OCSP requests will fail, and if OCSP is enabled, connection attempts will fail as well.\n\n# Connectivity diagnostics\n\n==> Relevant configuration\n\n- `ConnectionDiagnosticsEnabled` (default: false) - the main flag to enable the diagnostics\n\n- `ConnectionDiagnosticsAllowlistFile` - specify `/path/to/allowlist.json` to use a specific allowlist file which the driver should parse. 
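The allowlist file is typically produced by running SELECT SYSTEM$ALLOWLIST() (or SYSTEM$ALLOWLIST_PRIVATELINK() for private link deployments) in Snowflake and saving its output. As an illustration only - the host and port below are placeholder values, not authoritative ones - an entry in such a file looks similar to:\n\n\t[\n\t\t{\n\t\t\t\"type\": \"SNOWFLAKE_DEPLOYMENT\",\n\t\t\t\"host\": \"myaccount.snowflakecomputing.com\",\n\t\t\t\"port\": 443\n\t\t}\n\t]\n\n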
If not specified, the driver tries to open `allowlist.json` from the current directory.\n`ConnectionDiagnosticsAllowlistFile` is only taken into consideration when `ConnectionDiagnosticsEnabled=true`.\n\n==> Flow of operation when `ConnectionDiagnosticsEnabled=true`\n\n1. upon initial startup, the driver opens and reads the `allowlist.json` to determine which hosts it needs to connect to; then, for each entry in the allowlist, it:\n\n2. performs a DNS resolution test to see if the hostname is resolvable\n\n3. logs an Error when encountering a _public_ IP address for a host which looks to be a _private_ link hostname\n\n4. checks whether a proxy is used in the connection\n\n5. sets up a connection, using the same transport that is driven by the driver's config (a custom transport; an OCSP-less transport when OCSP is disabled; or, by default, the OCSP-enabled transport)\n\n6. for HTTP endpoints, issues an HTTP GET request to see if it connects\n\n7. for HTTPS endpoints, does the same, plus:\n  - verifies that HTTPS connectivity is set up correctly\n  - parses the certificate chain and logs information on each certificate (on DEBUG loglevel, dumps the whole cert)\n  - if (implicitly) configured via `CertRevocationCheckMode` being `advisory` or `enabled`, also tries to connect to the CRL endpoints\n\n8. the driver exits after performing diagnostics. If you want to use the driver 'normally' after performing connection diagnostics, set `ConnectionDiagnosticsEnabled=false` or remove it from the config\n*/\npackage gosnowflake\n"
  },
  {
    "path": "driver.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n\t\"os\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n)\n\n// SnowflakeDriver is a context of Go Driver\ntype SnowflakeDriver struct{}\n\n// Open creates a new connection.\nfunc (d SnowflakeDriver) Open(dsn string) (driver.Conn, error) {\n\tvar cfg *Config\n\tvar err error\n\tlogger.Info(\"Open\")\n\tctx := context.Background()\n\tif dsn == \"autoConfig\" {\n\t\tcfg, err = sfconfig.LoadConnectionConfig()\n\t} else {\n\t\tcfg, err = ParseDSN(dsn)\n\t}\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn d.OpenWithConfig(ctx, *cfg)\n}\n\n// OpenConnector creates a new connector with parsed DSN.\nfunc (d SnowflakeDriver) OpenConnector(dsn string) (driver.Connector, error) {\n\tvar cfg *Config\n\tvar err error\n\tif dsn == \"autoConfig\" {\n\t\tcfg, err = sfconfig.LoadConnectionConfig()\n\t} else {\n\t\tcfg, err = ParseDSN(dsn)\n\t}\n\tif err != nil {\n\t\treturn Connector{}, err\n\t}\n\treturn NewConnector(d, *cfg), nil\n}\n\n// OpenWithConfig creates a new connection with the given Config.\nfunc (d SnowflakeDriver) OpenWithConfig(ctx context.Context, config Config) (driver.Conn, error) {\n\ttimer := time.Now()\n\tif err := config.Validate(); err != nil {\n\t\treturn nil, err\n\t}\n\tif config.Params == nil {\n\t\tconfig.Params = make(map[string]*string)\n\t}\n\tif config.Tracing != \"\" {\n\t\tif err := logger.SetLogLevel(config.Tracing); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\tlogger.WithContext(ctx).Info(\"OpenWithConfig\")\n\n\tif config.ConnectionDiagnosticsEnabled {\n\t\tconnDiagDownloadCrl := (config.CertRevocationCheckMode.String() == \"ADVISORY\") || (config.CertRevocationCheckMode.String() == \"ENABLED\")\n\t\tlogger.WithContext(ctx).Infof(\"Connection diagnostics enabled. 
Allowlist file specified in config: %s, will download CRLs in certificates: %s\",\n\t\t\tconfig.ConnectionDiagnosticsAllowlistFile, strconv.FormatBool(connDiagDownloadCrl))\n\t\tperformDiagnosis(&config, connDiagDownloadCrl)\n\t\tlogger.WithContext(ctx).Info(\"Connection diagnostics finished.\")\n\t\tlogger.WithContext(ctx).Warn(\"A connection to Snowflake was not created because the driver is running in diagnostics mode. If this is unintended then disable diagnostics check by removing the ConnectionDiagnosticsEnabled connection parameter\")\n\t\tos.Exit(0)\n\t}\n\tsc, err := buildSnowflakeConn(ctx, config)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif strings.HasSuffix(strings.ToLower(config.Host), sfconfig.CnDomain) {\n\t\tlogger.WithContext(ctx).Info(\"Connecting to CHINA Snowflake domain\")\n\t} else {\n\t\tlogger.WithContext(ctx).Info(\"Connecting to GLOBAL Snowflake domain\")\n\t}\n\n\tif err = authenticateWithConfig(sc); err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"Failed to authenticate. Connection failed after %v\", time.Since(timer).String())\n\t\treturn nil, err\n\t}\n\tsc.connectionTelemetry(&config)\n\n\tsc.startHeartBeat()\n\tsc.internal = &httpClient{sr: sc.rest}\n\t// Check context before returning since connectionTelemetry doesn't handle cancellation\n\tif ctx.Err() != nil {\n\t\treturn nil, ctx.Err()\n\t}\n\tlogger.WithContext(ctx).Infof(\"Connected successfully after %v\", time.Since(timer).String())\n\treturn sc, nil\n}\n\nfunc runningOnGithubAction() bool {\n\treturn os.Getenv(\"GITHUB_ACTIONS\") != \"\"\n}\n\n// GOSNOWFLAKE_SKIP_REGISTRATION is an environment variable which can be set client side to\n// bypass dbSql driver registration. This should not be used if sql.Open() is used as the method\n// to connect to the server, as sql.Open will require registration so it can map the driver name\n// to the driver type, which in this case is \"snowflake\" and SnowflakeDriver{}. 
If you wish to call\n// into multiple versions of the driver from one client, this is needed because calling register\n// twice with the same name on init will cause the driver to panic.\nfunc skipRegistration() bool {\n\treturn os.Getenv(\"GOSNOWFLAKE_SKIP_REGISTRATION\") != \"\"\n}\n\nfunc init() {\n\tif !skipRegistration() {\n\t\tsql.Register(\"snowflake\", &SnowflakeDriver{})\n\t}\n\n\t// Set initial log level\n\t_ = GetLogger().SetLogLevel(\"error\")\n\tif runningOnGithubAction() {\n\t\t_ = GetLogger().SetLogLevel(\"fatal\")\n\t}\n}\n"
  },
  {
    "path": "driver_ocsp_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"database/sql\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc setenv(k, v string) {\n\terr := os.Setenv(k, v)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n}\n\nfunc unsetenv(k string) {\n\terr := os.Unsetenv(k)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n}\n\n// deleteOCSPCacheFile deletes the OCSP response cache file\nfunc deleteOCSPCacheFile() {\n\tos.Remove(cacheFileName)\n}\n\n// deleteOCSPCacheAll deletes all entries in the OCSP response cache on memory\nfunc deleteOCSPCacheAll() {\n\tsyncUpdateOcspResponseCache(func() {\n\t\tocspResponseCache = make(map[certIDKey]*certCacheValue)\n\t})\n}\n\nfunc cleanup() {\n\tdeleteOCSPCacheFile()\n\tdeleteOCSPCacheAll()\n\tunsetenv(cacheServerURLEnv)\n\tunsetenv(ocspTestResponderURLEnv)\n\tunsetenv(ocspTestNoOCSPURLEnv)\n\tunsetenv(cacheDirEnv)\n}\n\nfunc TestOCSPFailOpen(t *testing.T) {\n\tcleanup()\n\tdefer cleanup()\n\n\tconfig := &Config{\n\t\tAccount:       \"fakeaccount1\",\n\t\tUser:          \"fakeuser\",\n\t\tPassword:      \"fakepassword\",\n\t\tLoginTimeout:  10 * time.Second,\n\t\tOCSPFailOpen:  OCSPFailOpenTrue,\n\t\tAuthenticator: AuthTypeSnowflake,\n\t\tPrivateKey:    nil,\n\t}\n\tvar db *sql.DB\n\tvar err error\n\tvar testURL string\n\ttestURL, err = DSN(config)\n\tassertNilF(t, err, \"failed to build URL from Config\")\n\n\tif db, err = sql.Open(\"snowflake\", testURL); err != nil {\n\t\tt.Fatalf(\"failed to open db. %v, err: %v\", testURL, err)\n\t}\n\tdefer db.Close()\n\tif err = db.Ping(); err == nil {\n\t\tt.Fatalf(\"should fail to ping. %v\", testURL)\n\t}\n\tif strings.Contains(err.Error(), \"HTTP Status: 513. 
Hanging?\") {\n\t\treturn\n\t}\n\tdriverErr, ok := err.(*SnowflakeError)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error SnowflakeError: %v\", err)\n\t}\n\tif isFailToConnectOrAuthErr(driverErr) {\n\t\tt.Fatalf(\"should failed to connect %v\", err)\n\t}\n}\n\nfunc isFailToConnectOrAuthErr(driverErr *SnowflakeError) bool {\n\treturn driverErr.Number != ErrCodeFailedToConnect && driverErr.Number != ErrFailedToAuth\n}\n\nfunc TestOCSPFailOpenWithoutFileCache(t *testing.T) {\n\tcleanup()\n\tdefer cleanup()\n\n\tsetenv(cacheDirEnv, \"/NEVER_EXISTS\")\n\n\tconfig := &Config{\n\t\tAccount:       \"fakeaccount1\",\n\t\tUser:          \"fakeuser\",\n\t\tPassword:      \"fakepassword\",\n\t\tLoginTimeout:  10 * time.Second,\n\t\tOCSPFailOpen:  OCSPFailOpenTrue,\n\t\tAuthenticator: AuthTypeSnowflake, // Force password authentication\n\t\tPrivateKey:    nil,               // Ensure no private key\n\t}\n\tvar db *sql.DB\n\tvar err error\n\tvar testURL string\n\ttestURL, err = DSN(config)\n\tassertNilF(t, err, \"failed to build URL from Config\")\n\n\tif db, err = sql.Open(\"snowflake\", testURL); err != nil {\n\t\tt.Fatalf(\"failed to open db. %v, err: %v\", testURL, err)\n\t}\n\tdefer db.Close()\n\tif err = db.Ping(); err == nil {\n\t\tt.Fatalf(\"should fail to ping. %v\", testURL)\n\t}\n\tif strings.Contains(err.Error(), \"HTTP Status: 513. 
Hanging?\") {\n\t\treturn\n\t}\n\tdriverErr, ok := err.(*SnowflakeError)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error SnowflakeError: %v\", err)\n\t}\n\tif isFailToConnectOrAuthErr(driverErr) {\n\t\tt.Fatalf(\"should failed to connect %v\", err)\n\t}\n}\n\nfunc TestOCSPFailOpenRevokedStatus(t *testing.T) {\n\tt.Skip(\"revoked.badssl.com certificate expired\")\n\tcleanup()\n\tdefer cleanup()\n\n\tocspCacheServerEnabled = false\n\n\tconfig := &Config{\n\t\tAccount:       \"fakeaccount6\",\n\t\tUser:          \"fakeuser\",\n\t\tPassword:      \"fakepassword\",\n\t\tHost:          \"revoked.badssl.com\",\n\t\tLoginTimeout:  10 * time.Second,\n\t\tOCSPFailOpen:  OCSPFailOpenTrue,\n\t\tAuthenticator: AuthTypeSnowflake, // Force password authentication\n\t\tPrivateKey:    nil,\n\t}\n\tvar db *sql.DB\n\tvar err error\n\tvar testURL string\n\ttestURL, err = DSN(config)\n\tassertNilF(t, err, \"failed to build URL from Config\")\n\n\tif db, err = sql.Open(\"snowflake\", testURL); err != nil {\n\t\tt.Fatalf(\"failed to open db. %v, err: %v\", testURL, err)\n\t}\n\tdefer db.Close()\n\tif err = db.Ping(); err == nil {\n\t\tt.Fatalf(\"should fail to ping. %v\", testURL)\n\t}\n\tif strings.Contains(err.Error(), \"HTTP Status: 513. 
Hanging?\") {\n\t\treturn\n\t}\n\turlErr, ok := err.(*url.Error)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error URL Error: %v\", err)\n\t}\n\tvar driverErr *SnowflakeError\n\tdriverErr, ok = urlErr.Err.(*SnowflakeError)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error SnowflakeError: %v\", err)\n\t}\n\tif driverErr.Number != ErrOCSPStatusRevoked {\n\t\tt.Fatalf(\"should failed to connect %v\", err)\n\t}\n}\n\nfunc TestOCSPFailClosedRevokedStatus(t *testing.T) {\n\tt.Skip(\"revoked.badssl.com certificate expired\")\n\tcleanup()\n\tdefer cleanup()\n\n\tocspCacheServerEnabled = false\n\n\tconfig := &Config{\n\t\tAccount:       \"fakeaccount7\",\n\t\tAuthenticator: AuthTypeSnowflake, // Force password authentication\n\t\tPrivateKey:    nil,               // Ensure no private key\n\t\tUser:          \"fakeuser\",\n\t\tPassword:      \"fakepassword\",\n\t\tHost:          \"revoked.badssl.com\",\n\t\tLoginTimeout:  20 * time.Second,\n\t\tOCSPFailOpen:  OCSPFailOpenFalse,\n\t}\n\tvar db *sql.DB\n\tvar err error\n\tvar testURL string\n\ttestURL, err = DSN(config)\n\tassertNilF(t, err, \"failed to build URL from Config\")\n\n\tif db, err = sql.Open(\"snowflake\", testURL); err != nil {\n\t\tt.Fatalf(\"failed to open db. %v, err: %v\", testURL, err)\n\t}\n\tdefer db.Close()\n\tif err = db.Ping(); err == nil {\n\t\tt.Fatalf(\"should fail to ping. %v\", testURL)\n\t}\n\tif strings.Contains(err.Error(), \"HTTP Status: 513. 
Hanging?\") {\n\t\treturn\n\t}\n\turlErr, ok := err.(*url.Error)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error URL Error: %v\", err)\n\t}\n\tvar driverErr *SnowflakeError\n\tdriverErr, ok = urlErr.Err.(*SnowflakeError)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error SnowflakeError: %v\", err)\n\t}\n\tif driverErr.Number != ErrOCSPStatusRevoked {\n\t\tt.Fatalf(\"should failed to connect %v\", err)\n\t}\n}\n\nfunc TestOCSPFailOpenCacheServerTimeout(t *testing.T) {\n\tcleanup()\n\tdefer cleanup()\n\n\tsetenv(cacheServerURLEnv, fmt.Sprintf(\"http://localhost:%v/hang\", wiremock.port))\n\twiremock.registerMappings(t, newWiremockMapping(\"hang.json\"))\n\torigCacheServerTimeout := OcspCacheServerTimeout\n\tOcspCacheServerTimeout = time.Second\n\tdefer func() {\n\t\tOcspCacheServerTimeout = origCacheServerTimeout\n\t}()\n\n\tconfig := &Config{\n\t\tAccount:       \"fakeaccount8\",\n\t\tAuthenticator: AuthTypeSnowflake, // Force password authentication\n\t\tPrivateKey:    nil,               // Ensure no private key\n\t\tUser:          \"fakeuser\",\n\t\tPassword:      \"fakepassword\",\n\t\tLoginTimeout:  10 * time.Second,\n\t\tOCSPFailOpen:  OCSPFailOpenTrue,\n\t}\n\tvar db *sql.DB\n\tvar err error\n\tvar testURL string\n\ttestURL, err = DSN(config)\n\tassertNilF(t, err, \"failed to build URL from Config\")\n\n\tif db, err = sql.Open(\"snowflake\", testURL); err != nil {\n\t\tt.Fatalf(\"failed to open db. %v, err: %v\", testURL, err)\n\t}\n\tdefer db.Close()\n\tif err = db.Ping(); err == nil {\n\t\tt.Fatalf(\"should fail to ping. %v\", testURL)\n\t}\n\tif strings.Contains(err.Error(), \"HTTP Status: 513. 
Hanging?\") {\n\t\treturn\n\t}\n\tdriverErr, ok := err.(*SnowflakeError)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error SnowflakeError: %v\", err)\n\t}\n\tif isFailToConnectOrAuthErr(driverErr) {\n\t\tt.Fatalf(\"should failed to connect %v\", err)\n\t}\n}\n\nfunc TestOCSPFailClosedCacheServerTimeout(t *testing.T) {\n\tcleanup()\n\tdefer cleanup()\n\n\tsetenv(cacheServerURLEnv, fmt.Sprintf(\"http://localhost:%v/hang\", wiremock.port))\n\twiremock.registerMappings(t, newWiremockMapping(\"hang.json\"))\n\torigCacheServerTimeout := OcspCacheServerTimeout\n\tOcspCacheServerTimeout = time.Second\n\tdefer func() {\n\t\tOcspCacheServerTimeout = origCacheServerTimeout\n\t}()\n\n\tconfig := &Config{\n\t\tAccount:       \"fakeaccount9\",\n\t\tAuthenticator: AuthTypeSnowflake, // Force password authentication\n\t\tPrivateKey:    nil,               // Ensure no private key\n\t\tUser:          \"fakeuser\",\n\t\tPassword:      \"fakepassword\",\n\t\tLoginTimeout:  20 * time.Second,\n\t\tOCSPFailOpen:  OCSPFailOpenFalse,\n\t}\n\tvar db *sql.DB\n\tvar err error\n\tvar testURL string\n\ttestURL, err = DSN(config)\n\tassertNilF(t, err, \"failed to build URL from Config\")\n\n\tif db, err = sql.Open(\"snowflake\", testURL); err != nil {\n\t\tt.Fatalf(\"failed to open db. %v, err: %v\", testURL, err)\n\t}\n\tdefer db.Close()\n\tif err = db.Ping(); err == nil {\n\t\tt.Fatalf(\"should fail to ping. %v\", testURL)\n\t}\n\tif err == nil {\n\t\tt.Fatalf(\"should failed to connect. err:  %v\", err)\n\t}\n\tif strings.Contains(err.Error(), \"HTTP Status: 513. Hanging?\") {\n\t\treturn\n\t}\n\n\tswitch errType := err.(type) {\n\t// Before Go 1.17\n\tcase *SnowflakeError:\n\t\tdriverErr, ok := err.(*SnowflakeError)\n\t\tif !ok {\n\t\t\tt.Fatalf(\"failed to extract error SnowflakeError: %v\", err)\n\t\t}\n\t\tif isFailToConnectOrAuthErr(driverErr) {\n\t\t\tt.Fatalf(\"should have failed to connect. 
err: %v\", err)\n\t\t}\n\t// Go 1.18 and after rejects SHA-1 certificates, therefore a different error is returned (https://github.com/golang/go/issues/41682)\n\tcase *url.Error:\n\t\texpectedErrMsg := \"bad OCSP signature\"\n\t\tif !strings.Contains(err.Error(), expectedErrMsg) {\n\t\t\tt.Fatalf(\"should have failed with bad OCSP signature. err:  %v\", err)\n\t\t}\n\tdefault:\n\t\tt.Fatalf(\"should failed to connect. err type: %v, err:  %v\", errType, err)\n\t}\n}\n\nfunc TestOCSPFailOpenResponderTimeout(t *testing.T) {\n\tcleanup()\n\tdefer cleanup()\n\n\tocspCacheServerEnabled = false\n\tsetenv(ocspTestResponderURLEnv, fmt.Sprintf(\"http://localhost:%v/ocsp/hang\", wiremock.port))\n\twiremock.registerMappings(t, newWiremockMapping(\"hang.json\"))\n\torigOCSPResponderTimeout := OcspResponderTimeout\n\tOcspResponderTimeout = 1000\n\tdefer func() {\n\t\tOcspResponderTimeout = origOCSPResponderTimeout\n\t}()\n\n\tconfig := &Config{\n\t\tAccount:       \"fakeaccount10\",\n\t\tAuthenticator: AuthTypeSnowflake, // Force password authentication\n\t\tPrivateKey:    nil,               // Ensure no private key\n\t\tUser:          \"fakeuser\",\n\t\tPassword:      \"fakepassword\",\n\t\tLoginTimeout:  10 * time.Second,\n\t\tOCSPFailOpen:  OCSPFailOpenTrue,\n\t}\n\tvar db *sql.DB\n\tvar err error\n\tvar testURL string\n\ttestURL, err = DSN(config)\n\tassertNilF(t, err, \"failed to build URL from Config\")\n\n\tif db, err = sql.Open(\"snowflake\", testURL); err != nil {\n\t\tt.Fatalf(\"failed to open db. %v, err: %v\", testURL, err)\n\t}\n\tdefer db.Close()\n\tif err = db.Ping(); err == nil {\n\t\tt.Fatalf(\"should fail to ping. %v\", testURL)\n\t}\n\tif strings.Contains(err.Error(), \"HTTP Status: 513. 
Hanging?\") {\n\t\treturn\n\t}\n\tdriverErr, ok := err.(*SnowflakeError)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error SnowflakeError: %v\", err)\n\t}\n\tif isFailToConnectOrAuthErr(driverErr) {\n\t\tt.Fatalf(\"should failed to connect %v\", err)\n\t}\n}\n\nfunc TestOCSPFailClosedResponderTimeout(t *testing.T) {\n\tcleanup()\n\tdefer cleanup()\n\n\tocspCacheServerEnabled = false\n\tsetenv(ocspTestResponderURLEnv, fmt.Sprintf(\"http://localhost:%v/hang\", wiremock.port))\n\twiremock.registerMappings(t, newWiremockMapping(\"hang.json\"))\n\torigOCSPResponderTimeout := OcspResponderTimeout\n\torigOCSPMaxRetryCount := OcspMaxRetryCount\n\tOcspResponderTimeout = 100 * time.Millisecond\n\tOcspMaxRetryCount = 1\n\tdefer func() {\n\t\tOcspResponderTimeout = origOCSPResponderTimeout\n\t\tOcspMaxRetryCount = origOCSPMaxRetryCount\n\t}()\n\n\tconfig := &Config{\n\t\tAccount:       \"fakeaccount11\",\n\t\tAuthenticator: AuthTypeSnowflake, // Force password authentication\n\t\tPrivateKey:    nil,               // Ensure no private key\n\t\tUser:          \"fakeuser\",\n\t\tPassword:      \"fakepassword\",\n\t\tLoginTimeout:  3 * time.Second,\n\t\tOCSPFailOpen:  OCSPFailOpenFalse,\n\t}\n\tvar db *sql.DB\n\tvar err error\n\tvar testURL string\n\ttestURL, err = DSN(config)\n\tassertNilF(t, err, \"failed to build URL from Config\")\n\n\tif db, err = sql.Open(\"snowflake\", testURL); err != nil {\n\t\tt.Fatalf(\"failed to open db. %v, err: %v\", testURL, err)\n\t}\n\tdefer db.Close()\n\tif err = db.Ping(); err == nil {\n\t\tt.Fatalf(\"should fail to ping. %v\", testURL)\n\t}\n\tif strings.Contains(err.Error(), \"HTTP Status: 513. 
Hanging?\") {\n\t\treturn\n\t}\n\turlErr, ok := err.(*url.Error)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error URL Error: %v\", err)\n\t}\n\turlErr0, ok := urlErr.Err.(*url.Error)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error URL Error: %v\", urlErr.Err)\n\t}\n\tif !strings.Contains(urlErr0.Err.Error(), \"Client.Timeout\") && !strings.Contains(urlErr0.Err.Error(), \"connection refused\") {\n\t\tt.Fatalf(\"the root cause is not  timeout: %v\", urlErr0.Err)\n\t}\n}\n\nfunc TestOCSPFailOpenResponder404(t *testing.T) {\n\tcleanup()\n\tdefer cleanup()\n\n\tocspCacheServerEnabled = false\n\tsetenv(ocspTestResponderURLEnv, fmt.Sprintf(\"http://localhost:%v/404\", wiremock.port))\n\n\tconfig := &Config{\n\t\tAccount:       \"fakeaccount10\",\n\t\tAuthenticator: AuthTypeSnowflake, // Force password authentication\n\t\tPrivateKey:    nil,               // Ensure no private key\n\t\tUser:          \"fakeuser\",\n\t\tPassword:      \"fakepassword\",\n\t\tLoginTimeout:  5 * time.Second,\n\t\tOCSPFailOpen:  OCSPFailOpenTrue,\n\t}\n\tvar db *sql.DB\n\tvar err error\n\tvar testURL string\n\ttestURL, err = DSN(config)\n\tassertNilF(t, err, \"failed to build URL from Config\")\n\n\tif db, err = sql.Open(\"snowflake\", testURL); err != nil {\n\t\tt.Fatalf(\"failed to open db. %v, err: %v\", testURL, err)\n\t}\n\tdefer db.Close()\n\tif err = db.Ping(); err == nil {\n\t\tt.Fatalf(\"should fail to ping. %v\", testURL)\n\t}\n\tif strings.Contains(err.Error(), \"HTTP Status: 513. 
Hanging?\") {\n\t\treturn\n\t}\n\tdriverErr, ok := err.(*SnowflakeError)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error SnowflakeError: %v\", err)\n\t}\n\tif isFailToConnectOrAuthErr(driverErr) {\n\t\tt.Fatalf(\"should failed to connect %v\", err)\n\t}\n}\n\nfunc TestOCSPFailClosedResponder404(t *testing.T) {\n\tcleanup()\n\tdefer cleanup()\n\n\tocspCacheServerEnabled = false\n\tsetenv(ocspTestResponderURLEnv, fmt.Sprintf(\"http://localhost:%v/404\", wiremock.port))\n\n\tconfig := &Config{\n\t\tAccount:       \"fakeaccount11\",\n\t\tAuthenticator: AuthTypeSnowflake, // Force password authentication\n\t\tPrivateKey:    nil,               // Ensure no private key\n\t\tUser:          \"fakeuser\",\n\t\tPassword:      \"fakepassword\",\n\t\tLoginTimeout:  5 * time.Second,\n\t\tOCSPFailOpen:  OCSPFailOpenFalse,\n\t}\n\tvar db *sql.DB\n\tvar err error\n\tvar testURL string\n\ttestURL, err = DSN(config)\n\tassertNilF(t, err, \"failed to build URL from Config\")\n\n\tif db, err = sql.Open(\"snowflake\", testURL); err != nil {\n\t\tt.Fatalf(\"failed to open db. %v, err: %v\", testURL, err)\n\t}\n\tdefer db.Close()\n\tif err = db.Ping(); err == nil {\n\t\tt.Fatalf(\"should fail to ping. %v\", testURL)\n\t}\n\tif strings.Contains(err.Error(), \"HTTP Status: 513. 
Hanging?\") {\n\t\treturn\n\t}\n\turlErr, ok := err.(*url.Error)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error SnowflakeError: %v\", err)\n\t}\n\tif !strings.Contains(urlErr.Err.Error(), \"404 Not Found\") && !strings.Contains(urlErr.Err.Error(), \"connection refused\") {\n\t\tt.Fatalf(\"the root cause is not 404: %v\", urlErr.Err)\n\t}\n}\n\nfunc TestExpiredCertificate(t *testing.T) {\n\tcleanup()\n\tdefer cleanup()\n\n\tconfig := &Config{\n\t\tAccount:       \"fakeaccount10\",\n\t\tAuthenticator: AuthTypeSnowflake, // Force password authentication\n\t\tPrivateKey:    nil,               // Ensure no private key\n\t\tUser:          \"fakeuser\",\n\t\tPassword:      \"fakepassword\",\n\t\tHost:          \"expired.badssl.com\",\n\t\tLoginTimeout:  10 * time.Second,\n\t\tOCSPFailOpen:  OCSPFailOpenTrue,\n\t}\n\tvar db *sql.DB\n\tvar err error\n\tvar testURL string\n\ttestURL, err = DSN(config)\n\tassertNilF(t, err, \"failed to build URL from Config\")\n\n\tif db, err = sql.Open(\"snowflake\", testURL); err != nil {\n\t\tt.Fatalf(\"failed to open db. %v, err: %v\", testURL, err)\n\t}\n\tdefer db.Close()\n\tif err = db.Ping(); err == nil {\n\t\tt.Fatalf(\"should fail to ping. 
%v\", testURL)\n\t}\n\turlErr, ok := err.(*url.Error)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error URL Error: %v\", err)\n\t}\n\t_, ok = urlErr.Err.(x509.CertificateInvalidError)\n\n\tif !ok {\n\t\t// Go 1.20 throws tls CertificateVerification error\n\t\terrString := urlErr.Err.Error()\n\t\t// badssl sometimes times out\n\t\tif !strings.Contains(errString, \"certificate has expired or is not yet valid\") && !strings.Contains(errString, \"timeout\") && !strings.Contains(errString, \"connection attempt failed\") {\n\t\t\tt.Fatalf(\"failed to extract error Certificate error: %v\", err)\n\t\t}\n\t}\n}\n\n/*\nDISABLED: sicne it appeared self-signed.badssl.com is not well maintained,\n          this test is no longer reliable.\n// TestSelfSignedCertificate tests self-signed certificate\nfunc TestSelfSignedCertificate(t *testing.T) {\n\tcleanup()\n\tdefer cleanup()\n\n\tconfig := &Config{\n\t\tAccount:      \"fakeaccount10\",\n\t\tAuthenticator: AuthTypeSnowflake, // Force password authentication\n\t\tPrivateKey:    nil,               // Ensure no private key\n\t\tUser:         \"fakeuser\",\n\t\tPassword:     \"fakepassword\",\n\t\tHost:         \"self-signed.badssl.com\",\n\t\tLoginTimeout: 10 * time.Second,\n\t\tOCSPFailOpen: OCSPFailOpenTrue,\n\t}\n\tvar db *sql.DB\n\tvar err error\n\tvar testURL string\n\ttestURL, err = DSN(config)\n\tassertNilF(t, err, \"failed to build URL from Config\")\n\n\tif db, err = sql.Open(\"snowflake\", testURL); err != nil {\n\t\tt.Fatalf(\"failed to open db. %v, err: %v\", testURL, err)\n\t}\n\tdefer db.Close()\n\tif err = db.Ping(); err == nil {\n\t\tt.Fatalf(\"should fail to ping. 
%v\", testURL)\n\t}\n\turlErr, ok := err.(*url.Error)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error URL Error: %v\", err)\n\t}\n\t_, ok = urlErr.Err.(x509.UnknownAuthorityError)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error Certificate error: %v\", err)\n\t}\n}\n*/\n\nfunc TestOCSPFailOpenNoOCSPURL(t *testing.T) {\n\tcleanup()\n\tdefer cleanup()\n\n\tocspCacheServerEnabled = false\n\tsetenv(ocspTestNoOCSPURLEnv, \"true\")\n\n\tconfig := &Config{\n\t\tAccount:       \"fakeaccount10\",\n\t\tAuthenticator: AuthTypeSnowflake, // Force password authentication\n\t\tPrivateKey:    nil,               // Ensure no private key\n\t\tUser:          \"fakeuser\",\n\t\tPassword:      \"fakepassword\",\n\t\tLoginTimeout:  10 * time.Second,\n\t\tOCSPFailOpen:  OCSPFailOpenTrue,\n\t}\n\tvar db *sql.DB\n\tvar err error\n\tvar testURL string\n\ttestURL, err = DSN(config)\n\tassertNilF(t, err, \"failed to build URL from Config\")\n\n\tif db, err = sql.Open(\"snowflake\", testURL); err != nil {\n\t\tt.Fatalf(\"failed to open db. %v, err: %v\", testURL, err)\n\t}\n\tdefer db.Close()\n\tif err = db.Ping(); err == nil {\n\t\tt.Fatalf(\"should fail to ping. %v\", testURL)\n\t}\n\tif strings.Contains(err.Error(), \"HTTP Status: 513. 
Hanging?\") {\n\t\treturn\n\t}\n\tdriverErr, ok := err.(*SnowflakeError)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error SnowflakeError: %v\", err)\n\t}\n\tif isFailToConnectOrAuthErr(driverErr) {\n\t\tt.Fatalf(\"should failed to connect %v\", err)\n\t}\n}\n\nfunc TestOCSPFailClosedNoOCSPURL(t *testing.T) {\n\tcleanup()\n\tdefer cleanup()\n\n\tocspCacheServerEnabled = false\n\tsetenv(ocspTestNoOCSPURLEnv, \"true\")\n\n\tconfig := &Config{\n\t\tAccount:       \"fakeaccount11\",\n\t\tAuthenticator: AuthTypeSnowflake, // Force password authentication\n\t\tPrivateKey:    nil,               // Ensure no private key\n\t\tUser:          \"fakeuser\",\n\t\tPassword:      \"fakepassword\",\n\t\tLoginTimeout:  20 * time.Second,\n\t\tOCSPFailOpen:  OCSPFailOpenFalse,\n\t}\n\tvar db *sql.DB\n\tvar err error\n\tvar testURL string\n\ttestURL, err = DSN(config)\n\tassertNilF(t, err, \"failed to build URL from Config\")\n\n\tif db, err = sql.Open(\"snowflake\", testURL); err != nil {\n\t\tt.Fatalf(\"failed to open db. %v, err: %v\", testURL, err)\n\t}\n\tdefer db.Close()\n\tif err = db.Ping(); err == nil {\n\t\tt.Fatalf(\"should fail to ping. %v\", testURL)\n\t}\n\tif strings.Contains(err.Error(), \"HTTP Status: 513. Hanging?\") {\n\t\treturn\n\t}\n\turlErr, ok := err.(*url.Error)\n\tif !ok {\n\t\tt.Fatalf(\"failed to extract error SnowflakeError: %v\", err)\n\t}\n\tdriverErr, ok := urlErr.Err.(*SnowflakeError)\n\tif !ok {\n\t\tif !strings.Contains(err.Error(), \"HTTP Status: 513. 
Hanging?\") {\n\t\t\tt.Fatalf(\"failed to extract error SnowflakeError: %v\", err)\n\t\t}\n\t\treturn\n\t}\n\tif driverErr.Number != ErrOCSPNoOCSPResponderURL {\n\t\tt.Fatalf(\"should fail to connect with ErrOCSPNoOCSPResponderURL, got %v\", err)\n\t}\n}\n\nfunc TestOCSPUnexpectedResponses(t *testing.T) {\n\tcleanup()\n\tdefer cleanup()\n\n\tocspCacheServerEnabled = false\n\n\tcfg := wiremockHTTPS.connectionConfig(t)\n\n\tcountingRoundTripper := newCountingRoundTripper(http.DefaultTransport)\n\tocspTransport := wiremockHTTPS.ocspTransporter(t, countingRoundTripper)\n\tcfg.Transporter = ocspTransport\n\n\trunSampleQuery := func(cfg *Config) {\n\t\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\t\tdb := sql.OpenDB(connector)\n\t\trows, err := db.Query(\"SELECT 1\")\n\t\tassertNilF(t, err)\n\t\tdefer rows.Close()\n\t\tvar v int\n\t\tassertTrueF(t, rows.Next())\n\t\terr = rows.Scan(&v)\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, v, 1)\n\t}\n\n\tt.Run(\"should retry when OCSP is not reachable\", func(t *testing.T) {\n\t\tcountingRoundTripper.reset()\n\t\ttestResponderOverride := overrideEnv(ocspTestResponderURLEnv, \"http://localhost:56734\")\n\t\tdefer testResponderOverride.rollback()\n\t\twiremock.registerMappings(t, wiremockMapping{filePath: \"select1.json\"},\n\t\t\twiremockMapping{filePath: \"auth/password/successful_flow.json\"},\n\t\t)\n\t\trunSampleQuery(cfg)\n\t\tassertTrueE(t, countingRoundTripper.postReqCount[\"http://localhost:56734\"] > 1)\n\t\tassertEqualE(t, countingRoundTripper.getReqCount[\"http://localhost:56734\"], 0)\n\t})\n\n\tt.Run(\"should fall back to GET when POST returns malformed response\", func(t *testing.T) {\n\t\tcountingRoundTripper.reset()\n\t\ttestResponderOverride := overrideEnv(ocspTestResponderURLEnv, wiremock.baseURL())\n\t\tdefer testResponderOverride.rollback()\n\t\twiremock.registerMappings(t, wiremockMapping{filePath: \"ocsp/malformed.json\"},\n\t\t\twiremockMapping{filePath: \"select1.json\"},\n\t\t\twiremockMapping{filePath: 
\"auth/password/successful_flow.json\"},\n\t\t)\n\t\trunSampleQuery(cfg)\n\t\tassertEqualE(t, countingRoundTripper.postReqCount[wiremock.baseURL()], 2)\n\t\tassertEqualE(t, countingRoundTripper.getReqCount[wiremock.baseURL()], 2)\n\t})\n\n\tt.Run(\"should not fall back to GET when POST returns unauthorized\", func(t *testing.T) {\n\t\tcountingRoundTripper.reset()\n\t\ttestResponderOverride := overrideEnv(ocspTestResponderURLEnv, wiremock.baseURL())\n\t\tdefer testResponderOverride.rollback()\n\t\twiremock.registerMappings(t, wiremockMapping{filePath: \"ocsp/unauthorized.json\"},\n\t\t\twiremockMapping{filePath: \"select1.json\"},\n\t\t\twiremockMapping{filePath: \"auth/password/successful_flow.json\"},\n\t\t)\n\t\trunSampleQuery(cfg)\n\t\tassertEqualE(t, countingRoundTripper.postReqCount[wiremock.baseURL()], 2)\n\t\tassertEqualE(t, countingRoundTripper.getReqCount[wiremock.baseURL()], 0)\n\t})\n}\n\nfunc TestConnectionToMultipleConfigurations(t *testing.T) {\n\tsetenv(cacheServerURLEnv, defaultCacheServerHost)\n\twiremockHTTPS.registerMappings(t, wiremockMapping{filePath: \"auth/password/successful_flow.json\"})\n\terr := RegisterTLSConfig(\"wiremock\", &tls.Config{\n\t\tRootCAs: wiremockHTTPS.certPool(t),\n\t})\n\tassertNilF(t, err)\n\n\torigOcspMaxRetryCount := OcspMaxRetryCount\n\tOcspMaxRetryCount = 1\n\tdefer func() {\n\t\tOcspMaxRetryCount = origOcspMaxRetryCount\n\t}()\n\n\tcfgForFailOpen := wiremockHTTPS.connectionConfig(t)\n\tcfgForFailOpen.OCSPFailOpen = OCSPFailOpenTrue\n\tcfgForFailOpen.Transporter = nil\n\tcfgForFailOpen.TLSConfigName = \"wiremock\"\n\tcfgForFailOpen.MaxRetryCount = 1\n\n\tcfgForFailClose := wiremockHTTPS.connectionConfig(t)\n\tcfgForFailClose.OCSPFailOpen = OCSPFailOpenFalse\n\tcfgForFailClose.Transporter = nil\n\tcfgForFailClose.TLSConfigName = \"wiremock\"\n\tcfgForFailClose.MaxRetryCount = 1\n\n\t// we ignore closing here, since these are only 
wiremock connections\n\tfailOpenDb := sql.OpenDB(NewConnector(SnowflakeDriver{}, *cfgForFailOpen))\n\tfailCloseDb := sql.OpenDB(NewConnector(SnowflakeDriver{}, *cfgForFailClose))\n\n\t_, err = failOpenDb.Conn(context.Background())\n\tassertNilF(t, err)\n\n\t_, err = failCloseDb.Conn(context.Background())\n\tassertNotNilF(t, err)\n\tvar se *SnowflakeError\n\tassertTrueF(t, errors.As(err, &se))\n\tassertStringContainsE(t, se.Error(), \"no OCSP server is attached to the certificate\")\n\n\t_, err = failOpenDb.Conn(context.Background())\n\tassertNilF(t, err)\n\n\t// new connections should still behave the same way\n\tfailOpenDb2 := sql.OpenDB(NewConnector(SnowflakeDriver{}, *cfgForFailOpen))\n\tfailCloseDb2 := sql.OpenDB(NewConnector(SnowflakeDriver{}, *cfgForFailClose))\n\n\t_, err = failOpenDb2.Conn(context.Background())\n\tassertNilF(t, err)\n\n\t_, err = failCloseDb2.Conn(context.Background())\n\tassertNotNilF(t, err)\n\tassertTrueF(t, errors.As(err, &se))\n\tassertStringContainsE(t, se.Error(), \"no OCSP server is attached to the certificate\")\n\n\t// and old connections should still behave the same way\n\t_, err = failOpenDb.Conn(context.Background())\n\tassertNilF(t, err)\n\n\t_, err = failCloseDb.Conn(context.Background())\n\tassertNotNilF(t, err)\n\tassertTrueF(t, errors.As(err, &se))\n\tassertStringContainsE(t, se.Error(), \"no OCSP server is attached to the certificate\")\n}\n"
  },
  {
    "path": "driver_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"cmp\"\n\t\"context\"\n\t\"crypto/rsa\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"encoding/base64\"\n\t\"encoding/pem\"\n\t\"errors\"\n\t\"flag\"\n\t\"fmt\"\n\t\"math\"\n\t\"math/big\"\n\t\"math/rand\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"os/signal\"\n\t\"path/filepath\"\n\t\"reflect\"\n\t\"runtime\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"syscall\"\n\t\"testing\"\n\t\"time\"\n)\n\nvar (\n\tusername         string\n\tpass             string\n\taccount          string\n\tdbname           string\n\tschemaname       string\n\twarehouse        string\n\trolename         string\n\tdsn              string\n\thost             string\n\tport             string\n\tprotocol         string\n\tcustomPrivateKey bool            // Whether user has specified the private key path\n\ttestPrivKey      *rsa.PrivateKey // Valid private key used for all test cases\n\tdebugMode        bool\n)\n\nconst (\n\tselectNumberSQL       = \"SELECT %s::NUMBER(%v, %v) AS C\"\n\tselectVariousTypes    = \"SELECT 1.0::NUMBER(30,2) as C1, 2::NUMBER(18,0) AS C2, 22::NUMBER(38, 0) AS C2A, 't3' AS C3, 4.2::DOUBLE AS C4, 'abcd'::BINARY(8388608) AS C5, true AS C6\"\n\tselectRandomGenerator = \"SELECT SEQ8(), RANDSTR(1000, RANDOM()) FROM TABLE(GENERATOR(ROWCOUNT=>%v))\"\n\tPSTLocation           = \"America/Los_Angeles\"\n)\n\n// The tests require the following parameters in the environment variables.\n// SNOWFLAKE_TEST_USER, SNOWFLAKE_TEST_PASSWORD, SNOWFLAKE_TEST_ACCOUNT,\n// SNOWFLAKE_TEST_DATABASE, SNOWFLAKE_TEST_SCHEMA, SNOWFLAKE_TEST_WAREHOUSE.\n// Optionally you may specify SNOWFLAKE_TEST_PROTOCOL, SNOWFLAKE_TEST_HOST\n// and SNOWFLAKE_TEST_PORT to specify the endpoint.\nfunc init() {\n\t// get environment variables\n\tenv := func(key, defaultValue string) string {\n\t\treturn cmp.Or(os.Getenv(key), defaultValue)\n\t}\n\tusername = env(\"SNOWFLAKE_TEST_USER\", \"testuser\")\n\tpass = env(\"SNOWFLAKE_TEST_PASSWORD\", 
\"testpassword\")\n\taccount = env(\"SNOWFLAKE_TEST_ACCOUNT\", \"testaccount\")\n\tdbname = env(\"SNOWFLAKE_TEST_DATABASE\", \"testdb\")\n\tschemaname = env(\"SNOWFLAKE_TEST_SCHEMA\", \"public\")\n\trolename = env(\"SNOWFLAKE_TEST_ROLE\", \"sysadmin\")\n\twarehouse = env(\"SNOWFLAKE_TEST_WAREHOUSE\", \"testwarehouse\")\n\n\tprotocol = env(\"SNOWFLAKE_TEST_PROTOCOL\", \"https\")\n\thost = os.Getenv(\"SNOWFLAKE_TEST_HOST\")\n\tport = env(\"SNOWFLAKE_TEST_PORT\", \"443\")\n\tif host == \"\" {\n\t\thost = fmt.Sprintf(\"%s.snowflakecomputing.com\", account)\n\t} else {\n\t\thost = fmt.Sprintf(\"%s:%s\", host, port)\n\t}\n\n\tsetupPrivateKey()\n\n\tcreateDSN(\"UTC\")\n\n\tdebugMode, _ = strconv.ParseBool(os.Getenv(\"SNOWFLAKE_TEST_DEBUG\"))\n\tif debugMode {\n\t\t_ = GetLogger().SetLogLevel(\"debug\")\n\t}\n}\n\nfunc createDSN(timezone string) {\n\t// Check if we should use JWT authentication\n\tauthenticator := os.Getenv(\"SNOWFLAKE_TEST_AUTHENTICATOR\")\n\n\tif authenticator == \"SNOWFLAKE_JWT\" {\n\t\t// For JWT authentication, don't include password in the DSN\n\t\tdsn = fmt.Sprintf(\"%s@%s/%s/%s\", username, host, dbname, schemaname)\n\t} else {\n\t\t// For standard password authentication\n\t\tdsn = fmt.Sprintf(\"%s:%s@%s/%s/%s\", username, pass, host, dbname, schemaname)\n\t}\n\n\tparameters := url.Values{}\n\tparameters.Add(\"timezone\", timezone)\n\tif protocol != \"\" {\n\t\tparameters.Add(\"protocol\", protocol)\n\t}\n\tif account != \"\" {\n\t\tparameters.Add(\"account\", account)\n\t}\n\tif warehouse != \"\" {\n\t\tparameters.Add(\"warehouse\", warehouse)\n\t}\n\tif rolename != \"\" {\n\t\tparameters.Add(\"role\", rolename)\n\t}\n\n\t// Add authenticator and private key for JWT authentication\n\tif authenticator == \"SNOWFLAKE_JWT\" {\n\t\tparameters.Add(\"authenticator\", \"SNOWFLAKE_JWT\")\n\t\tparameters.Add(\"jwtClientTimeout\", \"20\")\n\t\tprivateKeyPath := os.Getenv(\"SNOWFLAKE_TEST_PRIVATE_KEY\")\n\t\tif privateKeyPath != \"\" {\n\t\t\t// Read and 
encode the private key file\n\t\t\tprivateKeyBytes, err := os.ReadFile(privateKeyPath)\n\t\t\tif err == nil {\n\t\t\t\tblock, _ := pem.Decode(privateKeyBytes)\n\t\t\t\tif block != nil && block.Type == \"PRIVATE KEY\" {\n\t\t\t\t\tencodedKey := base64.URLEncoding.EncodeToString(block.Bytes)\n\t\t\t\t\tparameters.Add(\"privateKey\", encodedKey)\n\t\t\t\t} else if block == nil {\n\t\t\t\t\tpanic(\"Failed to decode PEM block from private key file\")\n\t\t\t\t} else {\n\t\t\t\t\tpanic(\"Expected 'PRIVATE KEY' block type\")\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tpanic(\"Failed to read private key file\")\n\t\t\t}\n\t\t} else {\n\t\t\tpanic(\"SNOWFLAKE_TEST_PRIVATE_KEY environment variable is not set for JWT authentication\")\n\t\t}\n\t}\n\n\tif len(parameters) > 0 {\n\t\tdsn += \"?\" + parameters.Encode()\n\t}\n}\n\n// setup creates a test schema so that all tests can run in the same schema\nfunc setup() (string, error) {\n\tenv := func(key, defaultValue string) string {\n\t\treturn cmp.Or(os.Getenv(key), defaultValue)\n\t}\n\n\torgSchemaname := schemaname\n\tif env(\"GITHUB_WORKFLOW\", \"\") != \"\" {\n\t\tgithubRunnerID := env(\"RUNNER_TRACKING_ID\", \"GITHUB_RUNNER_ID\")\n\t\tgithubRunnerID = strings.ReplaceAll(githubRunnerID, \"-\", \"_\")\n\t\tgithubSha := env(\"GITHUB_SHA\", \"GITHUB_SHA\")\n\t\tschemaname = fmt.Sprintf(\"%v_%v\", githubRunnerID, githubSha)\n\t} else {\n\t\tschemaname = fmt.Sprintf(\"golang_%v\", time.Now().UnixNano())\n\t}\n\tvar db *sql.DB\n\tvar err error\n\tif db, err = sql.Open(\"snowflake\", dsn); err != nil {\n\t\treturn \"\", fmt.Errorf(\"failed to open db. err: %v\", err)\n\t}\n\tdefer db.Close()\n\tif _, err = db.Exec(fmt.Sprintf(\"CREATE OR REPLACE SCHEMA %v\", schemaname)); err != nil {\n\t\treturn \"\", fmt.Errorf(\"failed to create schema. 
%v\", err)\n\t}\n\tcreateDSN(\"UTC\")\n\treturn orgSchemaname, nil\n}\n\n// teardown drops the test schema\nfunc teardown() error {\n\tvar db *sql.DB\n\tvar err error\n\tif db, err = sql.Open(\"snowflake\", dsn); err != nil {\n\t\treturn fmt.Errorf(\"failed to open db. %v, err: %v\", dsn, err)\n\t}\n\tdefer db.Close()\n\tif _, err = db.Exec(fmt.Sprintf(\"DROP SCHEMA IF EXISTS %v\", schemaname)); err != nil {\n\t\treturn fmt.Errorf(\"failed to drop schema. %v\", err)\n\t}\n\treturn nil\n}\n\nfunc TestMain(m *testing.M) {\n\tflag.Parse()\n\tsignal.Ignore(syscall.SIGQUIT)\n\tif value := os.Getenv(\"SKIP_SETUP\"); value != \"\" {\n\t\tos.Exit(m.Run())\n\t}\n\n\tif _, err := setup(); err != nil {\n\t\tpanic(err)\n\t}\n\tret := m.Run()\n\tif err := teardown(); err != nil {\n\t\tpanic(err)\n\t}\n\tos.Exit(ret)\n}\n\ntype DBTest struct {\n\t*testing.T\n\tconn *sql.Conn\n}\n\nfunc (dbt *DBTest) mustQueryT(t *testing.T, query string, args ...any) (rows *RowsExtended) {\n\tt.Helper()\n\treturn dbt.mustQueryContextT(context.Background(), t, query, args...)\n}\n\nfunc (dbt *DBTest) mustQuery(query string, args ...any) (rows *RowsExtended) {\n\tdbt.Helper()\n\treturn dbt.mustQueryT(dbt.T, query, args...)\n}\n\nfunc (dbt *DBTest) mustQueryContext(ctx context.Context, query string, args ...any) (rows *RowsExtended) {\n\tdbt.Helper()\n\treturn dbt.mustQueryContextT(ctx, dbt.T, query, 
args...)\n}\n\nfunc (dbt *DBTest) mustQueryContextT(ctx context.Context, t *testing.T, query string, args ...any) (rows *RowsExtended) {\n\tt.Helper()\n\t// handle interrupt signal\n\tctx, cancel := context.WithCancel(ctx)\n\tc := make(chan os.Signal, 1)\n\tc0 := make(chan bool, 1)\n\tsignal.Notify(c, os.Interrupt)\n\tdefer func() {\n\t\tsignal.Stop(c)\n\t}()\n\tgo func() {\n\t\tselect {\n\t\tcase <-c:\n\t\t\tfmt.Println(\"Caught signal, canceling...\")\n\t\t\tcancel()\n\t\tcase <-ctx.Done():\n\t\t\tfmt.Println(\"Done\")\n\t\tcase <-c0:\n\t\t}\n\t\tclose(c)\n\t}()\n\n\trs, err := dbt.conn.QueryContext(ctx, query, args...)\n\tif err != nil {\n\t\tt.Fatalf(\"query, query=%v, err=%v\", query, err)\n\t}\n\treturn &RowsExtended{\n\t\trows:      rs,\n\t\tcloseChan: &c0,\n\t\tt:         t,\n\t}\n}\n\nfunc (dbt *DBTest) query(query string, args ...any) (*sql.Rows, error) {\n\treturn dbt.conn.QueryContext(context.Background(), query, args...)\n}\n\nfunc (dbt *DBTest) mustQueryAssertCount(query string, expected int, args ...any) {\n\trows := dbt.mustQuery(query, args...)\n\tdefer rows.Close()\n\tcnt := 0\n\tfor rows.Next() {\n\t\tcnt++\n\t}\n\tif cnt != expected {\n\t\tdbt.Fatalf(\"expected %v, got %v\", expected, cnt)\n\t}\n}\n\nfunc (dbt *DBTest) prepare(query string) (*sql.Stmt, error) {\n\treturn dbt.conn.PrepareContext(context.Background(), query)\n}\n\nfunc (dbt *DBTest) fail(method, query string, err error) {\n\tif !debugMode && len(query) > 1000 {\n\t\tquery = \"[query too large to print]\"\n\t}\n\tdbt.Fatalf(\"error on %s [%s]: %s\", method, query, err.Error())\n}\n\nfunc (dbt *DBTest) mustExec(query string, args ...any) (res sql.Result) {\n\treturn dbt.mustExecContext(context.Background(), query, args...)\n}\n\nfunc (dbt *DBTest) mustExecT(t *testing.T, query string, args ...any) (res sql.Result) {\n\treturn dbt.mustExecContextT(context.Background(), t, query, args...)\n}\n\nfunc (dbt *DBTest) mustExecContext(ctx context.Context, query string, args ...any) (res 
sql.Result) {\n\tres, err := dbt.conn.ExecContext(ctx, query, args...)\n\tif err != nil {\n\t\tdbt.fail(\"exec context\", query, err)\n\t}\n\treturn res\n}\n\nfunc (dbt *DBTest) mustExecContextT(ctx context.Context, t *testing.T, query string, args ...any) (res sql.Result) {\n\tres, err := dbt.conn.ExecContext(ctx, query, args...)\n\tif err != nil {\n\t\tt.Fatalf(\"exec context: query=%v, err=%v\", query, err)\n\t}\n\treturn res\n}\n\nfunc (dbt *DBTest) exec(query string, args ...any) (sql.Result, error) {\n\treturn dbt.conn.ExecContext(context.Background(), query, args...)\n}\n\nfunc (dbt *DBTest) mustDecimalSize(ct *sql.ColumnType) (pr int64, sc int64) {\n\tvar ok bool\n\tpr, sc, ok = ct.DecimalSize()\n\tif !ok {\n\t\tdbt.Fatalf(\"failed to get decimal size. %v\", ct)\n\t}\n\treturn pr, sc\n}\n\nfunc (dbt *DBTest) mustFailDecimalSize(ct *sql.ColumnType) {\n\tvar ok bool\n\tif _, _, ok = ct.DecimalSize(); ok {\n\t\tdbt.Fatalf(\"should not return decimal size. %v\", ct)\n\t}\n}\n\nfunc (dbt *DBTest) mustLength(ct *sql.ColumnType) (cLen int64) {\n\tvar ok bool\n\tcLen, ok = ct.Length()\n\tif !ok {\n\t\tdbt.Fatalf(\"failed to get length. %v\", ct)\n\t}\n\treturn cLen\n}\n\nfunc (dbt *DBTest) mustFailLength(ct *sql.ColumnType) {\n\tvar ok bool\n\tif _, ok = ct.Length(); ok {\n\t\tdbt.Fatalf(\"should not return length. %v\", ct)\n\t}\n}\n\nfunc (dbt *DBTest) mustNullable(ct *sql.ColumnType) (canNull bool) {\n\tvar ok bool\n\tcanNull, ok = ct.Nullable()\n\tif !ok {\n\t\tdbt.Fatalf(\"failed to get nullable. 
%v\", ct)\n\t}\n\treturn canNull\n}\n\nfunc (dbt *DBTest) mustPrepare(query string) (stmt *sql.Stmt) {\n\tstmt, err := dbt.conn.PrepareContext(context.Background(), query)\n\tif err != nil {\n\t\tdbt.fail(\"prepare\", query, err)\n\t}\n\treturn stmt\n}\n\nfunc (dbt *DBTest) forceJSON() {\n\tdbt.mustExec(forceJSON)\n}\n\nfunc (dbt *DBTest) forceArrow() {\n\tdbt.mustExec(forceARROW)\n\tdbt.mustExec(\"alter session set ENABLE_STRUCTURED_TYPES_NATIVE_ARROW_FORMAT = false\")\n\tdbt.mustExec(\"alter session set FORCE_ENABLE_STRUCTURED_TYPES_NATIVE_ARROW_FORMAT = false\")\n}\n\nfunc (dbt *DBTest) forceNativeArrow() { // structured types\n\tdbt.mustExec(forceARROW)\n\tdbt.mustExec(\"alter session set ENABLE_STRUCTURED_TYPES_NATIVE_ARROW_FORMAT = true\")\n\tdbt.mustExec(\"alter session set FORCE_ENABLE_STRUCTURED_TYPES_NATIVE_ARROW_FORMAT = true\")\n}\n\nfunc (dbt *DBTest) enableStructuredTypes() {\n\t_, err := dbt.exec(\"alter session set ENABLE_STRUCTURED_TYPES_IN_CLIENT_RESPONSE = true\")\n\tif err != nil {\n\t\tdbt.Log(err)\n\t}\n\t_, err = dbt.exec(\"alter session set IGNORE_CLIENT_VESRION_IN_STRUCTURED_TYPES_RESPONSE = true\")\n\tif err != nil {\n\t\tdbt.Log(err)\n\t}\n\t_, err = dbt.exec(\"alter session set ENABLE_STRUCTURED_TYPES_IN_FDN_TABLES = true\")\n\tif err != nil {\n\t\tdbt.Log(err)\n\t}\n}\n\nfunc (dbt *DBTest) enableStructuredTypesBinding() {\n\tdbt.enableStructuredTypes()\n\t_, err := dbt.exec(\"ALTER SESSION SET ENABLE_OBJECT_TYPED_BINDS = true\")\n\tif err != nil {\n\t\tdbt.Log(err)\n\t}\n\t_, err = dbt.exec(\"ALTER SESSION SET ENABLE_STRUCTURED_TYPES_IN_BINDS = Enable\")\n\tif err != nil {\n\t\tdbt.Log(err)\n\t}\n}\n\ntype SCTest struct {\n\t*testing.T\n\tsc *snowflakeConn\n}\n\nfunc (sct *SCTest) fail(method, query string, err error) {\n\tif !debugMode && len(query) > 300 {\n\t\tquery = \"[query too large to print]\"\n\t}\n\tsct.Fatalf(\"error on %s [%s]: %s\", method, query, err.Error())\n}\n\nfunc (sct *SCTest) mustExec(query string, args 
[]driver.Value) driver.Result {\n\tresult, err := sct.sc.Exec(query, args)\n\tif err != nil {\n\t\tsct.fail(\"exec\", query, err)\n\t}\n\treturn result\n}\n\nfunc (sct *SCTest) mustQuery(query string, args []driver.Value) driver.Rows {\n\trows, err := sct.sc.Query(query, args)\n\tif err != nil {\n\t\tsct.fail(\"query\", query, err)\n\t}\n\treturn rows\n}\n\nfunc (sct *SCTest) mustQueryContext(ctx context.Context, query string, args []driver.NamedValue) driver.Rows {\n\trows, err := sct.sc.QueryContext(ctx, query, args)\n\tif err != nil {\n\t\tsct.fail(\"QueryContext\", query, err)\n\t}\n\treturn rows\n}\n\ntype testConfig struct {\n\tdsn string\n}\n\nfunc runDBTest(t *testing.T, test func(dbt *DBTest)) {\n\trunDBTestWithConfig(t, &testConfig{dsn}, test)\n}\n\nfunc runDBTestWithConfig(t *testing.T, testCfg *testConfig, test func(dbt *DBTest)) {\n\tdb, conn := openConn(t, testCfg)\n\tdefer conn.Close()\n\tdefer db.Close()\n\tdbt := &DBTest{t, conn}\n\n\ttest(dbt)\n}\n\nfunc runSnowflakeConnTest(t *testing.T, test func(sct *SCTest)) {\n\trunSnowflakeConnTestWithConfig(t, &testConfig{dsn}, test)\n}\n\nfunc runSnowflakeConnTestWithConfig(t *testing.T, testCfg *testConfig, test func(sct *SCTest)) {\n\tconfig, err := ParseDSN(testCfg.dsn)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tsc, err := buildSnowflakeConn(context.Background(), *config)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer sc.Close()\n\tif err = authenticateWithConfig(sc); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tsct := &SCTest{t, sc}\n\n\ttest(sct)\n}\n\nfunc getDbHandlerFromConfig(t *testing.T, cfg *Config) *sql.DB {\n\tdsn, err := DSN(cfg)\n\tassertNilF(t, err, \"failed to create DSN from Config\")\n\n\tdb, err := sql.Open(\"snowflake\", dsn)\n\tassertNilF(t, err, \"failed to open database\")\n\n\treturn db\n}\n\nfunc runningOnAWS() bool {\n\treturn os.Getenv(\"CLOUD_PROVIDER\") == \"AWS\"\n}\n\nfunc runningOnGCP() bool {\n\treturn os.Getenv(\"CLOUD_PROVIDER\") == \"GCP\"\n}\n\nfunc runningOnLinux() 
bool {\n\treturn runtime.GOOS == \"linux\"\n}\n\nfunc TestKnownUserInvalidPasswordParameters(t *testing.T) {\n\twiremock.registerMappings(t,\n\t\twiremockMapping{filePath: \"auth/password/invalid_password.json\"},\n\t)\n\n\tcfg := wiremock.connectionConfig()\n\tcfg.User = \"testUser\"\n\tcfg.Password = \"INVALID_PASSWORD\"\n\tcfg.Authenticator = AuthTypeSnowflake // Force password auth\n\n\tdb := sql.OpenDB(NewConnector(SnowflakeDriver{}, *cfg))\n\tdefer db.Close()\n\n\t_, err := db.Exec(\"SELECT 1\")\n\tassertNotNilF(t, err, \"should cause an authentication error\")\n\n\tvar driverErr *SnowflakeError\n\tassertErrorsAsF(t, err, &driverErr)\n\tassertEqualE(t, driverErr.Number, 390100)\n}\n\nfunc TestCommentOnlyQuery(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tquery := \"--\"\n\t\t// just a comment, no query\n\t\trows, err := dbt.query(query)\n\t\tif err == nil {\n\t\t\trows.Close()\n\t\t\tdbt.Fatalf(\"should fail to run a comment-only query: %v\", query)\n\t\t}\n\t\tif driverErr, ok := err.(*SnowflakeError); ok {\n\t\t\tif driverErr.Number != 900 { // syntax error\n\t\t\t\tdbt.fail(\"query\", query, err)\n\t\t\t}\n\t\t}\n\t})\n}\n\nfunc TestEmptyQuery(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tquery := \"select 1 from dual where 1=0\"\n\t\t// a query that returns no rows\n\t\trows := dbt.conn.QueryRowContext(context.Background(), query)\n\t\tvar v1 any\n\t\tif err := rows.Scan(&v1); err != sql.ErrNoRows {\n\t\t\tdbt.Errorf(\"should fail. err: %v\", err)\n\t\t}\n\t\trows = dbt.conn.QueryRowContext(context.Background(), query)\n\t\tif err := rows.Scan(&v1); err != sql.ErrNoRows {\n\t\t\tdbt.Errorf(\"should fail. 
err: %v\", err)\n\t\t}\n\t})\n}\n\nfunc TestEmptyQueryWithRequestID(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tquery := \"select 1\"\n\t\tctx := WithRequestID(context.Background(), NewUUID())\n\t\trows := dbt.conn.QueryRowContext(ctx, query)\n\t\tvar v1 any\n\t\tif err := rows.Scan(&v1); err != nil {\n\t\t\tdbt.Errorf(\"should not have failed with valid request id. err: %v\", err)\n\t\t}\n\t})\n}\n\nfunc TestRequestIDFromTwoDifferentSessions(t *testing.T) {\n\tdb, err := sql.Open(\"snowflake\", dsn)\n\tassertNilF(t, err)\n\tdefer db.Close()\n\tdb.SetMaxOpenConns(10)\n\n\tconn, err := db.Conn(context.Background())\n\tassertNilF(t, err)\n\tdefer conn.Close()\n\t_, err = conn.ExecContext(context.Background(), forceJSON)\n\tassertNilF(t, err)\n\n\tconn2, err := db.Conn(context.Background())\n\tassertNilF(t, err)\n\tdefer conn2.Close()\n\t_, err = conn2.ExecContext(context.Background(), forceJSON)\n\tassertNilF(t, err)\n\n\t// creating table\n\treqIDForCreate := NewUUID()\n\t_, err = conn.ExecContext(WithRequestID(context.Background(), reqIDForCreate), \"CREATE TABLE req_id_testing (id INTEGER)\")\n\tassertNilF(t, err)\n\tdefer func() {\n\t\t_, err = db.Exec(\"DROP TABLE IF EXISTS req_id_testing\")\n\t\tassertNilE(t, err)\n\t}()\n\t// reusing the same requestID on the same session should be deduplicated and succeed\n\t_, err = conn.ExecContext(WithRequestID(context.Background(), reqIDForCreate), \"CREATE TABLE req_id_testing (id INTEGER)\")\n\tassertNilF(t, err)\n\n\t// should fail as API v1 does not allow reusing requestID across sessions for DML statements\n\t_, err = conn2.ExecContext(WithRequestID(context.Background(), reqIDForCreate), \"CREATE TABLE req_id_testing (id INTEGER)\")\n\tassertNotNilE(t, err)\n\tassertStringContainsE(t, err.Error(), \"already exists\")\n\n\t// inserting a record\n\treqIDForInsert := NewUUID()\n\texecResult, err := conn.ExecContext(WithRequestID(context.Background(), reqIDForInsert), \"INSERT INTO req_id_testing VALUES 
(1)\")\n\tassertNilF(t, err)\n\trowsInserted, err := execResult.RowsAffected()\n\tassertNilF(t, err)\n\tassertEqualE(t, rowsInserted, int64(1))\n\n\texecResult2, err := conn2.ExecContext(WithRequestID(context.Background(), reqIDForInsert), \"INSERT INTO req_id_testing VALUES (1)\")\n\tassertNilF(t, err)\n\trowsInserted2, err := execResult2.RowsAffected()\n\tassertNilF(t, err)\n\tassertEqualE(t, rowsInserted2, int64(1))\n\n\t// selecting data\n\treqIDForSelect := NewUUID()\n\trows, err := conn.QueryContext(WithRequestID(context.Background(), reqIDForSelect), \"SELECT * FROM req_id_testing\")\n\tassertNilF(t, err)\n\tdefer rows.Close()\n\tvar i int\n\tassertTrueE(t, rows.Next())\n\tassertNilF(t, rows.Scan(&i))\n\tassertEqualE(t, i, 1)\n\ti = 0\n\tassertTrueE(t, rows.Next())\n\tassertNilF(t, rows.Scan(&i))\n\tassertEqualE(t, i, 1)\n\tassertFalseE(t, rows.Next())\n\n\trows2, err := conn.QueryContext(WithRequestID(context.Background(), reqIDForSelect), \"SELECT * FROM req_id_testing\")\n\tassertNilF(t, err)\n\tdefer rows2.Close()\n\tassertTrueE(t, rows2.Next())\n\tassertNilF(t, rows2.Scan(&i))\n\tassertEqualE(t, i, 1)\n\ti = 0\n\tassertTrueE(t, rows2.Next())\n\tassertNilF(t, rows2.Scan(&i))\n\tassertEqualE(t, i, 1)\n\tassertFalseE(t, rows2.Next())\n\n\t// insert another record\n\t_, err = conn.ExecContext(context.Background(), \"INSERT INTO req_id_testing VALUES (1)\")\n\tassertNilF(t, err)\n\n\t// selecting using old request id\n\trows3, err := conn.QueryContext(WithRequestID(context.Background(), reqIDForSelect), \"SELECT * FROM req_id_testing\")\n\tassertNilF(t, err)\n\tdefer rows3.Close()\n\tassertTrueE(t, rows3.Next())\n\tassertNilF(t, rows3.Scan(&i))\n\tassertEqualE(t, i, 1)\n\ti = 0\n\tassertTrueE(t, rows3.Next())\n\tassertNilF(t, rows3.Scan(&i))\n\tassertEqualE(t, i, 1)\n\ti = 0\n\tassertFalseF(t, rows3.Next())\n}\n\nfunc TestCRUD(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\t// Create Table\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test (value 
BOOLEAN)\")\n\n\t\t// Test for unexpected Data\n\t\tvar out bool\n\t\trows := dbt.mustQuery(\"SELECT * FROM test\")\n\t\tdefer rows.Close()\n\t\tif rows.Next() {\n\t\t\tdbt.Error(\"unexpected Data in empty table\")\n\t\t}\n\n\t\t// Create Data\n\t\tres := dbt.mustExec(\"INSERT INTO test VALUES (true)\")\n\t\tcount, err := res.RowsAffected()\n\t\tif err != nil {\n\t\t\tdbt.Fatalf(\"res.RowsAffected() returned error: %s\", err.Error())\n\t\t}\n\t\tif count != 1 {\n\t\t\tdbt.Fatalf(\"expected 1 affected row, got %d\", count)\n\t\t}\n\n\t\tid, err := res.LastInsertId()\n\t\tif err != nil {\n\t\t\tdbt.Fatalf(\"res.LastInsertId() returned error: %s\", err.Error())\n\t\t}\n\t\tif id != -1 {\n\t\t\tdbt.Fatalf(\n\t\t\t\t\"expected InsertId -1, got %d. Snowflake doesn't support last insert ID\", id)\n\t\t}\n\n\t\t// Read\n\t\trows = dbt.mustQuery(\"SELECT value FROM test\")\n\t\tdefer func(rows *RowsExtended) {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}(rows)\n\t\tif rows.Next() {\n\t\t\tassertNilF(t, rows.Scan(&out))\n\t\t\tif !out {\n\t\t\t\tdbt.Errorf(\"%t should be true\", out)\n\t\t\t}\n\t\t\tif rows.Next() {\n\t\t\t\tdbt.Error(\"unexpected Data\")\n\t\t\t}\n\t\t} else {\n\t\t\tdbt.Error(\"no Data\")\n\t\t}\n\n\t\t// Update\n\t\tres = dbt.mustExec(\"UPDATE test SET value = ? 
WHERE value = ?\", false, true)\n\t\tcount, err = res.RowsAffected()\n\t\tif err != nil {\n\t\t\tdbt.Fatalf(\"res.RowsAffected() returned error: %s\", err.Error())\n\t\t}\n\t\tif count != 1 {\n\t\t\tdbt.Fatalf(\"expected 1 affected row, got %d\", count)\n\t\t}\n\n\t\t// Check Update\n\t\trows = dbt.mustQuery(\"SELECT value FROM test\")\n\t\tdefer func(rows *RowsExtended) {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}(rows)\n\t\tif rows.Next() {\n\t\t\tassertNilF(t, rows.Scan(&out))\n\t\t\tif out {\n\t\t\t\tdbt.Errorf(\"%t should be false\", out)\n\t\t\t}\n\t\t\tif rows.Next() {\n\t\t\t\tdbt.Error(\"unexpected Data\")\n\t\t\t}\n\t\t} else {\n\t\t\tdbt.Error(\"no Data\")\n\t\t}\n\n\t\t// Delete\n\t\tres = dbt.mustExec(\"DELETE FROM test WHERE value = ?\", false)\n\t\tcount, err = res.RowsAffected()\n\t\tif err != nil {\n\t\t\tdbt.Fatalf(\"res.RowsAffected() returned error: %s\", err.Error())\n\t\t}\n\t\tif count != 1 {\n\t\t\tdbt.Fatalf(\"expected 1 affected row, got %d\", count)\n\t\t}\n\n\t\t// Check for unexpected rows\n\t\tres = dbt.mustExec(\"DELETE FROM test\")\n\t\tcount, err = res.RowsAffected()\n\t\tif err != nil {\n\t\t\tdbt.Fatalf(\"res.RowsAffected() returned error: %s\", err.Error())\n\t\t}\n\t\tif count != 0 {\n\t\t\tdbt.Fatalf(\"expected 0 affected rows, got %d\", count)\n\t\t}\n\t})\n}\n\nfunc TestInt(t *testing.T) {\n\ttestInt(t, false)\n}\n\nfunc testInt(t *testing.T, json bool) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\ttypes := []string{\"INT\", \"INTEGER\"}\n\t\tin := int64(42)\n\t\tvar out int64\n\t\tvar rows *RowsExtended\n\n\t\t// SIGNED\n\t\tfor _, v := range types {\n\t\t\tt.Run(v, func(t *testing.T) {\n\t\t\t\tif json {\n\t\t\t\t\tdbt.mustExec(forceJSON)\n\t\t\t\t}\n\t\t\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test (value \" + v + \")\")\n\t\t\t\tdbt.mustExec(\"INSERT INTO test VALUES (?)\", in)\n\t\t\t\trows = dbt.mustQuery(\"SELECT value FROM test\")\n\t\t\t\tdefer func() {\n\t\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t\t}()\n\t\t\t\tif 
rows.Next() {\n\t\t\t\t\tassertNilF(t, rows.Scan(&out))\n\t\t\t\t\tif in != out {\n\t\t\t\t\t\tdbt.Errorf(\"%s: %d != %d\", v, in, out)\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tdbt.Errorf(\"%s: no data\", v)\n\t\t\t\t}\n\n\t\t\t})\n\t\t}\n\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test\")\n\t})\n}\n\nfunc TestFloat32(t *testing.T) {\n\ttestFloat32(t, false)\n}\n\nfunc testFloat32(t *testing.T, json bool) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\ttypes := [2]string{\"FLOAT\", \"DOUBLE\"}\n\t\tin := float32(42.23)\n\t\tvar out float32\n\t\tvar rows *RowsExtended\n\t\tfor _, v := range types {\n\t\t\tt.Run(v, func(t *testing.T) {\n\t\t\t\tif json {\n\t\t\t\t\tdbt.mustExec(forceJSON)\n\t\t\t\t}\n\t\t\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test (value \" + v + \")\")\n\t\t\t\tdbt.mustExec(\"INSERT INTO test VALUES (?)\", in)\n\t\t\t\trows = dbt.mustQuery(\"SELECT value FROM test\")\n\t\t\t\tdefer func() {\n\t\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t\t}()\n\t\t\t\tif rows.Next() {\n\t\t\t\t\terr := rows.Scan(&out)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tdbt.Errorf(\"failed to scan data: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t\tif in != out {\n\t\t\t\t\t\tdbt.Errorf(\"%s: %g != %g\", v, in, out)\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tdbt.Errorf(\"%s: no data\", v)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test\")\n\t})\n}\n\nfunc TestFloat64(t *testing.T) {\n\ttestFloat64(t, false)\n}\n\nfunc testFloat64(t *testing.T, json bool) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\ttypes := [2]string{\"FLOAT\", \"DOUBLE\"}\n\t\texpected := 42.23\n\t\tvar out float64\n\t\tvar rows *RowsExtended\n\t\tfor _, v := range types {\n\t\t\tt.Run(v, func(t *testing.T) {\n\t\t\t\tif json {\n\t\t\t\t\tdbt.mustExec(forceJSON)\n\t\t\t\t}\n\t\t\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test (value \" + v + \")\")\n\t\t\t\tdbt.mustExec(\"INSERT INTO test VALUES (42.23)\")\n\t\t\t\trows = dbt.mustQuery(\"SELECT value FROM test\")\n\t\t\t\tdefer func() 
{\n\t\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t\t}()\n\t\t\t\tif rows.Next() {\n\t\t\t\t\tassertNilF(t, rows.Scan(&out))\n\t\t\t\t\tif expected != out {\n\t\t\t\t\t\tdbt.Errorf(\"%s: %g != %g\", v, expected, out)\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tdbt.Errorf(\"%s: no data\", v)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test\")\n\t})\n}\n\nfunc TestDecfloat(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tfor _, format := range []string{\"JSON\", \"ARROW\"} {\n\t\t\tif format == \"JSON\" {\n\t\t\t\tdbt.mustExecT(t, forceJSON)\n\t\t\t} else {\n\t\t\t\tdbt.mustExecT(t, forceARROW)\n\t\t\t}\n\t\t\tfor _, higherPrecision := range []bool{false, true} {\n\t\t\t\tfor _, decfloatMappingEnabled := range []bool{true, false} {\n\t\t\t\t\tt.Run(fmt.Sprintf(\"format=%v,higherPrecision=%v,decfloatMappingEnabled=%v\", format, higherPrecision, decfloatMappingEnabled), func(t *testing.T) {\n\t\t\t\t\t\tfor _, tc := range []struct {\n\t\t\t\t\t\t\tin                      string\n\t\t\t\t\t\t\tstandardPrecisionOutput float64\n\t\t\t\t\t\t\thigherPrecisionOutput   string\n\t\t\t\t\t\t\tdecfloatDisabledOutput  string\n\t\t\t\t\t\t}{\n\t\t\t\t\t\t\t{in: \"0\", standardPrecisionOutput: 0, higherPrecisionOutput: \"0\", decfloatDisabledOutput: \"0\"},\n\t\t\t\t\t\t\t{in: \"-1\", standardPrecisionOutput: -1, higherPrecisionOutput: \"-1\", decfloatDisabledOutput: \"-1\"},\n\t\t\t\t\t\t\t{in: \"-1.5\", standardPrecisionOutput: -1.5, higherPrecisionOutput: \"-1.5\", decfloatDisabledOutput: \"-1.5\"},\n\t\t\t\t\t\t\t{in: \"1e1\", standardPrecisionOutput: 10, higherPrecisionOutput: \"10\", decfloatDisabledOutput: \"10\"},\n\t\t\t\t\t\t\t{in: \"1e2\", standardPrecisionOutput: 100, higherPrecisionOutput: \"100\", decfloatDisabledOutput: \"100\"},\n\t\t\t\t\t\t\t{in: \"-2e3\", standardPrecisionOutput: -2000, higherPrecisionOutput: \"-2000\", decfloatDisabledOutput: \"-2000\"},\n\t\t\t\t\t\t\t{in: \"1e100\", standardPrecisionOutput: math.Pow10(100), 
higherPrecisionOutput: \"1e+100\", decfloatDisabledOutput: \"1e100\"},\n\t\t\t\t\t\t\t{in: \"-1.2345e2\", standardPrecisionOutput: -123.45, higherPrecisionOutput: \"-123.45\", decfloatDisabledOutput: \"-123.45\"},\n\t\t\t\t\t\t\t{in: \"1.23456e2\", standardPrecisionOutput: 123.456, higherPrecisionOutput: \"123.456\", decfloatDisabledOutput: \"123.456\"},\n\t\t\t\t\t\t\t{in: \"-9.87654321E-250\", standardPrecisionOutput: -9.87654321 * math.Pow10(-250), higherPrecisionOutput: \"-9.87654321e-250\", decfloatDisabledOutput: \"-9.87654321e-250\"},\n\t\t\t\t\t\t\t{in: \"1.2345678901234567890123456789012345678e37\", standardPrecisionOutput: 12345678901234567525491324606797053952, higherPrecisionOutput: \"12345678901234567890123456789012345678\", decfloatDisabledOutput: \"12345678901234567890123456789012345678\"}, // pragma: allowlist secret\n\t\t\t\t\t\t} {\n\t\t\t\t\t\t\tt.Run(tc.in, func(t *testing.T) {\n\t\t\t\t\t\t\t\tctx := context.Background()\n\t\t\t\t\t\t\t\tif higherPrecision {\n\t\t\t\t\t\t\t\t\tctx = WithHigherPrecision(ctx)\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tif decfloatMappingEnabled {\n\t\t\t\t\t\t\t\t\tctx = WithDecfloatMappingEnabled(ctx)\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\trows := dbt.mustQueryContextT(ctx, t, fmt.Sprintf(\"SELECT '%v'::DECFLOAT UNION SELECT NULL ORDER BY 1\", tc.in))\n\t\t\t\t\t\t\t\tdefer rows.Close()\n\t\t\t\t\t\t\t\trows.mustNext()\n\t\t\t\t\t\t\t\tif !decfloatMappingEnabled {\n\t\t\t\t\t\t\t\t\tvar s string\n\t\t\t\t\t\t\t\t\trows.mustScan(&s)\n\t\t\t\t\t\t\t\t\tif format == \"ARROW\" {\n\t\t\t\t\t\t\t\t\t\tassertEqualF(t, s, strings.ToLower(tc.in))\n\t\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\t\tassertEqualE(t, s, tc.decfloatDisabledOutput)\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\t\t\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\t\t\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[string]())\n\t\t\t\t\t\t\t\t} else if higherPrecision {\n\t\t\t\t\t\t\t\t\tvar bf 
*big.Float\n\t\t\t\t\t\t\t\t\trows.mustScan(&bf)\n\t\t\t\t\t\t\t\t\tassertEqualE(t, bf.Text('g', 38), tc.higherPrecisionOutput)\n\t\t\t\t\t\t\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\t\t\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\t\t\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[*big.Float]())\n\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\tvar f float64\n\t\t\t\t\t\t\t\t\trows.mustScan(&f)\n\t\t\t\t\t\t\t\t\tassertEqualEpsilonE(t, f, tc.standardPrecisionOutput, 0.0001)\n\t\t\t\t\t\t\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\t\t\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\t\t\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[float64]())\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\trows.mustNext()\n\t\t\t\t\t\t\t\tif !decfloatMappingEnabled {\n\t\t\t\t\t\t\t\t\tvar s sql.NullString\n\t\t\t\t\t\t\t\t\trows.mustScan(&s)\n\t\t\t\t\t\t\t\t\tassertFalseE(t, s.Valid)\n\t\t\t\t\t\t\t\t} else if higherPrecision {\n\t\t\t\t\t\t\t\t\tvar bf *big.Float\n\t\t\t\t\t\t\t\t\trows.mustScan(&bf)\n\t\t\t\t\t\t\t\t\tassertNilE(t, bf)\n\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\tvar f sql.NullFloat64\n\t\t\t\t\t\t\t\t\trows.mustScan(&f)\n\t\t\t\t\t\t\t\t\tassertFalseE(t, f.Valid)\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t})\n\t\t\t\t\t\t}\n\t\t\t\t\t})\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tt.Run(\"Binding simple value\", func(t *testing.T) {\n\t\t\tt.Run(\"As string\", func(t *testing.T) {\n\t\t\t\trows := dbt.mustQueryContextT(context.Background(), t, \"SELECT ?::DECFLOAT\", DataTypeDecfloat, \"1234567890.1234567890123456789012345678\")\n\t\t\t\tdefer rows.Close()\n\t\t\t\trows.mustNext()\n\t\t\t\tvar s string\n\t\t\t\trows.mustScan(&s)\n\t\t\t\tassertEqualE(t, s, \"1.2345678901234567890123456789012345678e9\")\n\t\t\t})\n\t\t\tt.Run(\"As float\", func(t *testing.T) {\n\t\t\t\trows := dbt.mustQueryContextT(WithDecfloatMappingEnabled(context.Background()), t, \"SELECT ?::DECFLOAT\", DataTypeDecfloat, 123.45)\n\t\t\t\tdefer rows.Close()\n\t\t\t\trows.mustNext()\n\t\t\t\tvar 
f float64\n\t\t\t\trows.mustScan(&f)\n\t\t\t\tassertEqualE(t, f, 123.45)\n\t\t\t})\n\t\t\tt.Run(\"As *big.Float\", func(t *testing.T) {\n\t\t\t\tbfFromString, ok := new(big.Float).SetPrec(127).SetString(\"1234567890.1234567890123456789012345678\")\n\t\t\t\tassertTrueF(t, ok)\n\t\t\t\trows := dbt.mustQueryContextT(WithDecfloatMappingEnabled(WithHigherPrecision(context.Background())), t, \"SELECT ?::DECFLOAT\", DataTypeDecfloat, bfFromString)\n\t\t\t\tdefer rows.Close()\n\t\t\t\trows.mustNext()\n\t\t\t\tbf := new(big.Float).SetPrec(127)\n\t\t\t\trows.mustScan(&bf)\n\t\t\t\tassertTrueE(t, bf.Cmp(bfFromString) == 0)\n\t\t\t})\n\t\t})\n\n\t\tt.Run(\"Binding array\", func(t *testing.T) {\n\t\t\tbfFromString, ok := new(big.Float).SetPrec(127).SetString(\"1234567890.1234567890123456789012345678\")\n\t\t\tassertTrueF(t, ok)\n\t\t\tarrays := []any{\n\t\t\t\tmustArray([]string{\"123.45\", \"1234567890.1234567890123456789012345678\"}, DataTypeDecfloat),\n\t\t\t\tmustArray([]float64{123.45, 1234567890.1234567890123456789012345678}, DataTypeDecfloat),\n\t\t\t\tmustArray([]*big.Float{\n\t\t\t\t\tnew(big.Float).SetFloat64(123.45),\n\t\t\t\t\tbfFromString,\n\t\t\t\t}, DataTypeDecfloat),\n\t\t\t}\n\t\t\tfor _, bulk := range []bool{false, true} {\n\t\t\t\tfor idx, arr := range arrays {\n\t\t\t\t\tt.Run(fmt.Sprintf(\"bulk=%v, idx=%v\", bulk, idx), func(t *testing.T) {\n\t\t\t\t\t\tif bulk {\n\t\t\t\t\t\t\tdbt.mustExecT(t, \"ALTER SESSION SET CLIENT_STAGE_ARRAY_BINDING_THRESHOLD = 1\")\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tdbt.mustExecT(t, \"ALTER SESSION SET CLIENT_STAGE_ARRAY_BINDING_THRESHOLD = 100\")\n\t\t\t\t\t\t}\n\t\t\t\t\t\tdbt.mustExecT(t, \"CREATE OR REPLACE TABLE test_decfloat (value DECFLOAT)\")\n\t\t\t\t\t\tdefer dbt.mustExecT(t, \"DROP TABLE IF EXISTS test_decfloat\")\n\t\t\t\t\t\t_ = dbt.mustExecT(t, \"INSERT INTO test_decfloat VALUES (?)\", arr)\n\t\t\t\t\t\trows := dbt.mustQueryT(t, \"SELECT 
value FROM test_decfloat ORDER BY 1\")\n\t\t\t\t\t\tdefer rows.Close()\n\t\t\t\t\t\trows.mustNext()\n\t\t\t\t\t\tvar f float64\n\t\t\t\t\t\trows.mustScan(&f)\n\t\t\t\t\t\tassertEqualEpsilonE(t, f, 123.45, 0.01)\n\t\t\t\t\t\trows.mustNext()\n\t\t\t\t\t\tif idx != 1 { // float64 cannot be bound with the full precision\n\t\t\t\t\t\t\tvar s string\n\t\t\t\t\t\t\trows.mustScan(&s)\n\t\t\t\t\t\t\tassertEqualE(t, s, \"1.2345678901234567890123456789012345678e9\")\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\trows.mustScan(&f)\n\t\t\t\t\t\t\tassertEqualEpsilonE(t, f, 1234567890.1234567890123456789012345678, 0.01)\n\t\t\t\t\t\t}\n\t\t\t\t\t})\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t})\n}\n\nfunc TestString(t *testing.T) {\n\ttestString(t, false)\n}\n\nfunc testString(t *testing.T, json bool) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tif json {\n\t\t\tdbt.mustExec(forceJSON)\n\t\t}\n\t\ttypes := []string{\"CHAR(255)\", \"VARCHAR(255)\", \"TEXT\", \"STRING\"}\n\t\tin := \"κόσμε üöäßñóùéàâÿœ'îë Árvíztűrő いろはにほへとちりぬるを イロハニホヘト דג סקרן чащах  น่าฟังเอย\"\n\t\tvar out string\n\t\tvar rows *RowsExtended\n\n\t\tfor _, v := range types {\n\t\t\tt.Run(v, func(t *testing.T) {\n\t\t\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test (value \" + v + \")\")\n\t\t\t\tdbt.mustExec(\"INSERT INTO test VALUES (?)\", in)\n\n\t\t\t\trows = dbt.mustQuery(\"SELECT value FROM test\")\n\t\t\t\tdefer func() {\n\t\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t\t}()\n\t\t\t\tif rows.Next() {\n\t\t\t\t\tassertNilF(t, rows.Scan(&out))\n\t\t\t\t\tif in != out {\n\t\t\t\t\t\tdbt.Errorf(\"%s: %s != %s\", v, in, out)\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tdbt.Errorf(\"%s: no data\", v)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test\")\n\n\t\t// BLOB (Snowflake doesn't support BLOB type but STRING covers large text data)\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test (id int, value STRING)\")\n\n\t\tid := 2\n\t\tin = `Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam\n\t\t\tnonumy eirmod 
tempor invidunt ut labore et dolore magna aliquyam\n\t\t\terat, sed diam voluptua. At vero eos et accusam et justo duo\n\t\t\tdolores et ea rebum. Stet clita kasd gubergren, no sea takimata\n\t\t\tsanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet,\n\t\t\tconsetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt\n\t\t\tut labore et dolore magna aliquyam erat, sed diam voluptua. At vero\n\t\t\teos et accusam et justo duo dolores et ea rebum. Stet clita kasd\n\t\t\tgubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet.`\n\t\tdbt.mustExec(\"INSERT INTO test VALUES (?, ?)\", id, in)\n\n\t\tif err := dbt.conn.QueryRowContext(context.Background(), \"SELECT value FROM test WHERE id = ?\", id).Scan(&out); err != nil {\n\t\t\tdbt.Fatalf(\"Error on BLOB-Query: %s\", err.Error())\n\t\t} else if out != in {\n\t\t\tdbt.Errorf(\"BLOB: %s != %s\", in, out)\n\t\t}\n\t})\n}\n\n/** TESTING TYPES **/\n// testUUID is a wrapper around UUID for unit testing purposes and should not be used in production\ntype testUUID struct {\n\tUUID\n}\n\nfunc newTestUUID() testUUID {\n\treturn testUUID{NewUUID()}\n}\n\nfunc parseTestUUID(str string) testUUID {\n\tif str == \"\" {\n\t\treturn testUUID{}\n\t}\n\treturn testUUID{ParseUUID(str)}\n}\n\n// Scan implements sql.Scanner so UUIDs can be read from databases transparently.\n// Currently, database types that map to string and []byte are supported. 
Please\n// consult database-specific driver documentation for matching types.\nfunc (uuid *testUUID) Scan(src any) error {\n\tswitch src := src.(type) {\n\tcase nil:\n\t\treturn nil\n\n\tcase string:\n\t\t// if an empty UUID comes from a table, we return a null UUID\n\t\tif src == \"\" {\n\t\t\treturn nil\n\t\t}\n\n\t\t// see ParseUUID for the required string format\n\t\tu := ParseUUID(src)\n\n\t\t*uuid = testUUID{u}\n\n\tcase []byte:\n\t\t// if an empty UUID comes from a table, we return a null UUID\n\t\tif len(src) == 0 {\n\t\t\treturn nil\n\t\t}\n\n\t\t// assumes a simple slice of bytes if 16 bytes\n\t\t// otherwise attempts to parse\n\t\tif len(src) != 16 {\n\t\t\treturn uuid.Scan(string(src))\n\t\t}\n\t\tcopy((uuid.UUID)[:], src)\n\n\tdefault:\n\t\treturn fmt.Errorf(\"Scan: unable to scan type %T into UUID\", src)\n\t}\n\n\treturn nil\n}\n\n// Value implements driver.Valuer so that UUIDs can be written to databases\n// transparently. Currently, UUIDs map to strings. Please consult\n// database-specific driver documentation for matching types.\nfunc (uuid testUUID) Value() (driver.Value, error) {\n\treturn uuid.String(), nil\n}\n\nfunc TestUUID(t *testing.T) {\n\tt.Run(\"JSON\", func(t *testing.T) {\n\t\ttestUUIDWithFormat(t, true, false)\n\t})\n\tt.Run(\"Arrow\", func(t *testing.T) {\n\t\ttestUUIDWithFormat(t, false, true)\n\t})\n}\n\nfunc testUUIDWithFormat(t *testing.T, json, arrow bool) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tif json {\n\t\t\tdbt.mustExec(forceJSON)\n\t\t} else if arrow {\n\t\t\tdbt.mustExec(forceARROW)\n\t\t}\n\n\t\ttypes := []string{\"CHAR(255)\", \"VARCHAR(255)\", \"TEXT\", \"STRING\"}\n\n\t\tin := make([]testUUID, len(types))\n\n\t\tfor i := range types {\n\t\t\tin[i] = newTestUUID()\n\t\t}\n\n\t\tfor i, v := range types {\n\t\t\tt.Run(v, func(t *testing.T) {\n\t\t\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test (value \" + v + \")\")\n\t\t\t\tdbt.mustExec(\"INSERT INTO test VALUES (?)\", in[i])\n\n\t\t\t\trows := dbt.mustQuery(\"SELECT value 
FROM test\")\n\t\t\t\tdefer func() {\n\t\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t\t}()\n\t\t\t\tif rows.Next() {\n\t\t\t\t\tvar out testUUID\n\t\t\t\t\tassertNilF(t, rows.Scan(&out))\n\t\t\t\t\tif in[i] != out {\n\t\t\t\t\t\tdbt.Errorf(\"%s: %s != %s\", v, in[i], out)\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tdbt.Errorf(\"%s: no data\", v)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test\")\n\t})\n}\n\ntype tcDateTimeTimestamp struct {\n\tdbtype  string\n\ttlayout string\n\ttests   []timeTest\n}\n\ntype timeTest struct {\n\ts string    // source date time string\n\tt time.Time // expected fetched data\n}\n\nfunc (tt timeTest) genQuery() string {\n\treturn \"SELECT '%s'::%s\"\n}\n\nfunc (tt timeTest) run(t *testing.T, dbt *DBTest, dbtype, tlayout string) {\n\tvar rows *RowsExtended\n\tquery := fmt.Sprintf(tt.genQuery(), tt.s, dbtype)\n\trows = dbt.mustQuery(query)\n\tdefer rows.Close()\n\tvar err error\n\tif !rows.Next() {\n\t\terr = rows.Err()\n\t\tif err == nil {\n\t\t\terr = fmt.Errorf(\"no data\")\n\t\t}\n\t\tdbt.Errorf(\"%s: %s\", dbtype, err)\n\t\treturn\n\t}\n\n\tvar dst any\n\tif err = rows.Scan(&dst); err != nil {\n\t\tdbt.Errorf(\"%s: %s\", dbtype, err)\n\t\treturn\n\t}\n\tswitch val := dst.(type) {\n\tcase []uint8:\n\t\tstr := string(val)\n\t\tif str == tt.s {\n\t\t\treturn\n\t\t}\n\t\tdbt.Errorf(\"%s to string: expected %q, got %q\",\n\t\t\tdbtype,\n\t\t\ttt.s,\n\t\t\tstr,\n\t\t)\n\tcase time.Time:\n\t\tif val.UnixNano() == tt.t.UnixNano() {\n\t\t\treturn\n\t\t}\n\t\tt.Logf(\"source:%v, expected: %v, got:%v\", tt.s, tt.t, val)\n\t\tdbt.Errorf(\"%s to time.Time: expected %q, got %q\",\n\t\t\tdbtype,\n\t\t\ttt.s,\n\t\t\tval.Format(tlayout),\n\t\t)\n\tdefault:\n\t\tdbt.Errorf(\"%s: unhandled type %T (is '%v')\",\n\t\t\tdbtype, val, val,\n\t\t)\n\t}\n}\n\nfunc TestSimpleDateTimeTimestampFetch(t *testing.T) {\n\ttestSimpleDateTimeTimestampFetch(t, false)\n}\n\nfunc testSimpleDateTimeTimestampFetch(t *testing.T, json bool) {\n\tvar scan = 
func(rows *RowsExtended, cd any, ct any, cts any) {\n\t\tif err := rows.Scan(cd, ct, cts); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t}\n\tvar fetchTypes = []func(*RowsExtended){\n\t\tfunc(rows *RowsExtended) {\n\t\t\tvar cd, ct, cts time.Time\n\t\t\tscan(rows, &cd, &ct, &cts)\n\t\t},\n\t\tfunc(rows *RowsExtended) {\n\t\t\tvar cd, ct, cts time.Time\n\t\t\tscan(rows, &cd, &ct, &cts)\n\t\t},\n\t}\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tif json {\n\t\t\tdbt.mustExec(forceJSON)\n\t\t}\n\t\tfor _, f := range fetchTypes {\n\t\t\trows := dbt.mustQuery(\"SELECT CURRENT_DATE(), CURRENT_TIME(), CURRENT_TIMESTAMP()\")\n\t\t\tdefer rows.Close()\n\t\t\tif rows.Next() {\n\t\t\t\tf(rows)\n\t\t\t} else {\n\t\t\t\tt.Fatal(\"no results\")\n\t\t\t}\n\t\t}\n\t})\n}\n\nfunc TestDateTime(t *testing.T) {\n\ttestDateTime(t, false)\n}\n\nfunc testDateTime(t *testing.T, json bool) {\n\tafterTime := func(t time.Time, d string) time.Time {\n\t\tdur, err := time.ParseDuration(d)\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t\treturn t.Add(dur)\n\t}\n\tt0 := time.Time{}\n\ttstr0 := \"0000-00-00 00:00:00.000000000\"\n\ttestcases := []tcDateTimeTimestamp{\n\t\t{\"DATE\", format[:10], []timeTest{\n\t\t\t{t: time.Date(2011, 11, 20, 0, 0, 0, 0, time.UTC)},\n\t\t\t{t: time.Date(2, 8, 2, 0, 0, 0, 0, time.UTC), s: \"0002-08-02\"},\n\t\t}},\n\t\t{\"TIME\", format[11:19], []timeTest{\n\t\t\t{t: afterTime(t0, \"12345s\")},\n\t\t\t{t: t0, s: tstr0[11:19]},\n\t\t}},\n\t\t{\"TIME(0)\", format[11:19], []timeTest{\n\t\t\t{t: afterTime(t0, \"12345s\")},\n\t\t\t{t: t0, s: tstr0[11:19]},\n\t\t}},\n\t\t{\"TIME(1)\", format[11:21], []timeTest{\n\t\t\t{t: afterTime(t0, \"12345600ms\")},\n\t\t\t{t: t0, s: tstr0[11:21]},\n\t\t}},\n\t\t{\"TIME(6)\", format[11:], []timeTest{\n\t\t\t{t: t0, s: tstr0[11:]},\n\t\t}},\n\t\t{\"DATETIME\", format[:19], []timeTest{\n\t\t\t{t: time.Date(2011, 11, 20, 21, 27, 37, 0, time.UTC)},\n\t\t}},\n\t\t{\"DATETIME(0)\", format[:21], []timeTest{\n\t\t\t{t: time.Date(2011, 11, 20, 21, 
27, 37, 0, time.UTC)},\n\t\t}},\n\t\t{\"DATETIME(1)\", format[:21], []timeTest{\n\t\t\t{t: time.Date(2011, 11, 20, 21, 27, 37, 100000000, time.UTC)},\n\t\t}},\n\t\t{\"DATETIME(6)\", format, []timeTest{\n\t\t\t{t: time.Date(2011, 11, 20, 21, 27, 37, 123456000, time.UTC)},\n\t\t}},\n\t\t{\"DATETIME(9)\", format, []timeTest{\n\t\t\t{t: time.Date(2011, 11, 20, 21, 27, 37, 123456789, time.UTC)},\n\t\t}},\n\t}\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tif json {\n\t\t\tdbt.mustExec(forceJSON)\n\t\t}\n\t\tfor _, setups := range testcases {\n\t\t\tt.Run(setups.dbtype, func(t *testing.T) {\n\t\t\t\tfor _, setup := range setups.tests {\n\t\t\t\t\tif setup.s == \"\" {\n\t\t\t\t\t\t// fill time string wherever Go can reliably produce it\n\t\t\t\t\t\tsetup.s = setup.t.Format(setups.tlayout)\n\t\t\t\t\t}\n\t\t\t\t\tsetup.run(t, dbt, setups.dbtype, setups.tlayout)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestTimestampLTZ(t *testing.T) {\n\ttestTimestampLTZ(t, false)\n}\n\nfunc testTimestampLTZ(t *testing.T, json bool) {\n\t// Set session time zone to Los Angeles, same as the machine\n\tcreateDSN(PSTLocation)\n\tlocation, err := time.LoadLocation(PSTLocation)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\ttestcases := []tcDateTimeTimestamp{\n\t\t{\n\t\t\tdbtype:  \"TIMESTAMP_LTZ(9)\",\n\t\t\ttlayout: format,\n\t\t\ttests: []timeTest{\n\t\t\t\t{\n\t\t\t\t\ts: \"2016-12-30 05:02:03\",\n\t\t\t\t\tt: time.Date(2016, 12, 30, 5, 2, 3, 0, location),\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\ts: \"2016-12-30 05:02:03 -00:00\",\n\t\t\t\t\tt: time.Date(2016, 12, 30, 5, 2, 3, 0, time.UTC),\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\ts: \"2017-05-12 00:51:42\",\n\t\t\t\t\tt: time.Date(2017, 5, 12, 0, 51, 42, 0, location),\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\ts: \"2017-03-12 01:00:00\",\n\t\t\t\t\tt: time.Date(2017, 3, 12, 1, 0, 0, 0, location),\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\ts: \"2017-03-13 04:00:00\",\n\t\t\t\t\tt: time.Date(2017, 3, 13, 4, 0, 0, 0, location),\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\ts: \"2017-03-13 
04:00:00.123456789\",\n\t\t\t\t\tt: time.Date(2017, 3, 13, 4, 0, 0, 123456789, location),\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tdbtype:  \"TIMESTAMP_LTZ(8)\",\n\t\t\ttlayout: format,\n\t\t\ttests: []timeTest{\n\t\t\t\t{\n\t\t\t\t\ts: \"2017-03-13 04:00:00.123456789\",\n\t\t\t\t\tt: time.Date(2017, 3, 13, 4, 0, 0, 123456780, location),\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tif json {\n\t\t\tdbt.mustExec(forceJSON)\n\t\t}\n\t\tfor _, setups := range testcases {\n\t\t\tt.Run(setups.dbtype, func(t *testing.T) {\n\t\t\t\tfor _, setup := range setups.tests {\n\t\t\t\t\tif setup.s == \"\" {\n\t\t\t\t\t\t// fill time string wherever Go can reliably produce it\n\t\t\t\t\t\tsetup.s = setup.t.Format(setups.tlayout)\n\t\t\t\t\t}\n\t\t\t\t\tsetup.run(t, dbt, setups.dbtype, setups.tlayout)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n\t// Revert timezone to UTC, which is the default for the test suite\n\tcreateDSN(\"UTC\")\n}\n\nfunc TestTimestampTZ(t *testing.T) {\n\ttestTimestampTZ(t, false)\n}\n\nfunc testTimestampTZ(t *testing.T, json bool) {\n\tsflo := func(offsets string) (loc *time.Location) {\n\t\tr, err := LocationWithOffsetString(offsets)\n\t\tif err != nil {\n\t\t\treturn time.UTC\n\t\t}\n\t\treturn r\n\t}\n\ttestcases := []tcDateTimeTimestamp{\n\t\t{\n\t\t\tdbtype:  \"TIMESTAMP_TZ(9)\",\n\t\t\ttlayout: format,\n\t\t\ttests: []timeTest{\n\t\t\t\t{\n\t\t\t\t\ts: \"2016-12-30 05:02:03 +07:00\",\n\t\t\t\t\tt: time.Date(2016, 12, 30, 5, 2, 3, 0,\n\t\t\t\t\t\tsflo(\"+0700\")),\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\ts: \"2017-05-23 03:56:41 -09:00\",\n\t\t\t\t\tt: time.Date(2017, 5, 23, 3, 56, 41, 0,\n\t\t\t\t\t\tsflo(\"-0900\")),\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tif json {\n\t\t\tdbt.mustExec(forceJSON)\n\t\t}\n\t\tfor _, setups := range testcases {\n\t\t\tt.Run(setups.dbtype, func(t *testing.T) {\n\t\t\t\tfor _, setup := range setups.tests {\n\t\t\t\t\tif setup.s == \"\" {\n\t\t\t\t\t\t// fill time string 
wherever Go can reliably produce it\n\t\t\t\t\t\tsetup.s = setup.t.Format(setups.tlayout)\n\t\t\t\t\t}\n\t\t\t\t\tsetup.run(t, dbt, setups.dbtype, setups.tlayout)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestNULL(t *testing.T) {\n\ttestNULL(t, false)\n}\n\nfunc testNULL(t *testing.T, json bool) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tif json {\n\t\t\tdbt.mustExec(forceJSON)\n\t\t}\n\t\tnullStmt, err := dbt.conn.PrepareContext(context.Background(), \"SELECT NULL\")\n\t\tif err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\tdefer nullStmt.Close()\n\n\t\tnonNullStmt, err := dbt.conn.PrepareContext(context.Background(), \"SELECT 1\")\n\t\tif err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\tdefer nonNullStmt.Close()\n\n\t\t// NullBool\n\t\tvar nb sql.NullBool\n\t\t// Invalid\n\t\tif err = nullStmt.QueryRow().Scan(&nb); err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\tif nb.Valid {\n\t\t\tdbt.Error(\"valid NullBool which should be invalid\")\n\t\t}\n\t\t// Valid\n\t\tif err = nonNullStmt.QueryRow().Scan(&nb); err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\tif !nb.Valid {\n\t\t\tdbt.Error(\"invalid NullBool which should be valid\")\n\t\t} else if !nb.Bool {\n\t\t\tdbt.Errorf(\"unexpected NullBool value: %t (should be true)\", nb.Bool)\n\t\t}\n\n\t\t// NullFloat64\n\t\tvar nf sql.NullFloat64\n\t\t// Invalid\n\t\tif err = nullStmt.QueryRow().Scan(&nf); err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\tif nf.Valid {\n\t\t\tdbt.Error(\"valid NullFloat64 which should be invalid\")\n\t\t}\n\t\t// Valid\n\t\tif err = nonNullStmt.QueryRow().Scan(&nf); err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\tif !nf.Valid {\n\t\t\tdbt.Error(\"invalid NullFloat64 which should be valid\")\n\t\t} else if nf.Float64 != float64(1) {\n\t\t\tdbt.Errorf(\"unexpected NullFloat64 value: %f (should be 1.0)\", nf.Float64)\n\t\t}\n\n\t\t// NullInt64\n\t\tvar ni sql.NullInt64\n\t\t// Invalid\n\t\tif err = nullStmt.QueryRow().Scan(&ni); err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\tif ni.Valid 
{\n\t\t\tdbt.Error(\"valid NullInt64 which should be invalid\")\n\t\t}\n\t\t// Valid\n\t\tif err = nonNullStmt.QueryRow().Scan(&ni); err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\tif !ni.Valid {\n\t\t\tdbt.Error(\"invalid NullInt64 which should be valid\")\n\t\t} else if ni.Int64 != int64(1) {\n\t\t\tdbt.Errorf(\"unexpected NullInt64 value: %d (should be 1)\", ni.Int64)\n\t\t}\n\n\t\t// NullString\n\t\tvar ns sql.NullString\n\t\t// Invalid\n\t\tif err = nullStmt.QueryRow().Scan(&ns); err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\tif ns.Valid {\n\t\t\tdbt.Error(\"valid NullString which should be invalid\")\n\t\t}\n\t\t// Valid\n\t\tif err = nonNullStmt.QueryRow().Scan(&ns); err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\tif !ns.Valid {\n\t\t\tdbt.Error(\"invalid NullString which should be valid\")\n\t\t} else if ns.String != `1` {\n\t\t\tdbt.Error(\"unexpected NullString value:\" + ns.String + \" (should be `1`)\")\n\t\t}\n\n\t\t// nil-bytes\n\t\tvar b []byte\n\t\t// Read nil\n\t\tif err = nullStmt.QueryRow().Scan(&b); err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\tif b != nil {\n\t\t\tdbt.Error(\"non-nil []byte which should be nil\")\n\t\t}\n\t\t// Read non-nil\n\t\tif err = nonNullStmt.QueryRow().Scan(&b); err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\tif b == nil {\n\t\t\tdbt.Error(\"nil []byte which should be non-nil\")\n\t\t}\n\t\t// Insert nil\n\t\tb = nil\n\t\tsuccess := false\n\t\tif err = dbt.conn.QueryRowContext(context.Background(), \"SELECT ? 
IS NULL\", b).Scan(&success); err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\tif !success {\n\t\t\tdbt.Error(\"inserting []byte(nil) as NULL failed\")\n\t\t\tt.Fatal(\"stopping\")\n\t\t}\n\t\t// Check input==output with input==nil\n\t\tb = nil\n\t\tif err = dbt.conn.QueryRowContext(context.Background(), \"SELECT ?\", b).Scan(&b); err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\tif b != nil {\n\t\t\tdbt.Error(\"non-nil echo from nil input\")\n\t\t}\n\t\t// Check input==output with input!=nil\n\t\tb = []byte(\"\")\n\t\tif err = dbt.conn.QueryRowContext(context.Background(), \"SELECT ?\", b).Scan(&b); err != nil {\n\t\t\tdbt.Fatal(err)\n\t\t}\n\t\tif b == nil {\n\t\t\tdbt.Error(\"nil echo from non-nil input\")\n\t\t}\n\n\t\t// Insert NULL\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test (dummy1 int, value int, dummy2 int)\")\n\t\tdbt.mustExec(\"INSERT INTO test VALUES (?, ?, ?)\", 1, nil, 2)\n\n\t\tvar dummy1, out, dummy2 any\n\t\trows := dbt.mustQuery(\"SELECT * FROM test\")\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\t\tif rows.Next() {\n\t\t\tassertNilF(t, rows.Scan(&dummy1, &out, &dummy2))\n\t\t\tif out != nil {\n\t\t\t\tdbt.Errorf(\"%v != nil\", out)\n\t\t\t}\n\t\t} else {\n\t\t\tdbt.Error(\"no data\")\n\t\t}\n\t})\n}\n\nfunc TestVariant(t *testing.T) {\n\ttestVariant(t, false)\n}\n\nfunc testVariant(t *testing.T, json bool) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tif json {\n\t\t\tdbt.mustExec(forceJSON)\n\t\t}\n\t\trows := dbt.mustQuery(`select parse_json('[{\"id\":1, \"name\":\"test1\"},{\"id\":2, \"name\":\"test2\"}]')`)\n\t\tdefer rows.Close()\n\t\tvar v string\n\t\tif rows.Next() {\n\t\t\tif err := rows.Scan(&v); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Fatal(\"no rows\")\n\t\t}\n\t})\n}\n\nfunc TestArray(t *testing.T) {\n\ttestArray(t, false)\n}\n\nfunc testArray(t *testing.T, json bool) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tif json {\n\t\t\tdbt.mustExec(forceJSON)\n\t\t}\n\t\trows := 
dbt.mustQuery(`select as_array(parse_json('[{\"id\":1, \"name\":\"test1\"},{\"id\":2, \"name\":\"test2\"}]'))`)\n\t\tdefer rows.Close()\n\t\tvar v string\n\t\tif rows.Next() {\n\t\t\tif err := rows.Scan(&v); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Fatal(\"no rows\")\n\t\t}\n\t})\n}\n\nfunc TestLargeSetResult(t *testing.T) {\n\tcustomJSONDecoderEnabled = false\n\ttestLargeSetResult(t, 100000, false)\n}\n\nfunc testLargeSetResult(t *testing.T, numrows int, json bool) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tif json {\n\t\t\tdbt.mustExec(forceJSON)\n\t\t}\n\t\trows := dbt.mustQuery(fmt.Sprintf(selectRandomGenerator, numrows))\n\t\tdefer rows.Close()\n\t\tcnt := 0\n\t\tvar idx int\n\t\tvar v string\n\t\tfor rows.Next() {\n\t\t\tif err := rows.Scan(&idx, &v); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tcnt++\n\t\t}\n\t\tlogger.Infof(\"NextResultSet: %v\", rows.NextResultSet())\n\n\t\tif cnt != numrows {\n\t\t\tdbt.Errorf(\"number of rows didn't match. expected: %v, got: %v\", numrows, cnt)\n\t\t}\n\t})\n}\n\n// TestPingpongQuery validates that the driver's ping-pong keepalive protocol\n// maintains the connection during long-running queries. TIMELIMIT=>60 must be\n// long enough to trigger the ping-pong mechanism. Do not reduce this value.\nfunc TestPingpongQuery(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tnumrows := 1\n\t\trows := dbt.mustQuery(\"SELECT DISTINCT 1 FROM TABLE(GENERATOR(TIMELIMIT=> 60))\")\n\t\tdefer rows.Close()\n\t\tcnt := 0\n\t\tfor rows.Next() {\n\t\t\tcnt++\n\t\t}\n\t\tif cnt != numrows {\n\t\t\tdbt.Errorf(\"number of rows didn't match. 
expected: %v, got: %v\", numrows, cnt)\n\t\t}\n\t})\n}\n\nfunc TestDML(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test(c1 int, c2 string)\")\n\t\tif err := insertData(dbt, false); err != nil {\n\t\t\tdbt.Fatalf(\"failed to insert data: %v\", err)\n\t\t}\n\t\tresults, err := queryTest(dbt)\n\t\tif err != nil {\n\t\t\tdbt.Fatalf(\"failed to query test table: %v\", err)\n\t\t}\n\t\tif len(*results) != 0 {\n\t\t\tdbt.Fatalf(\"number of returned data didn't match. expected 0, got: %v\", len(*results))\n\t\t}\n\t\tif err = insertData(dbt, true); err != nil {\n\t\t\tdbt.Fatalf(\"failed to insert data: %v\", err)\n\t\t}\n\t\tresults, err = queryTest(dbt)\n\t\tif err != nil {\n\t\t\tdbt.Fatalf(\"failed to query test table: %v\", err)\n\t\t}\n\t\tif len(*results) != 2 {\n\t\t\tdbt.Fatalf(\"number of returned data didn't match. expected 2, got: %v\", len(*results))\n\t\t}\n\t})\n}\n\nfunc insertData(dbt *DBTest, commit bool) error {\n\ttx, err := dbt.conn.BeginTx(context.Background(), nil)\n\tif err != nil {\n\t\tdbt.Fatalf(\"failed to begin transaction: %v\", err)\n\t}\n\tres, err := tx.Exec(\"INSERT INTO test VALUES(1, 'test1'), (2, 'test2')\")\n\tif err != nil {\n\t\tdbt.Fatalf(\"failed to insert value into test: %v\", err)\n\t}\n\tn, err := res.RowsAffected()\n\tif err != nil {\n\t\tdbt.Fatalf(\"failed to get rows affected: %v\", err)\n\t}\n\tif n != 2 {\n\t\tdbt.Fatalf(\"failed to insert value into test. expected: 2, got: %v\", n)\n\t}\n\tresults, err := queryTestTx(tx)\n\tif err != nil {\n\t\tdbt.Fatalf(\"failed to query test table: %v\", err)\n\t}\n\tif len(*results) != 2 {\n\t\tdbt.Fatalf(\"number of returned data didn't match. 
expected 2, got: %v\", len(*results))\n\t}\n\tif commit {\n\t\tif err = tx.Commit(); err != nil {\n\t\t\treturn err\n\t\t}\n\t} else {\n\t\tif err = tx.Rollback(); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn err\n}\n\nfunc queryTestTx(tx *sql.Tx) (*map[int]string, error) {\n\tvar c1 int\n\tvar c2 string\n\trows, err := tx.Query(\"SELECT c1, c2 FROM test\")\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer rows.Close()\n\n\tresults := make(map[int]string, 2)\n\tfor rows.Next() {\n\t\tif err = rows.Scan(&c1, &c2); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tresults[c1] = c2\n\t}\n\treturn &results, nil\n}\n\nfunc queryTest(dbt *DBTest) (*map[int]string, error) {\n\tvar c1 int\n\tvar c2 string\n\trows, err := dbt.query(\"SELECT c1, c2 FROM test\")\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer rows.Close()\n\tresults := make(map[int]string, 2)\n\tfor rows.Next() {\n\t\tif err = rows.Scan(&c1, &c2); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tresults[c1] = c2\n\t}\n\treturn &results, nil\n}\n\nfunc TestCancelQuery(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)\n\t\tdefer cancel()\n\n\t\t_, err := dbt.conn.QueryContext(ctx, \"CALL SYSTEM$WAIT(10, 'SECONDS')\")\n\t\tif err == nil {\n\t\t\tdbt.Fatal(\"No timeout error returned\")\n\t\t}\n\t\tif !errors.Is(err, context.DeadlineExceeded) {\n\t\t\tdbt.Fatalf(\"Timeout error mismatch: expect %v, receive %v\", context.DeadlineExceeded, err.Error())\n\t\t}\n\t})\n}\n\nfunc TestCancelQueryWithConnectionContext(t *testing.T) {\n\ttestCases := []struct {\n\t\tname            string\n\t\tsetupConnection func(ctx context.Context, db *sql.DB) error\n\t}{\n\t\t{\n\t\t\tname: \"explicit connection\",\n\t\t\tsetupConnection: func(ctx context.Context, db *sql.DB) error {\n\t\t\t\t_, err := db.Conn(ctx)\n\t\t\t\treturn err\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"implicit connection\",\n\t\t\tsetupConnection: func(ctx 
context.Context, db *sql.DB) error {\n\t\t\t\t_, err := db.ExecContext(ctx, \"SELECT 1\")\n\t\t\t\treturn err\n\t\t\t},\n\t\t},\n\t}\n\n\tfor _, tc := range testCases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tdb := openDB(t)\n\t\t\tdefer db.Close()\n\n\t\t\tctx, cancelConnectionContext := context.WithCancel(context.Background())\n\t\t\terr := tc.setupConnection(ctx, db)\n\t\t\tassertNilF(t, err, \"connection setup should succeed\")\n\n\t\t\tcancelConnectionContext()\n\n\t\t\t_, err = db.ExecContext(context.Background(), \"SELECT 1\")\n\t\t\tassertNilF(t, err, \"subsequent SELECT should work after cancelled connection context\")\n\n\t\t\tcwd, err := os.Getwd()\n\t\t\tassertNilF(t, err, \"Failed to get current working directory\")\n\t\t\tfilePath := filepath.Join(cwd, \"test_data\", \"put_get_1.txt\")\n\n\t\t\tputQuery := fmt.Sprintf(\"PUT file://%v @~/%v\", filePath, \"test_cancel_query_with_connection_context.txt\")\n\t\t\t_, err = db.ExecContext(context.Background(), putQuery)\n\t\t\tassertNilF(t, err, \"PUT statement should work after cancelled connection context\")\n\t\t})\n\t}\n}\n\nfunc TestPing(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tif err := dbt.conn.PingContext(context.Background()); err != nil {\n\t\t\tt.Fatalf(\"failed to ping. err: %v\", err)\n\t\t}\n\t\tif err := dbt.conn.PingContext(context.Background()); err != nil {\n\t\t\tt.Fatalf(\"failed to ping with context. err: %v\", err)\n\t\t}\n\t\tif err := dbt.conn.Close(); err != nil {\n\t\t\tt.Fatalf(\"failed to close db. 
err: %v\", err)\n\t\t}\n\t\tif err := dbt.conn.PingContext(context.Background()); err == nil {\n\t\t\tt.Fatal(\"should have failed to ping\")\n\t\t}\n\t\tif err := dbt.conn.PingContext(context.Background()); err == nil {\n\t\t\tt.Fatal(\"should have failed to ping with context\")\n\t\t}\n\t})\n}\n\nfunc TestDoubleDollar(t *testing.T) {\n\t// no escape is required for dollar signs\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tsql := `create or replace function dateErr(I double) returns date\nlanguage javascript strict\nas $$\n  var x = [\n    0, \"1400000000000\",\n    \"2013-04-05\",\n    [], [1400000000000],\n    \"x1234\",\n    Number.NaN, null, undefined,\n    {},\n    [1400000000000,1500000000000]\n  ];\n  return x[I];\n$$\n;`\n\t\tdbt.mustExec(sql)\n\t})\n}\n\nfunc TestTimezoneSessionParameter(t *testing.T) {\n\tcreateDSN(PSTLocation)\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trows := dbt.mustQueryT(t, \"SHOW PARAMETERS LIKE 'TIMEZONE'\")\n\t\tdefer rows.Close()\n\t\tif !rows.Next() {\n\t\t\tt.Fatal(\"failed to get timezone.\")\n\t\t}\n\n\t\tp, err := ScanSnowflakeParameter(rows.rows)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"failed to get timezone value. err: %v\", err)\n\t\t}\n\t\tif p.Value != PSTLocation {\n\t\t\tt.Errorf(\"failed to get an expected timezone. 
got: %v\", p.Value)\n\t\t}\n\t})\n\tcreateDSN(\"UTC\")\n}\n\nfunc TestLargeSetResultCancel(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tc := make(chan error)\n\t\tctx, cancel := context.WithCancel(context.Background())\n\t\tgo func() {\n\t\t\t// attempt to run a 100 seconds query, but it should be canceled in 1 second\n\t\t\ttimelimit := 100\n\t\t\trows, err := dbt.conn.QueryContext(\n\t\t\t\tctx,\n\t\t\t\tfmt.Sprintf(\"SELECT COUNT(*) FROM TABLE(GENERATOR(timelimit=>%v))\", timelimit))\n\t\t\tif err != nil {\n\t\t\t\tc <- err\n\t\t\t\treturn\n\t\t\t}\n\t\t\tdefer rows.Close()\n\t\t\tc <- nil\n\t\t}()\n\t\t// cancel after 1 second\n\t\ttime.Sleep(time.Second)\n\t\tcancel()\n\t\tret := <-c\n\t\tif !errors.Is(ret, context.Canceled) {\n\t\t\tt.Fatalf(\"failed to cancel. err: %v\", ret)\n\t\t}\n\t\tclose(c)\n\t})\n}\n\nfunc TestValidateDatabaseParameter(t *testing.T) {\n\t// Parse the global DSN to get base configuration with proper authentication\n\tcfg, err := ParseDSN(dsn)\n\tif err != nil {\n\t\tt.Fatal(\"Failed to parse global dsn\")\n\t}\n\n\ttestcases := []struct {\n\t\tdescription string\n\t\tdbname      string\n\t\tschemaname  string\n\t\tparams      map[string]string\n\t\terrorCode   int\n\t}{\n\t\t{\n\t\t\tdescription: \"invalid_database_and_schema\",\n\t\t\tdbname:      \"NOT_EXISTS\",\n\t\t\tschemaname:  \"NOT_EXISTS\",\n\t\t\terrorCode:   ErrObjectNotExistOrAuthorized,\n\t\t},\n\t\t{\n\t\t\tdescription: \"invalid_schema\",\n\t\t\tdbname:      cfg.Database,\n\t\t\tschemaname:  \"NOT_EXISTS\",\n\t\t\terrorCode:   ErrObjectNotExistOrAuthorized,\n\t\t},\n\t\t{\n\t\t\tdescription: \"invalid_warehouse\",\n\t\t\tdbname:      cfg.Database,\n\t\t\tschemaname:  cfg.Schema,\n\t\t\tparams: map[string]string{\n\t\t\t\t\"warehouse\": \"NOT_EXIST\",\n\t\t\t},\n\t\t\terrorCode: ErrObjectNotExistOrAuthorized,\n\t\t},\n\t\t{\n\t\t\tdescription: \"invalid_role\",\n\t\t\tdbname:      cfg.Database,\n\t\t\tschemaname:  cfg.Schema,\n\t\t\tparams: 
map[string]string{\n\t\t\t\t\"role\": \"NOT_EXIST\",\n\t\t\t},\n\t\t\terrorCode: ErrRoleNotExist,\n\t\t},\n\t}\n\tfor idx, tc := range testcases {\n\t\tt.Run(tc.description, func(t *testing.T) {\n\t\t\t// Create a new config based on the global config (which already has proper authentication)\n\t\t\ttestCfg := *cfg // Copy the config with proper authentication from global DSN\n\t\t\ttestCfg.Database = tc.dbname\n\t\t\ttestCfg.Schema = tc.schemaname\n\n\t\t\t// Override with test-specific parameters\n\t\t\ttestCfg.Warehouse = tc.params[\"warehouse\"]\n\t\t\ttestCfg.Role = tc.params[\"role\"]\n\n\t\t\tdb := sql.OpenDB(NewConnector(SnowflakeDriver{}, testCfg))\n\t\t\tdefer db.Close()\n\n\t\t\tif _, err = db.Exec(\"SELECT 1\"); err == nil {\n\t\t\t\tt.Fatal(\"should cause an error.\")\n\t\t\t}\n\t\t\tif driverErr, ok := err.(*SnowflakeError); ok {\n\t\t\t\tif driverErr.Number != tc.errorCode {\n\t\t\t\t\tmaskedErr := maskSecrets(err.Error())\n\t\t\t\t\tt.Errorf(\"got unexpected error: %s in test case %d\", maskedErr, idx)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestSpecifyWarehouseDatabase(t *testing.T) {\n\t// Parse the global DSN to get base configuration with proper authentication\n\tcfg, err := ParseDSN(dsn)\n\tif err != nil {\n\t\tt.Fatal(\"Failed to parse global dsn\")\n\t}\n\n\t// Override with test-specific settings\n\tcfg.Warehouse = warehouse\n\n\tdb := sql.OpenDB(NewConnector(SnowflakeDriver{}, *cfg))\n\tdefer db.Close()\n\n\tif _, err = db.Exec(\"SELECT 1\"); err != nil {\n\t\tmaskedErr := maskSecrets(err.Error())\n\t\tt.Fatalf(\"failed to execute a select 1: %s\", maskedErr)\n\t}\n}\n\nfunc TestFetchNil(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trows := dbt.mustQuery(\"SELECT * FROM values(3,4),(null, 5) order by 2\")\n\t\tdefer rows.Close()\n\t\tvar c1 sql.NullInt64\n\t\tvar c2 sql.NullInt64\n\n\t\tvar results []sql.NullInt64\n\t\tfor rows.Next() {\n\t\t\tif err := rows.Scan(&c1, &c2); err != nil 
{\n\t\t\t\tdbt.Fatal(err)\n\t\t\t}\n\t\t\tresults = append(results, c1)\n\t\t}\n\t\tif results[1].Valid {\n\t\t\tt.Errorf(\"First element of second row must be nil (NULL). %v\", results)\n\t\t}\n\t})\n}\n\nfunc TestPingInvalidHost(t *testing.T) {\n\tconfig := Config{\n\t\tAccount:      \"NOT_EXISTS\",\n\t\tUser:         \"BOGUS_USER\",\n\t\tPassword:     \"barbar\",\n\t\tLoginTimeout: 10 * time.Second,\n\t}\n\n\ttestURL, err := DSN(&config)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to parse config. config: %v, err: %v\", config, err)\n\t}\n\n\tdb, err := sql.Open(\"snowflake\", testURL)\n\tassertNilF(t, err, \"failed to initialize the connection\")\n\tif err = db.PingContext(context.Background()); err == nil {\n\t\tt.Fatal(\"should cause an error\")\n\t}\n\tif strings.Contains(err.Error(), \"HTTP Status: 513. Hanging?\") {\n\t\treturn\n\t}\n\tif driverErr, ok := err.(*SnowflakeError); !ok || ok && isFailToConnectOrAuthErr(driverErr) {\n\t\t// Failed to connect error\n\t\tt.Fatalf(\"error didn't match\")\n\t}\n}\n\nfunc TestOpenWithConfig(t *testing.T) {\n\tconfig := Config{\n\t\tAccount:       \"testaccount\",\n\t\tUser:          \"testuser\",\n\t\tPassword:      \"testpassword\",\n\t\tAuthenticator: AuthTypeSnowflake, // Force password authentication\n\t\tPrivateKey:    nil,               // Ensure no private key\n\t}\n\n\ttestURL, err := DSN(&config)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to parse config. config: %v, err: %v\", config, err)\n\t}\n\n\tdb, err := sql.Open(\"snowflake\", testURL)\n\tassertNilF(t, err, \"failed to initialize the connection\")\n\tif err = db.PingContext(context.Background()); err == nil {\n\t\tt.Fatal(\"should cause an error\")\n\t}\n\tif strings.Contains(err.Error(), \"HTTP Status: 513. 
Hanging?\") {\n\t\treturn\n\t}\n\tif driverErr, ok := err.(*SnowflakeError); !ok || ok && isFailToConnectOrAuthErr(driverErr) {\n\t\t// Failed to connect error\n\t\tt.Fatalf(\"error didn't match\")\n\t}\n}\n\nfunc TestOpenWithConfigCancel(t *testing.T) {\n\twiremock.registerMappings(t,\n\t\twiremockMapping{filePath: \"auth/password/successful_flow_with_telemetry.json\", params: map[string]string{\"%CLIENT_TELEMETRY_ENABLED%\": \"true\"}},\n\t)\n\tdriver := SnowflakeDriver{}\n\tconfig := wiremock.connectionConfig()\n\tblockingRoundTripper := newBlockingRoundTripper(createTestNoRevocationTransport(), 0)\n\tcountingRoundTripper := newCountingRoundTripper(blockingRoundTripper)\n\tconfig.Transporter = countingRoundTripper\n\n\tt.Run(\"canceled during request:login-request\", func(t *testing.T) {\n\t\tblockingRoundTripper.setPathBlockTime(\"/session/v1/login-request\", 50*time.Millisecond)\n\t\tctx, cancel := context.WithTimeout(context.Background(), 20*time.Millisecond)\n\t\tdefer cancel()\n\t\t_, err := driver.OpenWithConfig(ctx, *config)\n\t\tassertErrIsE(t, err, context.DeadlineExceeded)\n\t\tassertEqualE(t, countingRoundTripper.totalRequestsByPath(\"/session/v1/login-request\"), 1)\n\t\tassertEqualE(t, countingRoundTripper.totalRequestsByPath(\"/telemetry/send\"), 0)\n\t})\n\n\tt.Run(\"canceled during request:telemetry/send\", func(t *testing.T) {\n\t\tblockingRoundTripper.reset()\n\t\tcountingRoundTripper.reset()\n\t\tblockingRoundTripper.setPathBlockTime(\"/telemetry/send\", 400*time.Millisecond)\n\t\tctx, cancel := context.WithTimeout(context.Background(), 200*time.Millisecond)\n\t\tdefer cancel()\n\t\t_, err := driver.OpenWithConfig(ctx, *config)\n\t\tassertErrIsE(t, err, context.DeadlineExceeded)\n\t\tassertEqualE(t, countingRoundTripper.totalRequestsByPath(\"/session/v1/login-request\"), 1)\n\t\tassertEqualE(t, countingRoundTripper.totalRequestsByPath(\"/telemetry/send\"), 1)\n\t})\n}\n\nfunc TestOpenWithInvalidConfig(t *testing.T) {\n\tconfig, err := 
ParseDSN(\"u:p@h?tmpDirPath=%2Fnon-existing\")\n\tif err != nil {\n\t\tt.Fatalf(\"failed to parse dsn. err: %v\", err)\n\t}\n\tconfig.Authenticator = AuthTypeSnowflake\n\tconfig.PrivateKey = nil\n\tdriver := SnowflakeDriver{}\n\t_, err = driver.OpenWithConfig(context.Background(), *config)\n\tif err == nil || !strings.Contains(err.Error(), \"/non-existing\") {\n\t\tt.Fatalf(\"should fail on missing directory\")\n\t}\n}\n\nfunc TestOpenWithTransport(t *testing.T) {\n\tconfig, err := ParseDSN(dsn)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to parse dsn. err: %v\", err)\n\t}\n\tcountingTransport := newCountingRoundTripper(createTestNoRevocationTransport())\n\tvar transport http.RoundTripper = countingTransport\n\tconfig.Transporter = transport\n\tdriver := SnowflakeDriver{}\n\tdb, err := driver.OpenWithConfig(context.Background(), *config)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to open with config. config: %v\", config))\n\tconn := db.(*snowflakeConn)\n\tif conn.rest.Client.Transport != transport {\n\t\tt.Fatal(\"transport doesn't match\")\n\t}\n\tdb.Close()\n\tif countingTransport.totalRequests() == 0 {\n\t\tt.Fatal(\"transport did not receive any requests\")\n\t}\n\n\t// Test that transport override also works in OCSP checks disabled.\n\tcountingTransport.reset()\n\tconfig.DisableOCSPChecks = true\n\tdb, err = driver.OpenWithConfig(context.Background(), *config)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to open with config. 
config: %v\", config))\n\tconn = db.(*snowflakeConn)\n\tif conn.rest.Client.Transport != transport {\n\t\tt.Fatal(\"transport doesn't match\")\n\t}\n\tdb.Close()\n\tif countingTransport.totalRequests() == 0 {\n\t\tt.Fatal(\"transport did not receive any requests\")\n\t}\n}\n\nfunc TestClientSessionKeepAliveParameter(t *testing.T) {\n\t// This test doesn't really validate the CLIENT_SESSION_KEEP_ALIVE functionality but simply checks\n\t// the session parameter.\n\tcustomDsn := dsn + \"&client_session_keep_alive=true\"\n\n\trunDBTestWithConfig(t, &testConfig{dsn: customDsn}, func(dbt *DBTest) {\n\t\trows := dbt.mustQuery(\"SHOW PARAMETERS LIKE 'CLIENT_SESSION_KEEP_ALIVE'\")\n\t\tdefer rows.Close()\n\t\tif !rows.Next() {\n\t\t\tt.Fatal(\"failed to get client_session_keep_alive.\")\n\t\t}\n\n\t\tp, err := ScanSnowflakeParameter(rows.rows)\n\t\tassertNilF(t, err, \"failed to get client_session_keep_alive value\")\n\t\tif p.Value != \"true\" {\n\t\t\tt.Fatalf(\"failed to get an expected client_session_keep_alive. got: %v\", maskSecrets(p.Value))\n\t\t}\n\n\t\trows2 := dbt.mustQuery(\"select count(*) from table(generator(timelimit=>30))\")\n\t\tdefer rows2.Close()\n\t})\n}\n\nfunc TestTimePrecision(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"create or replace table z3 (t1 time(5))\")\n\t\trows := dbt.mustQuery(\"select * from z3\")\n\t\tdefer rows.Close()\n\t\tcols, err := rows.ColumnTypes()\n\t\tassertNilE(t, err, \"failed to get column types\")\n\t\tif pres, _, ok := cols[0].DecimalSize(); pres != 5 || !ok {\n\t\t\tt.Fatalf(\"Wrong value returned. 
Got %v instead of 5.\", pres)\n\t\t}\n\t})\n}\n\nfunc initPoolWithSize(t *testing.T, db *sql.DB, poolSize int) {\n\twg := sync.WaitGroup{}\n\twg.Add(poolSize)\n\tfor range poolSize {\n\t\tgo func(wg *sync.WaitGroup) {\n\t\t\tdefer wg.Done()\n\t\t\ttime.Sleep(time.Duration(rand.Intn(1000)) * time.Millisecond)\n\t\t\trunSmokeQuery(t, db)\n\t\t}(&wg)\n\t}\n\twg.Wait()\n}\n\nfunc initPoolWithSizeAndReturnErrors(db *sql.DB, poolSize int) []error {\n\twg := sync.WaitGroup{}\n\twg.Add(poolSize)\n\terrMu := sync.Mutex{}\n\tvar errs []error\n\tfor i := range poolSize {\n\t\tgo func(wg *sync.WaitGroup) {\n\t\t\tdefer wg.Done()\n\t\t\t// Wiremock handles incoming requests in parallel, in a non-atomic way.\n\t\t\t// If two requests start at the same time, they both see the same scenario state,\n\t\t\t// even if it should have changed after the request was matched to a particular scenario state.\n\t\t\ttime.Sleep(time.Duration(i * 5 * int(time.Millisecond)))\n\t\t\terr := runSmokeQueryAndReturnErrors(db)\n\t\t\tif err != nil {\n\t\t\t\terrMu.Lock()\n\t\t\t\terrs = append(errs, err)\n\t\t\t\terrMu.Unlock()\n\t\t\t}\n\t\t}(&wg)\n\t}\n\twg.Wait()\n\treturn errs\n}\n\nfunc runSelectCurrentUser(t *testing.T, db *sql.DB) string {\n\trows, err := db.Query(\"SELECT current_user()\")\n\tassertNilF(t, err)\n\tdefer rows.Close()\n\tassertTrueF(t, rows.Next())\n\tvar v string\n\terr = rows.Scan(&v)\n\tassertNilF(t, err)\n\treturn v\n}\n\nfunc runSmokeQuery(t *testing.T, db *sql.DB) {\n\trows, err := db.Query(\"SELECT 1\")\n\tassertNilF(t, err)\n\tdefer rows.Close()\n\tassertTrueF(t, rows.Next())\n\tvar v int\n\terr = rows.Scan(&v)\n\tassertNilF(t, err)\n\tassertEqualE(t, v, 1)\n}\n\nfunc runSmokeQueryAndReturnErrors(db *sql.DB) error {\n\trows, err := db.Query(\"SELECT 1\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer rows.Close()\n\tif !rows.Next() {\n\t\treturn fmt.Errorf(\"no rows\")\n\t}\n\tvar v int\n\terr = rows.Scan(&v)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif v != 1 
{\n\t\treturn fmt.Errorf(\"value mismatch. expected 1, got %v\", v)\n\t}\n\treturn nil\n}\n\nfunc runSmokeQueryWithConn(t *testing.T, conn *sql.Conn) {\n\trows, err := conn.QueryContext(context.Background(), \"SELECT 1\")\n\tassertNilF(t, err)\n\tdefer rows.Close()\n\tassertTrueF(t, rows.Next())\n\tvar v int\n\terr = rows.Scan(&v)\n\tassertNilF(t, err)\n\tassertEqualE(t, v, 1)\n}\n"
  },
  {
    "path": "dsn.go",
    "content": "package gosnowflake\n\nimport (\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n)\n\n// Type aliases — re-exported from internal/config for backward compatibility.\ntype (\n\t// Config is a set of configuration parameters\n\tConfig = sfconfig.Config\n\t// ConfigBool is a type to represent true or false in the Config\n\tConfigBool = sfconfig.Bool\n\t// ConfigParam is used to bind the name of the Config field with the environment variable and set the requirement for it\n\tConfigParam = sfconfig.Param\n)\n\n// ConfigBool constants — re-exported from internal/config.\nconst (\n\t// configBoolNotSet represents the default value for the config field which is not set\n\tconfigBoolNotSet = sfconfig.BoolNotSet\n\t// ConfigBoolTrue represents true for the config field\n\tConfigBoolTrue = sfconfig.BoolTrue\n\t// ConfigBoolFalse represents false for the config field\n\tConfigBoolFalse = sfconfig.BoolFalse\n)\n\n// DSN constructs a DSN for Snowflake db.\nfunc DSN(cfg *Config) (string, error) { return sfconfig.DSN(cfg) }\n\n// ParseDSN parses the DSN string to a Config.\nfunc ParseDSN(dsn string) (*Config, error) { return sfconfig.ParseDSN(dsn) }\n\n// GetConfigFromEnv is used to parse the environment variable values to specific fields of the Config\nfunc GetConfigFromEnv(properties []*ConfigParam) (*Config, error) {\n\treturn sfconfig.GetConfigFromEnv(properties)\n}\n\nfunc transportConfigFor(tt transportType) *transportConfig {\n\treturn defaultTransportConfigs.forTransportType(tt)\n}\n"
  },
  {
    "path": "easy_logging.go",
    "content": "package gosnowflake\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\terrors2 \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"io\"\n\t\"os\"\n\t\"path\"\n\t\"runtime\"\n\t\"strings\"\n\t\"sync\"\n\n\tloggerinternal \"github.com/snowflakedb/gosnowflake/v2/internal/logger\"\n)\n\ntype initTrials struct {\n\teverTriedToInitialize bool\n\tclientConfigFileInput string\n\tconfigureCounter      int\n\tmu                    sync.Mutex\n}\n\nvar easyLoggingInitTrials = initTrials{\n\teverTriedToInitialize: false,\n\tclientConfigFileInput: \"\",\n\tconfigureCounter:      0,\n\tmu:                    sync.Mutex{},\n}\n\nfunc (i *initTrials) setInitTrial(clientConfigFileInput string) {\n\ti.everTriedToInitialize = true\n\ti.clientConfigFileInput = clientConfigFileInput\n}\n\nfunc (i *initTrials) increaseReconfigureCounter() {\n\ti.configureCounter++\n}\n\nfunc initEasyLogging(clientConfigFileInput string) error {\n\teasyLoggingInitTrials.mu.Lock()\n\tdefer easyLoggingInitTrials.mu.Unlock()\n\n\tif !allowedToInitialize(clientConfigFileInput) {\n\t\tlogger.Info(\"Skipping Easy Logging initialization as it is not allowed to initialize\")\n\t\treturn nil\n\t}\n\tlogger.Infof(\"Trying to initialize Easy Logging\")\n\tconfig, configPath, err := getClientConfig(clientConfigFileInput)\n\tif err != nil {\n\t\tlogger.Errorf(\"Failed to initialize Easy Logging, err: %s\", err)\n\t\treturn easyLoggingInitError(err)\n\t}\n\tif config == nil {\n\t\tlogger.Info(\"Easy Logging is disabled as no config has been found\")\n\t\teasyLoggingInitTrials.setInitTrial(clientConfigFileInput)\n\t\treturn nil\n\t}\n\tvar logLevel string\n\tlogLevel, err = getLogLevel(config.Common.LogLevel)\n\tif err != nil {\n\t\tlogger.Errorf(\"Failed to initialize Easy Logging, err: %s\", err)\n\t\treturn easyLoggingInitError(err)\n\t}\n\tvar logPath string\n\tlogPath, err = getLogPath(config.Common.LogPath)\n\tif err != nil {\n\t\tlogger.Errorf(\"Failed to initialize Easy Logging, err: %s\", 
err)\n\t\treturn easyLoggingInitError(err)\n\t}\n\tlogger.Infof(\"Initializing Easy Logging with logPath=%s and logLevel=%s from file: %s\", logPath, logLevel, configPath)\n\terr = reconfigureEasyLogging(logLevel, logPath)\n\tif err != nil {\n\t\tlogger.Errorf(\"Failed to initialize Easy Logging, err: %s\", err)\n\t}\n\teasyLoggingInitTrials.setInitTrial(clientConfigFileInput)\n\teasyLoggingInitTrials.increaseReconfigureCounter()\n\treturn err\n}\n\nfunc easyLoggingInitError(err error) error {\n\treturn &SnowflakeError{\n\t\tNumber:      ErrCodeClientConfigFailed,\n\t\tMessage:     errors2.ErrMsgClientConfigFailed,\n\t\tMessageArgs: []any{err.Error()},\n\t}\n}\n\nfunc reconfigureEasyLogging(logLevel string, logPath string) error {\n\t// don't allow any change if a non-default logger is already being used.\n\tcurrentLogger := GetLogger()\n\tif !loggerinternal.IsEasyLoggingLogger(currentLogger) {\n\t\tlogger.Warnf(\"Cannot reconfigure easy logging: custom logger is in use\")\n\t\treturn nil // cannot replace custom logger\n\t}\n\n\tnewLogger := CreateDefaultLogger()\n\terr := newLogger.SetLogLevel(logLevel)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvar output io.Writer\n\tvar file *os.File\n\toutput, file, err = createLogWriter(logPath)\n\tif err != nil {\n\t\treturn err\n\t}\n\tnewLogger.SetOutput(output)\n\terr = loggerinternal.CloseFileOnLoggerReplace(newLogger, file)\n\tif err != nil {\n\t\tlogger.Errorf(\"%s\", err)\n\t}\n\n\t// Actually set the new logger as the global logger\n\tif err := SetLogger(newLogger); err != nil {\n\t\tlogger.Errorf(\"Failed to set new logger: %s\", err)\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\nfunc createLogWriter(logPath string) (io.Writer, *os.File, error) {\n\tif strings.EqualFold(logPath, \"STDOUT\") {\n\t\treturn os.Stdout, nil, nil\n\t}\n\tlogFileName := path.Join(logPath, \"snowflake.log\")\n\tfile, err := os.OpenFile(logFileName, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0640)\n\tif err != nil {\n\t\treturn nil, nil, 
err\n\t}\n\treturn file, file, nil\n}\n\nfunc allowedToInitialize(clientConfigFileInput string) bool {\n\ttriedToInitializeWithoutConfigFile := easyLoggingInitTrials.everTriedToInitialize && easyLoggingInitTrials.clientConfigFileInput == \"\"\n\tisAllowedToInitialize := !easyLoggingInitTrials.everTriedToInitialize || (triedToInitializeWithoutConfigFile && clientConfigFileInput != \"\")\n\tif !isAllowedToInitialize && easyLoggingInitTrials.clientConfigFileInput != clientConfigFileInput {\n\t\tlogger.Warnf(\"Easy logging will not be configured for CLIENT_CONFIG_FILE=%s because it was previously configured for a different client config\", clientConfigFileInput)\n\t}\n\treturn isAllowedToInitialize\n}\n\nfunc getLogLevel(logLevel string) (string, error) {\n\tif logLevel == \"\" {\n\t\tlogger.Warn(\"LogLevel in client config not found. Using default value: OFF\")\n\t\treturn levelOff, nil\n\t}\n\treturn toLogLevel(logLevel)\n}\n\nfunc getLogPath(logPath string) (string, error) {\n\tlogPathOrDefault := logPath\n\tif logPath == \"\" {\n\t\thomeDir, err := os.UserHomeDir()\n\t\tif err != nil {\n\t\t\treturn \"\", fmt.Errorf(\"user home directory is not accessible, err: %w\", err)\n\t\t}\n\t\tlogPathOrDefault = homeDir\n\t\tlogger.Warnf(\"LogPath in client config not found. Using user home directory as a default value: %s\", logPathOrDefault)\n\t}\n\tpathWithGoSubdir := path.Join(logPathOrDefault, \"go\")\n\texists, err := dirExists(pathWithGoSubdir)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tif !exists {\n\t\terr = os.MkdirAll(pathWithGoSubdir, 0700)\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t}\n\tlogDirPermValid, perm, err := isDirAccessCorrect(pathWithGoSubdir)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tif !logDirPermValid {\n\t\tlogger.Warnf(\"Log directory: %s could potentially be accessed by others. 
Directory chmod: 0%o\", pathWithGoSubdir, *perm)\n\t}\n\treturn pathWithGoSubdir, nil\n}\n\nfunc isDirAccessCorrect(dirPath string) (bool, *os.FileMode, error) {\n\tif runtime.GOOS == \"windows\" {\n\t\treturn true, nil, nil\n\t}\n\tdirStat, err := os.Stat(dirPath)\n\tif err != nil {\n\t\treturn false, nil, err\n\t}\n\tperm := dirStat.Mode().Perm()\n\tif perm != 0700 {\n\t\treturn false, &perm, nil\n\t}\n\treturn true, &perm, nil\n}\n\nfunc dirExists(dirPath string) (bool, error) {\n\tstat, err := os.Stat(dirPath)\n\tif err == nil {\n\t\treturn stat.IsDir(), nil\n\t}\n\tif errors.Is(err, os.ErrNotExist) {\n\t\treturn false, nil\n\t}\n\treturn false, err\n}\n"
  },
  {
    "path": "easy_logging_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"sync\"\n\t\"testing\"\n\n\tloggerinternal \"github.com/snowflakedb/gosnowflake/v2/internal/logger\"\n)\n\nfunc TestInitializeEasyLoggingOnlyOnceWhenConfigGivenAsAParameter(t *testing.T) {\n\tskipOnWindows(t, \"Doesn't work on Windows\")\n\tdefer cleanUp()\n\torigLogLevel := logger.GetLogLevel()\n\tdefer logger.SetLogLevel(origLogLevel)\n\tlogger.SetLogLevel(\"error\")\n\n\tlogDir := t.TempDir()\n\tlogLevel := levelError\n\tcontents := createClientConfigContent(logLevel, logDir)\n\tconfigFilePath := createFile(t, \"config.json\", contents, logDir)\n\teasyLoggingInitTrials.reset()\n\n\terr := openWithClientConfigFile(t, configFilePath)\n\n\tassertNilF(t, err, \"open config error\")\n\tassertEqualE(t, toClientConfigLevel(logger.GetLogLevel()), logLevel, \"error log level check\")\n\tassertEqualE(t, easyLoggingInitTrials.configureCounter, 1)\n\n\terr = openWithClientConfigFile(t, \"\")\n\tassertNilF(t, err, \"open config error\")\n\terr = openWithClientConfigFile(t, configFilePath)\n\tassertNilF(t, err, \"open config error\")\n\terr = openWithClientConfigFile(t, \"/another-config.json\")\n\tassertNilF(t, err, \"open config error\")\n\n\tassertEqualE(t, toClientConfigLevel(logger.GetLogLevel()), logLevel, \"error log level check\")\n\tassertEqualE(t, easyLoggingInitTrials.configureCounter, 1)\n}\n\nfunc TestConfigureEasyLoggingOnlyOnceWhenInitializedWithoutConfigFilePath(t *testing.T) {\n\tskipOnWindows(t, \"Doesn't work on Windows\")\n\tskipOnMissingHome(t)\n\torigLogLevel := logger.GetLogLevel()\n\tdefer logger.SetLogLevel(origLogLevel)\n\tlogger.SetLogLevel(\"error\")\n\n\tappExe, err := os.Executable()\n\tassertNilF(t, err, \"application exe not accessible\")\n\tuserHome, err := os.UserHomeDir()\n\tassertNilF(t, err, \"user home directory not accessible\")\n\n\ttestcases := []struct {\n\t\tname string\n\t\tdir  
string\n\t}{\n\t\t{\n\t\t\tname: \"user home directory\",\n\t\t\tdir:  userHome,\n\t\t},\n\t\t{\n\t\t\tname: \"application directory\",\n\t\t\tdir:  filepath.Dir(appExe),\n\t\t},\n\t}\n\n\tfor _, test := range testcases {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tdefer cleanUp()\n\t\t\tlogDir := t.TempDir()\n\t\t\tassertNilF(t, err, \"user home directory error\")\n\t\t\tlogLevel := levelError\n\t\t\tcontents := createClientConfigContent(logLevel, logDir)\n\t\t\tconfigFilePath := createFile(t, defaultConfigName, contents, test.dir)\n\t\t\tdefer os.Remove(configFilePath)\n\t\t\teasyLoggingInitTrials.reset()\n\n\t\t\terr = openWithClientConfigFile(t, \"\")\n\t\t\tassertNilF(t, err, \"open config error\")\n\t\t\terr = openWithClientConfigFile(t, \"\")\n\t\t\tassertNilF(t, err, \"open config error\")\n\n\t\t\tassertEqualE(t, toClientConfigLevel(logger.GetLogLevel()), logLevel, \"error log level check\")\n\t\t\tassertEqualE(t, easyLoggingInitTrials.configureCounter, 1)\n\t\t})\n\t}\n}\n\nfunc TestReconfigureEasyLoggingIfConfigPathWasNotGivenForTheFirstTime(t *testing.T) {\n\tskipOnWindows(t, \"Doesn't work on Windows\")\n\tskipOnMissingHome(t)\n\tdefer cleanUp()\n\torigLogLevel := logger.GetLogLevel()\n\tdefer logger.SetLogLevel(origLogLevel)\n\tlogger.SetLogLevel(\"error\")\n\n\tconfigDir, err := os.UserHomeDir()\n\tlogDir := t.TempDir()\n\tassertNilF(t, err, \"user home directory error\")\n\thomeConfigLogLevel := levelError\n\thomeConfigContent := createClientConfigContent(homeConfigLogLevel, logDir)\n\thomeConfigFilePath := createFile(t, defaultConfigName, homeConfigContent, configDir)\n\tdefer os.Remove(homeConfigFilePath)\n\tcustomLogLevel := levelWarn\n\tcustomFileContent := createClientConfigContent(customLogLevel, logDir)\n\tcustomConfigFilePath := createFile(t, \"config.json\", customFileContent, configDir)\n\teasyLoggingInitTrials.reset()\n\n\terr = openWithClientConfigFile(t, \"\")\n\tlogger.Error(\"Error message\")\n\n\tassertNilF(t, err, \"open 
config error\")\n\tassertEqualE(t, toClientConfigLevel(logger.GetLogLevel()), homeConfigLogLevel, \"tmp dir log level check\")\n\tassertEqualE(t, easyLoggingInitTrials.configureCounter, 1)\n\n\terr = openWithClientConfigFile(t, customConfigFilePath)\n\tlogger.Error(\"Warning message\")\n\n\tassertNilF(t, err, \"open config error\")\n\tassertEqualE(t, toClientConfigLevel(logger.GetLogLevel()), customLogLevel, \"custom dir log level check\")\n\tassertEqualE(t, easyLoggingInitTrials.configureCounter, 2)\n\tvar logContents []byte\n\tlogContents, err = os.ReadFile(path.Join(logDir, \"go\", \"snowflake.log\"))\n\tassertNilF(t, err, \"read file error\")\n\tlogs := notEmptyLines(string(logContents))\n\tassertEqualE(t, len(logs), 2, \"number of logs\")\n}\n\nfunc TestEasyLoggingFailOnUnknownLevel(t *testing.T) {\n\tdefer cleanUp()\n\tdir := t.TempDir()\n\teasyLoggingInitTrials.reset()\n\tconfigContent := createClientConfigContent(\"something_unknown\", dir)\n\tconfigFilePath := createFile(t, \"config.json\", configContent, dir)\n\n\terr := openWithClientConfigFile(t, configFilePath)\n\n\tassertNotNilF(t, err, \"open config error\")\n\tassertStringContainsE(t, err.Error(), fmt.Sprint(ErrCodeClientConfigFailed), \"error code\")\n\tassertStringContainsE(t, err.Error(), \"parsing client config failed\", \"error message\")\n}\n\nfunc TestEasyLoggingFailOnNotExistingConfigFile(t *testing.T) {\n\tdefer cleanUp()\n\teasyLoggingInitTrials.reset()\n\n\terr := openWithClientConfigFile(t, \"/not-existing-file.json\")\n\n\tassertNotNilF(t, err, \"open config error\")\n\tassertStringContainsE(t, err.Error(), fmt.Sprint(ErrCodeClientConfigFailed), \"error code\")\n\tassertStringContainsE(t, err.Error(), \"parsing client config failed\", \"error message\")\n}\n\nfunc TestLogToConfiguredFile(t *testing.T) {\n\tskipOnWindows(t, \"Doesn't work on Windows\")\n\tdefer cleanUp()\n\torigLogLevel := logger.GetLogLevel()\n\tdefer 
logger.SetLogLevel(origLogLevel)\n\tlogger.SetLogLevel(\"error\")\n\n\tdir := t.TempDir()\n\teasyLoggingInitTrials.reset()\n\tconfigContent := createClientConfigContent(levelWarn, dir)\n\tconfigFilePath := createFile(t, \"config.json\", configContent, dir)\n\tlogFilePath := path.Join(dir, \"go\", \"snowflake.log\")\n\terr := openWithClientConfigFile(t, configFilePath)\n\tassertNilF(t, err, \"open config error\")\n\n\tlogger.Error(\"Error message\")\n\tlogger.Warn(\"Warning message\")\n\tlogger.Info(\"Info message\")\n\tlogger.Trace(\"Trace message\")\n\n\tvar logContents []byte\n\tlogContents, err = os.ReadFile(logFilePath)\n\tassertNilF(t, err, \"read file error\")\n\tlogs := notEmptyLines(string(logContents))\n\tassertEqualE(t, len(logs), 2, \"number of logs\")\n\terrorLogs := filterStrings(logs, func(val string) bool {\n\t\treturn strings.Contains(val, \"level=ERROR\")\n\t})\n\tassertEqualE(t, len(errorLogs), 1, \"error logs count\")\n\twarningLogs := filterStrings(logs, func(val string) bool {\n\t\treturn strings.Contains(val, \"level=WARN\")\n\t})\n\tassertEqualE(t, len(warningLogs), 1, \"warning logs count\")\n}\n\nfunc TestDataRace(t *testing.T) {\n\tn := 10\n\twg := sync.WaitGroup{}\n\twg.Add(n)\n\n\tfor range make([]int, n) {\n\t\tgo func() {\n\t\t\tdefer wg.Done()\n\n\t\t\terr := initEasyLogging(\"\")\n\t\t\tassertNilF(t, err, \"no error from db\")\n\t\t}()\n\t}\n\n\twg.Wait()\n}\n\nfunc notEmptyLines(lines string) []string {\n\tnotEmptyFunc := func(val string) bool {\n\t\treturn val != \"\"\n\t}\n\treturn filterStrings(strings.Split(strings.ReplaceAll(lines, \"\\r\\n\", \"\\n\"), \"\\n\"), notEmptyFunc)\n}\n\nfunc cleanUp() {\n\tnewLogger := CreateDefaultLogger()\n\tif _, ok := logger.(loggerinternal.EasyLoggingSupport); ok {\n\t\tSetLogger(newLogger)\n\t}\n\teasyLoggingInitTrials.reset()\n}\n\nfunc toClientConfigLevel(logLevel string) string {\n\tlogLevelUpperCase := strings.ToUpper(logLevel)\n\tswitch strings.ToUpper(logLevel) {\n\tcase 
\"WARNING\":\n\t\treturn levelWarn\n\tcase levelOff, levelError, levelWarn, levelInfo, levelDebug, levelTrace:\n\t\treturn logLevelUpperCase\n\tdefault:\n\t\treturn \"\"\n\t}\n}\n\nfunc filterStrings(values []string, keep func(string) bool) []string {\n\tvar filteredStrings []string\n\tfor _, val := range values {\n\t\tif keep(val) {\n\t\t\tfilteredStrings = append(filteredStrings, val)\n\t\t}\n\t}\n\treturn filteredStrings\n}\n\nfunc defaultConfig(t *testing.T) *Config {\n\tconfig, err := ParseDSN(dsn)\n\tassertNilF(t, err, \"parse dsn error\")\n\treturn config\n}\n\nfunc openWithClientConfigFile(t *testing.T, clientConfigFile string) error {\n\tdriver := SnowflakeDriver{}\n\tconfig := defaultConfig(t)\n\tconfig.ClientConfigFile = clientConfigFile\n\t_, err := driver.OpenWithConfig(context.Background(), *config)\n\treturn err\n}\n\nfunc (i *initTrials) reset() {\n\ti.mu.Lock()\n\tdefer i.mu.Unlock()\n\n\ti.everTriedToInitialize = false\n\ti.clientConfigFileInput = \"\"\n\ti.configureCounter = 0\n}\n"
  },
  {
    "path": "encrypt_util.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"crypto/aes\"\n\t\"crypto/cipher\"\n\t\"crypto/rand\"\n\t\"encoding/base64\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"io\"\n\t\"os\"\n\t\"strconv\"\n)\n\nconst gcmIvLengthInBytes = 12\n\nvar (\n\tdefaultKeyAad  = make([]byte, 0)\n\tdefaultDataAad = make([]byte, 0)\n)\n\n// override default behavior for wrapper\nfunc (ew *encryptionWrapper) UnmarshalJSON(data []byte) error {\n\t// if GET, unmarshal slice of encryptionMaterial\n\tif err := json.Unmarshal(data, &ew.EncryptionMaterials); err == nil {\n\t\treturn err\n\t}\n\t// else (if PUT), unmarshal the encryptionMaterial itself\n\treturn json.Unmarshal(data, &ew.snowflakeFileEncryption)\n}\n\n// encryptStreamCBC encrypts a stream buffer using AES128 block cipher in CBC mode\n// with PKCS5 padding\nfunc encryptStreamCBC(\n\tsfe *snowflakeFileEncryption,\n\tsrc io.Reader,\n\tout io.Writer,\n\tchunkSize int) (*encryptMetadata, error) {\n\tif chunkSize == 0 {\n\t\tchunkSize = aes.BlockSize * 4 * 1024\n\t}\n\tkek, err := base64.StdEncoding.DecodeString(sfe.QueryStageMasterKey)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tkeySize := len(kek)\n\n\tfileKey := getSecureRandom(keySize)\n\tblock, err := aes.NewCipher(fileKey)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdataIv := getSecureRandom(block.BlockSize())\n\n\tmode := cipher.NewCBCEncrypter(block, dataIv)\n\tcipherText := make([]byte, chunkSize)\n\tchunk := make([]byte, chunkSize)\n\n\t// encrypt file with CBC\n\tpadded := false\n\tfor {\n\t\t// read the stream buffer up to len(chunk) bytes into chunk\n\t\t// note that all spaces in chunk may be used even if Read() returns n < len(chunk)\n\t\tn, err := src.Read(chunk)\n\t\tif err != nil && err != io.EOF {\n\t\t\treturn nil, fmt.Errorf(\"reading: %w\", err)\n\t\t}\n\t\tif n == 0 {\n\t\t\tbreak\n\t\t}\n\n\t\tif n%aes.BlockSize != 0 {\n\t\t\t// add padding to the end of the chunk and update the 
length n\n\t\t\tchunk = padBytesLength(chunk[:n], aes.BlockSize)\n\t\t\tn = len(chunk)\n\t\t\tpadded = true\n\t\t}\n\t\t// encrypt only the first n bytes of chunk\n\t\tmode.CryptBlocks(cipherText, chunk[:n])\n\t\tif _, err := out.Write(cipherText[:n]); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\t// PKCS5 requires a full block of padding if none was added above\n\tif !padded {\n\t\tpadding := bytes.Repeat([]byte{byte(aes.BlockSize)}, aes.BlockSize)\n\t\tmode.CryptBlocks(cipherText, padding)\n\t\tif _, err := out.Write(cipherText[:len(padding)]); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\t// encrypt key with ECB\n\tfileKey = padBytesLength(fileKey, block.BlockSize())\n\tencryptedFileKey := make([]byte, len(fileKey))\n\tif err = encryptECB(encryptedFileKey, fileKey, kek); err != nil {\n\t\treturn nil, err\n\t}\n\n\tmatDesc := materialDescriptor{\n\t\tfmt.Sprintf(\"%v\", sfe.SMKID),\n\t\tsfe.QueryID,\n\t\tstrconv.Itoa(keySize * 8),\n\t}\n\n\tmatDescUnicode, err := matdescToUnicode(matDesc)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn &encryptMetadata{\n\t\tbase64.StdEncoding.EncodeToString(encryptedFileKey),\n\t\tbase64.StdEncoding.EncodeToString(dataIv),\n\t\tmatDescUnicode,\n\t}, nil\n}\n\nfunc encryptECB(encrypted []byte, fileKey []byte, decodedKey []byte) error {\n\tblock, err := aes.NewCipher(decodedKey)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif len(fileKey)%block.BlockSize() != 0 {\n\t\treturn fmt.Errorf(\"input length is not a multiple of the cipher block size\")\n\t}\n\tif len(encrypted) < len(fileKey) {\n\t\treturn fmt.Errorf(\"output length is smaller than input length\")\n\t}\n\tfor len(fileKey) > 0 {\n\t\tblock.Encrypt(encrypted, fileKey[:block.BlockSize()])\n\t\tencrypted = encrypted[block.BlockSize():]\n\t\tfileKey = fileKey[block.BlockSize():]\n\t}\n\treturn nil\n}\n\nfunc decryptECB(decrypted []byte, keyBytes []byte, decodedKey []byte) error {\n\tblock, err := aes.NewCipher(decodedKey)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif len(keyBytes)%block.BlockSize() != 0 
{\n\t\treturn fmt.Errorf(\"input not full of blocks\")\n\t}\n\tif len(decrypted) < len(keyBytes) {\n\t\treturn fmt.Errorf(\"output length is smaller than input length\")\n\t}\n\tfor len(keyBytes) > 0 {\n\t\tblock.Decrypt(decrypted, keyBytes[:block.BlockSize()])\n\t\tkeyBytes = keyBytes[block.BlockSize():]\n\t\tdecrypted = decrypted[block.BlockSize():]\n\t}\n\treturn nil\n}\n\nfunc encryptFileCBC(\n\tsfe *snowflakeFileEncryption,\n\tfilename string,\n\tchunkSize int,\n\ttmpDir string) (\n\tmeta *encryptMetadata, fileName string, err error) {\n\tif chunkSize == 0 {\n\t\tchunkSize = aes.BlockSize * 4 * 1024\n\t}\n\ttmpOutputFile, err := os.CreateTemp(tmpDir, baseName(filename)+\"#\")\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\tdefer func() {\n\t\tif tmpErr := tmpOutputFile.Close(); tmpErr != nil && err == nil {\n\t\t\terr = tmpErr\n\t\t}\n\t}()\n\tinfile, err := os.OpenFile(filename, os.O_CREATE|os.O_RDONLY, readWriteFileMode)\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\tdefer func() {\n\t\tif tmpErr := infile.Close(); tmpErr != nil && err == nil {\n\t\t\terr = tmpErr\n\t\t}\n\t}()\n\n\tmeta, err = encryptStreamCBC(sfe, infile, tmpOutputFile, chunkSize)\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\treturn meta, tmpOutputFile.Name(), err\n}\n\nfunc decryptFileKeyECB(\n\tmetadata *encryptMetadata,\n\tsfe *snowflakeFileEncryption) ([]byte, []byte, error) {\n\tdecodedKey, err := base64.StdEncoding.DecodeString(sfe.QueryStageMasterKey)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\tkeyBytes, err := base64.StdEncoding.DecodeString(metadata.key) // encrypted file key\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\tivBytes, err := base64.StdEncoding.DecodeString(metadata.iv)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\t// decrypt file key\n\tdecryptedKey := make([]byte, len(keyBytes))\n\tif err = decryptECB(decryptedKey, keyBytes, decodedKey); err != nil {\n\t\treturn nil, nil, err\n\t}\n\tdecryptedKey, err = 
paddingTrim(decryptedKey)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\treturn decryptedKey, ivBytes, err\n}\n\nfunc initCBC(decryptedKey []byte, ivBytes []byte) (cipher.BlockMode, error) {\n\tblock, err := aes.NewCipher(decryptedKey)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tmode := cipher.NewCBCDecrypter(block, ivBytes)\n\n\treturn mode, err\n}\n\nfunc decryptFileCBC(\n\tmetadata *encryptMetadata,\n\tsfe *snowflakeFileEncryption,\n\tfilename string,\n\tchunkSize int,\n\ttmpDir string) (outputFileName string, err error) {\n\ttmpOutputFile, err := os.CreateTemp(tmpDir, baseName(filename)+\"#\")\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tdefer func() {\n\t\tif tmpErr := tmpOutputFile.Close(); tmpErr != nil && err == nil {\n\t\t\terr = tmpErr\n\t\t}\n\t}()\n\tinfile, err := os.Open(filename)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tdefer func() {\n\t\tif tmpErr := infile.Close(); tmpErr != nil && err == nil {\n\t\t\terr = tmpErr\n\t\t}\n\t}()\n\ttotalFileSize, err := decryptStreamCBC(metadata, sfe, chunkSize, infile, tmpOutputFile)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\terr = tmpOutputFile.Truncate(int64(totalFileSize))\n\treturn tmpOutputFile.Name(), err\n}\n\n// Returns decrypted file size and any error that happened during decryption.\nfunc decryptStreamCBC(\n\tmetadata *encryptMetadata,\n\tsfe *snowflakeFileEncryption,\n\tchunkSize int,\n\tsrc io.Reader,\n\tout io.Writer) (int, error) {\n\tif chunkSize == 0 {\n\t\tchunkSize = aes.BlockSize * 4 * 1024\n\t}\n\tdecryptedKey, ivBytes, err := decryptFileKeyECB(metadata, sfe)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\tmode, err := initCBC(decryptedKey, ivBytes)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\n\tvar totalFileSize int\n\tvar prevChunk []byte\n\tfor {\n\t\tchunk := make([]byte, chunkSize)\n\t\tn, err := src.Read(chunk)\n\t\tif err != nil && err != io.EOF {\n\t\t\treturn 0, fmt.Errorf(\"reading: %w\", err)\n\t\t}\n\t\tif n == 0 {\n\t\t\tbreak\n\t\t}\n\n\t\tif 
n%aes.BlockSize != 0 {\n\t\t\t// add padding to the end of the chunk and update the length n\n\t\t\tchunk = padBytesLength(chunk[:n], aes.BlockSize)\n\t\t\tn = len(chunk)\n\t\t}\n\t\ttotalFileSize += n\n\t\tchunk = chunk[:n]\n\t\tmode.CryptBlocks(chunk, chunk)\n\t\tif _, err := out.Write(chunk); err != nil {\n\t\t\treturn 0, err\n\t\t}\n\t\tprevChunk = chunk\n\t}\n\n\tif prevChunk != nil {\n\t\ttotalFileSize -= paddingOffset(prevChunk)\n\t}\n\treturn totalFileSize, nil\n}\n\nfunc encryptGCM(iv []byte, plaintext []byte, encryptionKey []byte, aad []byte) ([]byte, error) {\n\taead, err := initGcm(encryptionKey)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn aead.Seal(nil, iv, plaintext, aad), nil\n}\n\nfunc decryptGCM(iv []byte, ciphertext []byte, encryptionKey []byte, aad []byte) ([]byte, error) {\n\taead, err := initGcm(encryptionKey)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn aead.Open(nil, iv, ciphertext, aad)\n}\n\nfunc initGcm(encryptionKey []byte) (cipher.AEAD, error) {\n\tblock, err := aes.NewCipher(encryptionKey)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn cipher.NewGCM(block)\n}\n\nfunc encryptFileGCM(\n\tsfe *snowflakeFileEncryption,\n\tfilename string,\n\ttmpDir string) (\n\tmeta *gcmEncryptMetadata, outputFileName string, err error) {\n\ttmpOutputFile, err := os.CreateTemp(tmpDir, baseName(filename)+\"#\")\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\tdefer func() {\n\t\tif tmpErr := tmpOutputFile.Close(); tmpErr != nil && err == nil {\n\t\t\terr = tmpErr\n\t\t}\n\t}()\n\t// the input is read in full via os.ReadFile; no separate streaming handle is needed\n\tplaintext, err := os.ReadFile(filename)\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\n\tkek, err := base64.StdEncoding.DecodeString(sfe.QueryStageMasterKey)\n\tif err != nil {\n\t\treturn nil, 
\"\", err\n\t}\n\tkeySize := len(kek)\n\tfileKey := getSecureRandom(keySize)\n\tkeyIv := getSecureRandom(gcmIvLengthInBytes)\n\tencryptedFileKey, err := encryptGCM(keyIv, fileKey, kek, defaultKeyAad)\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\n\tdataIv := getSecureRandom(gcmIvLengthInBytes)\n\tencryptedData, err := encryptGCM(dataIv, plaintext, fileKey, defaultDataAad)\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\t_, err = tmpOutputFile.Write(encryptedData)\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\n\tmatDesc := materialDescriptor{\n\t\tfmt.Sprintf(\"%v\", sfe.SMKID),\n\t\tsfe.QueryID,\n\t\tstrconv.Itoa(keySize * 8),\n\t}\n\n\tmatDescUnicode, err := matdescToUnicode(matDesc)\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\tmeta = &gcmEncryptMetadata{\n\t\tkey:     base64.StdEncoding.EncodeToString(encryptedFileKey),\n\t\tkeyIv:   base64.StdEncoding.EncodeToString(keyIv),\n\t\tdataIv:  base64.StdEncoding.EncodeToString(dataIv),\n\t\tkeyAad:  base64.StdEncoding.EncodeToString(defaultKeyAad),\n\t\tdataAad: base64.StdEncoding.EncodeToString(defaultDataAad),\n\t\tmatdesc: matDescUnicode,\n\t}\n\treturn meta, tmpOutputFile.Name(), nil\n}\n\nfunc decryptFileGCM(\n\tmetadata *gcmEncryptMetadata,\n\tsfe *snowflakeFileEncryption,\n\tfilename string,\n\ttmpDir string) (\n\tstring, error) {\n\tkek, err := base64.StdEncoding.DecodeString(sfe.QueryStageMasterKey)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tencryptedFileKey, err := base64.StdEncoding.DecodeString(metadata.key)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tkeyIv, err := base64.StdEncoding.DecodeString(metadata.keyIv)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tkeyAad, err := base64.StdEncoding.DecodeString(metadata.keyAad)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tdataIv, err := base64.StdEncoding.DecodeString(metadata.dataIv)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tdataAad, err := base64.StdEncoding.DecodeString(metadata.dataAad)\n\tif err != nil 
{\n\t\treturn \"\", err\n\t}\n\n\tfileKey, err := decryptGCM(keyIv, encryptedFileKey, kek, keyAad)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tciphertext, err := os.ReadFile(filename)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tplaintext, err := decryptGCM(dataIv, ciphertext, fileKey, dataAad)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\ttmpOutputFile, err := os.CreateTemp(tmpDir, baseName(filename)+\"#\")\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\t_, err = tmpOutputFile.Write(plaintext)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn tmpOutputFile.Name(), nil\n}\n\ntype materialDescriptor struct {\n\tSmkID   string `json:\"smkId\"`\n\tQueryID string `json:\"queryId\"`\n\tKeySize string `json:\"keySize\"`\n}\n\nfunc matdescToUnicode(matdesc materialDescriptor) (string, error) {\n\ts, err := json.Marshal(&matdesc)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn string(s), nil\n}\n\nfunc getSecureRandom(byteLength int) []byte {\n\ttoken := make([]byte, byteLength)\n\t_, err := rand.Read(token)\n\tif err != nil {\n\t\tlogger.Errorf(\"cannot init secure random. %v\", err)\n\t}\n\treturn token\n}\n\nfunc padBytesLength(src []byte, blockSize int) []byte {\n\tpadLength := blockSize - len(src)%blockSize\n\tpadText := bytes.Repeat([]byte{byte(padLength)}, padLength)\n\treturn append(src, padText...)\n}\n\nfunc paddingTrim(src []byte) ([]byte, error) {\n\tif len(src) == 0 {\n\t\tlogger.Errorf(\"padding trim failed - data length is 0\")\n\t\treturn nil, &SnowflakeError{\n\t\t\tNumber:  ErrInvalidPadding,\n\t\t\tMessage: \"padding validation failed\",\n\t\t}\n\t}\n\tunpadding := src[len(src)-1]\n\tn := int(unpadding)\n\tif n == 0 || n > len(src) {\n\t\tlogger.Errorf(\"padding validation failed - invalid padding detected. 
data length: %d, padding value: %d\",\n\t\t\tlen(src), n)\n\t\treturn nil, &SnowflakeError{\n\t\t\tNumber:  ErrInvalidPadding,\n\t\t\tMessage: errors.ErrMsgInvalidPadding,\n\t\t}\n\t}\n\treturn src[:len(src)-n], nil\n}\n\nfunc paddingOffset(src []byte) int {\n\tlength := len(src)\n\treturn int(src[length-1])\n}\n\ntype contentKey struct {\n\tKeyID         string `json:\"KeyId,omitempty\"`\n\tEncryptionKey string `json:\"EncryptedKey,omitempty\"`\n\tAlgorithm     string `json:\"Algorithm,omitempty\"`\n}\n\ntype encryptionAgent struct {\n\tProtocol            string `json:\"Protocol,omitempty\"`\n\tEncryptionAlgorithm string `json:\"EncryptionAlgorithm,omitempty\"`\n}\n\ntype keyMetadata struct {\n\tEncryptionLibrary string `json:\"EncryptionLibrary,omitempty\"`\n}\n\ntype encryptionData struct {\n\tEncryptionMode      string          `json:\"EncryptionMode,omitempty\"`\n\tWrappedContentKey   contentKey      `json:\"WrappedContentKey\"`\n\tEncryptionAgent     encryptionAgent `json:\"EncryptionAgent\"`\n\tContentEncryptionIV string          `json:\"ContentEncryptionIV,omitempty\"`\n\tKeyWrappingMetadata keyMetadata     `json:\"KeyWrappingMetadata\"`\n}\n\ntype snowflakeFileEncryption struct {\n\tQueryStageMasterKey string `json:\"queryStageMasterKey,omitempty\"`\n\tQueryID             string `json:\"queryId,omitempty\"`\n\tSMKID               int64  `json:\"smkId,omitempty\"`\n}\n\n// PUT requests return a single encryptionMaterial object whereas GET requests\n// return a slice (array) of encryptionMaterial objects, both under the field\n// 'encryptionMaterial'\ntype encryptionWrapper struct {\n\tsnowflakeFileEncryption\n\tEncryptionMaterials []snowflakeFileEncryption\n}\n\ntype encryptMetadata struct {\n\tkey     string\n\tiv      string\n\tmatdesc string\n}\n\ntype gcmEncryptMetadata struct {\n\tkey     string\n\tkeyIv   string\n\tdataIv  string\n\tkeyAad  string\n\tdataAad string\n\tmatdesc string\n}\n"
  },
  {
    "path": "encrypt_util_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bufio\"\n\t\"compress/gzip\"\n\t\"encoding/base64\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"math/rand\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"testing\"\n\t\"testing/iotest\"\n\t\"time\"\n)\n\nconst timeFormat = \"2006-01-02T15:04:05\"\n\ntype encryptDecryptTestFile struct {\n\tnumberOfBytesInEachRow int\n\tnumberOfLines          int\n}\n\nfunc TestEncryptDecryptFileCBC(t *testing.T) {\n\tencMat := snowflakeFileEncryption{\n\t\t\"ztke8tIdVt1zmlQIZm0BMA==\",\n\t\t\"123873c7-3a66-40c4-ab89-e3722fbccce1\",\n\t\t9223372036854775807,\n\t}\n\tdata := \"test data\"\n\tinputFile := \"test_encrypt_decrypt_file\"\n\n\tfd, err := os.Create(inputFile)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tdefer fd.Close()\n\tdefer os.Remove(inputFile)\n\tif _, err = fd.Write([]byte(data)); err != nil {\n\t\tt.Error(err)\n\t}\n\n\tmetadata, encryptedFile, err := encryptFileCBC(&encMat, inputFile, 0, \"\")\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tdefer os.Remove(encryptedFile)\n\tassertStringContainsE(t, metadata.matdesc, \"9223372036854775807\")\n\tdecryptedFile, err := decryptFileCBC(metadata, &encMat, encryptedFile, 0, \"\")\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tdefer os.Remove(decryptedFile)\n\n\tfd, err = os.Open(decryptedFile)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tdefer fd.Close()\n\tcontent, err := io.ReadAll(fd)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tif string(content) != data {\n\t\tt.Fatalf(\"data did not match content. 
expected: %v, got: %v\", data, string(content))\n\t}\n}\n\nfunc TestEncryptDecryptFilePadding(t *testing.T) {\n\tencMat := snowflakeFileEncryption{\n\t\t\"ztke8tIdVt1zmlQIZm0BMA==\",\n\t\t\"123873c7-3a66-40c4-ab89-e3722fbccce1\",\n\t\t3112,\n\t}\n\n\ttestcases := []encryptDecryptTestFile{\n\t\t// File size is a multiple of 65536 bytes (chunkSize)\n\t\t{numberOfBytesInEachRow: 8, numberOfLines: 16384},\n\t\t{numberOfBytesInEachRow: 16, numberOfLines: 4096},\n\t\t// File size is not a multiple of 65536 bytes (chunkSize)\n\t\t{numberOfBytesInEachRow: 8, numberOfLines: 10240},\n\t\t{numberOfBytesInEachRow: 16, numberOfLines: 6144},\n\t\t// The second chunk's size is a multiple of 16 bytes (aes.BlockSize)\n\t\t{numberOfBytesInEachRow: 16, numberOfLines: 4097},\n\t\t// The second chunk's size is not a multiple of 16 bytes (aes.BlockSize)\n\t\t{numberOfBytesInEachRow: 12, numberOfLines: 5462},\n\t\t{numberOfBytesInEachRow: 10, numberOfLines: 6556},\n\t}\n\n\tfor _, test := range testcases {\n\t\tt.Run(fmt.Sprintf(\"%v_%v\", test.numberOfBytesInEachRow, test.numberOfLines), func(t *testing.T) {\n\t\t\ttmpDir, err := generateKLinesOfNByteRows(test.numberOfLines, test.numberOfBytesInEachRow, t.TempDir())\n\t\t\tif err != nil {\n\t\t\t\tt.Error(err)\n\t\t\t}\n\n\t\t\tencryptDecryptFile(t, encMat, test.numberOfLines, tmpDir)\n\t\t})\n\t}\n}\n\nfunc TestEncryptDecryptLargeFileCBC(t *testing.T) {\n\tencMat := snowflakeFileEncryption{\n\t\t\"ztke8tIdVt1zmlQIZm0BMA==\",\n\t\t\"123873c7-3a66-40c4-ab89-e3722fbccce1\",\n\t\t3112,\n\t}\n\n\tnumberOfFiles := 1\n\tnumberOfLines := 10000\n\ttmpDir, err := generateKLinesOfNFiles(numberOfLines, numberOfFiles, false, t.TempDir())\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tencryptDecryptFile(t, encMat, numberOfLines, tmpDir)\n}\n\nfunc TestEncryptStreamCBCReadError(t *testing.T) {\n\tsfe := snowflakeFileEncryption{\n\t\tQueryStageMasterKey: \"YWJjZGVmMTIzNDU2Nzg5MA==\",\n\t\tQueryID:             \"unused\",\n\t\tSMKID:               
9223372036854775807,\n\t}\n\n\twantErr := errors.New(\"test error\")\n\tr := iotest.ErrReader(wantErr)\n\n\tn, err := encryptStreamCBC(&sfe, r, nil, 0)\n\tassertTrueF(t, errors.Is(err, wantErr), fmt.Sprintf(\"expected error: %v, got: %v\", wantErr, err))\n\tassertNilE(t, n, \"expected no metadata on error\")\n}\n\nfunc TestDecryptStreamCBCReadError(t *testing.T) {\n\ttmpDir := t.TempDir()\n\ttempFile, err := os.CreateTemp(tmpDir, \"gcm\")\n\tassertNilF(t, err)\n\t_, err = tempFile.Write([]byte(\"abc\"))\n\tassertNilF(t, err)\n\terr = tempFile.Close()\n\tassertNilF(t, err)\n\n\tsfe := snowflakeFileEncryption{\n\t\tQueryStageMasterKey: \"YWJjZGVmMTIzNDU2Nzg5MA==\",\n\t\tQueryID:             \"unused\",\n\t\tSMKID:               9223372036854775807,\n\t}\n\tmeta, _, err := encryptFileCBC(&sfe, tempFile.Name(), 0, tmpDir)\n\tassertNilF(t, err)\n\tassertStringContainsF(t, meta.matdesc, \"9223372036854775807\")\n\n\twantErr := errors.New(\"test error\")\n\tr := iotest.ErrReader(wantErr)\n\n\tn, err := decryptStreamCBC(meta, &sfe, 0, r, nil)\n\tassertTrueF(t, errors.Is(err, wantErr), fmt.Sprintf(\"expected error: %v, got: %v\", wantErr, err))\n\tassertEqualE(t, n, 0, \"expected 0 bytes written\")\n}\n\nfunc encryptDecryptFile(t *testing.T, encMat snowflakeFileEncryption, expected int, tmpDir string) {\n\tfiles, err := filepath.Glob(filepath.Join(tmpDir, \"file*\"))\n\tassertNilF(t, err)\n\tinputFile := files[0]\n\n\tmetadata, encryptedFile, err := encryptFileCBC(&encMat, inputFile, 0, tmpDir)\n\tassertNilF(t, err)\n\tdefer os.Remove(encryptedFile)\n\tdecryptedFile, err := decryptFileCBC(metadata, &encMat, encryptedFile, 0, tmpDir)\n\tassertNilF(t, err)\n\tdefer os.Remove(decryptedFile)\n\n\tcnt := 0\n\tfd, err := os.Open(decryptedFile)\n\tassertNilF(t, err)\n\tdefer fd.Close()\n\n\tscanner := bufio.NewScanner(fd)\n\tfor scanner.Scan() {\n\t\tcnt++\n\t}\n\tassertNilF(t, scanner.Err())\n\tassertEqualF(t, cnt, expected, \"incorrect number of lines\")\n}\n\nfunc generateKLinesOfNByteRows(numLines int, numBytes int, tmpDir string) (string, error) {\n\tfname := path.Join(tmpDir, \"file\"+strconv.FormatInt(int64(numLines*numBytes), 10))\n\tf, err := os.Create(fname)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tfor range numLines {\n\t\tstr := randomString(numBytes - 1) // \\n is the last character\n\t\trec := fmt.Sprintf(\"%v\\n\", str)\n\t\tif _, err = f.Write([]byte(rec)); err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t}\n\terr = f.Close()\n\treturn tmpDir, err\n}\n\nfunc generateKLinesOfNFiles(k int, n int, compress bool, tmpDir string) (string, error) {\n\tfor i := range n {\n\t\tfname := path.Join(tmpDir, \"file\"+strconv.FormatInt(int64(i), 10))\n\t\tf, err := os.Create(fname)\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t\tfor range k {\n\t\t\tnum := rand.Float64() * 10000\n\t\t\tmin := time.Date(1970, 1, 0, 0, 0, 0, 0, time.UTC).Unix()\n\t\t\tmax := time.Date(2070, 1, 0, 0, 0, 0, 0, time.UTC).Unix()\n\t\t\tdelta := max - min\n\t\t\tsec := rand.Int63n(delta) + min\n\t\t\ttm := time.Unix(sec, 0)\n\t\t\tdt := tm.Format(\"2006-01-02\")\n\t\t\tsec = rand.Int63n(delta) + min\n\t\t\tts := time.Unix(sec, 0).Format(timeFormat)\n\t\t\tsec = rand.Int63n(delta) + min\n\t\t\ttsltz := time.Unix(sec, 0).Format(timeFormat)\n\t\t\tsec = rand.Int63n(delta) + min\n\t\t\ttsntz := time.Unix(sec, 0).Format(timeFormat)\n\t\t\tsec = rand.Int63n(delta) + min\n\t\t\ttstz := time.Unix(sec, 0).Format(timeFormat)\n\t\t\tpct := rand.Float64() * 1000\n\t\t\tratio := fmt.Sprintf(\"%.2f\", rand.Float64()*1000)\n\t\t\trec := fmt.Sprintf(\"%v,%v,%v,%v,%v,%v,%v,%v\\n\", num, dt, ts, tsltz, tsntz, tstz, pct, ratio)\n\t\t\tif _, err = f.Write([]byte(rec)); err != nil {\n\t\t\t\treturn \"\", err\n\t\t\t}\n\t\t}\n\t\tif err = f.Close(); err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t\tif compress 
{\n\t\t\tif !isWindows {\n\t\t\t\tgzipCmd := exec.Command(\"gzip\", filepath.Join(tmpDir, \"file\"+strconv.FormatInt(int64(i), 10)))\n\t\t\t\tgzipOut, err := gzipCmd.StdoutPipe()\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn \"\", err\n\t\t\t\t}\n\t\t\t\tgzipErr, err := gzipCmd.StderrPipe()\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn \"\", err\n\t\t\t\t}\n\t\t\t\tif err = gzipCmd.Start(); err != nil {\n\t\t\t\t\treturn \"\", err\n\t\t\t\t}\n\t\t\t\tif _, err = io.ReadAll(gzipOut); err != nil {\n\t\t\t\t\treturn \"\", err\n\t\t\t\t}\n\t\t\t\tif _, err = io.ReadAll(gzipErr); err != nil {\n\t\t\t\t\treturn \"\", err\n\t\t\t\t}\n\t\t\t\tif err = gzipCmd.Wait(); err != nil {\n\t\t\t\t\treturn \"\", err\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tfOut, err := os.Create(fname + \".gz\")\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn \"\", err\n\t\t\t\t}\n\t\t\t\tw := gzip.NewWriter(fOut)\n\t\t\t\tfIn, err := os.Open(fname)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn \"\", err\n\t\t\t\t}\n\t\t\t\tif _, err = io.Copy(w, fIn); err != nil {\n\t\t\t\t\treturn \"\", err\n\t\t\t\t}\n\t\t\t\t// Close flushes the remaining compressed data, so its error must be checked\n\t\t\t\tif err = w.Close(); err != nil {\n\t\t\t\t\treturn \"\", err\n\t\t\t\t}\n\t\t\t\tif err = fOut.Close(); err != nil {\n\t\t\t\t\treturn \"\", err\n\t\t\t\t}\n\t\t\t\tfIn.Close()\n\t\t\t}\n\t\t}\n\t}\n\treturn tmpDir, nil\n}\n\nfunc TestEncryptDecryptGCM(t *testing.T) {\n\tinput := []byte(\"abc\")\n\tiv := []byte(\"ab1234567890\")      // pragma: allowlist secret\n\tkey := []byte(\"1234567890abcdef\") // pragma: allowlist secret\n\tencrypted, err := encryptGCM(iv, input, key, nil)\n\tassertNilF(t, err)\n\tassertEqualE(t, base64.StdEncoding.EncodeToString(encrypted), \"iG+lT4o27hkzj3kblYRzQikLVQ==\")\n\n\tdecrypted, err := decryptGCM(iv, encrypted, key, nil)\n\tassertNilF(t, err)\n\tassertDeepEqualE(t, decrypted, input)\n}\n\nfunc TestEncryptDecryptFileGCM(t *testing.T) {\n\ttmpDir := t.TempDir()\n\ttempFile, err := os.CreateTemp(tmpDir, \"gcm\")\n\tassertNilF(t, err)\n\t_, err = tempFile.Write([]byte(\"abc\"))\n\tassertNilF(t, err)\n\terr = tempFile.Close()\n\tassertNilF(t, err)\n\n\tsfe := &snowflakeFileEncryption{\n\t\tQueryStageMasterKey: 
\"YWJjZGVmMTIzNDU2Nzg5MA==\",\n\t\tQueryID:             \"unused\",\n\t\tSMKID:               9223372036854775807,\n\t}\n\tmeta, encryptedFileName, err := encryptFileGCM(sfe, tempFile.Name(), tmpDir)\n\tassertNilF(t, err)\n\tassertStringContainsE(t, meta.matdesc, \"9223372036854775807\")\n\n\tdecryptedFileName, err := decryptFileGCM(meta, sfe, encryptedFileName, tmpDir)\n\tassertNilF(t, err)\n\n\tfileContent, err := os.ReadFile(decryptedFileName)\n\tassertNilF(t, err)\n\n\tassertEqualE(t, string(fileContent), \"abc\")\n}\n"
  },
  {
    "path": "errors.go",
    "content": "package gosnowflake\n\nimport (\n\t\"fmt\"\n\t\"runtime/debug\"\n\t\"strconv\"\n\t\"time\"\n\n\tsferrors \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n)\n\n// SnowflakeError is a error type including various Snowflake specific information.\ntype SnowflakeError = sferrors.SnowflakeError\n\nfunc generateTelemetryExceptionData(se *SnowflakeError) *telemetryData {\n\tdata := &telemetryData{\n\t\tMessage: map[string]string{\n\t\t\ttypeKey:          sqlException,\n\t\t\tsourceKey:        telemetrySource,\n\t\t\tdriverTypeKey:    \"Go\",\n\t\t\tdriverVersionKey: SnowflakeGoDriverVersion,\n\t\t\tstacktraceKey:    maskSecrets(string(debug.Stack())),\n\t\t},\n\t\tTimestamp: time.Now().UnixNano() / int64(time.Millisecond),\n\t}\n\tif se.QueryID != \"\" {\n\t\tdata.Message[queryIDKey] = se.QueryID\n\t}\n\tif se.SQLState != \"\" {\n\t\tdata.Message[sqlStateKey] = se.SQLState\n\t}\n\tif se.Message != \"\" {\n\t\tdata.Message[reasonKey] = se.Message\n\t}\n\tif len(se.MessageArgs) > 0 {\n\t\tdata.Message[reasonKey] = fmt.Sprintf(se.Message, se.MessageArgs...)\n\t}\n\tif se.Number != 0 {\n\t\tdata.Message[errorNumberKey] = strconv.Itoa(se.Number)\n\t}\n\treturn data\n}\n\n// exceptionTelemetry generates telemetry data from the error and adds it to the telemetry queue.\nfunc exceptionTelemetry(se *SnowflakeError, sc *snowflakeConn) *SnowflakeError {\n\tif sc == nil || sc.telemetry == nil || !sc.telemetry.enabled {\n\t\treturn se // skip expensive stacktrace generation below if telemetry is disabled\n\t}\n\tdata := generateTelemetryExceptionData(se)\n\tif err := sc.telemetry.addLog(data); err != nil {\n\t\tlogger.WithContext(sc.ctx).Debugf(\"failed to log to telemetry: %v\", data)\n\t}\n\treturn se\n}\n\n// return populated error fields replacing the default response\nfunc populateErrorFields(code int, data *execResponse) *SnowflakeError {\n\terr := sferrors.ErrUnknownError()\n\tif code != -1 {\n\t\terr.Number = code\n\t}\n\tif data.Data.SQLState != \"\" 
{\n\t\terr.SQLState = data.Data.SQLState\n\t}\n\tif data.Message != \"\" {\n\t\terr.Message = data.Message\n\t}\n\tif data.Data.QueryID != \"\" {\n\t\terr.QueryID = data.Data.QueryID\n\t}\n\treturn err\n}\n\n// Snowflake server error codes\nconst (\n\tqueryNotExecutingCode       = \"000605\"\n\tqueryInProgressCode         = \"333333\"\n\tqueryInProgressAsyncCode    = \"333334\"\n\tsessionExpiredCode          = \"390112\"\n\tinvalidOAuthAccessTokenCode = \"390303\"\n\texpiredOAuthAccessTokenCode = \"390318\"\n)\n\n// Driver return errors, re-exported from internal/errors\nconst (\n\t/* connection */\n\n\t// ErrCodeEmptyAccountCode is an error code for the case where a DSN doesn't include the account parameter\n\tErrCodeEmptyAccountCode = sferrors.ErrCodeEmptyAccountCode\n\t// ErrCodeEmptyUsernameCode is an error code for the case where a DSN doesn't include the user parameter\n\tErrCodeEmptyUsernameCode = sferrors.ErrCodeEmptyUsernameCode\n\t// ErrCodeEmptyPasswordCode is an error code for the case where a DSN doesn't include the password parameter\n\tErrCodeEmptyPasswordCode = sferrors.ErrCodeEmptyPasswordCode\n\t// ErrCodeFailedToParseHost is an error code for the case where a DSN includes an invalid host name\n\tErrCodeFailedToParseHost = sferrors.ErrCodeFailedToParseHost\n\t// ErrCodeFailedToParsePort is an error code for the case where a DSN includes an invalid port number\n\tErrCodeFailedToParsePort = sferrors.ErrCodeFailedToParsePort\n\t// ErrCodeIdpConnectionError is an error code for the case where an IDP connection failed\n\tErrCodeIdpConnectionError = sferrors.ErrCodeIdpConnectionError\n\t// ErrCodeSSOURLNotMatch is an error code for the case where an SSO URL doesn't match\n\tErrCodeSSOURLNotMatch = sferrors.ErrCodeSSOURLNotMatch\n\t// ErrCodeServiceUnavailable is an error code for the case where the service is unavailable.\n\tErrCodeServiceUnavailable = sferrors.ErrCodeServiceUnavailable\n\t// ErrCodeFailedToConnect is an error code for the case where a DB connection 
failed due to a wrong account name\n\tErrCodeFailedToConnect = sferrors.ErrCodeFailedToConnect\n\t// ErrCodeRegionOverlap is an error code for the case where a region is specified despite an account region being present\n\tErrCodeRegionOverlap = sferrors.ErrCodeRegionOverlap\n\t// ErrCodePrivateKeyParseError is an error code for the case where the private key is not parsed correctly\n\tErrCodePrivateKeyParseError = sferrors.ErrCodePrivateKeyParseError\n\t// ErrCodeFailedToParseAuthenticator is an error code for the case where a DSN includes an invalid authenticator\n\tErrCodeFailedToParseAuthenticator = sferrors.ErrCodeFailedToParseAuthenticator\n\t// ErrCodeClientConfigFailed is an error code for the case where clientConfigFile is invalid or applying client configuration fails\n\tErrCodeClientConfigFailed = sferrors.ErrCodeClientConfigFailed\n\t// ErrCodeTomlFileParsingFailed is an error code for the case where parsing the toml file fails because of an invalid value.\n\tErrCodeTomlFileParsingFailed = sferrors.ErrCodeTomlFileParsingFailed\n\t// ErrCodeFailedToFindDSNInToml is an error code for the case where the DSN does not exist in the toml file.\n\tErrCodeFailedToFindDSNInToml = sferrors.ErrCodeFailedToFindDSNInToml\n\t// ErrCodeInvalidFilePermission is an error code for the case where the user does not have 0600 permissions on the toml file.\n\tErrCodeInvalidFilePermission = sferrors.ErrCodeInvalidFilePermission\n\t// ErrCodeEmptyPasswordAndToken is an error code for the case where a DSN includes neither a password nor a token\n\tErrCodeEmptyPasswordAndToken = sferrors.ErrCodeEmptyPasswordAndToken\n\t// ErrCodeEmptyOAuthParameters is an error code for the case where the client ID or client secret is not provided for OAuth flows.\n\tErrCodeEmptyOAuthParameters = sferrors.ErrCodeEmptyOAuthParameters\n\t// ErrMissingAccessATokenButRefreshTokenPresent is an error code for the case where the access token is not found in the cache, but the refresh token is 
present.\n\tErrMissingAccessATokenButRefreshTokenPresent = sferrors.ErrMissingAccessATokenButRefreshTokenPresent\n\t// ErrCodeMissingTLSConfig is an error code for the case where the TLS config is missing.\n\tErrCodeMissingTLSConfig = sferrors.ErrCodeMissingTLSConfig\n\n\t/* network */\n\n\t// ErrFailedToPostQuery is an error code for the case where HTTP POST failed.\n\tErrFailedToPostQuery = sferrors.ErrFailedToPostQuery\n\t// ErrFailedToRenewSession is an error code for the case where session renewal failed.\n\tErrFailedToRenewSession = sferrors.ErrFailedToRenewSession\n\t// ErrFailedToCancelQuery is an error code for the case where cancel query failed.\n\tErrFailedToCancelQuery = sferrors.ErrFailedToCancelQuery\n\t// ErrFailedToCloseSession is an error code for the case where close session failed.\n\tErrFailedToCloseSession = sferrors.ErrFailedToCloseSession\n\t// ErrFailedToAuth is an error code for the case where authentication failed for unknown reason.\n\tErrFailedToAuth = sferrors.ErrFailedToAuth\n\t// ErrFailedToAuthSAML is an error code for the case where authentication via SAML failed for unknown reason.\n\tErrFailedToAuthSAML = sferrors.ErrFailedToAuthSAML\n\t// ErrFailedToAuthOKTA is an error code for the case where authentication via OKTA failed for unknown reason.\n\tErrFailedToAuthOKTA = sferrors.ErrFailedToAuthOKTA\n\t// ErrFailedToGetSSO is an error code for the case where authentication via OKTA failed for unknown reason.\n\tErrFailedToGetSSO = sferrors.ErrFailedToGetSSO\n\t// ErrFailedToParseResponse is an error code for when we cannot parse an external browser response from Snowflake.\n\tErrFailedToParseResponse = sferrors.ErrFailedToParseResponse\n\t// ErrFailedToGetExternalBrowserResponse is an error code for when there's an error reading from the open socket.\n\tErrFailedToGetExternalBrowserResponse = sferrors.ErrFailedToGetExternalBrowserResponse\n\t// ErrFailedToHeartbeat is an error code when a heartbeat fails.\n\tErrFailedToHeartbeat = 
sferrors.ErrFailedToHeartbeat\n\n\t/* rows */\n\n\t// ErrFailedToGetChunk is an error code for the case where it failed to get chunk of result set\n\tErrFailedToGetChunk = sferrors.ErrFailedToGetChunk\n\t// ErrNonArrowResponseInArrowBatches is an error code for case where ArrowBatches mode is enabled, but response is not Arrow-based\n\tErrNonArrowResponseInArrowBatches = sferrors.ErrNonArrowResponseInArrowBatches\n\n\t/* transaction*/\n\n\t// ErrNoReadOnlyTransaction is an error code for the case where readonly mode is specified.\n\tErrNoReadOnlyTransaction = sferrors.ErrNoReadOnlyTransaction\n\t// ErrNoDefaultTransactionIsolationLevel is an error code for the case where non default isolation level is specified.\n\tErrNoDefaultTransactionIsolationLevel = sferrors.ErrNoDefaultTransactionIsolationLevel\n\n\t/* file transfer */\n\n\t// ErrInvalidStageFs is an error code denoting an invalid stage in the file system\n\tErrInvalidStageFs = sferrors.ErrInvalidStageFs\n\t// ErrFailedToDownloadFromStage is an error code denoting the failure to download a file from the stage\n\tErrFailedToDownloadFromStage = sferrors.ErrFailedToDownloadFromStage\n\t// ErrFailedToUploadToStage is an error code denoting the failure to upload a file to the stage\n\tErrFailedToUploadToStage = sferrors.ErrFailedToUploadToStage\n\t// ErrInvalidStageLocation is an error code denoting an invalid stage location\n\tErrInvalidStageLocation = sferrors.ErrInvalidStageLocation\n\t// ErrLocalPathNotDirectory is an error code denoting a local path that is not a directory\n\tErrLocalPathNotDirectory = sferrors.ErrLocalPathNotDirectory\n\t// ErrFileNotExists is an error code denoting the file to be transferred does not exist\n\tErrFileNotExists = sferrors.ErrFileNotExists\n\t// ErrCompressionNotSupported is an error code denoting the user specified compression type is not supported\n\tErrCompressionNotSupported = sferrors.ErrCompressionNotSupported\n\t// ErrInternalNotMatchEncryptMaterial is an error code 
denoting that the encryption material specified does not match\n\tErrInternalNotMatchEncryptMaterial = sferrors.ErrInternalNotMatchEncryptMaterial\n\t// ErrCommandNotRecognized is an error code denoting that the PUT/GET command was not recognized\n\tErrCommandNotRecognized = sferrors.ErrCommandNotRecognized\n\t// ErrFailedToConvertToS3Client is an error code denoting the failure of an interface to s3.Client conversion\n\tErrFailedToConvertToS3Client = sferrors.ErrFailedToConvertToS3Client\n\t// ErrNotImplemented is an error code denoting that the file transfer feature is not implemented\n\tErrNotImplemented = sferrors.ErrNotImplemented\n\t// ErrInvalidPadding is an error code denoting invalid padding of the decryption key\n\tErrInvalidPadding = sferrors.ErrInvalidPadding\n\n\t/* binding */\n\n\t// ErrBindSerialization is an error code for a failed serialization of bind variables\n\tErrBindSerialization = sferrors.ErrBindSerialization\n\t// ErrBindUpload is an error code for a failure while uploading bind elements to the stage\n\tErrBindUpload = sferrors.ErrBindUpload\n\n\t/* async */\n\n\t// ErrAsync is an error code for an unknown async error\n\tErrAsync = sferrors.ErrAsync\n\n\t/* multi-statement */\n\n\t// ErrNoResultIDs is an error code for empty result IDs for multi-statement queries\n\tErrNoResultIDs = sferrors.ErrNoResultIDs\n\n\t/* converter */\n\n\t// ErrInvalidTimestampTz is an error code for the case where a returned TIMESTAMP_TZ internal value is invalid\n\tErrInvalidTimestampTz = sferrors.ErrInvalidTimestampTz\n\t// ErrInvalidOffsetStr is an error code for the case where an offset string is invalid. 
The input string must\n\t// consist of sHHMI: one sign character '+'/'-' followed by zero-filled hours and minutes\n\tErrInvalidOffsetStr = sferrors.ErrInvalidOffsetStr\n\t// ErrInvalidBinaryHexForm is an error code for the case where binary data in hex form is invalid.\n\tErrInvalidBinaryHexForm = sferrors.ErrInvalidBinaryHexForm\n\t// ErrTooHighTimestampPrecision is an error code for the case where a Snowflake timestamp cannot be converted to arrow.Timestamp\n\tErrTooHighTimestampPrecision = sferrors.ErrTooHighTimestampPrecision\n\t// ErrNullValueInArray is an error code for the case where there are null values in an array without arrayValuesNullable set to true\n\tErrNullValueInArray = sferrors.ErrNullValueInArray\n\t// ErrNullValueInMap is an error code for the case where there are null values in a map without mapValuesNullable set to true\n\tErrNullValueInMap = sferrors.ErrNullValueInMap\n\n\t/* OCSP */\n\n\t// ErrOCSPStatusRevoked is an error code for the case where the certificate is revoked.\n\tErrOCSPStatusRevoked = sferrors.ErrOCSPStatusRevoked\n\t// ErrOCSPStatusUnknown is an error code for the case where the certificate revocation status is unknown.\n\tErrOCSPStatusUnknown = sferrors.ErrOCSPStatusUnknown\n\t// ErrOCSPInvalidValidity is an error code for the case where the OCSP response validity is invalid.\n\tErrOCSPInvalidValidity = sferrors.ErrOCSPInvalidValidity\n\t// ErrOCSPNoOCSPResponderURL is an error code for the case where the OCSP responder URL is not attached.\n\tErrOCSPNoOCSPResponderURL = sferrors.ErrOCSPNoOCSPResponderURL\n\n\t/* query status */\n\n\t// ErrQueryStatus is an error code for the case where checking the status of a query returns an error or no status\n\tErrQueryStatus = sferrors.ErrQueryStatus\n\t// ErrQueryIDFormat is an error code for the case where the query ID given to fetch a result is not valid\n\tErrQueryIDFormat = sferrors.ErrQueryIDFormat\n\t// ErrQueryReportedError is an error code for the case where the server side reports that the query failed with an error\n\tErrQueryReportedError = sferrors.ErrQueryReportedError\n\t// 
ErrQueryIsRunning is an error code for the case where the query is still running\n\tErrQueryIsRunning = sferrors.ErrQueryIsRunning\n\n\t/* GS error code */\n\n\t// ErrSessionGone is a GS error code for the case that the session is already closed\n\tErrSessionGone = sferrors.ErrSessionGone\n\t// ErrRoleNotExist is a GS error code for the case that the role specified does not exist\n\tErrRoleNotExist = sferrors.ErrRoleNotExist\n\t// ErrObjectNotExistOrAuthorized is a GS error code for the case that the server-side object specified does not exist or the user is not authorized to access it\n\tErrObjectNotExistOrAuthorized = sferrors.ErrObjectNotExistOrAuthorized\n)\n"
  },
  {
    "path": "errors_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestErrorMessage(t *testing.T) {\n\te := &SnowflakeError{\n\t\tNumber:  1,\n\t\tMessage: \"test message\",\n\t}\n\tassertTrueF(t, strings.Contains(e.Error(), \"000001\"), \"error should contain the zero-padded error number\")\n\tassertTrueF(t, strings.Contains(e.Error(), \"test message\"), \"error should contain the message\")\n\te = &SnowflakeError{\n\t\tNumber:      1,\n\t\tMessage:     \"test message: %v, %v\",\n\t\tMessageArgs: []any{\"C1\", \"C2\"},\n\t}\n\tassertTrueF(t, strings.Contains(e.Error(), \"000001\"), \"error should contain the zero-padded error number\")\n\tassertTrueF(t, strings.Contains(e.Error(), \"test message\"), \"error should contain the message\")\n\tassertTrueF(t, strings.Contains(e.Error(), \"C1\"), \"error should contain the formatted message args\")\n\te = &SnowflakeError{\n\t\tNumber:      1,\n\t\tMessage:     \"test message: %v, %v\",\n\t\tMessageArgs: []any{\"C1\", \"C2\"},\n\t\tSQLState:    \"01112\",\n\t}\n\tassertTrueF(t, strings.Contains(e.Error(), \"000001\"), \"error should contain the zero-padded error number\")\n\tassertTrueF(t, strings.Contains(e.Error(), \"test message\"), \"error should contain the message\")\n\tassertTrueF(t, strings.Contains(e.Error(), \"C1\"), \"error should contain the formatted message args\")\n\tassertTrueF(t, strings.Contains(e.Error(), \"01112\"), \"error should contain the SQL state\")\n\te = &SnowflakeError{\n\t\tNumber:      1,\n\t\tMessage:     \"test message: %v, %v\",\n\t\tMessageArgs: []any{\"C1\", \"C2\"},\n\t\tSQLState:    \"01112\",\n\t\tQueryID:     \"abcdef-abcdef-abcdef\",\n\t}\n\tassertTrueF(t, strings.Contains(e.Error(), \"000001\"), \"error should contain the zero-padded error number\")\n\tassertTrueF(t, strings.Contains(e.Error(), \"test message\"), \"error should contain the message\")\n\tassertTrueF(t, strings.Contains(e.Error(), \"C1\"), \"error should contain the formatted message args\")\n\tassertTrueF(t, strings.Contains(e.Error(), \"01112\"), \"error should contain the SQL state\")\n\tassertTrueF(t, !strings.Contains(e.Error(), \"abcdef-abcdef-abcdef\"), \"query ID should be omitted when IncludeQueryID is unset\")\n\te.IncludeQueryID = true\n\tassertTrueF(t, strings.Contains(e.Error(), \"abcdef-abcdef-abcdef\"), \"query ID should be included when IncludeQueryID is set\")\n}\n"
  },
  {
    "path": "file_compression_type.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"strings\"\n\n\t\"github.com/gabriel-vasile/mimetype\"\n)\n\ntype compressionType struct {\n\tname          string\n\tfileExtension string\n\tmimeSubtypes  []string\n\tisSupported   bool\n}\n\nvar compressionTypes = map[string]*compressionType{\n\t\"GZIP\": {\n\t\t\"GZIP\",\n\t\t\".gz\",\n\t\t[]string{\"gzip\", \"x-gzip\"},\n\t\ttrue,\n\t},\n\t\"DEFLATE\": {\n\t\t\"DEFLATE\",\n\t\t\".deflate\",\n\t\t[]string{\"zlib\", \"deflate\"},\n\t\ttrue,\n\t},\n\t\"RAW_DEFLATE\": {\n\t\t\"RAW_DEFLATE\",\n\t\t\".raw_deflate\",\n\t\t[]string{\"raw_deflate\"},\n\t\ttrue,\n\t},\n\t\"BZIP2\": {\n\t\t\"BZIP2\",\n\t\t\".bz2\",\n\t\t[]string{\"bzip2\", \"x-bzip2\", \"x-bz2\", \"x-bzip\", \"bz2\"},\n\t\ttrue,\n\t},\n\t\"LZIP\": {\n\t\t\"LZIP\",\n\t\t\".lz\",\n\t\t[]string{\"lzip\", \"x-lzip\"},\n\t\tfalse,\n\t},\n\t\"LZMA\": {\n\t\t\"LZMA\",\n\t\t\".lzma\",\n\t\t[]string{\"lzma\", \"x-lzma\"},\n\t\tfalse,\n\t},\n\t\"LZO\": {\n\t\t\"LZO\",\n\t\t\".lzo\",\n\t\t[]string{\"lzo\", \"x-lzo\"},\n\t\tfalse,\n\t},\n\t\"XZ\": {\n\t\t\"XZ\",\n\t\t\".xz\",\n\t\t[]string{\"xz\", \"x-xz\"},\n\t\tfalse,\n\t},\n\t\"COMPRESS\": {\n\t\t\"COMPRESS\",\n\t\t\".Z\",\n\t\t[]string{\"compress\", \"x-compress\"},\n\t\tfalse,\n\t},\n\t\"PARQUET\": {\n\t\t\"PARQUET\",\n\t\t\".parquet\",\n\t\t[]string{\"parquet\"},\n\t\ttrue,\n\t},\n\t\"ZSTD\": {\n\t\t\"ZSTD\",\n\t\t\".zst\",\n\t\t[]string{\"zstd\", \"x-zstd\"},\n\t\ttrue,\n\t},\n\t\"BROTLI\": {\n\t\t\"BROTLI\",\n\t\t\".br\",\n\t\t[]string{\"br\", \"x-br\"},\n\t\ttrue,\n\t},\n\t\"ORC\": {\n\t\t\"ORC\",\n\t\t\".orc\",\n\t\t[]string{\"orc\"},\n\t\ttrue,\n\t},\n}\n\nvar mimeSubTypeToCompression map[string]*compressionType\nvar extensionToCompression map[string]*compressionType\n\nfunc init() {\n\tmimeSubTypeToCompression = make(map[string]*compressionType)\n\textensionToCompression = make(map[string]*compressionType)\n\tfor _, meta := range compressionTypes 
{\n\t\textensionToCompression[meta.fileExtension] = meta\n\t\tfor _, subtype := range meta.mimeSubtypes {\n\t\t\tmimeSubTypeToCompression[subtype] = meta\n\t\t}\n\t}\n\tmimetype.Extend(func(raw []byte, limit uint32) bool {\n\t\treturn bytes.HasPrefix(raw, []byte(\"PAR1\"))\n\t}, \"snowflake/parquet\", \".parquet\")\n\tmimetype.Extend(func(raw []byte, limit uint32) bool {\n\t\treturn bytes.HasPrefix(raw, []byte(\"ORC\"))\n\t}, \"snowflake/orc\", \".orc\")\n}\n\nfunc lookupByMimeSubType(mimeSubType string) *compressionType {\n\tif val, ok := mimeSubTypeToCompression[strings.ToLower(mimeSubType)]; ok {\n\t\treturn val\n\t}\n\treturn nil\n}\n\nfunc lookupByExtension(extension string) *compressionType {\n\tif val, ok := extensionToCompression[strings.ToLower(extension)]; ok {\n\t\treturn val\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "file_transfer_agent.go",
    "content": "package gosnowflake\n\n//lint:file-ignore U1000 Ignore all unused code\n\nimport (\n\t\"bytes\"\n\t\"cmp\"\n\t\"context\"\n\t\"database/sql/driver\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\terrors2 \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/query\"\n\t\"io\"\n\t\"math\"\n\t\"net/url\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"regexp\"\n\t\"runtime\"\n\t\"sort\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/aws/aws-sdk-go-v2/service/s3\"\n\t\"github.com/gabriel-vasile/mimetype\"\n)\n\ntype (\n\tcloudType   string\n\tcommandType string\n)\n\nconst (\n\tfileProtocol                        = \"file://\"\n\tmultiPartThreshold          int64   = 64 * 1024 * 1024\n\tstreamingMultiPartThreshold int64   = 8 * 1024 * 1024\n\tisWindows                           = runtime.GOOS == \"windows\"\n\tmb                          float64 = 1024.0 * 1024.0\n)\n\nconst (\n\tuploadCommand   commandType = \"UPLOAD\"\n\tdownloadCommand commandType = \"DOWNLOAD\"\n\tunknownCommand  commandType = \"UNKNOWN\"\n\n\tputRegexp string = `(?i)^(?:/\\*.*\\*/\\s*)*\\s*put\\s+`\n\tgetRegexp string = `(?i)^(?:/\\*.*\\*/\\s*)*\\s*get\\s+`\n)\n\nconst (\n\ts3Client    cloudType = \"S3\"\n\tazureClient cloudType = \"AZURE\"\n\tgcsClient   cloudType = \"GCS\"\n\tlocal       cloudType = \"LOCAL_FS\"\n)\n\ntype resultStatus int\n\nconst (\n\terrStatus resultStatus = iota\n\tuploaded\n\tdownloaded\n\tskipped\n\trenewToken\n\trenewPresignedURL\n\tnotFoundFile\n\tneedRetry\n\tneedRetryWithLowerConcurrency\n)\n\nfunc (rs resultStatus) String() string {\n\treturn [...]string{\"ERROR\", \"UPLOADED\", \"DOWNLOADED\", \"SKIPPED\",\n\t\t\"RENEW_TOKEN\", \"RENEW_PRESIGNED_URL\", \"NOT_FOUND_FILE\", \"NEED_RETRY\",\n\t\t\"NEED_RETRY_WITH_LOWER_CONCURRENCY\"}[rs]\n}\n\nfunc (rs resultStatus) isSet() bool {\n\treturn uploaded <= rs && rs <= needRetryWithLowerConcurrency\n}\n\n// SnowflakeFileTransferOptions enables users to 
specify options regarding\n// file transfers such as PUT/GET\ntype SnowflakeFileTransferOptions struct {\n\tshowProgressBar    bool\n\tMultiPartThreshold int64\n\n\t/* streaming PUT */\n\tcompressSourceFromStream bool\n\n\t/* PUT */\n\tputCallback             *snowflakeProgressPercentage\n\tputAzureCallback        *snowflakeProgressPercentage\n\tputCallbackOutputStream *io.Writer\n\n\t/* GET */\n\tgetCallback             *snowflakeProgressPercentage\n\tgetAzureCallback        *snowflakeProgressPercentage\n\tgetCallbackOutputStream *io.Writer\n}\n\ntype snowflakeFileTransferAgent struct {\n\tctx                         context.Context\n\tsc                          *snowflakeConn\n\tdata                        *execResponseData\n\tcommand                     string\n\tcommandType                 commandType\n\tstageLocationType           cloudType\n\tfileMetadata                []*fileMetadata\n\tencryptionMaterial          []*snowflakeFileEncryption\n\tstageInfo                   *execResponseStageInfo\n\tresults                     []*fileMetadata\n\tsourceStream                io.Reader\n\tsrcLocations                []string\n\tautoCompress                bool\n\tsrcCompression              string\n\tparallel                    int64\n\toverwrite                   bool\n\tsrcFiles                    []string\n\tlocalLocation               string\n\tsrcFileToEncryptionMaterial map[string]*snowflakeFileEncryption\n\tuseAccelerateEndpoint       bool\n\tpresignedURLs               []string\n\toptions                     *SnowflakeFileTransferOptions\n\tstreamBuffer                *bytes.Buffer\n}\n\nfunc (sfa *snowflakeFileTransferAgent) execute() error {\n\tvar err error\n\tif err = sfa.parseCommand(); err != nil {\n\t\treturn err\n\t}\n\n\tif err = sfa.initFileMetadata(); err != nil {\n\t\treturn err\n\t}\n\n\tif sfa.commandType == uploadCommand {\n\t\tif err = sfa.processFileCompressionType(); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tif err = 
sfa.transferAccelerateConfig(); err != nil {\n\t\treturn err\n\t}\n\n\tif sfa.commandType == downloadCommand {\n\t\tif _, err = os.Stat(sfa.localLocation); os.IsNotExist(err) {\n\t\t\tif err = os.MkdirAll(sfa.localLocation, os.ModePerm); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\n\tif sfa.stageLocationType == local {\n\t\tif _, err = os.Stat(sfa.stageInfo.Location); os.IsNotExist(err) {\n\t\t\tif err = os.MkdirAll(sfa.stageInfo.Location, os.ModePerm); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\n\tif err = sfa.updateFileMetadataWithPresignedURL(); err != nil {\n\t\treturn err\n\t}\n\n\tsmallFileMetas := make([]*fileMetadata, 0)\n\tlargeFileMetas := make([]*fileMetadata, 0)\n\n\tfor _, meta := range sfa.fileMetadata {\n\t\tmeta.overwrite = sfa.overwrite\n\t\tmeta.sfa = sfa\n\t\tmeta.options = sfa.options\n\t\tif sfa.stageLocationType != local {\n\t\t\tsizeThreshold := sfa.options.MultiPartThreshold\n\t\t\tmeta.options.MultiPartThreshold = sizeThreshold\n\t\t\tif sfa.commandType == uploadCommand {\n\t\t\t\tif meta.srcFileSize > sizeThreshold {\n\t\t\t\t\tmeta.parallel = sfa.parallel\n\t\t\t\t\tlargeFileMetas = append(largeFileMetas, meta)\n\t\t\t\t} else {\n\t\t\t\t\tmeta.parallel = 1\n\t\t\t\t\tsmallFileMetas = append(smallFileMetas, meta)\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t// Enable multi-part download for all files to improve performance.\n\t\t\t\t// The MultiPartThreshold will be passed to the Cloud Storage Provider to determine the part size.\n\t\t\t\tmeta.parallel = sfa.parallel\n\t\t\t\tlargeFileMetas = append(largeFileMetas, meta)\n\t\t\t}\n\t\t} else {\n\t\t\tmeta.parallel = 1\n\t\t\tsmallFileMetas = append(smallFileMetas, meta)\n\t\t}\n\t}\n\n\tif sfa.commandType == uploadCommand {\n\t\tif err = sfa.upload(largeFileMetas, smallFileMetas); err != nil {\n\t\t\treturn err\n\t\t}\n\t} else {\n\t\tif err = sfa.download(largeFileMetas); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (sfa 
*snowflakeFileTransferAgent) parseCommand() error {\n\tvar err error\n\tif sfa.data.Command != \"\" {\n\t\tsfa.commandType = commandType(sfa.data.Command)\n\t} else {\n\t\tsfa.commandType = unknownCommand\n\t}\n\n\tsfa.initEncryptionMaterial()\n\tif len(sfa.data.SrcLocations) == 0 {\n\t\treturn exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber:   ErrInvalidStageLocation,\n\t\t\tSQLState: sfa.data.SQLState,\n\t\t\tQueryID:  sfa.data.QueryID,\n\t\t\tMessage:  \"failed to parse location\",\n\t\t}, sfa.sc)\n\t}\n\tsfa.srcLocations = sfa.data.SrcLocations\n\n\tif sfa.commandType == uploadCommand {\n\t\tif sfa.sourceStream != nil {\n\t\t\tsfa.srcFiles = sfa.srcLocations // streaming PUT\n\t\t} else {\n\t\t\tsfa.srcFiles, err = sfa.expandFilenames(sfa.srcLocations)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\tsfa.autoCompress = sfa.data.AutoCompress\n\t\tsfa.srcCompression = strings.ToLower(sfa.data.SourceCompression)\n\t} else {\n\t\tsfa.srcFiles = sfa.srcLocations\n\t\tsfa.srcFileToEncryptionMaterial = make(map[string]*snowflakeFileEncryption)\n\t\tif len(sfa.data.SrcLocations) == len(sfa.encryptionMaterial) {\n\t\t\tfor i, srcFile := range sfa.srcFiles {\n\t\t\t\tsfa.srcFileToEncryptionMaterial[srcFile] = sfa.encryptionMaterial[i]\n\t\t\t}\n\t\t} else if len(sfa.encryptionMaterial) != 0 {\n\t\t\treturn exceptionTelemetry(&SnowflakeError{\n\t\t\t\tNumber:      ErrInternalNotMatchEncryptMaterial,\n\t\t\t\tSQLState:    sfa.data.SQLState,\n\t\t\t\tQueryID:     sfa.data.QueryID,\n\t\t\t\tMessage:     errors2.ErrMsgInternalNotMatchEncryptMaterial,\n\t\t\t\tMessageArgs: []any{len(sfa.data.SrcLocations), len(sfa.encryptionMaterial)},\n\t\t\t}, sfa.sc)\n\t\t}\n\n\t\tsfa.localLocation, err = expandUser(sfa.data.LocalLocation)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif fi, err := os.Stat(sfa.localLocation); err != nil || !fi.IsDir() {\n\t\t\treturn exceptionTelemetry(&SnowflakeError{\n\t\t\t\tNumber:      ErrLocalPathNotDirectory,\n\t\t\t\tSQLState:  
  sfa.data.SQLState,\n\t\t\t\tQueryID:     sfa.data.QueryID,\n\t\t\t\tMessage:     errors2.ErrMsgLocalPathNotDirectory,\n\t\t\t\tMessageArgs: []any{sfa.localLocation},\n\t\t\t}, sfa.sc)\n\t\t}\n\t}\n\n\tsfa.parallel = 1\n\tif sfa.data.Parallel != 0 {\n\t\tsfa.parallel = sfa.data.Parallel\n\t}\n\tsfa.overwrite = sfa.data.Overwrite\n\tsfa.stageLocationType = cloudType(strings.ToUpper(sfa.data.StageInfo.LocationType))\n\tsfa.stageInfo = &sfa.data.StageInfo\n\tsfa.presignedURLs = make([]string, 0)\n\tif len(sfa.data.PresignedURLs) != 0 {\n\t\tsfa.presignedURLs = sfa.data.PresignedURLs\n\t}\n\n\tif sfa.getStorageClient(sfa.stageLocationType) == nil {\n\t\treturn exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber:      ErrInvalidStageFs,\n\t\t\tSQLState:    sfa.data.SQLState,\n\t\t\tQueryID:     sfa.data.QueryID,\n\t\t\tMessage:     errors2.ErrMsgInvalidStageFs,\n\t\t\tMessageArgs: []any{sfa.stageLocationType},\n\t\t}, sfa.sc)\n\t}\n\treturn nil\n}\n\nfunc (sfa *snowflakeFileTransferAgent) initEncryptionMaterial() {\n\tsfa.encryptionMaterial = make([]*snowflakeFileEncryption, 0)\n\twrapper := sfa.data.EncryptionMaterial\n\n\tif sfa.commandType == uploadCommand {\n\t\tif wrapper.QueryID != \"\" {\n\t\t\tsfa.encryptionMaterial = append(sfa.encryptionMaterial, &wrapper.snowflakeFileEncryption)\n\t\t}\n\t} else {\n\t\tfor _, encmat := range wrapper.EncryptionMaterials {\n\t\t\tif encmat.QueryID != \"\" {\n\t\t\t\tsfa.encryptionMaterial = append(sfa.encryptionMaterial, &encmat)\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc (sfa *snowflakeFileTransferAgent) expandFilenames(locations []string) ([]string, error) {\n\tcanonicalLocations := make([]string, 0)\n\tfor _, fileName := range locations {\n\t\tif sfa.commandType == uploadCommand {\n\t\t\tvar err error\n\t\t\tfileName, err = expandUser(fileName)\n\t\t\tif err != nil {\n\t\t\t\treturn []string{}, err\n\t\t\t}\n\t\t\tif !filepath.IsAbs(fileName) {\n\t\t\t\tcwd, err := getDirectory()\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn []string{}, 
err\n\t\t\t\t}\n\t\t\t\tfileName = filepath.Join(cwd, fileName)\n\t\t\t}\n\t\t\tif isWindows && len(fileName) > 2 && fileName[0] == '/' && fileName[2] == ':' {\n\t\t\t\t// Windows path: /C:/data/file1.txt where it starts with slash\n\t\t\t\t// followed by a drive letter and colon.\n\t\t\t\tfileName = fileName[1:]\n\t\t\t}\n\t\t\tfiles, err := filepath.Glob(fileName)\n\t\t\tif err != nil {\n\t\t\t\treturn []string{}, err\n\t\t\t}\n\t\t\tcanonicalLocations = append(canonicalLocations, files...)\n\t\t} else {\n\t\t\tcanonicalLocations = append(canonicalLocations, fileName)\n\t\t}\n\t}\n\treturn canonicalLocations, nil\n}\n\nfunc (sfa *snowflakeFileTransferAgent) initFileMetadata() error {\n\tsfa.fileMetadata = []*fileMetadata{}\n\tswitch sfa.commandType {\n\tcase uploadCommand:\n\t\tlogger.Debugf(\"upload command initiated - file count: %d, query ID: %s, encryption materials: %d\",\n\t\t\tlen(sfa.srcFiles), sfa.data.QueryID, len(sfa.encryptionMaterial))\n\n\t\tif len(sfa.srcFiles) == 0 {\n\t\t\tfileName := sfa.data.SrcLocations\n\t\t\treturn exceptionTelemetry(&SnowflakeError{\n\t\t\t\tNumber:      ErrFileNotExists,\n\t\t\t\tSQLState:    sfa.data.SQLState,\n\t\t\t\tQueryID:     sfa.data.QueryID,\n\t\t\t\tMessage:     errors2.ErrMsgFileNotExists,\n\t\t\t\tMessageArgs: []any{fileName},\n\t\t\t}, sfa.sc)\n\t\t}\n\t\t// Handles bulk inserts by checking if sourceStream exists.\n\t\t// - If the file exists locally (PUT command), it saves the stream without loading it into memory.\n\t\t// - If not, treats it as an INSERT converted to PUT for bulk upload.\n\t\tif sfa.sourceStream != nil {\n\t\t\t//Bulk insert case\n\t\t\tfileName := sfa.srcFiles[0]\n\t\t\tfileInfo, err := os.Stat(fileName)\n\t\t\tif err != nil {\n\t\t\t\tbuf := new(bytes.Buffer)\n\t\t\t\t_, err := buf.ReadFrom(sfa.sourceStream)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn exceptionTelemetry(&SnowflakeError{\n\t\t\t\t\t\tNumber:      ErrFileNotExists,\n\t\t\t\t\t\tSQLState:    
sfa.data.SQLState,\n\t\t\t\t\t\tQueryID:     sfa.data.QueryID,\n\t\t\t\t\t\tMessage:     errors2.ErrMsgFailToReadDataFromBuffer,\n\t\t\t\t\t\tMessageArgs: []any{fileName},\n\t\t\t\t\t}, sfa.sc)\n\t\t\t\t}\n\t\t\t\tsfa.fileMetadata = append(sfa.fileMetadata, &fileMetadata{\n\t\t\t\t\tname:              baseName(fileName),\n\t\t\t\t\tsrcFileName:       fileName,\n\t\t\t\t\tsrcStream:         buf,\n\t\t\t\t\tfileStream:        sfa.sourceStream,\n\t\t\t\t\tsrcFileSize:       int64(buf.Len()),\n\t\t\t\t\tstageLocationType: sfa.stageLocationType,\n\t\t\t\t\tstageInfo:         sfa.stageInfo,\n\t\t\t\t})\n\t\t\t} else {\n\t\t\t\t//PUT command with existing file\n\t\t\t\tsfa.fileMetadata = append(sfa.fileMetadata, &fileMetadata{\n\t\t\t\t\tname:              baseName(fileName),\n\t\t\t\t\tsrcFileName:       fileName,\n\t\t\t\t\tfileStream:        sfa.sourceStream,\n\t\t\t\t\tsrcFileSize:       fileInfo.Size(),\n\t\t\t\t\tstageLocationType: sfa.stageLocationType,\n\t\t\t\t\tstageInfo:         sfa.stageInfo,\n\t\t\t\t})\n\t\t\t}\n\t\t} else {\n\t\t\tfor i, fileName := range sfa.srcFiles {\n\t\t\t\tfi, err := os.Stat(fileName)\n\t\t\t\tif os.IsNotExist(err) {\n\t\t\t\t\treturn exceptionTelemetry(&SnowflakeError{\n\t\t\t\t\t\tNumber:      ErrFileNotExists,\n\t\t\t\t\t\tSQLState:    sfa.data.SQLState,\n\t\t\t\t\t\tQueryID:     sfa.data.QueryID,\n\t\t\t\t\t\tMessage:     errors2.ErrMsgFileNotExists,\n\t\t\t\t\t\tMessageArgs: []any{fileName},\n\t\t\t\t\t}, sfa.sc)\n\t\t\t\t} else if fi.IsDir() {\n\t\t\t\t\treturn exceptionTelemetry(&SnowflakeError{\n\t\t\t\t\t\tNumber:      ErrFileNotExists,\n\t\t\t\t\t\tSQLState:    sfa.data.SQLState,\n\t\t\t\t\t\tQueryID:     sfa.data.QueryID,\n\t\t\t\t\t\tMessage:     errors2.ErrMsgFileNotExists,\n\t\t\t\t\t\tMessageArgs: []any{fileName},\n\t\t\t\t\t}, sfa.sc)\n\t\t\t\t}\n\t\t\t\tsfa.fileMetadata = append(sfa.fileMetadata, &fileMetadata{\n\t\t\t\t\tname:              baseName(fileName),\n\t\t\t\t\tsrcFileName:       
fileName,\n\t\t\t\t\tsrcFileSize:       fi.Size(),\n\t\t\t\t\tstageLocationType: sfa.stageLocationType,\n\t\t\t\t\tstageInfo:         sfa.stageInfo,\n\t\t\t\t})\n\t\t\t\tif len(sfa.encryptionMaterial) > 0 {\n\t\t\t\t\tsfa.fileMetadata[i].encryptionMaterial = sfa.encryptionMaterial[0]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif len(sfa.encryptionMaterial) > 0 {\n\t\t\tfor _, meta := range sfa.fileMetadata {\n\t\t\t\tmeta.encryptionMaterial = sfa.encryptionMaterial[0]\n\t\t\t}\n\t\t}\n\tcase downloadCommand:\n\t\tlogger.Debugf(\"download command initiated - file count: %d, query ID: %s\",\n\t\t\tlen(sfa.srcFiles), sfa.data.QueryID)\n\n\t\tfor _, fileName := range sfa.srcFiles {\n\t\t\tif len(fileName) > 0 {\n\t\t\t\t_, after, ok := strings.Cut(fileName, \"/\")\n\t\t\t\tdstFileName := fileName\n\t\t\t\tif ok {\n\t\t\t\t\tdstFileName = after\n\t\t\t\t}\n\t\t\t\tsfa.fileMetadata = append(sfa.fileMetadata, &fileMetadata{\n\t\t\t\t\tname:              baseName(fileName),\n\t\t\t\t\tsrcFileName:       fileName,\n\t\t\t\t\tdstFileName:       dstFileName,\n\t\t\t\t\tdstStream:         new(bytes.Buffer),\n\t\t\t\t\tstageLocationType: sfa.stageLocationType,\n\t\t\t\t\tstageInfo:         sfa.stageInfo,\n\t\t\t\t\tlocalLocation:     sfa.localLocation,\n\t\t\t\t})\n\t\t\t}\n\t\t}\n\t\tfor _, meta := range sfa.fileMetadata {\n\t\t\tfileName := meta.srcFileName\n\t\t\tif val, ok := sfa.srcFileToEncryptionMaterial[fileName]; ok {\n\t\t\t\tmeta.encryptionMaterial = val\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (sfa *snowflakeFileTransferAgent) processFileCompressionType() error {\n\tvar userSpecifiedSourceCompression *compressionType\n\tvar autoDetect bool\n\tswitch sfa.srcCompression {\n\tcase \"auto_detect\":\n\t\tautoDetect = true\n\tcase \"none\":\n\t\tautoDetect = false\n\tdefault:\n\t\tuserSpecifiedSourceCompression = lookupByMimeSubType(sfa.srcCompression)\n\t\tif userSpecifiedSourceCompression == nil || !userSpecifiedSourceCompression.isSupported {\n\t\t\treturn 
exceptionTelemetry(&SnowflakeError{\n\t\t\t\tNumber:      ErrCompressionNotSupported,\n\t\t\t\tSQLState:    sfa.data.SQLState,\n\t\t\t\tQueryID:     sfa.data.QueryID,\n\t\t\t\tMessage:     errors2.ErrMsgFeatureNotSupported,\n\t\t\t\tMessageArgs: []any{userSpecifiedSourceCompression},\n\t\t\t}, sfa.sc)\n\t\t}\n\t\tautoDetect = false\n\t}\n\n\tgzipCompression := compressionTypes[\"GZIP\"]\n\tfor _, meta := range sfa.fileMetadata {\n\t\tfileName := meta.srcFileName\n\t\tvar currentFileCompressionType *compressionType\n\t\tif autoDetect {\n\t\t\tcurrentFileCompressionType = lookupByExtension(filepath.Ext(fileName))\n\t\t\tif currentFileCompressionType == nil {\n\t\t\t\tvar mtype *mimetype.MIME\n\t\t\t\tvar err error\n\t\t\t\tif meta.srcStream != nil {\n\t\t\t\t\tr := getReaderFromBuffer(&meta.srcStream)\n\t\t\t\t\tmtype, err = mimetype.DetectReader(r)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif _, err = io.ReadAll(r); err != nil { // flush out tee buffer\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tmtype, err = mimetype.DetectFile(fileName)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tcurrentFileCompressionType = lookupByExtension(mtype.Extension())\n\t\t\t}\n\n\t\t\tif currentFileCompressionType != nil && !currentFileCompressionType.isSupported {\n\t\t\t\treturn exceptionTelemetry(&SnowflakeError{\n\t\t\t\t\tNumber:      ErrCompressionNotSupported,\n\t\t\t\t\tSQLState:    sfa.data.SQLState,\n\t\t\t\t\tQueryID:     sfa.data.QueryID,\n\t\t\t\t\tMessage:     errors2.ErrMsgFeatureNotSupported,\n\t\t\t\t\tMessageArgs: []any{currentFileCompressionType},\n\t\t\t\t}, sfa.sc)\n\t\t\t}\n\t\t} else {\n\t\t\tcurrentFileCompressionType = userSpecifiedSourceCompression\n\t\t}\n\n\t\tif currentFileCompressionType != nil {\n\t\t\tmeta.srcCompressionType = currentFileCompressionType\n\t\t\tif currentFileCompressionType.isSupported {\n\t\t\t\tmeta.dstCompressionType = 
currentFileCompressionType\n\t\t\t\tmeta.requireCompress = false\n\t\t\t\tmeta.dstFileName = meta.name\n\t\t\t} else {\n\t\t\t\treturn exceptionTelemetry(&SnowflakeError{\n\t\t\t\t\tNumber:      ErrCompressionNotSupported,\n\t\t\t\t\tSQLState:    sfa.data.SQLState,\n\t\t\t\t\tQueryID:     sfa.data.QueryID,\n\t\t\t\t\tMessage:     errors2.ErrMsgFeatureNotSupported,\n\t\t\t\t\tMessageArgs: []any{userSpecifiedSourceCompression},\n\t\t\t\t}, sfa.sc)\n\t\t\t}\n\t\t} else {\n\t\t\tmeta.requireCompress = sfa.autoCompress\n\t\t\tmeta.srcCompressionType = nil\n\t\t\tif sfa.autoCompress {\n\t\t\t\tdstFileName := meta.name + compressionTypes[\"GZIP\"].fileExtension\n\t\t\t\tmeta.dstFileName = dstFileName\n\t\t\t\tmeta.dstCompressionType = gzipCompression\n\t\t\t} else {\n\t\t\t\tmeta.dstFileName = meta.name\n\t\t\t\tmeta.dstCompressionType = nil\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (sfa *snowflakeFileTransferAgent) updateFileMetadataWithPresignedURL() error {\n\t// presigned URL only applies to GCS\n\tif sfa.stageLocationType == gcsClient {\n\t\tswitch sfa.commandType {\n\t\tcase uploadCommand:\n\t\t\t// SNOW-3309225 - When a downscoped token is available, the token already covers the entire stage prefix so per-file\n\t\t\t// re-querying is unnecessary. 
Skipping the extra round-trip also avoids a path mismatch on versioned stages.\n\t\t\tif sfa.stageInfo != nil && sfa.stageInfo.Creds.GcsAccessToken != \"\" {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\tfilePathToBeReplaced := sfa.getLocalFilePathFromCommand(sfa.command)\n\t\t\tfor _, meta := range sfa.fileMetadata {\n\t\t\t\tfilePathToBeReplacedWith := strings.TrimRight(filePathToBeReplaced, meta.dstFileName) + meta.dstFileName\n\t\t\t\tcommandWithSingleFile := strings.ReplaceAll(sfa.command, filePathToBeReplaced, filePathToBeReplacedWith)\n\t\t\t\treq := execRequest{\n\t\t\t\t\tSQLText: commandWithSingleFile,\n\t\t\t\t}\n\t\t\t\theaders := getHeaders()\n\t\t\t\theaders[httpHeaderAccept] = headerContentTypeApplicationJSON\n\t\t\t\tjsonBody, err := json.Marshal(req)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tdata, err := sfa.sc.rest.FuncPostQuery(\n\t\t\t\t\tsfa.ctx,\n\t\t\t\t\tsfa.sc.rest,\n\t\t\t\t\t&url.Values{},\n\t\t\t\t\theaders,\n\t\t\t\t\tjsonBody,\n\t\t\t\t\tsfa.sc.rest.RequestTimeout,\n\t\t\t\t\tgetOrGenerateRequestIDFromContext(sfa.ctx),\n\t\t\t\t\tsfa.sc.cfg)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\n\t\t\t\tif data.Data.StageInfo != (execResponseStageInfo{}) {\n\t\t\t\t\tmeta.stageInfo = &data.Data.StageInfo\n\t\t\t\t\tmeta.presignedURL = nil\n\t\t\t\t\tif meta.stageInfo.PresignedURL != \"\" {\n\t\t\t\t\t\tmeta.presignedURL, err = url.Parse(meta.stageInfo.PresignedURL)\n\t\t\t\t\t\tif err != nil {\n\t\t\t\t\t\t\treturn err\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\tcase downloadCommand:\n\t\t\tfor i, meta := range sfa.fileMetadata {\n\t\t\t\tif len(sfa.presignedURLs) > 0 {\n\t\t\t\t\tvar err error\n\t\t\t\t\tmeta.presignedURL, err = url.Parse(sfa.presignedURLs[i])\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tmeta.presignedURL = nil\n\t\t\t\t}\n\t\t\t}\n\t\tdefault:\n\t\t\treturn exceptionTelemetry(&SnowflakeError{\n\t\t\t\tNumber:      
ErrCommandNotRecognized,\n\t\t\t\tSQLState:    sfa.data.SQLState,\n\t\t\t\tQueryID:     sfa.data.QueryID,\n\t\t\t\tMessage:     errors2.ErrMsgCommandNotRecognized,\n\t\t\t\tMessageArgs: []any{sfa.commandType},\n\t\t\t}, sfa.sc)\n\t\t}\n\t}\n\treturn nil\n}\n\ntype s3BucketAccelerateConfigGetter interface {\n\tGetBucketAccelerateConfiguration(ctx context.Context, params *s3.GetBucketAccelerateConfigurationInput, optFns ...func(*s3.Options)) (*s3.GetBucketAccelerateConfigurationOutput, error)\n}\n\ntype s3ClientCreator interface {\n\textractBucketNameAndPath(location string) (*s3Location, error)\n\tcreateClientWithConfig(info *execResponseStageInfo, useAccelerateEndpoint bool, cfg *Config, telemetry *snowflakeTelemetry) (cloudClient, error)\n}\n\nfunc (sfa *snowflakeFileTransferAgent) transferAccelerateConfigWithUtil(s3Util s3ClientCreator) error {\n\ts3Loc, err := s3Util.extractBucketNameAndPath(sfa.stageInfo.Location)\n\tif err != nil {\n\t\treturn err\n\t}\n\ts3Cli, err := s3Util.createClientWithConfig(sfa.stageInfo, false, sfa.sc.cfg, sfa.sc.telemetry)\n\tif err != nil {\n\t\treturn err\n\t}\n\tclient, ok := s3Cli.(s3BucketAccelerateConfigGetter)\n\tif !ok {\n\t\treturn exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber:   ErrFailedToConvertToS3Client,\n\t\t\tSQLState: sfa.data.SQLState,\n\t\t\tQueryID:  sfa.data.QueryID,\n\t\t\tMessage:  errors2.ErrMsgFailedToConvertToS3Client,\n\t\t}, sfa.sc)\n\t}\n\tret, err := withCloudStorageTimeout(sfa.ctx, sfa.sc.cfg, func(ctx context.Context) (*s3.GetBucketAccelerateConfigurationOutput, error) {\n\t\treturn client.GetBucketAccelerateConfiguration(ctx, &s3.GetBucketAccelerateConfigurationInput{\n\t\t\tBucket: &s3Loc.bucketName,\n\t\t})\n\t})\n\tsfa.useAccelerateEndpoint = ret != nil && ret.Status == \"Enabled\"\n\tif err != nil {\n\t\tlogger.WithContext(sfa.sc.ctx).Warnf(\"An error occurred when getting accelerate config: %v\", err)\n\t}\n\treturn nil\n}\n\nfunc withCloudStorageTimeout[T any](ctx context.Context, cfg 
*Config, f func(ctx context.Context) (T, error)) (T, error) {\n\tif cfg.CloudStorageTimeout > 0 {\n\t\tcancelCtx, cancelFunc := context.WithTimeout(ctx, cfg.CloudStorageTimeout)\n\t\tdefer cancelFunc()\n\t\treturn f(cancelCtx)\n\t}\n\treturn f(ctx)\n}\n\nfunc (sfa *snowflakeFileTransferAgent) transferAccelerateConfig() error {\n\tif sfa.stageLocationType == s3Client {\n\t\ts3Util := new(snowflakeS3Client)\n\t\treturn sfa.transferAccelerateConfigWithUtil(s3Util)\n\t}\n\treturn nil\n}\n\nfunc (sfa *snowflakeFileTransferAgent) getLocalFilePathFromCommand(command string) string {\n\tif len(command) == 0 || !strings.Contains(command, fileProtocol) {\n\t\treturn \"\"\n\t}\n\tif !regexp.MustCompile(putRegexp).Match([]byte(command)) {\n\t\treturn \"\"\n\t}\n\n\tfilePathBeginIdx := strings.Index(command, fileProtocol)\n\tisFilePathQuoted := command[filePathBeginIdx-1] == '\\''\n\tfilePathBeginIdx += len(fileProtocol)\n\tvar filePathEndIdx int\n\tfilePath := \"\"\n\n\tif isFilePathQuoted {\n\t\tfilePathEndIdx = filePathBeginIdx + strings.Index(command[filePathBeginIdx:], \"'\")\n\t\tif filePathEndIdx > filePathBeginIdx {\n\t\t\tfilePath = command[filePathBeginIdx:filePathEndIdx]\n\t\t}\n\t} else {\n\t\tindexList := make([]int, 0)\n\t\tdelims := []rune{' ', '\\n', ';'}\n\t\tfor _, delim := range delims {\n\t\t\tindex := strings.Index(command[filePathBeginIdx:], string(delim))\n\t\t\tif index != -1 {\n\t\t\t\tindexList = append(indexList, index)\n\t\t\t}\n\t\t}\n\t\tfilePathEndIdx = -1\n\t\tif getMin(indexList) != -1 {\n\t\t\tfilePathEndIdx = filePathBeginIdx + getMin(indexList)\n\t\t}\n\t\tif filePathEndIdx > filePathBeginIdx {\n\t\t\tfilePath = command[filePathBeginIdx:filePathEndIdx]\n\t\t} else {\n\t\t\tfilePath = command[filePathBeginIdx:]\n\t\t}\n\t}\n\treturn filePath\n}\n\nfunc (sfa *snowflakeFileTransferAgent) upload(\n\tlargeFileMetadata []*fileMetadata,\n\tsmallFileMetadata []*fileMetadata) error {\n\tclient, err := 
sfa.getStorageClient(sfa.stageLocationType).\n\t\tcreateClient(sfa.stageInfo, sfa.useAccelerateEndpoint, sfa.sc.cfg, sfa.sc.telemetry)\n\tif err != nil {\n\t\treturn err\n\t}\n\tfor _, meta := range smallFileMetadata {\n\t\tmeta.client = client\n\t}\n\tfor _, meta := range largeFileMetadata {\n\t\tmeta.client = client\n\t}\n\n\tif len(smallFileMetadata) > 0 {\n\t\tlogger.WithContext(sfa.sc.ctx).Infof(\"uploading %v small files\", len(smallFileMetadata))\n\t\tif err = sfa.uploadFilesParallel(smallFileMetadata); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\tif len(largeFileMetadata) > 0 {\n\t\tlogger.WithContext(sfa.sc.ctx).Infof(\"uploading %v large files\", len(largeFileMetadata))\n\t\tif err = sfa.uploadFilesSequential(largeFileMetadata); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (sfa *snowflakeFileTransferAgent) download(\n\tfileMetadata []*fileMetadata) error {\n\tclient, err := sfa.getStorageClient(sfa.stageLocationType).\n\t\tcreateClient(sfa.stageInfo, sfa.useAccelerateEndpoint, sfa.sc.cfg, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\tfor _, meta := range fileMetadata {\n\t\tmeta.client = client\n\t}\n\n\tlogger.WithContext(sfa.sc.ctx).Infof(\"downloading %v files\", len(fileMetadata))\n\tif err = sfa.downloadFilesParallel(fileMetadata); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc (sfa *snowflakeFileTransferAgent) uploadFilesParallel(fileMetas []*fileMetadata) error {\n\tidx := 0\n\tfileMetaLen := len(fileMetas)\n\tvar err error\n\tfor idx < fileMetaLen {\n\t\tendOfIdx := intMin(fileMetaLen, idx+int(sfa.parallel))\n\t\ttargetMeta := fileMetas[idx:endOfIdx]\n\t\tfor {\n\t\t\tvar wg sync.WaitGroup\n\t\t\tresults := make([]*fileMetadata, len(targetMeta))\n\t\t\terrors := make([]error, len(targetMeta))\n\t\t\tfor i, meta := range targetMeta {\n\t\t\t\twg.Add(1)\n\t\t\t\tgo func(k int, m *fileMetadata) {\n\t\t\t\t\tdefer wg.Done()\n\t\t\t\t\tdefer func() {\n\t\t\t\t\t\tif r := recover(); r != nil 
{\n\t\t\t\t\t\t\terrors[k] = fmt.Errorf(\"panic during file upload: %v\", r)\n\t\t\t\t\t\t\tresults[k] = nil\n\t\t\t\t\t\t}\n\t\t\t\t\t}()\n\t\t\t\t\tresults[k], errors[k] = sfa.uploadOneFile(m)\n\t\t\t\t}(i, meta)\n\t\t\t}\n\t\t\twg.Wait()\n\n\t\t\t// append errors with no result associated to separate array\n\t\t\tvar errorMessages []string\n\t\t\tfor i, result := range results {\n\t\t\t\tif result == nil {\n\t\t\t\t\tif errors[i] == nil {\n\t\t\t\t\t\terrorMessages = append(errorMessages, \"unknown error\")\n\t\t\t\t\t} else {\n\t\t\t\t\t\terrorMessages = append(errorMessages, errors[i].Error())\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif errorMessages != nil {\n\t\t\t\t// sort the error messages to be more deterministic as the goroutines may finish in different order each time\n\t\t\t\tsort.Strings(errorMessages)\n\t\t\t\treturn fmt.Errorf(\"errors during file upload:\\n%v\", strings.Join(errorMessages, \"\\n\"))\n\t\t\t}\n\n\t\t\tretryMeta := make([]*fileMetadata, 0)\n\t\t\tfor i, result := range results {\n\t\t\t\tresult.errorDetails = errors[i]\n\t\t\t\tif result.resStatus == renewToken || result.resStatus == renewPresignedURL {\n\t\t\t\t\tretryMeta = append(retryMeta, result)\n\t\t\t\t} else {\n\t\t\t\t\tsfa.results = append(sfa.results, result)\n\t\t\t\t}\n\t\t\t}\n\t\t\tif len(retryMeta) == 0 {\n\t\t\t\tbreak\n\t\t\t}\n\n\t\t\tneedRenewToken := false\n\t\t\tfor _, result := range retryMeta {\n\t\t\t\tif result.resStatus == renewToken {\n\t\t\t\t\tneedRenewToken = true\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif needRenewToken {\n\t\t\t\tclient, err := sfa.renewExpiredClient()\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tfor _, result := range retryMeta {\n\t\t\t\t\tresult.client = client\n\t\t\t\t}\n\t\t\t\tif endOfIdx < fileMetaLen {\n\t\t\t\t\tfor i := idx + int(sfa.parallel); i < fileMetaLen; i++ {\n\t\t\t\t\t\tfileMetas[i].client = client\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tfor _, result := range retryMeta {\n\t\t\t\tif result.resStatus 
== renewPresignedURL {\n\t\t\t\t\tif err = sfa.updateFileMetadataWithPresignedURL(); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\ttargetMeta = retryMeta\n\t\t}\n\t\tif endOfIdx == fileMetaLen {\n\t\t\tbreak\n\t\t}\n\t\tidx += int(sfa.parallel)\n\t}\n\treturn err\n}\n\nfunc (sfa *snowflakeFileTransferAgent) uploadFilesSequential(fileMetas []*fileMetadata) error {\n\tidx := 0\n\tfileMetaLen := len(fileMetas)\n\tfor idx < fileMetaLen {\n\t\tres, err := sfa.uploadOneFile(fileMetas[idx])\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tif res.resStatus == renewToken {\n\t\t\tclient, err := sfa.renewExpiredClient()\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tfor i := idx; i < fileMetaLen; i++ {\n\t\t\t\tfileMetas[i].client = client\n\t\t\t}\n\t\t\tcontinue\n\t\t} else if res.resStatus == renewPresignedURL {\n\t\t\tif err = sfa.updateFileMetadataWithPresignedURL(); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\tsfa.results = append(sfa.results, res)\n\t\tidx++\n\t}\n\treturn nil\n}\n\nfunc (sfa *snowflakeFileTransferAgent) uploadOneFile(meta *fileMetadata) (*fileMetadata, error) {\n\tmeta.realSrcFileName = meta.srcFileName\n\ttmpDir := \"\"\n\tif meta.fileStream == nil {\n\t\tvar err error\n\t\ttmpDir, err = os.MkdirTemp(sfa.sc.cfg.TmpDirPath, \"\")\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tmeta.tmpDir = tmpDir\n\t}\n\tdefer func() {\n\t\tif err := os.RemoveAll(tmpDir); err != nil {\n\t\t\tlogger.WithContext(sfa.sc.ctx).Warnf(\"failed to remove temp dir %v: %v\", tmpDir, err)\n\t\t}\n\t}()\n\n\tfileUtil := new(snowflakeFileUtil)\n\n\terr := compressDataIfRequired(meta, fileUtil, tmpDir)\n\tif err != nil {\n\t\treturn meta, err\n\t}\n\n\terr = updateUploadSize(meta, fileUtil)\n\tif err != nil {\n\t\treturn meta, err\n\t}\n\n\terr = encryptDataIfRequired(meta, sfa.stageLocationType)\n\tif err != nil {\n\t\treturn meta, err\n\t}\n\n\tclient := 
sfa.getStorageClient(sfa.stageLocationType)\n\tif err = client.uploadOneFileWithRetry(sfa.ctx, meta); err != nil {\n\t\treturn meta, err\n\t}\n\treturn meta, nil\n}\n\nfunc (sfa *snowflakeFileTransferAgent) downloadFilesParallel(fileMetas []*fileMetadata) error {\n\tidx := 0\n\tfileMetaLen := len(fileMetas)\n\tvar err error\n\tfor idx < fileMetaLen {\n\t\tendOfIdx := intMin(fileMetaLen, idx+int(sfa.parallel))\n\t\ttargetMeta := fileMetas[idx:endOfIdx]\n\t\tfor {\n\t\t\tvar wg sync.WaitGroup\n\t\t\tresults := make([]*fileMetadata, len(targetMeta))\n\t\t\terrors := make([]error, len(targetMeta))\n\t\t\tfor i, meta := range targetMeta {\n\t\t\t\twg.Add(1)\n\t\t\t\tgo func(k int, m *fileMetadata) {\n\t\t\t\t\tdefer wg.Done()\n\t\t\t\t\tdefer func() {\n\t\t\t\t\t\tif r := recover(); r != nil {\n\t\t\t\t\t\t\terrors[k] = fmt.Errorf(\"panic during file download: %v\", r)\n\t\t\t\t\t\t\tresults[k] = nil\n\t\t\t\t\t\t}\n\t\t\t\t\t}()\n\t\t\t\t\tresults[k], errors[k] = sfa.downloadOneFile(sfa.ctx, m)\n\t\t\t\t}(i, meta)\n\t\t\t}\n\t\t\twg.Wait()\n\n\t\t\tretryMeta := make([]*fileMetadata, 0)\n\t\t\tfor i, result := range results {\n\t\t\t\tresult.errorDetails = errors[i]\n\t\t\t\tif result.resStatus == renewToken || result.resStatus == renewPresignedURL {\n\t\t\t\t\tretryMeta = append(retryMeta, result)\n\t\t\t\t} else {\n\t\t\t\t\tsfa.results = append(sfa.results, result)\n\t\t\t\t}\n\t\t\t}\n\t\t\tif len(retryMeta) == 0 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tlogger.WithContext(sfa.sc.ctx).Infof(\"%v retries found\", len(retryMeta))\n\n\t\t\tneedRenewToken := false\n\t\t\tfor _, result := range retryMeta {\n\t\t\t\tif result.resStatus == renewToken {\n\t\t\t\t\tneedRenewToken = true\n\t\t\t\t}\n\t\t\t\tlogger.WithContext(sfa.sc.ctx).Infof(\n\t\t\t\t\t\"retrying download file %v with status %v\",\n\t\t\t\t\tresult.name, result.resStatus)\n\t\t\t}\n\n\t\t\tif needRenewToken {\n\t\t\t\tclient, err := sfa.renewExpiredClient()\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn 
err\n\t\t\t\t}\n\t\t\t\tfor _, result := range retryMeta {\n\t\t\t\t\tresult.client = client\n\t\t\t\t}\n\t\t\t\tif endOfIdx < fileMetaLen {\n\t\t\t\t\tfor i := idx + int(sfa.parallel); i < fileMetaLen; i++ {\n\t\t\t\t\t\tfileMetas[i].client = client\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tfor _, result := range retryMeta {\n\t\t\t\tif result.resStatus == renewPresignedURL {\n\t\t\t\t\tif err = sfa.updateFileMetadataWithPresignedURL(); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\ttargetMeta = retryMeta\n\t\t}\n\t\tif endOfIdx == fileMetaLen {\n\t\t\tbreak\n\t\t}\n\t\tidx += int(sfa.parallel)\n\t}\n\treturn err\n}\n\nfunc (sfa *snowflakeFileTransferAgent) downloadOneFile(ctx context.Context, meta *fileMetadata) (*fileMetadata, error) {\n\tif !isFileGetStream(ctx) {\n\t\ttmpDir, err := os.MkdirTemp(sfa.sc.cfg.TmpDirPath, \"\")\n\t\tif err != nil {\n\t\t\treturn meta, err\n\t\t}\n\t\tmeta.tmpDir = tmpDir\n\t\tdefer func() {\n\t\t\tif err = os.RemoveAll(tmpDir); err != nil {\n\t\t\t\tlogger.WithContext(sfa.sc.ctx).Warnf(\"failed to remove temp dir %v: %v\", tmpDir, err)\n\t\t\t}\n\t\t}()\n\t}\n\tclient := sfa.getStorageClient(sfa.stageLocationType)\n\tif err := client.downloadOneFile(ctx, meta); err != nil {\n\t\tmeta.dstFileSize = -1\n\t\tif !meta.resStatus.isSet() {\n\t\t\tmeta.resStatus = errStatus\n\t\t}\n\t\tmeta.errorDetails = errors.New(err.Error() + \", file=\" + meta.dstFileName)\n\t\treturn meta, err\n\t}\n\treturn meta, nil\n}\n\nfunc (sfa *snowflakeFileTransferAgent) getStorageClient(stageLocationType cloudType) storageUtil {\n\tswitch stageLocationType {\n\tcase local:\n\t\treturn &localUtil{}\n\tcase s3Client, azureClient, gcsClient:\n\t\treturn &remoteStorageUtil{\n\t\t\tcfg:       sfa.sc.cfg,\n\t\t\ttelemetry: sfa.sc.telemetry,\n\t\t}\n\tdefault:\n\t\treturn nil\n\t}\n}\n\nfunc (sfa *snowflakeFileTransferAgent) renewExpiredClient() (cloudClient, error) {\n\tdata, err := 
sfa.sc.exec(\n\t\tsfa.ctx,\n\t\tsfa.command,\n\t\tfalse,\n\t\tfalse,\n\t\tfalse,\n\t\t[]driver.NamedValue{})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tstorageClient := sfa.getStorageClient(sfa.stageLocationType)\n\treturn storageClient.createClient(&data.Data.StageInfo, sfa.useAccelerateEndpoint, sfa.sc.cfg, nil)\n}\n\nfunc (sfa *snowflakeFileTransferAgent) result() (*execResponse, error) {\n\t// inherit old response data\n\tdata := sfa.data\n\trowset := make([]fileTransferResultType, 0)\n\tif sfa.commandType == uploadCommand {\n\t\tif len(sfa.results) > 0 {\n\t\t\tfor _, meta := range sfa.results {\n\t\t\t\tvar srcCompressionType, dstCompressionType *compressionType\n\t\t\t\tif meta.srcCompressionType != nil {\n\t\t\t\t\tsrcCompressionType = meta.srcCompressionType\n\t\t\t\t} else {\n\t\t\t\t\tsrcCompressionType = &compressionType{\n\t\t\t\t\t\tname: \"NONE\",\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif meta.dstCompressionType != nil {\n\t\t\t\t\tdstCompressionType = meta.dstCompressionType\n\t\t\t\t} else {\n\t\t\t\t\tdstCompressionType = &compressionType{\n\t\t\t\t\t\tname: \"NONE\",\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\terrorDetails := meta.errorDetails\n\t\t\t\tsrcFileSize := meta.srcFileSize\n\t\t\t\tdstFileSize := meta.dstFileSize\n\t\t\t\tif errorDetails != nil {\n\t\t\t\t\treturn nil, exceptionTelemetry(&SnowflakeError{\n\t\t\t\t\t\tNumber:   ErrFailedToUploadToStage,\n\t\t\t\t\t\tSQLState: sfa.data.SQLState,\n\t\t\t\t\t\tQueryID:  sfa.data.QueryID,\n\t\t\t\t\t\tMessage:  errorDetails.Error(),\n\t\t\t\t\t}, sfa.sc)\n\t\t\t\t}\n\t\t\t\trowset = append(rowset, fileTransferResultType{\n\t\t\t\t\tmeta.name,\n\t\t\t\t\tmeta.srcFileName,\n\t\t\t\t\tmeta.dstFileName,\n\t\t\t\t\tsrcFileSize,\n\t\t\t\t\tdstFileSize,\n\t\t\t\t\tsrcCompressionType,\n\t\t\t\t\tdstCompressionType,\n\t\t\t\t\tmeta.resStatus,\n\t\t\t\t\tmeta.errorDetails,\n\t\t\t\t})\n\t\t\t}\n\t\t\tsort.Slice(rowset, func(i, j int) bool {\n\t\t\t\treturn rowset[i].srcFileName < 
rowset[j].srcFileName\n\t\t\t})\n\t\t\tccrs := make([][]*string, 0, len(rowset))\n\t\t\tfor _, rs := range rowset {\n\t\t\t\tsrcFileSize := fmt.Sprintf(\"%v\", rs.srcFileSize)\n\t\t\t\tdstFileSize := fmt.Sprintf(\"%v\", rs.dstFileSize)\n\t\t\t\tresStatus := rs.resStatus.String()\n\t\t\t\terrorStr := \"\"\n\t\t\t\tif rs.errorDetails != nil {\n\t\t\t\t\terrorStr = rs.errorDetails.Error()\n\t\t\t\t}\n\t\t\t\tccrs = append(ccrs, []*string{\n\t\t\t\t\t&rs.srcFileName,\n\t\t\t\t\t&rs.dstFileName,\n\t\t\t\t\t&srcFileSize,\n\t\t\t\t\t&dstFileSize,\n\t\t\t\t\t&rs.srcCompressionType.name,\n\t\t\t\t\t&rs.dstCompressionType.name,\n\t\t\t\t\t&resStatus,\n\t\t\t\t\t&errorStr,\n\t\t\t\t})\n\t\t\t}\n\t\t\tdata.RowSet = ccrs\n\t\t\tcc := make([]chunkRowType, len(ccrs))\n\t\t\tpopulateJSONRowSet(cc, ccrs)\n\t\t\tdata.QueryResultFormat = \"json\"\n\t\t\trt := []query.ExecResponseRowType{\n\t\t\t\t{Name: \"source\", ByteLength: 10000, Length: 10000, Type: \"TEXT\", Scale: 0, Nullable: false},\n\t\t\t\t{Name: \"target\", ByteLength: 10000, Length: 10000, Type: \"TEXT\", Scale: 0, Nullable: false},\n\t\t\t\t{Name: \"source_size\", ByteLength: 64, Length: 64, Type: \"FIXED\", Scale: 0, Nullable: false},\n\t\t\t\t{Name: \"target_size\", ByteLength: 64, Length: 64, Type: \"FIXED\", Scale: 0, Nullable: false},\n\t\t\t\t{Name: \"source_compression\", ByteLength: 10000, Length: 10000, Type: \"TEXT\", Scale: 0, Nullable: false},\n\t\t\t\t{Name: \"target_compression\", ByteLength: 10000, Length: 10000, Type: \"TEXT\", Scale: 0, Nullable: false},\n\t\t\t\t{Name: \"status\", ByteLength: 10000, Length: 10000, Type: \"TEXT\", Scale: 0, Nullable: false},\n\t\t\t\t{Name: \"message\", ByteLength: 10000, Length: 10000, Type: \"TEXT\", Scale: 0, Nullable: false},\n\t\t\t}\n\t\t\tdata.RowType = rt\n\t\t\treturn &execResponse{Data: *data, Success: true}, nil\n\t\t}\n\t} else { // DOWNLOAD\n\t\tif len(sfa.results) > 0 {\n\t\t\tfor _, meta := range sfa.results {\n\t\t\t\tdstFileSize := 
meta.dstFileSize\n\t\t\t\terrorDetails := meta.errorDetails\n\t\t\t\tif errorDetails != nil {\n\t\t\t\t\treturn nil, exceptionTelemetry(&SnowflakeError{\n\t\t\t\t\t\tNumber:   ErrFailedToDownloadFromStage,\n\t\t\t\t\t\tSQLState: sfa.data.SQLState,\n\t\t\t\t\t\tQueryID:  sfa.data.QueryID,\n\t\t\t\t\t\tMessage:  errorDetails.Error(),\n\t\t\t\t\t}, sfa.sc)\n\t\t\t\t}\n\n\t\t\t\trowset = append(rowset, fileTransferResultType{\n\t\t\t\t\t\"\", \"\", meta.dstFileName, 0, dstFileSize,\n\t\t\t\t\tnil, nil, meta.resStatus, meta.errorDetails,\n\t\t\t\t})\n\t\t\t}\n\t\t\tsort.Slice(rowset, func(i, j int) bool {\n\t\t\t\treturn rowset[i].srcFileName < rowset[j].srcFileName\n\t\t\t})\n\t\t\tccrs := make([][]*string, 0, len(rowset))\n\t\t\tfor _, rs := range rowset {\n\t\t\t\tdstFileSize := fmt.Sprintf(\"%v\", rs.dstFileSize)\n\t\t\t\tresStatus := rs.resStatus.String()\n\t\t\t\terrorStr := \"\"\n\t\t\t\tif rs.errorDetails != nil {\n\t\t\t\t\terrorStr = rs.errorDetails.Error()\n\t\t\t\t}\n\t\t\t\tccrs = append(ccrs, []*string{\n\t\t\t\t\t&rs.dstFileName,\n\t\t\t\t\t&dstFileSize,\n\t\t\t\t\t&resStatus,\n\t\t\t\t\t&errorStr,\n\t\t\t\t})\n\t\t\t}\n\t\t\tdata.RowSet = ccrs\n\t\t\tcc := make([]chunkRowType, len(ccrs))\n\t\t\tpopulateJSONRowSet(cc, ccrs)\n\t\t\tdata.QueryResultFormat = \"json\"\n\t\t\trt := []query.ExecResponseRowType{\n\t\t\t\t{Name: \"file\", ByteLength: 10000, Length: 10000, Type: \"TEXT\", Scale: 0, Nullable: false},\n\t\t\t\t{Name: \"size\", ByteLength: 64, Length: 64, Type: \"FIXED\", Scale: 0, Nullable: false},\n\t\t\t\t{Name: \"status\", ByteLength: 10000, Length: 10000, Type: \"TEXT\", Scale: 0, Nullable: false},\n\t\t\t\t{Name: \"message\", ByteLength: 10000, Length: 10000, Type: \"TEXT\", Scale: 0, Nullable: false},\n\t\t\t}\n\t\t\tdata.RowType = rt\n\t\t\treturn &execResponse{Data: *data, Success: true}, nil\n\t\t}\n\t}\n\treturn nil, exceptionTelemetry(&SnowflakeError{\n\t\tNumber:   ErrNotImplemented,\n\t\tSQLState: sfa.data.SQLState,\n\t\tQueryID:  
sfa.data.QueryID,\n\t\tMessage:  errors2.ErrMsgNotImplemented,\n\t}, sfa.sc)\n}\n\nfunc isFileTransfer(query string) bool {\n\tputRe := regexp.MustCompile(putRegexp)\n\tgetRe := regexp.MustCompile(getRegexp)\n\treturn putRe.Match([]byte(query)) || getRe.Match([]byte(query))\n}\n\ntype snowflakeProgressPercentage struct {\n\tfilename        string\n\tfileSize        float64\n\toutputStream    *io.Writer\n\tshowProgressBar bool\n\tseenSoFar       int64\n\tdone            bool\n\tstartTime       time.Time\n}\n\nfunc (spp *snowflakeProgressPercentage) call(bytesAmount int64) {\n\tif spp.outputStream != nil {\n\t\tspp.seenSoFar += bytesAmount\n\t\tpercentage := spp.percent(spp.seenSoFar, spp.fileSize)\n\t\tif !spp.done {\n\t\t\tspp.done = spp.updateProgress(spp.filename, spp.startTime, spp.fileSize, percentage, spp.outputStream, spp.showProgressBar)\n\t\t}\n\t}\n}\n\nfunc (spp *snowflakeProgressPercentage) percent(seenSoFar int64, size float64) float64 {\n\tif float64(seenSoFar) >= size || size <= 0 {\n\t\treturn 1.0\n\t}\n\treturn float64(seenSoFar) / size\n}\n\nfunc (spp *snowflakeProgressPercentage) updateProgress(filename string, startTime time.Time, totalSize float64, progress float64, outputStream *io.Writer, showProgressBar bool) bool {\n\tbarLength := 10\n\ttotalSize /= mb\n\tstatus := \"\"\n\telapsedTime := time.Since(startTime)\n\n\tvar throughput float64\n\tif elapsedTime != 0.0 {\n\t\tthroughput = totalSize / elapsedTime.Seconds()\n\t}\n\n\tif progress < 0 {\n\t\tprogress = 0\n\t\tstatus = \"Halt...\\r\\n\"\n\t}\n\tif progress >= 1 {\n\t\tstatus = fmt.Sprintf(\"Done (%.3fs, %.2fMB/s)\", elapsedTime.Seconds(), throughput)\n\t}\n\tif status == \"\" && showProgressBar {\n\t\tstatus = fmt.Sprintf(\"(%.3fs, %.2fMB/s)\", elapsedTime.Seconds(), throughput)\n\t}\n\tif status != \"\" {\n\t\tblock := int(math.Round(float64(barLength) * progress))\n\t\ttext := fmt.Sprintf(\"\\r%v(%.2fMB): [%v] %.2f%% %v \", filename, totalSize, strings.Repeat(\"#\", 
block)+strings.Repeat(\"-\", barLength-block), progress*100, status)\n\t\t_, err := (*outputStream).Write([]byte(text))\n\t\tif err != nil {\n\t\t\tlogger.Warnf(\"cannot write status of progress. %v\", err)\n\t\t}\n\t}\n\treturn progress == 1.0\n}\n\nfunc compressDataIfRequired(meta *fileMetadata, fileUtil *snowflakeFileUtil, tmpDir string) error {\n\tvar err error\n\tif meta.requireCompress {\n\t\tif meta.srcStream != nil {\n\t\t\tmeta.realSrcStream, _, err = fileUtil.compressFileWithGzipFromStream(&meta.srcStream)\n\t\t} else {\n\t\t\tmeta.realSrcFileName, _, err = fileUtil.compressFileWithGzip(meta.srcFileName, tmpDir)\n\t\t}\n\t}\n\treturn err\n}\n\nfunc updateUploadSize(meta *fileMetadata, fileUtil *snowflakeFileUtil) error {\n\tvar err error\n\tif meta.fileStream != nil {\n\t\tmeta.sha256Digest, meta.uploadSize, err = fileUtil.getDigestAndSizeForStream(meta.fileStream)\n\t} else {\n\t\tmeta.sha256Digest, meta.uploadSize, err = fileUtil.getDigestAndSizeForFile(meta.realSrcFileName)\n\t}\n\treturn err\n}\n\nfunc encryptDataIfRequired(meta *fileMetadata, ct cloudType) error {\n\tif ct != local && meta.encryptionMaterial != nil {\n\t\tvar err error\n\t\tif meta.srcStream != nil {\n\t\t\tvar encryptedStream bytes.Buffer\n\t\t\tsrcStream := cmp.Or(meta.realSrcStream, meta.srcStream)\n\t\t\tmeta.encryptMeta, err = encryptStreamCBC(meta.encryptionMaterial, srcStream, &encryptedStream, 0)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tmeta.realSrcStream = &encryptedStream\n\t\t} else {\n\t\t\tvar dataFile string\n\t\t\tmeta.encryptMeta, dataFile, err = encryptFileCBC(meta.encryptionMaterial, meta.realSrcFileName, 0, meta.tmpDir)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tmeta.realSrcFileName = dataFile\n\t\t}\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "file_transfer_agent_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/url\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/aws/aws-sdk-go-v2/service/s3\"\n\n\t\"github.com/aws/smithy-go\"\n)\n\ntype tcFilePath struct {\n\tcommand string\n\tpath    string\n}\n\nfunc TestGetBucketAccelerateConfiguration(t *testing.T) {\n\tif runningOnGithubAction() {\n\t\tt.Skip(\"Should be run against an account in AWS EU North1 region.\")\n\t}\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsfa := &snowflakeFileTransferAgent{\n\t\t\tctx:         context.Background(),\n\t\t\tsc:          sct.sc,\n\t\t\tcommandType: uploadCommand,\n\t\t\tsrcFiles:    make([]string, 0),\n\t\t\tdata: &execResponseData{\n\t\t\t\tSrcLocations: make([]string, 0),\n\t\t\t},\n\t\t}\n\t\tif err := sfa.transferAccelerateConfig(); err != nil {\n\t\t\tvar ae smithy.APIError\n\t\t\tif errors.As(err, &ae) {\n\t\t\t\tif ae.ErrorCode() == \"MethodNotAllowed\" {\n\t\t\t\t\tt.Fatalf(\"should have ignored 405 error: %v\", err)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t})\n}\n\ntype s3ClientCreatorMock struct {\n\textract func(string) (*s3Location, error)\n\tcreate  func(info *execResponseStageInfo, useAccelerateEndpoint bool, cfg *Config, telemetry *snowflakeTelemetry) (cloudClient, error)\n}\n\nfunc (mock *s3ClientCreatorMock) extractBucketNameAndPath(location string) (*s3Location, error) {\n\treturn mock.extract(location)\n}\n\nfunc (mock *s3ClientCreatorMock) createClientWithConfig(info *execResponseStageInfo, useAccelerateEndpoint bool, cfg *Config, telemetry *snowflakeTelemetry) (cloudClient, error) {\n\treturn mock.create(info, useAccelerateEndpoint, cfg, telemetry)\n}\n\ntype s3BucketAccelerateConfigGetterMock struct {\n\terr error\n}\n\nfunc (mock *s3BucketAccelerateConfigGetterMock) GetBucketAccelerateConfiguration(ctx context.Context, params *s3.GetBucketAccelerateConfigurationInput, 
optFns ...func(*s3.Options)) (*s3.GetBucketAccelerateConfigurationOutput, error) {\n\treturn nil, mock.err\n}\n\nfunc TestGetBucketAccelerateConfigurationTooManyRetries(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tbuf := &bytes.Buffer{}\n\t\tlogger.SetOutput(buf)\n\t\terr := logger.SetLogLevel(\"warn\")\n\t\tif err != nil {\n\t\t\treturn\n\t\t}\n\t\tsfa := &snowflakeFileTransferAgent{\n\t\t\tctx:         context.Background(),\n\t\t\tsc:          sct.sc,\n\t\t\tcommandType: uploadCommand,\n\t\t\tsrcFiles:    make([]string, 0),\n\t\t\tdata: &execResponseData{\n\t\t\t\tSrcLocations: make([]string, 0),\n\t\t\t},\n\t\t\tstageInfo: &execResponseStageInfo{\n\t\t\t\tLocation: \"test\",\n\t\t\t},\n\t\t}\n\t\terr = sfa.transferAccelerateConfigWithUtil(&s3ClientCreatorMock{\n\t\t\textract: func(s string) (*s3Location, error) {\n\t\t\t\treturn &s3Location{bucketName: \"test\", s3Path: \"test\"}, nil\n\t\t\t},\n\t\t\tcreate: func(info *execResponseStageInfo, useAccelerateEndpoint bool, cfg *Config, _ *snowflakeTelemetry) (cloudClient, error) {\n\t\t\t\treturn &s3BucketAccelerateConfigGetterMock{err: errors.New(\"testing\")}, nil\n\t\t\t},\n\t\t})\n\t\tassertNilE(t, err)\n\t\tassertStringContainsE(t, buf.String(), \"msg=\\\"An error occurred when getting accelerate config: testing\\\"\")\n\t})\n}\n\nfunc TestGetBucketAccelerateConfigurationFailedExtractBucketNameAndPath(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsfa := &snowflakeFileTransferAgent{\n\t\t\tctx:         context.Background(),\n\t\t\tsc:          sct.sc,\n\t\t\tcommandType: uploadCommand,\n\t\t\tsrcFiles:    make([]string, 0),\n\t\t\tdata: &execResponseData{\n\t\t\t\tSrcLocations: make([]string, 0),\n\t\t\t},\n\t\t\tstageInfo: &execResponseStageInfo{\n\t\t\t\tLocation: \"test\",\n\t\t\t},\n\t\t}\n\t\terr := sfa.transferAccelerateConfigWithUtil(&s3ClientCreatorMock{\n\t\t\textract: func(s string) (*s3Location, error) {\n\t\t\t\treturn nil, errors.New(\"failed 
extraction\")\n\t\t\t},\n\t\t})\n\t\tassertNotNilE(t, err)\n\t})\n}\n\nfunc TestGetBucketAccelerateConfigurationFailedCreateClient(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsfa := &snowflakeFileTransferAgent{\n\t\t\tctx:         context.Background(),\n\t\t\tsc:          sct.sc,\n\t\t\tcommandType: uploadCommand,\n\t\t\tsrcFiles:    make([]string, 0),\n\t\t\tdata: &execResponseData{\n\t\t\t\tSrcLocations: make([]string, 0),\n\t\t\t},\n\t\t\tstageInfo: &execResponseStageInfo{\n\t\t\t\tLocation: \"test\",\n\t\t\t},\n\t\t}\n\t\terr := sfa.transferAccelerateConfigWithUtil(&s3ClientCreatorMock{\n\t\t\textract: func(s string) (*s3Location, error) {\n\t\t\t\treturn &s3Location{bucketName: \"test\", s3Path: \"test\"}, nil\n\t\t\t},\n\t\t\tcreate: func(info *execResponseStageInfo, useAccelerateEndpoint bool, cfg *Config, _ *snowflakeTelemetry) (cloudClient, error) {\n\t\t\t\treturn nil, errors.New(\"failed creation\")\n\t\t\t},\n\t\t})\n\t\tassertNotNilE(t, err)\n\t})\n}\n\nfunc TestGetBucketAccelerateConfigurationInvalidClient(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsfa := &snowflakeFileTransferAgent{\n\t\t\tctx:         context.Background(),\n\t\t\tsc:          sct.sc,\n\t\t\tcommandType: uploadCommand,\n\t\t\tsrcFiles:    make([]string, 0),\n\t\t\tdata: &execResponseData{\n\t\t\t\tSrcLocations: make([]string, 0),\n\t\t\t},\n\t\t\tstageInfo: &execResponseStageInfo{\n\t\t\t\tLocation: \"test\",\n\t\t\t},\n\t\t}\n\t\terr := sfa.transferAccelerateConfigWithUtil(&s3ClientCreatorMock{\n\t\t\textract: func(s string) (*s3Location, error) {\n\t\t\t\treturn &s3Location{bucketName: \"test\", s3Path: \"test\"}, nil\n\t\t\t},\n\t\t\tcreate: func(info *execResponseStageInfo, useAccelerateEndpoint bool, cfg *Config, _ *snowflakeTelemetry) (cloudClient, error) {\n\t\t\t\treturn 1, nil\n\t\t\t},\n\t\t})\n\t\tassertNotNilE(t, err)\n\t})\n}\n\nfunc TestUnitDownloadWithInvalidLocalPath(t *testing.T) {\n\ttmpDir, err := os.MkdirTemp(\"\", 
\"data\")\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tdefer func() {\n\t\tassertNilF(t, os.RemoveAll(tmpDir))\n\t}()\n\ttestData := filepath.Join(tmpDir, \"data.txt\")\n\tf, err := os.Create(testData)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\t_, err = f.WriteString(\"test1,test2\\ntest3,test4\\n\")\n\tassertNilF(t, err)\n\tassertNilF(t, f.Close())\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tif _, err = dbt.exec(\"use role sysadmin\"); err != nil {\n\t\t\tt.Skip(\"snowflake admin account not accessible\")\n\t\t}\n\t\tdbt.mustExec(\"rm @~/test_get\")\n\t\tsqlText := fmt.Sprintf(\"put file://%v @~/test_get\", testData)\n\t\tsqlText = strings.ReplaceAll(sqlText, \"\\\\\", \"\\\\\\\\\")\n\t\tdbt.mustExec(sqlText)\n\n\t\tsqlText = fmt.Sprintf(\"get @~/test_get/data.txt file://%v\\\\get\", tmpDir)\n\t\tif _, err = dbt.query(sqlText); err == nil {\n\t\t\tt.Fatalf(\"should return local path not directory error.\")\n\t\t}\n\t\tdbt.mustExec(\"rm @~/test_get\")\n\t})\n}\nfunc TestUnitGetLocalFilePathFromCommand(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsfa := &snowflakeFileTransferAgent{\n\t\t\tctx:         context.Background(),\n\t\t\tsc:          sct.sc,\n\t\t\tcommandType: uploadCommand,\n\t\t\tsrcFiles:    make([]string, 0),\n\t\t\tdata: &execResponseData{\n\t\t\t\tSrcLocations: make([]string, 0),\n\t\t\t},\n\t\t}\n\t\ttestcases := []tcFilePath{\n\t\t\t{\"PUT file:///tmp/my_data_file.txt @~ overwrite=true\", \"/tmp/my_data_file.txt\"},\n\t\t\t{\"PUT 'file:///tmp/my_data_file.txt' @~ overwrite=true\", \"/tmp/my_data_file.txt\"},\n\t\t\t{\"PUT file:///tmp/sub_dir/my_data_file.txt\\n @~ overwrite=true\", \"/tmp/sub_dir/my_data_file.txt\"},\n\t\t\t{\"PUT file:///tmp/my_data_file.txt    @~ overwrite=true\", \"/tmp/my_data_file.txt\"},\n\t\t\t{\"\", \"\"},\n\t\t\t{\"PUT 'file2:///tmp/my_data_file.txt' @~ overwrite=true\", \"\"},\n\t\t}\n\t\tfor _, test := range testcases {\n\t\t\tt.Run(test.command, func(t *testing.T) {\n\t\t\t\tpath := 
sfa.getLocalFilePathFromCommand(test.command)\n\t\t\t\tassertEqualF(t, path, test.path, \"unexpected file path\")\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestUnitProcessFileCompressionType(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsfa := &snowflakeFileTransferAgent{\n\t\t\tctx:         context.Background(),\n\t\t\tsc:          sct.sc,\n\t\t\tcommandType: uploadCommand,\n\t\t\tsrcFiles:    make([]string, 0),\n\t\t}\n\t\ttestcases := []struct {\n\t\t\tsrcCompression string\n\t\t}{\n\t\t\t{\"none\"},\n\t\t\t{\"auto_detect\"},\n\t\t\t{\"gzip\"},\n\t\t}\n\n\t\tfor _, test := range testcases {\n\t\t\tt.Run(test.srcCompression, func(t *testing.T) {\n\t\t\t\tsfa.srcCompression = test.srcCompression\n\t\t\t\terr := sfa.processFileCompressionType()\n\t\t\t\tassertNilF(t, err, \"failed to process file compression\")\n\t\t\t})\n\t\t}\n\n\t\t// test invalid compression type error\n\t\tsfa.srcCompression = \"gz\"\n\t\tdata := &execResponseData{\n\t\t\tSQLState: \"S00087\",\n\t\t\tQueryID:  \"01aa2e8b-0405-ab7c-0000-53b10632f626\",\n\t\t}\n\t\tsfa.data = data\n\t\terr := sfa.processFileCompressionType()\n\t\tassertNotNilF(t, err, \"should have failed\")\n\t\tdriverErr, ok := err.(*SnowflakeError)\n\t\tif !ok {\n\t\t\tt.Fatalf(\"should be snowflake error. err: %v\", err)\n\t\t}\n\t\tif driverErr.Number != ErrCompressionNotSupported {\n\t\t\tt.Fatalf(\"unexpected error code. 
expected: %v, got: %v\", ErrCompressionNotSupported, driverErr.Number)\n\t\t}\n\t})\n}\n\nfunc TestParseCommandWithInvalidStageLocation(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsfa := &snowflakeFileTransferAgent{\n\t\t\tctx:         context.Background(),\n\t\t\tsc:          sct.sc,\n\t\t\tcommandType: uploadCommand,\n\t\t\tsrcFiles:    make([]string, 0),\n\t\t\tdata: &execResponseData{\n\t\t\t\tSrcLocations: make([]string, 0),\n\t\t\t},\n\t\t}\n\n\t\terr := sfa.parseCommand()\n\t\tif err == nil {\n\t\t\tt.Fatal(\"should have raised an error\")\n\t\t}\n\t\tdriverErr, ok := err.(*SnowflakeError)\n\t\tif !ok || driverErr.Number != ErrInvalidStageLocation {\n\t\t\tt.Fatalf(\"unexpected error code. expected: %v, got: %v\", ErrInvalidStageLocation, driverErr.Number)\n\t\t}\n\t})\n}\n\nfunc TestParseCommandEncryptionMaterialMismatchError(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tmockEncMaterial1 := snowflakeFileEncryption{\n\t\t\tQueryStageMasterKey: \"abCdEFO0upIT36dAxGsa0w==\",\n\t\t\tQueryID:             \"01abc874-0406-1bf0-0000-53b10668e056\",\n\t\t\tSMKID:               92019681909886,\n\t\t}\n\n\t\tmockEncMaterial2 := snowflakeFileEncryption{\n\t\t\tQueryStageMasterKey: \"abCdEFO0upIT36dAxGsa0w==\",\n\t\t\tQueryID:             \"01abc874-0406-1bf0-0000-53b10668e056\",\n\t\t\tSMKID:               92019681909886,\n\t\t}\n\n\t\tsfa := &snowflakeFileTransferAgent{\n\t\t\tctx:         context.Background(),\n\t\t\tsc:          sct.sc,\n\t\t\tcommandType: uploadCommand,\n\t\t\tsrcFiles:    make([]string, 0),\n\t\t\tdata: &execResponseData{\n\t\t\t\tSrcLocations: []string{\"/tmp/uploads\"},\n\t\t\t\tEncryptionMaterial: encryptionWrapper{\n\t\t\t\t\tsnowflakeFileEncryption: mockEncMaterial1,\n\t\t\t\t\tEncryptionMaterials:     []snowflakeFileEncryption{mockEncMaterial1, mockEncMaterial2},\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\n\t\terr := sfa.parseCommand()\n\t\tif err == nil {\n\t\t\tt.Fatal(\"should have raised an 
error\")\n\t\t}\n\t\tdriverErr, ok := err.(*SnowflakeError)\n\t\tassertTrueF(t, ok, fmt.Sprintf(\"should be snowflake error. err: %v\", err))\n\t\tassertEqualF(t, driverErr.Number, ErrInternalNotMatchEncryptMaterial, \"unexpected error code\")\n\t})\n}\n\nfunc TestParseCommandInvalidStorageClientException(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\ttmpDir, err := os.MkdirTemp(\"\", \"abc\")\n\t\tassertNilF(t, err)\n\t\tdefer os.RemoveAll(tmpDir)\n\t\tmockEncMaterial1 := snowflakeFileEncryption{\n\t\t\tQueryStageMasterKey: \"abCdEFO0upIT36dAxGsa0w==\",\n\t\t\tQueryID:             \"01abc874-0406-1bf0-0000-53b10668e056\",\n\t\t\tSMKID:               92019681909886,\n\t\t}\n\n\t\tsfa := &snowflakeFileTransferAgent{\n\t\t\tctx:         context.Background(),\n\t\t\tsc:          sct.sc,\n\t\t\tcommandType: uploadCommand,\n\t\t\tsrcFiles:    make([]string, 0),\n\t\t\tdata: &execResponseData{\n\t\t\t\tSrcLocations:  []string{\"/tmp/uploads\"},\n\t\t\t\tLocalLocation: tmpDir,\n\t\t\t\tEncryptionMaterial: encryptionWrapper{\n\t\t\t\t\tsnowflakeFileEncryption: mockEncMaterial1,\n\t\t\t\t\tEncryptionMaterials:     []snowflakeFileEncryption{mockEncMaterial1},\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\n\t\terr = sfa.parseCommand()\n\t\tassertNotNilF(t, err, \"should have raised an error\")\n\t\tdriverErr, ok := err.(*SnowflakeError)\n\t\tif !ok || driverErr.Number != ErrInvalidStageFs {\n\t\t\tt.Fatalf(\"unexpected error code. 
expected: %v, got: %v\", ErrInvalidStageFs, driverErr.Number)\n\t\t}\n\t})\n}\n\nfunc TestInitFileMetadataError(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsfa := &snowflakeFileTransferAgent{\n\t\t\tctx:         context.Background(),\n\t\t\tsc:          sct.sc,\n\t\t\tcommandType: uploadCommand,\n\t\t\tsrcFiles:    []string{\"fileDoesNotExist.txt\"},\n\t\t\tdata: &execResponseData{\n\t\t\t\tSQLState: \"123456\",\n\t\t\t\tQueryID:  \"01aa2e8b-0405-ab7c-0000-53b10632f626\",\n\t\t\t},\n\t\t}\n\n\t\terr := sfa.initFileMetadata()\n\t\tassertNotNilF(t, err, \"should have raised an error\")\n\n\t\tdriverErr, ok := err.(*SnowflakeError)\n\t\tassertTrueF(t, ok, fmt.Sprintf(\"should be snowflake error. err: %v\", err))\n\t\tassertEqualF(t, driverErr.Number, ErrFileNotExists, \"unexpected error code\")\n\n\t\ttmpDir, err := os.MkdirTemp(\"\", \"data\")\n\t\tassertNilF(t, err)\n\t\tdefer os.RemoveAll(tmpDir)\n\t\tsfa.srcFiles = []string{tmpDir}\n\n\t\terr = sfa.initFileMetadata()\n\t\tassertNotNilF(t, err, \"should have raised an error\")\n\n\t\tdriverErr, ok = err.(*SnowflakeError)\n\t\tif !ok || driverErr.Number != ErrFileNotExists {\n\t\t\tt.Fatalf(\"unexpected error code. 
expected: %v, got: %v\", ErrFileNotExists, driverErr.Number)\n\t\t}\n\t})\n}\n\nfunc TestUpdateMetadataWithPresignedUrl(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tinfo := execResponseStageInfo{\n\t\t\tLocation:     \"gcs-blob/storage/users/456/\",\n\t\t\tLocationType: \"GCS\",\n\t\t}\n\n\t\tdir, err := os.Getwd()\n\t\tif err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\n\t\ttestURL := \"https://storage.google.com/gcs-blob/storage/users/456?Signature=testsignature123\"\n\n\t\tpresignedURLMock := func(_ context.Context, _ *snowflakeRestful,\n\t\t\t_ *url.Values, _ map[string]string, _ []byte, _ time.Duration,\n\t\t\trequestID UUID, _ *Config) (*execResponse, error) {\n\t\t\t// ensure the same requestID from context is used\n\t\t\tif len(requestID) == 0 {\n\t\t\t\tt.Fatal(\"requestID is empty\")\n\t\t\t}\n\t\t\tdd := &execResponseData{\n\t\t\t\tQueryID: \"01aa2e8b-0405-ab7c-0000-53b10632f626\",\n\t\t\t\tCommand: string(uploadCommand),\n\t\t\t\tStageInfo: execResponseStageInfo{\n\t\t\t\t\tLocationType: \"GCS\",\n\t\t\t\t\tLocation:     \"gcspuscentral1-4506459564-stage/users/456\",\n\t\t\t\t\tPath:         \"users/456\",\n\t\t\t\t\tRegion:       \"US_CENTRAL1\",\n\t\t\t\t\tPresignedURL: testURL,\n\t\t\t\t},\n\t\t\t}\n\t\t\treturn &execResponse{\n\t\t\t\tData:    *dd,\n\t\t\t\tMessage: \"\",\n\t\t\t\tCode:    \"0\",\n\t\t\t\tSuccess: true,\n\t\t\t}, nil\n\t\t}\n\n\t\tgcsCli, err := new(snowflakeGcsClient).createClient(&info, false, &snowflakeTelemetry{})\n\t\tif err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t\tuploadMeta := fileMetadata{\n\t\t\tname:              \"data1.txt.gz\",\n\t\t\tstageLocationType: \"GCS\",\n\t\t\tnoSleepingTime:    true,\n\t\t\tclient:            gcsCli,\n\t\t\tsha256Digest:      \"123456789abcdef\",\n\t\t\tstageInfo:         &info,\n\t\t\tdstFileName:       \"data1.txt.gz\",\n\t\t\tsrcFileName:       path.Join(dir, \"/test_data/data1.txt\"),\n\t\t\toverwrite:         true,\n\t\t\toptions: 
&SnowflakeFileTransferOptions{\n\t\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t\t},\n\t\t}\n\n\t\tsct.sc.rest.FuncPostQuery = presignedURLMock\n\t\tsfa := &snowflakeFileTransferAgent{\n\t\t\tctx:               context.Background(),\n\t\t\tsc:                sct.sc,\n\t\t\tcommandType:       uploadCommand,\n\t\t\tcommand:           \"put file:///tmp/test_data/data1.txt @~\",\n\t\t\tstageLocationType: gcsClient,\n\t\t\tfileMetadata:      []*fileMetadata{&uploadMeta},\n\t\t}\n\n\t\terr = sfa.updateFileMetadataWithPresignedURL()\n\t\tassertNilF(t, err)\n\t\tassertEqualF(t, sfa.fileMetadata[0].presignedURL.String(), testURL, \"failed to update metadata with presigned url\")\n\t})\n}\n\nfunc TestUpdateMetadataWithPresignedUrlForDownload(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tinfo := execResponseStageInfo{\n\t\t\tLocation:     \"gcs-blob/storage/users/456/\",\n\t\t\tLocationType: \"GCS\",\n\t\t}\n\n\t\tdir, err := os.Getwd()\n\t\tassertNilF(t, err)\n\n\t\ttestURL := \"https://storage.google.com/gcs-blob/storage/users/456?Signature=testsignature123\"\n\n\t\tgcsCli, err := new(snowflakeGcsClient).createClient(&info, false, &snowflakeTelemetry{})\n\t\tassertNilF(t, err)\n\t\tdownloadMeta := fileMetadata{\n\t\t\tname:              \"data1.txt.gz\",\n\t\t\tstageLocationType: \"GCS\",\n\t\t\tnoSleepingTime:    true,\n\t\t\tclient:            gcsCli,\n\t\t\tstageInfo:         &info,\n\t\t\tdstFileName:       \"data1.txt.gz\",\n\t\t\toverwrite:         true,\n\t\t\tsrcFileName:       \"data1.txt.gz\",\n\t\t\tlocalLocation:     dir,\n\t\t}\n\n\t\tsfa := &snowflakeFileTransferAgent{\n\t\t\tctx:               context.Background(),\n\t\t\tsc:                sct.sc,\n\t\t\tcommandType:       downloadCommand,\n\t\t\tcommand:           \"get @~/data1.txt.gz file:///tmp/testData\",\n\t\t\tstageLocationType: gcsClient,\n\t\t\tfileMetadata:      []*fileMetadata{&downloadMeta},\n\t\t\tpresignedURLs:     []string{testURL},\n\t\t}\n\n\t\terr = sfa.updateFileMetadataWithPresignedURL()\n\t\tassertNilF(t, err)\n\t\tassertEqualF(t, sfa.fileMetadata[0].presignedURL.String(), testURL, \"failed to update metadata with presigned url\")\n\t})\n}\n\nfunc TestUpdateMetadataWithPresignedUrlError(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsfa := &snowflakeFileTransferAgent{\n\t\t\tctx:               context.Background(),\n\t\t\tsc:                sct.sc,\n\t\t\tcommand:           \"get @~/data1.txt.gz file:///tmp/testData\",\n\t\t\tstageLocationType: gcsClient,\n\t\t\tdata: &execResponseData{\n\t\t\t\tSQLState: \"123456\",\n\t\t\t\tQueryID:  \"01aa2e8b-0405-ab7c-0000-53b10632f626\",\n\t\t\t},\n\t\t}\n\n\t\terr := sfa.updateFileMetadataWithPresignedURL()\n\t\tassertNotNilF(t, err, \"should have raised an error\")\n\t\tdriverErr, ok := err.(*SnowflakeError)\n\t\tif !ok || driverErr.Number != ErrCommandNotRecognized {\n\t\t\tt.Fatalf(\"unexpected error code. 
expected: %v, got: %v\", ErrCommandNotRecognized, driverErr.Number)\n\t\t}\n\t})\n}\n\nfunc TestUpdateMetadataSkipsSecondQueryWithGcsDownscopedToken(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs-blob/storage/users/456/\",\n\t\tLocationType: \"GCS\",\n\t\tCreds: execResponseCredentials{\n\t\t\tGcsAccessToken: \"ya29.downscoped-token-test\",\n\t\t},\n\t}\n\n\tdir, err := os.Getwd()\n\tassertNilF(t, err, fmt.Sprintf(\"os.Getwd was unsuccessful, error: %v\", err))\n\n\tpostQueryCalled := false\n\tpresignedURLMock := func(_ context.Context, _ *snowflakeRestful,\n\t\t_ *url.Values, _ map[string]string, _ []byte, _ time.Duration,\n\t\t_ UUID, _ *Config) (*execResponse, error) {\n\t\tpostQueryCalled = true\n\t\tt.Fatal(\"FuncPostQuery should not be called when a downscoped token is present\")\n\t\treturn nil, nil\n\t}\n\n\tgcsCli, err := new(snowflakeGcsClient).createClient(&info, false, &snowflakeTelemetry{})\n\tassertNilF(t, err, fmt.Sprintf(\"could not create gcsCli, error: %v\", err))\n\n\tuploadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"GCS\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            gcsCli,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\tsrcFileName:       filepath.Join(dir, \"test_data\", \"data1.txt\"),\n\t\toverwrite:         true,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t}\n\n\tsr := &snowflakeRestful{\n\t\tFuncPostQuery: presignedURLMock,\n\t}\n\tsfa := &snowflakeFileTransferAgent{\n\t\tctx: context.Background(),\n\t\tsc: &snowflakeConn{\n\t\t\tcfg:  &Config{},\n\t\t\trest: sr,\n\t\t},\n\t\tcommandType:       uploadCommand,\n\t\tcommand:           \"put file:///tmp/test_data/data1.txt @~\",\n\t\tstageLocationType: gcsClient,\n\t\tstageInfo:         &info,\n\t\tfileMetadata:      []*fileMetadata{&uploadMeta},\n\t}\n\n\terr = 
sfa.updateFileMetadataWithPresignedURL()\n\tassertNilF(t, err, fmt.Sprintf(\"unexpected error in updateFileMetadataWithPresignedURL, error: %v\", err))\n\tassertFalseF(t, postQueryCalled, \"should not have issued a second query when downscoped token is available\")\n\tassertEqualF(t, uploadMeta.stageInfo, &info, \"stageInfo on metadata should remain unchanged\")\n}\n\nfunc TestUpdateMetadataStillQueriesWithPresignedUrlOnGcs(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs-blob/storage/users/456/\",\n\t\tLocationType: \"GCS\",\n\t}\n\n\tdir, err := os.Getwd()\n\tassertNilF(t, err, fmt.Sprintf(\"os.Getwd was unsuccessful, error: %v\", err))\n\n\ttestURL := \"https://storage.google.com/gcs-blob/storage/users/456?Signature=testsignature456\"\n\n\tpostQueryCalled := false\n\tpresignedURLMock := func(_ context.Context, _ *snowflakeRestful,\n\t\t_ *url.Values, _ map[string]string, _ []byte, _ time.Duration,\n\t\t_ UUID, _ *Config) (*execResponse, error) {\n\t\tpostQueryCalled = true\n\t\tdd := &execResponseData{\n\t\t\tQueryID: \"01aa2e8b-0405-ab7c-0000-53b10632f626\",\n\t\t\tCommand: string(uploadCommand),\n\t\t\tStageInfo: execResponseStageInfo{\n\t\t\t\tLocationType: \"GCS\",\n\t\t\t\tLocation:     \"gcspuscentral1-4506459564-stage/users/456\",\n\t\t\t\tPath:         \"users/456\",\n\t\t\t\tRegion:       \"US_CENTRAL1\",\n\t\t\t\tPresignedURL: testURL,\n\t\t\t},\n\t\t}\n\t\treturn &execResponse{\n\t\t\tData:    *dd,\n\t\t\tMessage: \"\",\n\t\t\tCode:    \"0\",\n\t\t\tSuccess: true,\n\t\t}, nil\n\t}\n\n\tgcsCli, err := new(snowflakeGcsClient).createClient(&info, false, &snowflakeTelemetry{})\n\tassertNilF(t, err, fmt.Sprintf(\"could not create gcsCli, error: %v\", err))\n\n\tuploadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"GCS\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            gcsCli,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo:         &info,\n\t\tdstFileName:       
\"data1.txt.gz\",\n\t\tsrcFileName:       filepath.Join(dir, \"test_data\", \"data1.txt\"),\n\t\toverwrite:         true,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t}\n\n\tsr := &snowflakeRestful{\n\t\tFuncPostQuery: presignedURLMock,\n\t}\n\tsfa := &snowflakeFileTransferAgent{\n\t\tctx: context.Background(),\n\t\tsc: &snowflakeConn{\n\t\t\tcfg:  &Config{},\n\t\t\trest: sr,\n\t\t},\n\t\tcommandType:       uploadCommand,\n\t\tcommand:           \"put file:///tmp/test_data/data1.txt @~\",\n\t\tstageLocationType: gcsClient,\n\t\tstageInfo: &execResponseStageInfo{\n\t\t\tLocation:     \"gcs-blob/storage/users/456/\",\n\t\t\tLocationType: \"GCS\",\n\t\t},\n\t\tfileMetadata: []*fileMetadata{&uploadMeta},\n\t}\n\n\terr = sfa.updateFileMetadataWithPresignedURL()\n\tassertNilF(t, err, fmt.Sprintf(\"unexpected error in updateFileMetadataWithPresignedURL: %v\", err))\n\tassertTrueF(t, postQueryCalled, \"FuncPostQuery should have been called for presigned URL flow (no downscoped token)\")\n\tassertNotNilF(t, uploadMeta.presignedURL, \"presignedURL should have been set on metadata\")\n\tassertEqualF(t, testURL, uploadMeta.presignedURL.String(), fmt.Sprintf(\"presigned URL %v does not match testUrl %v\", uploadMeta.presignedURL.String(), testURL))\n}\n\nfunc TestUploadWhenFilesystemReadOnlyError(t *testing.T) {\n\tif isWindows {\n\t\tt.Skip(\"permission model is different\")\n\t}\n\n\troPath := t.TempDir()\n\n\t// Set the temp directory to read only\n\terr := os.Chmod(roPath, 0444)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs-blob/storage/users/456/\",\n\t\tLocationType: \"GCS\",\n\t}\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\t// Make sure that the test uses read only directory\n\tt.Setenv(\"TMPDIR\", roPath)\n\n\tuploadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: 
\"GCS\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            gcsClient,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\tsrcFileName:       path.Join(dir, \"/test_data/data1.txt\"),\n\t\toverwrite:         true,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t}\n\n\tsfa := &snowflakeFileTransferAgent{\n\t\tctx: context.Background(),\n\t\tsc: &snowflakeConn{\n\t\t\tcfg: &Config{},\n\t\t},\n\t\tcommandType:       uploadCommand,\n\t\tcommand:           \"put file:///tmp/test_data/data1.txt @~\",\n\t\tstageLocationType: gcsClient,\n\t\tfileMetadata:      []*fileMetadata{&uploadMeta},\n\t\tparallel:          1,\n\t}\n\n\terr = sfa.uploadFilesParallel([]*fileMetadata{&uploadMeta})\n\tif err == nil {\n\t\tt.Fatal(\"should error when the filesystem is read only\")\n\t}\n\tif !strings.Contains(err.Error(), \"errors during file upload:\\nmkdir\") {\n\t\tt.Fatalf(\"should error when creating the temporary directory. 
Instead errored with: %v\", err)\n\t}\n}\n\nfunc TestUploadWhenErrorWithResultIsReturned(t *testing.T) {\n\tif isWindows {\n\t\tt.Skip(\"permission model is different\")\n\t}\n\n\tdir, err := os.Getwd()\n\tassertNilF(t, err)\n\terr = createWriteonlyFile(path.Join(dir, \"test_data\"), \"writeonly.csv\")\n\tassertNilF(t, err)\n\n\tuploadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"GCS\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            local,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo: &execResponseStageInfo{\n\t\t\tLocation:     dir,\n\t\t\tLocationType: \"local\",\n\t\t},\n\t\tdstFileName: \"data1.txt.gz\",\n\t\tsrcFileName: path.Join(dir, \"test_data/writeonly.csv\"),\n\t\toverwrite:   true,\n\t}\n\n\tsfa := &snowflakeFileTransferAgent{\n\t\tctx: context.Background(),\n\t\tsc: &snowflakeConn{\n\t\t\tcfg: &Config{\n\t\t\t\tTmpDirPath: dir,\n\t\t\t},\n\t\t},\n\t\tdata: &execResponseData{\n\t\t\tSrcLocations:      []string{path.Join(dir, \"/test_data/writeonly.csv\")},\n\t\t\tCommand:           \"UPLOAD\",\n\t\t\tSourceCompression: \"none\",\n\t\t\tStageInfo: execResponseStageInfo{\n\t\t\t\tLocationType: \"LOCAL_FS\",\n\t\t\t\tLocation:     dir,\n\t\t\t},\n\t\t},\n\t\tcommandType:       uploadCommand,\n\t\tcommand:           fmt.Sprintf(\"put file://%v/test_data/data1.txt @~\", dir),\n\t\tstageLocationType: local,\n\t\tfileMetadata:      []*fileMetadata{&uploadMeta},\n\t\tparallel:          1,\n\t}\n\n\terr = sfa.execute()\n\tassertNilF(t, err) // execute should not propagate errors, they should be returned by sfa.result only\n\t_, err = sfa.result()\n\tassertNotNilE(t, err)\n}\n\nfunc createWriteonlyFile(dir, filename string) error {\n\tfilePath := path.Join(dir, filename)\n\tif _, err := os.Stat(filePath); errors.Is(err, os.ErrNotExist) {\n\t\tf, err := os.Create(filePath)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif err := f.Close(); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\tif err := os.Chmod(filePath, 0222); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc TestUnitUpdateProgress(t *testing.T) {\n\tvar b bytes.Buffer\n\tbuf := io.Writer(&b)\n\t_, err := buf.Write([]byte(\"testing\"))\n\tassertNilF(t, err)\n\n\tspp := &snowflakeProgressPercentage{\n\t\tfilename:        \"test.txt\",\n\t\tfileSize:        float64(1500),\n\t\toutputStream:    &buf,\n\t\tshowProgressBar: true,\n\t\tdone:            false,\n\t}\n\n\tspp.call(0)\n\tassertFalseF(t, spp.done, \"should not be done\")\n\tassertTrueF(t, spp.seenSoFar == 0, fmt.Sprintf(\"expected seenSoFar to be 0 but was %v\", spp.seenSoFar))\n\n\tspp.call(1516)\n\tassertTrueF(t, spp.done, \"should be done after updating progress\")\n}\n\nfunc TestCustomTmpDirPath(t *testing.T) {\n\ttmpDir, err := os.MkdirTemp(\"\", \"\")\n\tassertNilF(t, err, fmt.Sprintf(\"cannot create temp directory: %v\", err))\n\tdefer func() {\n\t\tassertNilF(t, os.RemoveAll(tmpDir))\n\t}()\n\tuploadFile := filepath.Join(tmpDir, \"data.txt\")\n\tf, err := os.Create(uploadFile)\n\tassertNilF(t, err)\n\t_, err = f.WriteString(\"test1,test2\\ntest3,test4\\n\")\n\tassertNilF(t, err)\n\tassertNilF(t, f.Close())\n\n\tuploadMeta := &fileMetadata{\n\t\tname:              \"data.txt.gz\",\n\t\tstageLocationType: \"local\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            local,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo: &execResponseStageInfo{\n\t\t\tLocation:     tmpDir,\n\t\t\tLocationType: \"local\",\n\t\t},\n\t\tdstFileName: \"data.txt.gz\",\n\t\tsrcFileName: uploadFile,\n\t\toverwrite:   true,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t}\n\n\tdownloadFile := filepath.Join(tmpDir, \"download.txt\")\n\tdownloadMeta := &fileMetadata{\n\t\tname:              \"data.txt.gz\",\n\t\tstageLocationType: \"local\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            local,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo: &execResponseStageInfo{\n\t\t\tLocation: 
    tmpDir,\n\t\t\tLocationType: \"local\",\n\t\t},\n\t\tsrcFileName: \"data.txt.gz\",\n\t\tdstFileName: downloadFile,\n\t\toverwrite:   true,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t}\n\n\tsfa := snowflakeFileTransferAgent{\n\t\tctx: context.Background(),\n\t\tsc: &snowflakeConn{\n\t\t\tcfg: &Config{\n\t\t\t\tTmpDirPath: tmpDir,\n\t\t\t},\n\t\t},\n\t\tstageLocationType: local,\n\t}\n\t_, err = sfa.uploadOneFile(uploadMeta)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\t_, err = sfa.downloadOneFile(context.Background(), downloadMeta)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer os.Remove(\"download.txt\")\n}\n\nfunc TestReadonlyTmpDirPathShouldFail(t *testing.T) {\n\tif isWindows {\n\t\tt.Skip(\"permission model is different\")\n\t}\n\ttmpDir, err := os.MkdirTemp(\"\", \"\")\n\tif err != nil {\n\t\tt.Fatalf(\"cannot create temp directory: %v\", err)\n\t}\n\tdefer func() {\n\t\tassertNilF(t, os.RemoveAll(tmpDir))\n\t}()\n\n\tuploadFile := filepath.Join(tmpDir, \"data.txt\")\n\tf, err := os.Create(uploadFile)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\t_, err = f.WriteString(\"test1,test2\\ntest3,test4\\n\")\n\tassertNilF(t, err)\n\tassertNilF(t, f.Close())\n\n\terr = os.Chmod(tmpDir, 0500)\n\tif err != nil {\n\t\tt.Fatalf(\"cannot mark directory as readonly: %v\", err)\n\t}\n\tdefer func() {\n\t\tassertNilF(t, os.Chmod(tmpDir, 0700))\n\t}()\n\n\tuploadMeta := &fileMetadata{\n\t\tname:              \"data.txt.gz\",\n\t\tstageLocationType: \"local\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            local,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo: &execResponseStageInfo{\n\t\t\tLocation:     tmpDir,\n\t\t\tLocationType: \"local\",\n\t\t},\n\t\tdstFileName: \"data.txt.gz\",\n\t\tsrcFileName: uploadFile,\n\t\toverwrite:   true,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t}\n\n\tsfa := snowflakeFileTransferAgent{\n\t\tctx: 
context.Background(),\n\t\tsc: &snowflakeConn{\n\t\t\tcfg: &Config{\n\t\t\t\tTmpDirPath: tmpDir,\n\t\t\t},\n\t\t},\n\t\tstageLocationType: local,\n\t}\n\t_, err = sfa.uploadOneFile(uploadMeta)\n\tassertNotNilF(t, err, \"should not upload file as the temporary directory is not writable\")\n}\n\nfunc TestUploadDownloadOneFileRequireCompress(t *testing.T) {\n\ttestUploadDownloadOneFile(t, false)\n}\n\nfunc TestUploadDownloadOneFileRequireCompressStream(t *testing.T) {\n\ttestUploadDownloadOneFile(t, true)\n}\n\nfunc testUploadDownloadOneFile(t *testing.T, isStream bool) {\n\ttmpDir, err := os.MkdirTemp(\"\", \"data\")\n\tassertNilF(t, err, fmt.Sprintf(\"cannot create temp directory: %v\", err))\n\tdefer os.RemoveAll(tmpDir)\n\tuploadFile := filepath.Join(tmpDir, \"data.txt\")\n\tf, err := os.Create(uploadFile)\n\tassertNilF(t, err)\n\t_, err = f.WriteString(\"test1,test2\\ntest3,test4\\n\")\n\tassertNilF(t, err)\n\tassertNilF(t, f.Close())\n\n\tuploadMeta := &fileMetadata{\n\t\tname:              \"data.txt.gz\",\n\t\tstageLocationType: \"local\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            local,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo: &execResponseStageInfo{\n\t\t\tLocation:     tmpDir,\n\t\t\tLocationType: \"local\",\n\t\t},\n\t\tdstFileName: \"data.txt.gz\",\n\t\tsrcFileName: uploadFile,\n\t\toverwrite:   true,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\trequireCompress: true,\n\t}\n\n\tdownloadFile := filepath.Join(tmpDir, \"download.txt\")\n\tdownloadMeta := &fileMetadata{\n\t\tname:              \"data.txt.gz\",\n\t\tstageLocationType: \"local\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            local,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo: &execResponseStageInfo{\n\t\t\tLocation:     tmpDir,\n\t\t\tLocationType: \"local\",\n\t\t},\n\t\tsrcFileName: \"data.txt.gz\",\n\t\tdstFileName: downloadFile,\n\t\toverwrite:  
 true,\n\t\tparallel:    int64(10),\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t}\n\n\tsfa := snowflakeFileTransferAgent{\n\t\tctx: context.Background(),\n\t\tsc: &snowflakeConn{\n\t\t\tcfg: &Config{\n\t\t\t\tTmpDirPath: tmpDir,\n\t\t\t},\n\t\t},\n\t\tstageLocationType: local,\n\t}\n\n\tif isStream {\n\t\tfileStream, err := os.Open(uploadFile)\n\t\tassertNilF(t, err)\n\t\tctx := WithFilePutStream(context.Background(), fileStream)\n\t\tuploadMeta.fileStream, err = getFileStream(ctx)\n\t\tassertNilF(t, err)\n\t}\n\n\t_, err = sfa.uploadOneFile(uploadMeta)\n\tassertNilF(t, err)\n\tassertEqualF(t, uploadMeta.resStatus, uploaded, \"failed to upload file\")\n\n\t_, err = sfa.downloadOneFile(context.Background(), downloadMeta)\n\tassertNilF(t, err)\n\tdefer func() {\n\t\tassertNilF(t, os.Remove(\"download.txt\"))\n\t}()\n\tassertEqualF(t, downloadMeta.resStatus, downloaded, \"failed to download file\")\n}\n\nfunc TestPutGetRegexShouldIgnoreWhitespaceAtTheBeginning(t *testing.T) {\n\tfor _, test := range []struct {\n\t\tregex string\n\t\tquery string\n\t}{\n\t\t{\n\t\t\tregex: putRegexp,\n\t\t\tquery: \"PUT abc\",\n\t\t},\n\t\t{\n\t\t\tregex: putRegexp,\n\t\t\tquery: \"   PUT abc\",\n\t\t},\n\t\t{\n\t\t\tregex: putRegexp,\n\t\t\tquery: \"\\tPUT abc\",\n\t\t},\n\t\t{\n\t\t\tregex: putRegexp,\n\t\t\tquery: \"\\nPUT abc\",\n\t\t},\n\t\t{\n\t\t\tregex: putRegexp,\n\t\t\tquery: \"\\r\\nPUT abc\",\n\t\t},\n\t\t{\n\t\t\tregex: getRegexp,\n\t\t\tquery: \"GET abc\",\n\t\t},\n\t\t{\n\t\t\tregex: getRegexp,\n\t\t\tquery: \"   GET abc\",\n\t\t},\n\t\t{\n\t\t\tregex: getRegexp,\n\t\t\tquery: \"\\tGET abc\",\n\t\t},\n\t\t{\n\t\t\tregex: getRegexp,\n\t\t\tquery: \"\\nGET abc\",\n\t\t},\n\t\t{\n\t\t\tregex: getRegexp,\n\t\t\tquery: \"\\r\\nGET abc\",\n\t\t},\n\t} {\n\t\t{\n\t\t\tt.Run(test.regex+\" \"+test.query, func(t *testing.T) {\n\t\t\t\tregex := 
regexp.MustCompile(test.regex)\n\t\t\t\tassertTrueE(t, regex.Match([]byte(test.query)))\n\t\t\t\tassertFalseE(t, regex.Match([]byte(\"prefix \"+test.query)))\n\t\t\t})\n\t\t}\n\t}\n}\n\nfunc TestEncryptStream(t *testing.T) {\n\tsrcBytes := []byte{63, 64, 65}\n\tinitStr := bytes.NewBuffer(srcBytes)\n\n\tfor _, tc := range []struct {\n\t\tct            cloudType\n\t\tencrypt       bool\n\t\trealSrcStream bool\n\t\tencryptMat    bool\n\t}{\n\t\t{\n\t\t\tct:            s3Client,\n\t\t\tencrypt:       true,\n\t\t\trealSrcStream: true,\n\t\t\tencryptMat:    true,\n\t\t},\n\t\t{\n\t\t\tct:            s3Client,\n\t\t\tencrypt:       true,\n\t\t\trealSrcStream: false,\n\t\t\tencryptMat:    true,\n\t\t},\n\t\t{\n\t\t\tct:            s3Client,\n\t\t\tencrypt:       false,\n\t\t\trealSrcStream: false,\n\t\t\tencryptMat:    false,\n\t\t},\n\t\t{\n\t\t\tct:            azureClient,\n\t\t\tencrypt:       true,\n\t\t\trealSrcStream: true,\n\t\t\tencryptMat:    true,\n\t\t},\n\t\t{\n\t\t\tct:            azureClient,\n\t\t\tencrypt:       true,\n\t\t\trealSrcStream: false,\n\t\t\tencryptMat:    true,\n\t\t},\n\t\t{\n\t\t\tct:            azureClient,\n\t\t\tencrypt:       false,\n\t\t\trealSrcStream: false,\n\t\t\tencryptMat:    false,\n\t\t},\n\t\t{\n\t\t\tct:            gcsClient,\n\t\t\tencrypt:       true,\n\t\t\trealSrcStream: true,\n\t\t\tencryptMat:    true,\n\t\t},\n\t\t{\n\t\t\tct:            gcsClient,\n\t\t\tencrypt:       true,\n\t\t\trealSrcStream: false,\n\t\t\tencryptMat:    true,\n\t\t},\n\t\t{\n\t\t\tct:            gcsClient,\n\t\t\tencrypt:       false,\n\t\t\trealSrcStream: false,\n\t\t\tencryptMat:    false,\n\t\t},\n\t\t{\n\t\t\tct:            local,\n\t\t\tencrypt:       false,\n\t\t\trealSrcStream: true,\n\t\t\tencryptMat:    true,\n\t\t},\n\t\t{\n\t\t\tct:            local,\n\t\t\tencrypt:       false,\n\t\t\trealSrcStream: true,\n\t\t\tencryptMat:    false,\n\t\t},\n\t\t{\n\t\t\tct:            local,\n\t\t\tencrypt:       false,\n\t\t\trealSrcStream: 
false,\n\t\t\tencryptMat:    true,\n\t\t},\n\t\t{\n\t\t\tct:            local,\n\t\t\tencrypt:       false,\n\t\t\trealSrcStream: false,\n\t\t\tencryptMat:    false,\n\t\t},\n\t} {\n\t\t{\n\t\t\tvar encMat *snowflakeFileEncryption = nil\n\t\t\tif tc.encryptMat {\n\t\t\t\tencMat = &snowflakeFileEncryption{\n\t\t\t\t\tQueryStageMasterKey: \"abCdEFO0upIT36dAxGsa0w==\",\n\t\t\t\t\tQueryID:             \"01abc874-0406-1bf0-0000-53b10668e056\",\n\t\t\t\t\tSMKID:               92019681909886,\n\t\t\t\t}\n\t\t\t}\n\t\t\tvar realSrcStr *bytes.Buffer = nil\n\t\t\tif tc.realSrcStream {\n\t\t\t\trealSrcStr = initStr\n\t\t\t}\n\t\t\tuploadMeta := fileMetadata{\n\t\t\t\tname:               \"data1.txt.gz\",\n\t\t\t\tstageLocationType:  tc.ct,\n\t\t\t\tnoSleepingTime:     true,\n\t\t\t\tparallel:           int64(100),\n\t\t\t\tclient:             nil,\n\t\t\t\tsha256Digest:       \"123456789abcdef\",\n\t\t\t\tstageInfo:          nil,\n\t\t\t\tdstFileName:        \"data1.txt.gz\",\n\t\t\t\tsrcStream:          initStr,\n\t\t\t\trealSrcStream:      realSrcStr,\n\t\t\t\toverwrite:          true,\n\t\t\t\toptions:            nil,\n\t\t\t\tencryptionMaterial: encMat,\n\t\t\t\tmockUploader:       nil,\n\t\t\t\tsfa:                nil,\n\t\t\t}\n\n\t\t\tt.Run(string(tc.ct)+\" encrypt \"+strconv.FormatBool(tc.encrypt)+\" realSrcStream \"+strconv.FormatBool(tc.realSrcStream)+\" encryptMat \"+strconv.FormatBool(tc.encryptMat), func(t *testing.T) {\n\t\t\t\terr := encryptDataIfRequired(&uploadMeta, tc.ct)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tif tc.encrypt {\n\t\t\t\t\tassertNotNilF(t, uploadMeta.encryptMeta, \"encryption metadata should be present\")\n\t\t\t\t\tif tc.realSrcStream {\n\t\t\t\t\t\tassertNotEqualF(t, uploadMeta.realSrcStream, realSrcStr, \"stream should be encrypted\")\n\t\t\t\t\t} else {\n\t\t\t\t\t\tassertNotEqualF(t, uploadMeta.realSrcStream, initStr, \"stream should not be encrypted\")\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tassertNilF(t, uploadMeta.encryptMeta, 
\"encryption metadata should be empty\")\n\t\t\t\t\tassertEqualF(t, uploadMeta.realSrcStream, realSrcStr, \"stream should not be encrypted\")\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t}\n}\n\nfunc TestEncryptFile(t *testing.T) {\n\tfor _, tc := range []struct {\n\t\tct         cloudType\n\t\tencrypt    bool\n\t\tencryptMat bool\n\t}{\n\t\t{\n\t\t\tct:         s3Client,\n\t\t\tencrypt:    true,\n\t\t\tencryptMat: true,\n\t\t},\n\t\t{\n\t\t\tct:         s3Client,\n\t\t\tencrypt:    false,\n\t\t\tencryptMat: false,\n\t\t},\n\t\t{\n\t\t\tct:         azureClient,\n\t\t\tencrypt:    true,\n\t\t\tencryptMat: true,\n\t\t},\n\t\t{\n\t\t\tct:         azureClient,\n\t\t\tencrypt:    false,\n\t\t\tencryptMat: false,\n\t\t},\n\t\t{\n\t\t\tct:         gcsClient,\n\t\t\tencrypt:    true,\n\t\t\tencryptMat: true,\n\t\t},\n\t\t{\n\t\t\tct:         gcsClient,\n\t\t\tencrypt:    false,\n\t\t\tencryptMat: false,\n\t\t},\n\t\t{\n\t\t\tct:         local,\n\t\t\tencrypt:    false,\n\t\t\tencryptMat: true,\n\t\t},\n\t\t{\n\t\t\tct:         local,\n\t\t\tencrypt:    false,\n\t\t\tencryptMat: false,\n\t\t},\n\t} {\n\t\tdir, err := os.Getwd()\n\t\tsrcF := path.Join(dir, \"/test_data/put_get_1.txt\")\n\t\tassertNilF(t, err, \"error getting current directory\")\n\n\t\tvar encMat *snowflakeFileEncryption = nil\n\t\tif tc.encryptMat {\n\t\t\tencMat = &snowflakeFileEncryption{\n\t\t\t\tQueryStageMasterKey: \"abCdEFO0upIT36dAxGsa0w==\",\n\t\t\t\tQueryID:             \"01abc874-0406-1bf0-0000-53b10668e056\",\n\t\t\t\tSMKID:               92019681909886,\n\t\t\t}\n\t\t}\n\n\t\tuploadMeta := fileMetadata{\n\t\t\tname:               \"data1.txt.gz\",\n\t\t\tstageLocationType:  tc.ct,\n\t\t\tnoSleepingTime:     true,\n\t\t\tparallel:           int64(100),\n\t\t\tclient:             nil,\n\t\t\tsha256Digest:       \"123456789abcdef\",\n\t\t\tstageInfo:          nil,\n\t\t\tdstFileName:        \"data1.txt.gz\",\n\t\t\tsrcFileName:        srcF,\n\t\t\trealSrcFileName:    srcF,\n\t\t\toverwrite:          
true,\n\t\t\toptions:            nil,\n\t\t\tencryptionMaterial: encMat,\n\t\t\tmockUploader:       nil,\n\t\t\tsfa:                nil,\n\t\t}\n\n\t\tt.Run(string(tc.ct)+\" encrypt \"+strconv.FormatBool(tc.encrypt)+\" encryptMat \"+strconv.FormatBool(tc.encryptMat), func(t *testing.T) {\n\t\t\terr := encryptDataIfRequired(&uploadMeta, tc.ct)\n\t\t\tassertNilF(t, err)\n\t\t\tif tc.encrypt {\n\t\t\t\tassertNotNilF(t, uploadMeta.encryptMeta, \"encryption metadata should be present\")\n\t\t\t\tassertNotEqualF(t, uploadMeta.realSrcFileName, srcF, \"file should be encrypted\")\n\t\t\t\tsrcBytes, err := os.ReadFile(srcF)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tencBytes, err := os.ReadFile(uploadMeta.realSrcFileName)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertFalseF(t, bytes.Equal(srcBytes, encBytes), \"file contents should differ\")\n\t\t\t} else {\n\t\t\t\tassertNilF(t, uploadMeta.encryptMeta, \"encryption metadata should be empty\")\n\t\t\t\tassertEqualF(t, uploadMeta.realSrcFileName, srcF, \"file should not be encrypted\")\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "file_util.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"compress/gzip\"\n\t\"crypto/sha256\"\n\t\"encoding/base64\"\n\t\"io\"\n\t\"net/url\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n)\n\ntype snowflakeFileUtil struct {\n}\n\nconst (\n\tfileChunkSize                 = 16 * 4 * 1024\n\treadWriteFileMode os.FileMode = 0666\n)\n\nfunc (util *snowflakeFileUtil) compressFileWithGzipFromStream(srcStream **bytes.Buffer) (*bytes.Buffer, int, error) {\n\tr := getReaderFromBuffer(srcStream)\n\tbuf, err := io.ReadAll(r)\n\tif err != nil {\n\t\treturn nil, -1, err\n\t}\n\tvar c bytes.Buffer\n\tw := gzip.NewWriter(&c)\n\tif _, err := w.Write(buf); err != nil { // write buf to gzip writer\n\t\treturn nil, -1, err\n\t}\n\tif err := w.Close(); err != nil {\n\t\treturn nil, -1, err\n\t}\n\treturn &c, c.Len(), nil\n}\n\nfunc (util *snowflakeFileUtil) compressFileWithGzip(fileName string, tmpDir string) (gzipFileName string, size int64, err error) {\n\tbasename := baseName(fileName)\n\tgzipFileName = filepath.Join(tmpDir, basename+\"_c.gz\")\n\n\tfr, err := os.Open(fileName)\n\tif err != nil {\n\t\treturn \"\", -1, err\n\t}\n\tdefer func() {\n\t\tif tmpErr := fr.Close(); tmpErr != nil {\n\t\t\terr = tmpErr\n\t\t}\n\t}()\n\tfw, err := os.OpenFile(gzipFileName, os.O_WRONLY|os.O_CREATE, readWriteFileMode)\n\tif err != nil {\n\t\treturn \"\", -1, err\n\t}\n\tgzw := gzip.NewWriter(fw)\n\tif _, err = io.Copy(gzw, fr); err != nil {\n\t\t_ = gzw.Close()\n\t\t_ = fw.Close()\n\t\treturn \"\", -1, err\n\t}\n\t// close the gzip writer and the output file before stat so the gzip footer is flushed and the handle is not leaked\n\tif err = gzw.Close(); err != nil {\n\t\t_ = fw.Close()\n\t\treturn \"\", -1, err\n\t}\n\tif err = fw.Close(); err != nil {\n\t\treturn \"\", -1, err\n\t}\n\n\tstat, err := os.Stat(gzipFileName)\n\tif err != nil {\n\t\treturn \"\", -1, err\n\t}\n\treturn gzipFileName, stat.Size(), err\n}\n\nfunc (util *snowflakeFileUtil) getDigestAndSizeForStream(stream io.Reader) (string, int64, error) {\n\tm := sha256.New()\n\tchunk := make([]byte, fileChunkSize)\n\tvar total int64\n\n\tfor {\n\t\tn, err := stream.Read(chunk)\n\t\tif err == io.EOF {\n\t\t\tbreak\n\t\t} else if err != 
nil {\n\t\t\treturn \"\", 0, err\n\t\t}\n\t\ttotal += int64(n)\n\t\tm.Write(chunk[:n])\n\t}\n\treturn base64.StdEncoding.EncodeToString(m.Sum(nil)), total, nil\n}\n\nfunc (util *snowflakeFileUtil) getDigestAndSizeForFile(fileName string) (digest string, size int64, err error) {\n\tf, err := os.Open(fileName)\n\tif err != nil {\n\t\treturn \"\", 0, err\n\t}\n\tdefer func() {\n\t\tif tmpErr := f.Close(); tmpErr != nil {\n\t\t\terr = tmpErr\n\t\t}\n\t}()\n\n\tvar total int64\n\tm := sha256.New()\n\tchunk := make([]byte, fileChunkSize)\n\n\tfor {\n\t\tn, err := f.Read(chunk)\n\t\tif err == io.EOF {\n\t\t\tbreak\n\t\t} else if err != nil {\n\t\t\treturn \"\", 0, err\n\t\t}\n\t\ttotal += int64(n)\n\t\tm.Write(chunk[:n])\n\t}\n\tif _, err = f.Seek(0, io.SeekStart); err != nil {\n\t\treturn \"\", -1, err\n\t}\n\treturn base64.StdEncoding.EncodeToString(m.Sum(nil)), total, err\n}\n\n// file metadata for PUT/GET\ntype fileMetadata struct {\n\tname               string\n\tsfa                *snowflakeFileTransferAgent\n\tstageLocationType  cloudType\n\tresStatus          resultStatus\n\tstageInfo          *execResponseStageInfo\n\tencryptionMaterial *snowflakeFileEncryption\n\tencryptMeta        *encryptMetadata\n\n\tsrcFileName        string\n\trealSrcFileName    string\n\tsrcFileSize        int64\n\tsrcCompressionType *compressionType\n\tuploadSize         int64\n\tdstFileSize        int64\n\tdstFileName        string\n\tdstCompressionType *compressionType\n\n\tclient             cloudClient // *s3.Client (S3), *azblob.ContainerURL (Azure), string (GCS)\n\trequireCompress    bool\n\tparallel           int64\n\tsha256Digest       string\n\toverwrite          bool\n\ttmpDir             string\n\terrorDetails       error\n\tlastError          error\n\tnoSleepingTime     bool\n\tlastMaxConcurrency int\n\tlocalLocation      string\n\toptions            *SnowflakeFileTransferOptions\n\n\t/* streaming PUT */\n\tfileStream    io.Reader\n\tsrcStream     
*bytes.Buffer\n\trealSrcStream *bytes.Buffer\n\n\t/* streaming GET */\n\tdstStream *bytes.Buffer\n\n\t/* GCS */\n\tpresignedURL                *url.URL\n\tgcsFileHeaderDigest         string\n\tgcsFileHeaderContentLength  int64\n\tgcsFileHeaderEncryptionMeta *encryptMetadata\n\n\t/* mock */\n\tmockUploader    s3UploadAPI\n\tmockDownloader  s3DownloadAPI\n\tmockHeader      s3HeaderAPI\n\tmockGcsClient   gcsAPI\n\tmockAzureClient azureAPI\n}\n\ntype fileTransferResultType struct {\n\tname               string\n\tsrcFileName        string\n\tdstFileName        string\n\tsrcFileSize        int64\n\tdstFileSize        int64\n\tsrcCompressionType *compressionType\n\tdstCompressionType *compressionType\n\tresStatus          resultStatus\n\terrorDetails       error\n}\n\ntype fileHeader struct {\n\tdigest             string\n\tcontentLength      int64\n\tencryptionMetadata *encryptMetadata\n}\n\nfunc getReaderFromBuffer(src **bytes.Buffer) io.Reader {\n\tvar b bytes.Buffer\n\ttee := io.TeeReader(*src, &b) // read src to buf\n\t*src = &b                     // revert pointer back\n\treturn tee\n}\n\n// baseName returns the pathname of the path provided\nfunc baseName(path string) string {\n\tbase := filepath.Base(path)\n\tif base == \".\" || base == \"/\" {\n\t\treturn \"\"\n\t}\n\tif len(base) > 1 && (path[len(path)-1:] == \".\" || path[len(path)-1:] == \"/\") {\n\t\treturn \"\"\n\t}\n\treturn base\n}\n\n// expandUser returns the argument with an initial component of ~\nfunc expandUser(path string) (string, error) {\n\tif !strings.HasPrefix(path, \"~\") {\n\t\treturn path, nil\n\t}\n\thomeDir, err := os.UserHomeDir()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tif path == \"~\" {\n\t\tpath = homeDir\n\t} else if strings.HasPrefix(path, \"~/\") {\n\t\tpath = filepath.Join(homeDir, path[2:])\n\t}\n\treturn path, nil\n}\n\n// getDirectory retrieves the current working directory\nfunc getDirectory() (string, error) {\n\tex, err := os.Executable()\n\tif err != nil 
{\n\t\treturn \"\", err\n\t}\n\treturn filepath.Dir(ex), nil\n}\n"
  },
  {
    "path": "file_util_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"os/user\"\n\t\"path/filepath\"\n\t\"testing\"\n)\n\nfunc TestGetDigestAndSizeForInvalidDir(t *testing.T) {\n\tfileUtil := new(snowflakeFileUtil)\n\tdigest, size, err := fileUtil.getDigestAndSizeForFile(\"/home/file.txt\")\n\tassertEqualF(t, digest, \"\", \"digest should be empty\")\n\tassertEqualF(t, size, int64(0), \"size should be 0\")\n\tassertNotNilF(t, err, \"should have failed\")\n}\n\ntype tcBaseName struct {\n\tin  string\n\tout string\n}\n\nfunc TestBaseName(t *testing.T) {\n\ttestcases := []tcBaseName{\n\t\t{\"/tmp\", \"tmp\"},\n\t\t{\"/home/desktop/.\", \"\"},\n\t\t{\"/home/desktop/..\", \"\"},\n\t}\n\n\tfor _, test := range testcases {\n\t\tt.Run(test.in, func(t *testing.T) {\n\t\t\tbase := baseName(test.in)\n\t\t\tassertEqualE(t, base, test.out, \"base name should match\")\n\t\t})\n\t}\n}\n\nfunc TestExpandUser(t *testing.T) {\n\tskipOnMissingHome(t)\n\tusr, err := user.Current()\n\tassertNilF(t, err)\n\thomeDir := usr.HomeDir\n\texpanded, err := expandUser(\"~\")\n\tassertNilF(t, err)\n\tassertEqualF(t, expanded, homeDir, \"failed to expand ~\")\n\n\texpanded, err = expandUser(\"~/storage\")\n\tassertNilF(t, err)\n\tassertEqualF(t, expanded, filepath.Join(homeDir, \"storage\"), \"failed to expand ~/storage\")\n}\n"
  },
  {
    "path": "function_wrapper_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"sync\"\n\t\"testing\"\n)\n\nfunc TestGoWrapper(t *testing.T) {\n\tvar (\n\t\tgoWrapperCalled          = false\n\t\ttestGoRoutineWrapperLock sync.Mutex\n\t)\n\n\tsetGoWrapperCalled := func(value bool) {\n\t\ttestGoRoutineWrapperLock.Lock()\n\t\tdefer testGoRoutineWrapperLock.Unlock()\n\t\tgoWrapperCalled = value\n\t}\n\tgetGoWrapperCalled := func() bool {\n\t\ttestGoRoutineWrapperLock.Lock()\n\t\tdefer testGoRoutineWrapperLock.Unlock()\n\t\treturn goWrapperCalled\n\t}\n\n\t// this is the goroutine wrapper function we are going to pass into GoroutineWrapper.\n\t// we will know that it has been called if the goWrapperCalled flag is set\n\tvar markGoWrapperCalled = func(_ context.Context, f func()) {\n\t\tsetGoWrapperCalled(true)\n\t\tf()\n\t}\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\toldGoroutineWrapper := GoroutineWrapper\n\t\tt.Cleanup(func() {\n\t\t\tGoroutineWrapper = oldGoroutineWrapper\n\t\t})\n\n\t\tGoroutineWrapper = markGoWrapperCalled\n\n\t\tctx := WithAsyncMode(context.Background())\n\t\trows := dbt.mustQueryContext(ctx, \"SELECT 1\")\n\t\tassertTrueE(t, rows.Next())\n\t\tvar i int\n\t\tassertNilF(t, rows.Scan(&i))\n\t\trows.Close()\n\n\t\tassertTrueF(t, getGoWrapperCalled(), \"goWrapperCalled flag should be set, indicating our wrapper was invoked\")\n\t})\n}\n"
  },
  {
    "path": "function_wrappers.go",
    "content": "package gosnowflake\n\nimport \"context\"\n\n// GoroutineWrapperFunc is used to wrap goroutines. This is useful if the caller wants\n// to recover panics, rather than letting panics cause a system crash. A suggestion would be to\n// use the recover functionality and log the panic in whatever way is most useful to you\ntype GoroutineWrapperFunc func(ctx context.Context, f func())\n\n// The default GoroutineWrapperFunc; this does nothing. With this default wrapper,\n// panics will take down the binary as expected\nvar noopGoroutineWrapper = func(_ context.Context, f func()) {\n\tf()\n}\n\n// GoroutineWrapper is used to hold the GoroutineWrapperFunc set by the client, or to\n// store the default goroutine wrapper which does nothing\nvar GoroutineWrapper GoroutineWrapperFunc = noopGoroutineWrapper\n"
  },
  {
    "path": "gcs_storage_client.go",
    "content": "package gosnowflake\n\nimport (\n\t\"cmp\"\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"strconv\"\n\t\"strings\"\n)\n\nconst (\n\tgcsMetadataPrefix             = \"x-goog-meta-\"\n\tgcsMetadataSfcDigest          = gcsMetadataPrefix + sfcDigest\n\tgcsMetadataMatdescKey         = gcsMetadataPrefix + \"matdesc\"\n\tgcsMetadataEncryptionDataProp = gcsMetadataPrefix + \"encryptiondata\"\n\tgcsFileHeaderDigest           = \"gcs-file-header-digest\"\n\tgcsRegionMeCentral2           = \"me-central2\"\n\tminimumDownloadPartSize       = 1024 * 1024 * 5 // 5MB\n)\n\ntype snowflakeGcsClient struct {\n\tcfg       *Config\n\ttelemetry *snowflakeTelemetry\n}\n\ntype gcsLocation struct {\n\tbucketName string\n\tpath       string\n}\n\nfunc (util *snowflakeGcsClient) createClient(info *execResponseStageInfo, _ bool, telemetry *snowflakeTelemetry) (cloudClient, error) {\n\tif info.Creds.GcsAccessToken != \"\" {\n\t\tlogger.Debug(\"Using GCS downscoped token\")\n\t\treturn info.Creds.GcsAccessToken, nil\n\t}\n\tlogger.Debugf(\"No access token received from GS, using presigned URL: %s\", info.PresignedURL)\n\treturn \"\", nil\n}\n\n// cloudUtil implementation\nfunc (util *snowflakeGcsClient) getFileHeader(ctx context.Context, meta *fileMetadata, filename string) (*fileHeader, error) {\n\tif meta.resStatus == uploaded || meta.resStatus == downloaded {\n\t\treturn &fileHeader{\n\t\t\tdigest:             meta.gcsFileHeaderDigest,\n\t\t\tcontentLength:      meta.gcsFileHeaderContentLength,\n\t\t\tencryptionMetadata: meta.gcsFileHeaderEncryptionMeta,\n\t\t}, nil\n\t}\n\tif meta.presignedURL != nil {\n\t\tmeta.resStatus = notFoundFile\n\t} else {\n\t\tURL, err := util.generateFileURL(meta.stageInfo, strings.TrimLeft(filename, \"/\"))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\taccessToken, ok := meta.client.(string)\n\t\tif !ok {\n\t\t\treturn nil, fmt.Errorf(\"interface conversion. 
expected type string but got %T\", meta.client)\n\t\t}\n\t\tgcsHeaders := map[string]string{\n\t\t\t\"Authorization\": \"Bearer \" + accessToken,\n\t\t}\n\n\t\tresp, err := withCloudStorageTimeout(ctx, util.cfg, func(ctx context.Context) (*http.Response, error) {\n\t\t\treq, err := http.NewRequestWithContext(ctx, \"HEAD\", URL.String(), nil)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tfor k, v := range gcsHeaders {\n\t\t\t\treq.Header.Add(k, v)\n\t\t\t}\n\t\t\tclient, err := newGcsClient(util.cfg, util.telemetry)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\t// for testing only\n\t\t\tif meta.mockGcsClient != nil {\n\t\t\t\tclient = meta.mockGcsClient\n\t\t\t}\n\t\t\tresp, err := client.Do(req)\n\t\t\tif err != nil && strings.HasSuffix(err.Error(), \"EOF\") {\n\t\t\t\tlogger.Debug(\"Retrying HEAD request because of EOF\")\n\t\t\t\tresp, err = client.Do(req)\n\t\t\t}\n\t\t\treturn resp, err\n\t\t})\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tdefer func() {\n\t\t\tif resp.Body != nil {\n\t\t\t\tif err := resp.Body.Close(); err != nil {\n\t\t\t\t\tlogger.Warnf(\"failed to close response body: %v\", err)\n\t\t\t\t}\n\t\t\t}\n\t\t}()\n\t\tif resp.StatusCode != http.StatusOK {\n\t\t\tmeta.lastError = fmt.Errorf(\"%v\", resp.Status)\n\t\t\tmeta.resStatus = errStatus\n\t\t\tif resp.StatusCode == 403 || resp.StatusCode == 408 || resp.StatusCode == 429 || resp.StatusCode == 500 || resp.StatusCode == 503 {\n\t\t\t\tmeta.lastError = fmt.Errorf(\"%v\", resp.Status)\n\t\t\t\tmeta.resStatus = needRetry\n\t\t\t\treturn nil, meta.lastError\n\t\t\t}\n\t\t\tif resp.StatusCode == 404 {\n\t\t\t\tmeta.resStatus = notFoundFile\n\t\t\t} else if util.isTokenExpired(resp) {\n\t\t\t\tmeta.lastError = fmt.Errorf(\"%v\", resp.Status)\n\t\t\t\tmeta.resStatus = renewToken\n\t\t\t}\n\t\t\treturn nil, meta.lastError\n\t\t}\n\n\t\tdigest := resp.Header.Get(gcsMetadataSfcDigest)\n\t\tcontentLength, err := 
strconv.Atoi(resp.Header.Get(\"content-length\"))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tvar encryptionMeta *encryptMetadata\n\t\tif resp.Header.Get(gcsMetadataEncryptionDataProp) != \"\" {\n\t\t\tvar encryptData *encryptionData\n\t\t\terr := json.Unmarshal([]byte(resp.Header.Get(gcsMetadataEncryptionDataProp)), &encryptData)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"cannot unmarshal encryption data: %v\", err)\n\t\t\t}\n\t\t\tif encryptData != nil {\n\t\t\t\tencryptionMeta = &encryptMetadata{\n\t\t\t\t\tkey: encryptData.WrappedContentKey.EncryptionKey,\n\t\t\t\t\tiv:  encryptData.ContentEncryptionIV,\n\t\t\t\t}\n\t\t\t\tif resp.Header.Get(gcsMetadataMatdescKey) != \"\" {\n\t\t\t\t\tencryptionMeta.matdesc = resp.Header.Get(gcsMetadataMatdescKey)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tmeta.resStatus = uploaded\n\t\treturn &fileHeader{\n\t\t\tdigest:             digest,\n\t\t\tcontentLength:      int64(contentLength),\n\t\t\tencryptionMetadata: encryptionMeta,\n\t\t}, nil\n\t}\n\treturn nil, nil\n}\n\ntype gcsAPI interface {\n\tDo(req *http.Request) (*http.Response, error)\n}\n\n// cloudUtil implementation\nfunc (util *snowflakeGcsClient) uploadFile(\n\tctx context.Context,\n\tdataFile string,\n\tmeta *fileMetadata,\n\tmaxConcurrency int,\n\tmultiPartThreshold int64) error {\n\tuploadURL := meta.presignedURL\n\tvar accessToken string\n\tvar err error\n\n\tif uploadURL == nil {\n\t\tuploadURL, err = util.generateFileURL(meta.stageInfo, strings.TrimLeft(meta.dstFileName, \"/\"))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tvar ok bool\n\t\taccessToken, ok = meta.client.(string)\n\t\tif !ok {\n\t\t\treturn fmt.Errorf(\"interface conversion. 
expected type string but got %T\", meta.client)\n\t\t}\n\t}\n\n\tvar contentEncoding string\n\tif meta.dstCompressionType != nil {\n\t\tcontentEncoding = strings.ToLower(meta.dstCompressionType.name)\n\t}\n\n\tif contentEncoding == \"gzip\" {\n\t\tcontentEncoding = \"\"\n\t}\n\n\tgcsHeaders := make(map[string]string)\n\tgcsHeaders[httpHeaderContentEncoding] = contentEncoding\n\tgcsHeaders[gcsMetadataSfcDigest] = meta.sha256Digest\n\tif accessToken != \"\" {\n\t\tgcsHeaders[\"Authorization\"] = \"Bearer \" + accessToken\n\t}\n\n\tif meta.encryptMeta != nil {\n\t\tencryptData := encryptionData{\n\t\t\t\"FullBlob\",\n\t\t\tcontentKey{\n\t\t\t\t\"symmKey1\",\n\t\t\t\tmeta.encryptMeta.key,\n\t\t\t\t\"AES_CBC_256\",\n\t\t\t},\n\t\t\tencryptionAgent{\n\t\t\t\t\"1.0\",\n\t\t\t\t\"AES_CBC_256\",\n\t\t\t},\n\t\t\tmeta.encryptMeta.iv,\n\t\t\tkeyMetadata{\n\t\t\t\t\"Java 5.3.0\",\n\t\t\t},\n\t\t}\n\t\tb, err := json.Marshal(&encryptData)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tgcsHeaders[gcsMetadataEncryptionDataProp] = string(b)\n\t\tgcsHeaders[gcsMetadataMatdescKey] = meta.encryptMeta.matdesc\n\t}\n\n\tvar uploadSrc io.Reader\n\tif meta.srcStream != nil {\n\t\tuploadSrc = meta.srcStream\n\t\tif meta.realSrcStream != nil {\n\t\t\tuploadSrc = meta.realSrcStream\n\t\t}\n\t} else {\n\t\tvar err error\n\t\tuploadSrc, err = os.Open(dataFile)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tdefer func(src io.Closer) {\n\t\t\tif err := src.Close(); err != nil {\n\t\t\t\tlogger.Warnf(\"failed to close %v file: %v\", dataFile, err)\n\t\t\t}\n\t\t}(uploadSrc.(io.Closer))\n\t}\n\n\tresp, err := withCloudStorageTimeout(ctx, util.cfg, func(ctx context.Context) (*http.Response, error) {\n\t\treq, err := http.NewRequestWithContext(ctx, \"PUT\", uploadURL.String(), uploadSrc)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tfor k, v := range gcsHeaders {\n\t\t\treq.Header.Add(k, v)\n\t\t}\n\t\tclient, err := newGcsClient(util.cfg, util.telemetry)\n\t\tif err != nil 
{\n\t\t\treturn nil, err\n\t\t}\n\t\t// for testing only\n\t\tif meta.mockGcsClient != nil {\n\t\t\tclient = meta.mockGcsClient\n\t\t}\n\t\treturn client.Do(req)\n\t})\n\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer func() {\n\t\tif resp.Body != nil {\n\t\t\tif err := resp.Body.Close(); err != nil {\n\t\t\t\tlogger.Warnf(\"failed to close response body: %v\", err)\n\t\t\t}\n\t\t}\n\t}()\n\tif resp.StatusCode != http.StatusOK {\n\t\tif resp.StatusCode == 403 || resp.StatusCode == 408 || resp.StatusCode == 429 || resp.StatusCode == 500 || resp.StatusCode == 503 {\n\t\t\tmeta.lastError = fmt.Errorf(\"%v\", resp.Status)\n\t\t\tmeta.resStatus = needRetry\n\t\t} else if accessToken == \"\" && resp.StatusCode == 400 && meta.lastError == nil {\n\t\t\tmeta.lastError = fmt.Errorf(\"%v\", resp.Status)\n\t\t\tmeta.resStatus = renewPresignedURL\n\t\t} else if accessToken != \"\" && util.isTokenExpired(resp) {\n\t\t\tmeta.lastError = fmt.Errorf(\"%v\", resp.Status)\n\t\t\tmeta.resStatus = renewToken\n\t\t} else {\n\t\t\tmeta.lastError = fmt.Errorf(\"%v\", resp.Status)\n\t\t}\n\t\treturn meta.lastError\n\t}\n\n\tif meta.options.putCallback != nil {\n\t\tmeta.options.putCallback = &snowflakeProgressPercentage{\n\t\t\tfilename:        dataFile,\n\t\t\tfileSize:        float64(meta.srcFileSize),\n\t\t\toutputStream:    meta.options.putCallbackOutputStream,\n\t\t\tshowProgressBar: meta.options.showProgressBar,\n\t\t}\n\t}\n\n\tmeta.dstFileSize = meta.uploadSize\n\tmeta.resStatus = uploaded\n\n\tmeta.gcsFileHeaderDigest = gcsHeaders[gcsFileHeaderDigest]\n\tmeta.gcsFileHeaderContentLength = meta.uploadSize\n\tif err = json.Unmarshal([]byte(gcsHeaders[gcsMetadataEncryptionDataProp]), &meta.encryptMeta); err != nil {\n\t\treturn err\n\t}\n\tmeta.gcsFileHeaderEncryptionMeta = meta.encryptMeta\n\treturn nil\n}\n\n// cloudUtil implementation\nfunc (util *snowflakeGcsClient) nativeDownloadFile(\n\tctx context.Context,\n\tmeta *fileMetadata,\n\tfullDstFileName string,\n\tmaxConcurrency 
int64,\n\tpartSize int64) error {\n\tpartSize = int64Max(partSize, minimumDownloadPartSize)\n\tdownloadURL := meta.presignedURL\n\tvar accessToken string\n\tvar err error\n\tgcsHeaders := make(map[string]string)\n\n\tif downloadURL == nil || downloadURL.String() == \"\" {\n\t\tdownloadURL, err = util.generateFileURL(meta.stageInfo, strings.TrimLeft(meta.srcFileName, \"/\"))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tvar ok bool\n\t\taccessToken, ok = meta.client.(string)\n\t\tif !ok {\n\t\t\treturn fmt.Errorf(\"interface conversion. expected type string but got %T\", meta.client)\n\t\t}\n\t\tif accessToken != \"\" {\n\t\t\tgcsHeaders[\"Authorization\"] = \"Bearer \" + accessToken\n\t\t}\n\t}\n\tlogger.Debugf(\"GCS Client: Send Get Request to %v\", downloadURL.String())\n\n\t// First, get file size with a HEAD request to determine if multi-part download is needed\n\t// Also extract metadata during this request\n\tfileHeader, err := util.getFileHeaderForDownload(ctx, downloadURL, gcsHeaders, accessToken, meta)\n\tif err != nil {\n\t\treturn err\n\t}\n\tfileSize := fileHeader.ContentLength\n\n\t// Use multi-part download when the file is larger than partSize and maxConcurrency > 1\n\tif fileSize > partSize && maxConcurrency > 1 {\n\t\terr = util.downloadFileInParts(ctx, downloadURL, gcsHeaders, accessToken, meta, fullDstFileName, fileSize, maxConcurrency, partSize)\n\t} else {\n\t\t// Fall back to single-part download for smaller files\n\t\terr = util.downloadFileSinglePart(ctx, downloadURL, gcsHeaders, accessToken, meta, fullDstFileName)\n\t}\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvar encryptMeta encryptMetadata\n\tif fileHeader.Header.Get(gcsMetadataEncryptionDataProp) != \"\" {\n\t\tvar encryptData *encryptionData\n\t\tif err = json.Unmarshal([]byte(fileHeader.Header.Get(gcsMetadataEncryptionDataProp)), &encryptData); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif encryptData != nil {\n\t\t\tencryptMeta = 
encryptMetadata{\n\t\t\t\tencryptData.WrappedContentKey.EncryptionKey,\n\t\t\t\tencryptData.ContentEncryptionIV,\n\t\t\t\t\"\",\n\t\t\t}\n\t\t\tif key := fileHeader.Header.Get(gcsMetadataMatdescKey); key != \"\" {\n\t\t\t\tencryptMeta.matdesc = key\n\t\t\t}\n\t\t}\n\t}\n\tmeta.resStatus = downloaded\n\tmeta.gcsFileHeaderDigest = fileHeader.Header.Get(gcsMetadataSfcDigest)\n\tmeta.gcsFileHeaderContentLength = fileSize\n\tmeta.gcsFileHeaderEncryptionMeta = &encryptMeta\n\treturn nil\n}\n\n// getFileHeaderForDownload gets the file header using a HEAD request\nfunc (util *snowflakeGcsClient) getFileHeaderForDownload(ctx context.Context, downloadURL *url.URL, gcsHeaders map[string]string, accessToken string, meta *fileMetadata) (*http.Response, error) {\n\tresp, err := withCloudStorageTimeout(ctx, util.cfg, func(ctx context.Context) (*http.Response, error) {\n\t\treq, err := http.NewRequestWithContext(ctx, \"HEAD\", downloadURL.String(), nil)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tfor k, v := range gcsHeaders {\n\t\t\treq.Header.Add(k, v)\n\t\t}\n\t\tclient, err := newGcsClient(util.cfg, util.telemetry)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\t// for testing only\n\t\tif meta.mockGcsClient != nil {\n\t\t\tclient = meta.mockGcsClient\n\t\t}\n\t\treturn client.Do(req)\n\t})\n\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer func() {\n\t\tif resp.Body != nil {\n\t\t\tif err := resp.Body.Close(); err != nil {\n\t\t\t\tlogger.Warnf(\"Failed to close response body: %v\", err)\n\t\t\t}\n\t\t}\n\t}()\n\n\tif resp.StatusCode != http.StatusOK {\n\t\treturn nil, util.handleHTTPError(resp, meta, accessToken)\n\t}\n\n\treturn resp, nil\n}\n\n// downloadPart is a struct for downloading a part of a file in memory\ntype downloadPart struct {\n\tdata  []byte\n\tindex int64\n\terr   error\n}\n\n// downloadPartStream is a struct for downloading a part of a file in a stream\ntype downloadPartStream struct {\n\tstream io.ReadCloser\n\tindex  int64\n\terr 
   error\n}\n\ntype downloadJob struct {\n\tindex int64\n\tstart int64\n\tend   int64\n}\n\nfunc (util *snowflakeGcsClient) downloadFileInParts(\n\tctx context.Context,\n\tdownloadURL *url.URL,\n\tgcsHeaders map[string]string,\n\taccessToken string,\n\tmeta *fileMetadata,\n\tfullDstFileName string,\n\tfileSize int64,\n\tmaxConcurrency int64,\n\tpartSize int64) error {\n\n\t// Calculate number of parts based on desired part size\n\tnumParts := (fileSize + partSize - 1) / partSize\n\n\t// For streaming, use batched approach to avoid buffering all parts in memory\n\tif isFileGetStream(ctx) {\n\t\treturn util.downloadInPartsForStream(ctx, downloadURL, gcsHeaders, accessToken, meta, fileSize, numParts, maxConcurrency, partSize)\n\t}\n\treturn util.downloadInPartsForFile(ctx, downloadURL, gcsHeaders, accessToken, meta, fullDstFileName, fileSize, numParts, maxConcurrency, partSize)\n}\n\n// downloadInPartsForStream downloads file in batches, streaming parts sequentially\nfunc (util *snowflakeGcsClient) downloadInPartsForStream(\n\tctx context.Context,\n\tdownloadURL *url.URL,\n\tgcsHeaders map[string]string,\n\taccessToken string,\n\tmeta *fileMetadata,\n\tfileSize, numParts, maxConcurrency, partSize int64) error {\n\n\t// Create a single HTTP client for all downloads to reuse connections\n\tclient, err := newGcsClient(util.cfg, util.telemetry)\n\tif err != nil {\n\t\treturn err\n\t}\n\t// for testing only\n\tif meta.mockGcsClient != nil {\n\t\tclient = meta.mockGcsClient\n\t}\n\n\t// The first part's index for each batch\n\tvar nextPartIndex int64 = 0\n\n\tfor nextPartIndex < numParts {\n\t\t// Calculate this batch size\n\t\tbatchSize := maxConcurrency\n\t\tif nextPartIndex+batchSize > numParts {\n\t\t\tbatchSize = numParts - nextPartIndex\n\t\t}\n\n\t\t// Download this batch\n\t\tjobs := make(chan downloadJob, batchSize)\n\t\tresults := make(chan downloadPartStream, batchSize)\n\n\t\t// Start workers for this batch\n\t\tfor i := int64(0); i < batchSize; i++ {\n\t\t\tgo 
func() {\n\t\t\t\tfor job := range jobs {\n\t\t\t\t\tstream, err := util.downloadRangeStream(ctx, downloadURL, gcsHeaders, accessToken, meta, client, job.start, job.end)\n\t\t\t\t\tresults <- downloadPartStream{stream: stream, index: job.index, err: err}\n\t\t\t\t}\n\t\t\t}()\n\t\t}\n\n\t\t// Send jobs for this batch\n\t\tfor i := int64(0); i < batchSize; i++ {\n\t\t\tpartIndex := nextPartIndex + i\n\t\t\tstart := partIndex * partSize\n\t\t\tend := start + partSize - 1\n\t\t\tif end >= fileSize {\n\t\t\t\tend = fileSize - 1\n\t\t\t}\n\t\t\tjobs <- downloadJob{index: i, start: start, end: end}\n\t\t}\n\t\tclose(jobs) // Signal no more jobs\n\n\t\t// Collect results for this batch\n\t\tbatchResults := make([]downloadPartStream, batchSize)\n\t\tfor i := int64(0); i < batchSize; i++ {\n\t\t\tresult := <-results\n\t\t\tif result.err != nil {\n\t\t\t\t// Close any successful streams before returning error\n\t\t\t\tfor j := int64(0); j < i; j++ {\n\t\t\t\t\tif batchResults[j].stream != nil {\n\t\t\t\t\t\tif closeErr := batchResults[j].stream.Close(); closeErr != nil {\n\t\t\t\t\t\t\tlogger.Warnf(\"Failed to close stream: %v\", closeErr)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn result.err\n\t\t\t}\n\t\t\tbatchResults[result.index] = result\n\t\t}\n\n\t\t// Stream parts sequentially in order, closing streams as we go\n\t\tfor i := int64(0); i < batchSize; i++ {\n\t\t\tpart := batchResults[i]\n\t\t\tif part.stream != nil {\n\t\t\t\t// Stream directly from HTTP response to destination stream\n\t\t\t\t_, err := io.Copy(meta.dstStream, part.stream)\n\t\t\t\t// Close the stream immediately after copying\n\t\t\t\tif closeErr := part.stream.Close(); closeErr != nil {\n\t\t\t\t\tlogger.Warnf(\"Failed to close stream: %v\", closeErr)\n\t\t\t\t}\n\t\t\t\tif err != nil {\n\t\t\t\t\t// Close remaining streams before returning error\n\t\t\t\t\tfor j := i + 1; j < batchSize; j++ {\n\t\t\t\t\t\tif batchResults[j].stream != nil {\n\t\t\t\t\t\t\tif closeErr := 
batchResults[j].stream.Close(); closeErr != nil {\n\t\t\t\t\t\t\t\tlogger.Warnf(\"Failed to close stream: %v\", closeErr)\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tnextPartIndex += batchSize\n\t}\n\n\treturn nil\n}\n\n// downloadInPartsForFile downloads all parts and writes to file\nfunc (util *snowflakeGcsClient) downloadInPartsForFile(\n\tctx context.Context,\n\tdownloadURL *url.URL,\n\tgcsHeaders map[string]string,\n\taccessToken string,\n\tmeta *fileMetadata,\n\tfullDstFileName string,\n\tfileSize, numParts, maxConcurrency, partSize int64) error {\n\n\t// Create a single HTTP client for all downloads to reuse connections\n\tclient, err := newGcsClient(util.cfg, util.telemetry)\n\tif err != nil {\n\t\treturn err\n\t}\n\t// for testing only\n\tif meta.mockGcsClient != nil {\n\t\tclient = meta.mockGcsClient\n\t}\n\n\t// Start all workers and download all parts\n\tjobs := make(chan downloadJob, numParts)\n\tresults := make(chan downloadPart, numParts)\n\n\t// Start worker pool with maxConcurrency workers\n\tfor range maxConcurrency {\n\t\tgo func() {\n\t\t\tfor job := range jobs {\n\t\t\t\tdata, err := util.downloadRangeBytes(ctx, downloadURL, gcsHeaders, accessToken, meta, client, job.start, job.end)\n\t\t\t\tresults <- downloadPart{data: data, index: job.index, err: err}\n\t\t\t}\n\t\t}()\n\t}\n\n\t// Send all jobs to workers\n\tfor i := range numParts {\n\t\tstart := i * partSize\n\t\tend := start + partSize - 1\n\t\tif end >= fileSize {\n\t\t\tend = fileSize - 1\n\t\t}\n\t\tjobs <- downloadJob{index: i, start: start, end: end}\n\t}\n\tclose(jobs) // Signal no more jobs\n\n\t// Collect results and store in order\n\tparts := make([][]byte, numParts)\n\tfor range numParts {\n\t\tresult := <-results\n\t\tif result.err != nil {\n\t\t\treturn result.err\n\t\t}\n\t\tparts[result.index] = result.data\n\t}\n\n\tf, err := os.OpenFile(fullDstFileName, os.O_CREATE|os.O_WRONLY, readWriteFileMode)\n\tif err != nil 
{\n\t\treturn err\n\t}\n\tdefer func() {\n\t\tif err := f.Close(); err != nil {\n\t\t\tlogger.Warnf(\"Failed to close file: %v\", err)\n\t\t}\n\t}()\n\n\tfor _, part := range parts {\n\t\tif _, err := f.Write(part); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\tfi, err := os.Stat(fullDstFileName)\n\tif err != nil {\n\t\treturn err\n\t}\n\tmeta.srcFileSize = fi.Size()\n\n\treturn nil\n}\n\n// downloadRangeStream downloads a specific byte range and returns the response stream\nfunc (util *snowflakeGcsClient) downloadRangeStream(\n\tctx context.Context,\n\tdownloadURL *url.URL,\n\tgcsHeaders map[string]string,\n\taccessToken string,\n\tmeta *fileMetadata,\n\tclient gcsAPI,\n\tstart, end int64) (io.ReadCloser, error) {\n\n\tresp, err := withCloudStorageTimeout(ctx, util.cfg, func(ctx context.Context) (*http.Response, error) {\n\t\treq, err := http.NewRequestWithContext(ctx, \"GET\", downloadURL.String(), nil)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\t// Add range header for partial content\n\t\treq.Header.Set(\"Range\", fmt.Sprintf(\"bytes=%d-%d\", start, end))\n\n\t\tfor k, v := range gcsHeaders {\n\t\t\treq.Header.Add(k, v)\n\t\t}\n\n\t\treturn client.Do(req)\n\t})\n\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Accept both 200 (full content) and 206 (partial content) status codes\n\tif resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusPartialContent {\n\t\t_ = resp.Body.Close()\n\t\treturn nil, util.handleHTTPError(resp, meta, accessToken)\n\t}\n\n\t// Return the response body stream directly - caller is responsible for closing\n\treturn resp.Body, nil\n}\n\n// downloadRangeBytes downloads a specific byte range and returns the bytes\nfunc (util *snowflakeGcsClient) downloadRangeBytes(\n\tctx context.Context,\n\tdownloadURL *url.URL,\n\tgcsHeaders map[string]string,\n\taccessToken string,\n\tmeta *fileMetadata,\n\tclient gcsAPI,\n\tstart, end int64) ([]byte, error) {\n\n\tstream, err := util.downloadRangeStream(ctx, downloadURL, 
gcsHeaders, accessToken, meta, client, start, end)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer func() {\n\t\tif err := stream.Close(); err != nil {\n\t\t\tlogger.Warnf(\"Failed to close stream: %v\", err)\n\t\t}\n\t}()\n\n\t// Download the data into memory\n\tdata, err := io.ReadAll(stream)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn data, nil\n}\n\n// downloadFileSinglePart downloads a file using a single request (original implementation)\nfunc (util *snowflakeGcsClient) downloadFileSinglePart(\n\tctx context.Context,\n\tdownloadURL *url.URL,\n\tgcsHeaders map[string]string,\n\taccessToken string,\n\tmeta *fileMetadata,\n\tfullDstFileName string) error {\n\n\tresp, err := withCloudStorageTimeout(ctx, util.cfg, func(ctx context.Context) (*http.Response, error) {\n\t\treq, err := http.NewRequestWithContext(ctx, \"GET\", downloadURL.String(), nil)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tfor k, v := range gcsHeaders {\n\t\t\treq.Header.Add(k, v)\n\t\t}\n\t\tclient, err := newGcsClient(util.cfg, util.telemetry)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\t// for testing only\n\t\tif meta.mockGcsClient != nil {\n\t\t\tclient = meta.mockGcsClient\n\t\t}\n\t\treturn client.Do(req)\n\t})\n\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer func() {\n\t\tif resp.Body != nil {\n\t\t\tif err := resp.Body.Close(); err != nil {\n\t\t\t\tlogger.Warnf(\"Failed to close response body: %v\", err)\n\t\t\t}\n\t\t}\n\t}()\n\n\tif resp.StatusCode != http.StatusOK {\n\t\treturn util.handleHTTPError(resp, meta, accessToken)\n\t}\n\n\tif isFileGetStream(ctx) {\n\t\tif _, err := io.Copy(meta.dstStream, resp.Body); err != nil {\n\t\t\treturn err\n\t\t}\n\t} else {\n\t\tf, err := os.OpenFile(fullDstFileName, os.O_CREATE|os.O_WRONLY, readWriteFileMode)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tdefer func() {\n\t\t\tif err = f.Close(); err != nil {\n\t\t\t\tlogger.Warnf(\"Failed to close the file: %v\", err)\n\t\t\t}\n\t\t}()\n\t\tif _, 
err = io.Copy(f, resp.Body); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tfi, err := os.Stat(fullDstFileName)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tmeta.srcFileSize = fi.Size()\n\t}\n\n\treturn nil\n}\n\n// handleHTTPError handles HTTP error responses consistently\nfunc (util *snowflakeGcsClient) handleHTTPError(resp *http.Response, meta *fileMetadata, accessToken string) error {\n\tif resp.StatusCode == 403 || resp.StatusCode == 408 || resp.StatusCode == 429 || resp.StatusCode == 500 || resp.StatusCode == 503 {\n\t\tmeta.lastError = fmt.Errorf(\"%v\", resp.Status)\n\t\tmeta.resStatus = needRetry\n\t} else if resp.StatusCode == 404 {\n\t\tmeta.lastError = fmt.Errorf(\"%v\", resp.Status)\n\t\tmeta.resStatus = notFoundFile\n\t} else if accessToken == \"\" && resp.StatusCode == 400 && meta.lastError == nil {\n\t\tmeta.lastError = fmt.Errorf(\"%v\", resp.Status)\n\t\tmeta.resStatus = renewPresignedURL\n\t} else if accessToken != \"\" && util.isTokenExpired(resp) {\n\t\tmeta.lastError = fmt.Errorf(\"%v\", resp.Status)\n\t\tmeta.resStatus = renewToken\n\t} else {\n\t\tmeta.lastError = fmt.Errorf(\"%v\", resp.Status)\n\t}\n\treturn meta.lastError\n}\n\nfunc (util *snowflakeGcsClient) extractBucketNameAndPath(location string) *gcsLocation {\n\tcontainerName := location\n\tvar path string\n\tif strings.Contains(location, \"/\") {\n\t\tcontainerName = location[:strings.Index(location, \"/\")]\n\t\tpath = location[strings.Index(location, \"/\")+1:]\n\t\tif path != \"\" && !strings.HasSuffix(path, \"/\") {\n\t\t\tpath += \"/\"\n\t\t}\n\t}\n\treturn &gcsLocation{containerName, path}\n}\n\nfunc (util *snowflakeGcsClient) generateFileURL(stageInfo *execResponseStageInfo, filename string) (result *url.URL, err error) {\n\tgcsLoc := util.extractBucketNameAndPath(stageInfo.Location)\n\tfullFilePath := gcsLoc.path + filename\n\tendPoint := \"https://storage.googleapis.com\"\n\n\t// TODO: SNOW-1789759 hardcoded region will be replaced in the future\n\tisRegionalURLEnabled 
:= (strings.ToLower(stageInfo.Region) == gcsRegionMeCentral2) || stageInfo.UseRegionalURL\n\tif stageInfo.EndPoint != \"\" {\n\t\tendPoint = fmt.Sprintf(\"https://%s\", stageInfo.EndPoint)\n\t} else if stageInfo.UseVirtualURL {\n\t\tendPoint = fmt.Sprintf(\"https://%s.storage.googleapis.com\", gcsLoc.bucketName)\n\t} else if stageInfo.Region != \"\" && isRegionalURLEnabled {\n\t\tendPoint = fmt.Sprintf(\"https://storage.%s.rep.googleapis.com\", strings.ToLower(stageInfo.Region))\n\t}\n\n\tif stageInfo.UseVirtualURL {\n\t\tresult, err = url.Parse(endPoint + \"/\" + url.PathEscape(fullFilePath))\n\t} else {\n\t\tresult, err = url.Parse(endPoint + \"/\" + gcsLoc.bucketName + \"/\" + url.PathEscape(fullFilePath))\n\t}\n\t// log the resolved endpoint actually used, not stageInfo.EndPoint (which is empty when defaulted)\n\tlogger.Debugf(\"generated file URL from location=%v, path=%v, fileName=%v, endpoint=%v, useVirtualUrl=%v, result=%v, err=%v\", stageInfo.Location, gcsLoc.path, filename, endPoint, stageInfo.UseVirtualURL, cmp.Or(result, &url.URL{}).String(), err)\n\treturn result, err\n}\n\nfunc (util *snowflakeGcsClient) isTokenExpired(resp *http.Response) bool {\n\treturn resp.StatusCode == http.StatusUnauthorized\n}\n\nfunc newGcsClient(cfg *Config, telemetry *snowflakeTelemetry) (gcsAPI, error) {\n\ttransport, err := newTransportFactory(cfg, telemetry).createTransport(transportConfigFor(transportTypeCloudProvider))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn &http.Client{\n\t\tTransport: transport,\n\t}, nil\n}\n"
  },
  {
    "path": "gcs_storage_client_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"path\"\n\t\"strings\"\n\t\"testing\"\n)\n\ntype tcFileURL struct {\n\tlocation string\n\tfname    string\n\tbucket   string\n\tfilepath string\n}\n\nfunc TestExtractBucketAndPath(t *testing.T) {\n\tgcsUtil := new(snowflakeGcsClient)\n\ttestcases := []tcBucketPath{\n\t\t{\"sfc-eng-regression/test_sub_dir/\", \"sfc-eng-regression\", \"test_sub_dir/\"},\n\t\t{\"sfc-eng-regression/dir/test_stg/test_sub_dir/\", \"sfc-eng-regression\", \"dir/test_stg/test_sub_dir/\"},\n\t\t{\"sfc-eng-regression/\", \"sfc-eng-regression\", \"\"},\n\t\t{\"sfc-eng-regression//\", \"sfc-eng-regression\", \"/\"},\n\t\t{\"sfc-eng-regression///\", \"sfc-eng-regression\", \"//\"},\n\t}\n\tfor _, test := range testcases {\n\t\tt.Run(test.in, func(t *testing.T) {\n\t\t\tgcsLoc := gcsUtil.extractBucketNameAndPath(test.in)\n\t\t\tif gcsLoc.bucketName != test.bucket {\n\t\t\t\tt.Errorf(\"failed. in: %v, expected: %v, got: %v\", test.in, test.bucket, gcsLoc.bucketName)\n\t\t\t}\n\t\t\tif gcsLoc.path != test.path {\n\t\t\t\tt.Errorf(\"failed. 
in: %v, expected: %v, got: %v\", test.in, test.path, gcsLoc.path)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestIsTokenExpiredWith401(t *testing.T) {\n\tgcsUtil := new(snowflakeGcsClient)\n\tdd := &execResponseData{}\n\texecResp := &execResponse{\n\t\tData:    *dd,\n\t\tMessage: \"token expired\",\n\t\tCode:    \"401\",\n\t\tSuccess: true,\n\t}\n\tba, err := json.Marshal(execResp)\n\tassertNilF(t, err, \"error should be nil\")\n\tresp := &http.Response{StatusCode: http.StatusUnauthorized, Body: &fakeResponseBody{body: ba}}\n\tassertTrueF(t, gcsUtil.isTokenExpired(resp), \"expected true for token expired\")\n}\n\nfunc TestIsTokenExpiredWith404(t *testing.T) {\n\tgcsUtil := new(snowflakeGcsClient)\n\tdd := &execResponseData{}\n\texecResp := &execResponse{\n\t\tData:    *dd,\n\t\tMessage: \"file not found\",\n\t\tCode:    \"404\",\n\t\tSuccess: true,\n\t}\n\tba, err := json.Marshal(execResp)\n\tassertNilF(t, err, \"error should be nil\")\n\tresp := &http.Response{StatusCode: http.StatusNotFound, Body: &fakeResponseBody{body: ba}}\n\tassertTrueF(t, !gcsUtil.isTokenExpired(resp), \"should be false\")\n\tresp = &http.Response{\n\t\tStatusCode: http.StatusOK,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}}}\n\n\tassertTrueF(t, !gcsUtil.isTokenExpired(resp), \"should be false\")\n\tresp = &http.Response{\n\t\tStatusCode: http.StatusUnauthorized,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}}}\n\n\tassertTrueF(t, gcsUtil.isTokenExpired(resp), \"should be true\")\n}\n\nfunc TestGenerateFileURL(t *testing.T) {\n\tgcsUtil := new(snowflakeGcsClient)\n\ttestcases := []tcFileURL{\n\t\t{\"sfc-eng-regression/test_sub_dir/\", \"file1\", \"sfc-eng-regression\", \"test_sub_dir/file1\"},\n\t\t{\"sfc-eng-regression/dir/test_stg/test_sub_dir/\", \"file2\", \"sfc-eng-regression\", \"dir/test_stg/test_sub_dir/file2\"},\n\t\t{\"sfc-eng-regression/dir/test_stg/test sub dir/\", \"file2\", \"sfc-eng-regression\", \"dir/test_stg/test sub 
dir/file2\"},\n\t\t{\"sfc-eng-regression/\", \"file3\", \"sfc-eng-regression\", \"file3\"},\n\t\t{\"sfc-eng-regression//\", \"file4\", \"sfc-eng-regression\", \"/file4\"},\n\t\t{\"sfc-eng-regression///\", \"file5\", \"sfc-eng-regression\", \"//file5\"},\n\t}\n\tfor _, test := range testcases {\n\t\tt.Run(test.location, func(t *testing.T) {\n\t\t\tstageInfo := &execResponseStageInfo{}\n\t\t\tstageInfo.Location = test.location\n\t\t\tgcsURL, err := gcsUtil.generateFileURL(stageInfo, test.fname)\n\t\t\tassertNilF(t, err, \"error should be nil\")\n\t\t\texpectedURL, err := url.Parse(\"https://storage.googleapis.com/\" + test.bucket + \"/\" + url.PathEscape(test.filepath))\n\t\t\tassertNilF(t, err, \"error should be nil\")\n\t\t\tassertEqualE(t, gcsURL.String(), expectedURL.String(), \"failed. expected: %v but got: %v\", expectedURL.String(), gcsURL.String())\n\t\t})\n\t}\n\n\tfor _, test := range testcases {\n\t\tt.Run(test.location, func(t *testing.T) {\n\t\t\tstageInfo := &execResponseStageInfo{}\n\t\t\tstageInfo.Location = test.location\n\t\t\tstageInfo.UseVirtualURL = true\n\t\t\tgcsURL, err := gcsUtil.generateFileURL(stageInfo, test.fname)\n\t\t\tassertNilF(t, err, \"error should be nil\")\n\t\t\texpectedURL, err := url.Parse(\"https://sfc-eng-regression.storage.googleapis.com/\" + url.PathEscape(test.filepath))\n\t\t\tassertNilF(t, err, \"error should be nil\")\n\t\t\tassertEqualE(t, gcsURL.String(), expectedURL.String(), \"failed. expected: %v but got: %v\", expectedURL.String(), gcsURL.String())\n\t\t})\n\t}\n}\n\ntype clientMock struct {\n\tDoFunc func(req *http.Request) (*http.Response, error)\n}\n\nfunc (c *clientMock) Do(req *http.Request) (*http.Response, error) {\n\treturn c.DoFunc(req)\n}\n\nfunc TestUploadFileWithGcsUploadFailedError(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs-blob/storage/users/456/\",\n\t\tLocationType: \"GCS\",\n\t}\n\tinitialParallel := int64(100)\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tgcsCli, err := new(snowflakeGcsClient).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta := fileMetadata{\n\t\tname:               \"data1.txt.gz\",\n\t\tstageLocationType:  \"GCS\",\n\t\tnoSleepingTime:     true,\n\t\tparallel:           initialParallel,\n\t\tclient:             gcsCli,\n\t\tsha256Digest:       \"123456789abcdef\",\n\t\tstageInfo:          &info,\n\t\tdstFileName:        \"data1.txt.gz\",\n\t\tsrcFileName:        path.Join(dir, \"/test_data/put_get_1.txt\"),\n\t\toverwrite:          true,\n\t\tdstCompressionType: compressionTypes[\"GZIP\"],\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req 
*http.Request) (*http.Response, error) {\n\t\t\t\treturn nil, errors.New(\"unexpected error uploading file\")\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tuploadMeta.realSrcFileName = uploadMeta.srcFileName\n\tfi, err := os.Stat(uploadMeta.srcFileName)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta.uploadSize = fi.Size()\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed\")\n\t}\n}\n\nfunc TestUploadFileWithGcsUploadFailedWithRetry(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs-blob/storage/users/456/\",\n\t\tLocationType: \"GCS\",\n\t}\n\tencMat := snowflakeFileEncryption{\n\t\tQueryStageMasterKey: \"abCdEFO0upIT36dAxGsa0w==\",\n\t\tQueryID:             \"01abc874-0406-1bf0-0000-53b10668e056\",\n\t\tSMKID:               92019681909886,\n\t}\n\tinitialParallel := int64(100)\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tgcsCli, err := new(snowflakeGcsClient).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta := fileMetadata{\n\t\tname:               \"data1.txt.gz\",\n\t\tstageLocationType:  \"GCS\",\n\t\tnoSleepingTime:     true,\n\t\tparallel:           initialParallel,\n\t\tclient:             gcsCli,\n\t\tsha256Digest:       \"123456789abcdef\",\n\t\tstageInfo:          &info,\n\t\tdstFileName:        \"data1.txt.gz\",\n\t\tsrcFileName:        path.Join(dir, \"/test_data/put_get_1.txt\"),\n\t\toverwrite:          true,\n\t\tdstCompressionType: compressionTypes[\"GZIP\"],\n\t\tencryptionMaterial: &encMat,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn &http.Response{\n\t\t\t\t\tStatus:     \"403 
Forbidden\",\n\t\t\t\t\tStatusCode: 403,\n\t\t\t\t\tHeader:     make(http.Header),\n\t\t\t\t\tBody:       io.NopCloser(strings.NewReader(\"\")),\n\t\t\t\t}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tuploadMeta.realSrcFileName = uploadMeta.srcFileName\n\tfi, err := os.Stat(uploadMeta.srcFileName)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta.uploadSize = fi.Size()\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tif err == nil {\n\t\tt.Error(\"should have raised an error\")\n\t}\n\n\tif uploadMeta.resStatus != needRetry {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\tneedRetry, uploadMeta.resStatus)\n\t}\n}\n\nfunc TestUploadFileWithGcsUploadFailedWithTokenExpired(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs-blob/storage/users/456/\",\n\t\tLocationType: \"GCS\",\n\t\tCreds: execResponseCredentials{\n\t\t\tGcsAccessToken: \"test-token-124456577\",\n\t\t},\n\t}\n\tinitialParallel := int64(100)\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tgcsCli, err := new(snowflakeGcsClient).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"GCS\",\n\t\tnoSleepingTime:    true,\n\t\tparallel:          initialParallel,\n\t\tclient:            gcsCli,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\tsrcFileName:       path.Join(dir, \"/test_data/put_get_1.txt\"),\n\t\toverwrite:         true,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn &http.Response{\n\t\t\t\t\tStatus:     \"401 
Unauthorized\",\n\t\t\t\t\tStatusCode: 401,\n\t\t\t\t\tHeader:     make(http.Header),\n\t\t\t\t\tBody:       io.NopCloser(strings.NewReader(\"\")),\n\t\t\t\t}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tuploadMeta.realSrcFileName = uploadMeta.srcFileName\n\tfi, err := os.Stat(uploadMeta.srcFileName)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta.uploadSize = fi.Size()\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tif uploadMeta.resStatus != renewToken {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\trenewToken, uploadMeta.resStatus)\n\t}\n}\n\nfunc TestDownloadOneFileFromGcsFailed(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs/teststage/users/34/\",\n\t\tLocationType: \"GCS\",\n\t}\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tgcsCli, err := new(snowflakeGcsClient).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tdownloadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"GCS\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            gcsCli,\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\toverwrite:         true,\n\t\tsrcFileName:       \"data1.txt.gz\",\n\t\tlocalLocation:     dir,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn nil, errors.New(\"unexpected error downloading file\")\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t\tresStatus: downloaded, // bypass file header request\n\t}\n\terr = 
new(remoteStorageUtil).downloadOneFile(context.Background(), &downloadMeta)\n\tif err == nil {\n\t\tt.Error(\"should have raised an error\")\n\t}\n}\n\nfunc TestDownloadOneFileFromGcsFailedWithRetry(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs/teststage/users/34/\",\n\t\tLocationType: \"GCS\",\n\t}\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tgcsCli, err := new(snowflakeGcsClient).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tdownloadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"GCS\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            gcsCli,\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\toverwrite:         true,\n\t\tsrcFileName:       \"data1.txt.gz\",\n\t\tlocalLocation:     dir,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn &http.Response{\n\t\t\t\t\tStatus:     \"403 Forbidden\",\n\t\t\t\t\tStatusCode: 403,\n\t\t\t\t\tHeader:     make(http.Header),\n\t\t\t\t\tBody:       io.NopCloser(strings.NewReader(\"\")),\n\t\t\t\t}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t\tresStatus: downloaded, // bypass file header request\n\t}\n\terr = new(remoteStorageUtil).downloadOneFile(context.Background(), &downloadMeta)\n\tif err == nil {\n\t\tt.Error(\"should have raised an error\")\n\t}\n\n\tif downloadMeta.resStatus != needRetry {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\tneedRetry, downloadMeta.resStatus)\n\t}\n}\n\nfunc TestDownloadOneFileFromGcsFailedWithTokenExpired(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs/teststage/users/34/\",\n\t\tLocationType: \"GCS\",\n\t\tCreds: 
execResponseCredentials{\n\t\t\tGcsAccessToken: \"test-token-124456577\",\n\t\t},\n\t}\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tgcsCli, err := new(snowflakeGcsClient).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tdownloadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"GCS\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            gcsCli,\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\toverwrite:         true,\n\t\tsrcFileName:       \"data1.txt.gz\",\n\t\tlocalLocation:     dir,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn &http.Response{\n\t\t\t\t\tStatus:     \"401 Unauthorized\",\n\t\t\t\t\tStatusCode: 401,\n\t\t\t\t\tHeader:     make(http.Header),\n\t\t\t\t\tBody:       io.NopCloser(strings.NewReader(\"\")),\n\t\t\t\t}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t\tresStatus: downloaded, // bypass file header request\n\t}\n\terr = new(remoteStorageUtil).downloadOneFile(context.Background(), &downloadMeta)\n\tif err == nil {\n\t\tt.Error(\"should have raised an error\")\n\t}\n\n\tif downloadMeta.resStatus != renewToken {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\trenewToken, downloadMeta.resStatus)\n\t}\n}\n\nfunc TestDownloadOneFileFromGcsFailedWithFileNotFound(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs/teststage/users/34/\",\n\t\tLocationType: \"GCS\",\n\t\tCreds: execResponseCredentials{\n\t\t\tGcsAccessToken: \"test-token-124456577\",\n\t\t},\n\t}\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tgcsCli, err := new(snowflakeGcsClient).createClient(&info, false, 
&snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tdownloadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"GCS\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            gcsCli,\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\toverwrite:         true,\n\t\tsrcFileName:       \"data1.txt.gz\",\n\t\tlocalLocation:     dir,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn &http.Response{\n\t\t\t\t\tStatus:     \"404 Not Found\",\n\t\t\t\t\tStatusCode: 404,\n\t\t\t\t\tHeader:     make(http.Header),\n\t\t\t\t\tBody:       io.NopCloser(strings.NewReader(\"\")),\n\t\t\t\t}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t\tresStatus: downloaded, // bypass file header request\n\t}\n\terr = new(remoteStorageUtil).downloadOneFile(context.Background(), &downloadMeta)\n\tif err == nil {\n\t\tt.Error(\"should have raised an error\")\n\t}\n\n\tif downloadMeta.resStatus != notFoundFile {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\tnotFoundFile, downloadMeta.resStatus)\n\t}\n}\n\nfunc TestGetHeaderTokenExpiredError(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs/teststage/users/34/\",\n\t\tLocationType: \"GCS\",\n\t\tCreds: execResponseCredentials{\n\t\t\tGcsAccessToken: \"test-token-124456577\",\n\t\t},\n\t}\n\tmeta := fileMetadata{\n\t\tclient:    info.Creds.GcsAccessToken,\n\t\tstageInfo: &info,\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn &http.Response{\n\t\t\t\t\tStatus:     \"401 Unauthorized\",\n\t\t\t\t\tStatusCode: 401,\n\t\t\t\t\tHeader:     make(http.Header),\n\t\t\t\t\tBody:       
io.NopCloser(strings.NewReader(\"\")),\n\t\t\t\t}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\tif header, err := (&snowflakeGcsClient{cfg: &Config{}}).getFileHeader(context.Background(), &meta, \"file.txt\"); header != nil || err == nil {\n\t\tt.Fatalf(\"expected null header, got: %v\", header)\n\t}\n\tif meta.resStatus != renewToken {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\trenewToken, meta.resStatus)\n\t}\n}\n\nfunc TestGetHeaderFileNotFound(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs/teststage/users/34/\",\n\t\tLocationType: \"GCS\",\n\t\tCreds: execResponseCredentials{\n\t\t\tGcsAccessToken: \"test-token-124456577\",\n\t\t},\n\t}\n\tmeta := fileMetadata{\n\t\tclient:    info.Creds.GcsAccessToken,\n\t\tstageInfo: &info,\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn &http.Response{\n\t\t\t\t\tStatus:     \"404 Not Found\",\n\t\t\t\t\tStatusCode: 404,\n\t\t\t\t\tHeader:     make(http.Header),\n\t\t\t\t\tBody:       io.NopCloser(strings.NewReader(\"\")),\n\t\t\t\t}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\tif header, err := (&snowflakeGcsClient{cfg: &Config{}}).getFileHeader(context.Background(), &meta, \"file.txt\"); header != nil || err == nil {\n\t\tt.Fatalf(\"expected null header, got: %v\", header)\n\t}\n\tif meta.resStatus != notFoundFile {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\tnotFoundFile, meta.resStatus)\n\t}\n}\n\nfunc TestGetHeaderPresignedUrlReturns404(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs/teststage/users/34/\",\n\t\tLocationType: \"GCS\",\n\t\tCreds: execResponseCredentials{\n\t\t\tGcsAccessToken: \"test-token-124456577\",\n\t\t},\n\t}\n\tpresignedURL, err := 
url.Parse(\"https://google-cloud.test.com\")\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tmeta := fileMetadata{\n\t\tclient:       info.Creds.GcsAccessToken,\n\t\tstageInfo:    &info,\n\t\tpresignedURL: presignedURL,\n\t}\n\theader, err := (&snowflakeGcsClient{cfg: &Config{}}).getFileHeader(context.Background(), &meta, \"file.txt\")\n\tif header != nil {\n\t\tt.Fatalf(\"expected null header, got: %v\", header)\n\t}\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tif meta.resStatus != notFoundFile {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\tnotFoundFile, meta.resStatus)\n\t}\n}\n\nfunc TestGetHeaderReturnsError(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs/teststage/users/34/\",\n\t\tLocationType: \"GCS\",\n\t\tCreds: execResponseCredentials{\n\t\t\tGcsAccessToken: \"test-token-124456577\",\n\t\t},\n\t}\n\tmeta := fileMetadata{\n\t\tclient:    info.Creds.GcsAccessToken,\n\t\tstageInfo: &info,\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn nil, errors.New(\"unexpected exception getting file header\")\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\tif header, err := (&snowflakeGcsClient{cfg: &Config{}}).getFileHeader(context.Background(), &meta, \"file.txt\"); header != nil || err == nil {\n\t\tt.Fatalf(\"expected null header, got: %v\", header)\n\t}\n}\n\nfunc TestGetHeaderBadRequest(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs/teststage/users/34/\",\n\t\tLocationType: \"GCS\",\n\t\tCreds: execResponseCredentials{\n\t\t\tGcsAccessToken: \"test-token-124456577\",\n\t\t},\n\t}\n\tmeta := fileMetadata{\n\t\tclient:    info.Creds.GcsAccessToken,\n\t\tstageInfo: &info,\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn &http.Response{\n\t\t\t\t\tStatus:     \"400 Bad 
Request\",\n\t\t\t\t\tStatusCode: 400,\n\t\t\t\t\tHeader:     make(http.Header),\n\t\t\t\t\tBody:       io.NopCloser(strings.NewReader(\"\")),\n\t\t\t\t}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\tif header, err := (&snowflakeGcsClient{cfg: &Config{}}).getFileHeader(context.Background(), &meta, \"file.txt\"); header != nil || err == nil {\n\t\tt.Fatalf(\"expected null header, got: %v\", header)\n\t}\n\n\tif meta.resStatus != errStatus {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\terrStatus, meta.resStatus)\n\t}\n}\n\nfunc TestGetHeaderRetryableError(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs/teststage/users/34/\",\n\t\tLocationType: \"GCS\",\n\t\tCreds: execResponseCredentials{\n\t\t\tGcsAccessToken: \"test-token-124456577\",\n\t\t},\n\t}\n\tmeta := fileMetadata{\n\t\tclient:    info.Creds.GcsAccessToken,\n\t\tstageInfo: &info,\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn &http.Response{\n\t\t\t\t\tStatus:     \"403 Forbidden\",\n\t\t\t\t\tStatusCode: 403,\n\t\t\t\t\tHeader:     make(http.Header),\n\t\t\t\t\tBody:       io.NopCloser(strings.NewReader(\"\")),\n\t\t\t\t}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\tif header, err := (&snowflakeGcsClient{cfg: &Config{}}).getFileHeader(context.Background(), &meta, \"file.txt\"); header != nil || err == nil {\n\t\tt.Fatalf(\"expected null header, got: %v\", header)\n\t}\n\tif meta.resStatus != needRetry {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\tneedRetry, meta.resStatus)\n\t}\n}\n\nfunc TestUploadStreamFailed(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs-blob/storage/users/456/\",\n\t\tLocationType: \"GCS\",\n\t}\n\tinitialParallel := int64(100)\n\tsrc := []byte{65, 
66, 67}\n\n\tgcsCli, err := new(snowflakeGcsClient).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tuploadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"GCS\",\n\t\tnoSleepingTime:    true,\n\t\tparallel:          initialParallel,\n\t\tclient:            gcsCli,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\tsrcStream:         bytes.NewBuffer(src),\n\t\toverwrite:         true,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn nil, errors.New(\"unexpected error uploading file\")\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tuploadMeta.realSrcStream = uploadMeta.srcStream\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed\")\n\t}\n}\n\nfunc TestUploadFileWithBadRequest(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs-blob/storage/users/456/\",\n\t\tLocationType: \"GCS\",\n\t}\n\tinitialParallel := int64(100)\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tgcsCli, err := new(snowflakeGcsClient).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"GCS\",\n\t\tnoSleepingTime:    true,\n\t\tparallel:          initialParallel,\n\t\tclient:            gcsCli,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\tsrcFileName:       path.Join(dir, \"/test_data/put_get_1.txt\"),\n\t\toverwrite:         true,\n\t\tlastError: 
        nil,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn &http.Response{\n\t\t\t\t\tStatusCode: 400,\n\t\t\t\t\tHeader:     make(http.Header),\n\t\t\t\t\tBody:       io.NopCloser(strings.NewReader(\"\")),\n\t\t\t\t}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tuploadMeta.realSrcFileName = uploadMeta.srcFileName\n\tfi, err := os.Stat(uploadMeta.srcFileName)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta.uploadSize = fi.Size()\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tif uploadMeta.resStatus != renewPresignedURL {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\trenewPresignedURL, uploadMeta.resStatus)\n\t}\n}\n\nfunc TestGetFileHeaderEncryptionData(t *testing.T) {\n\tmockEncDataResp := \"{\\\"EncryptionMode\\\":\\\"FullBlob\\\",\\\"WrappedContentKey\\\": {\\\"KeyId\\\":\\\"symmKey1\\\",\\\"EncryptedKey\\\":\\\"testencryptedkey12345678910==\\\",\\\"Algorithm\\\":\\\"AES_CBC_256\\\"},\\\"EncryptionAgent\\\": {\\\"Protocol\\\":\\\"1.0\\\",\\\"EncryptionAlgorithm\\\":\\\"AES_CBC_256\\\"},\\\"ContentEncryptionIV\\\":\\\"testIVkey12345678910==\\\",\\\"KeyWrappingMetadata\\\":{\\\"EncryptionLibrary\\\":\\\"Java 5.3.0\\\"}}\"\n\tmockMatDesc := \"{\\\"queryid\\\":\\\"01abc874-0406-1bf0-0000-53b10668e056\\\",\\\"smkid\\\":\\\"92019681909886\\\",\\\"key\\\":\\\"128\\\"}\"\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs/teststage/users/34/\",\n\t\tLocationType: \"GCS\",\n\t\tCreds: execResponseCredentials{\n\t\t\tGcsAccessToken: \"test-token-124456577\",\n\t\t},\n\t}\n\tmeta := fileMetadata{\n\t\tclient:    info.Creds.GcsAccessToken,\n\t\tstageInfo: &info,\n\t\tmockGcsClient: 
&clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn &http.Response{\n\t\t\t\t\tStatus:     \"200 OK\",\n\t\t\t\t\tStatusCode: 200,\n\t\t\t\t\tHeader: http.Header{\n\t\t\t\t\t\t\"X-Goog-Meta-Encryptiondata\": []string{mockEncDataResp},\n\t\t\t\t\t\t\"Content-Length\":             []string{\"4256\"},\n\t\t\t\t\t\t\"X-Goog-Meta-Sfc-Digest\":     []string{\"123456789abcdef\"},\n\t\t\t\t\t\t\"X-Goog-Meta-Matdesc\":        []string{mockMatDesc},\n\t\t\t\t\t},\n\t\t\t\t}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\theader, err := (&snowflakeGcsClient{cfg: &Config{}}).getFileHeader(context.Background(), &meta, \"file.txt\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\texpectedFileHeader := &fileHeader{\n\t\tdigest:        \"123456789abcdef\",\n\t\tcontentLength: 4256,\n\t\tencryptionMetadata: &encryptMetadata{\n\t\t\tkey:     \"testencryptedkey12345678910==\",\n\t\t\tiv:      \"testIVkey12345678910==\",\n\t\t\tmatdesc: mockMatDesc,\n\t\t},\n\t}\n\tif header.contentLength != expectedFileHeader.contentLength || header.digest != expectedFileHeader.digest || header.encryptionMetadata.iv != expectedFileHeader.encryptionMetadata.iv || header.encryptionMetadata.key != expectedFileHeader.encryptionMetadata.key || header.encryptionMetadata.matdesc != expectedFileHeader.encryptionMetadata.matdesc {\n\t\tt.Fatalf(\"unexpected file header. 
expected: %v, got: %v\", expectedFileHeader, header)\n\t}\n}\n\nfunc TestGetFileHeaderEncryptionDataInterfaceConversionError(t *testing.T) {\n\tmockEncDataResp := \"{\\\"EncryptionMode\\\":\\\"FullBlob\\\",\\\"WrappedContentKey\\\": {\\\"KeyId\\\":\\\"symmKey1\\\",\\\"EncryptedKey\\\":\\\"testencryptedkey12345678910==\\\",\\\"Algorithm\\\":\\\"AES_CBC_256\\\"},\\\"EncryptionAgent\\\": {\\\"Protocol\\\":\\\"1.0\\\",\\\"EncryptionAlgorithm\\\":\\\"AES_CBC_256\\\"},\\\"ContentEncryptionIV\\\":\\\"testIVkey12345678910==\\\",\\\"KeyWrappingMetadata\\\":{\\\"EncryptionLibrary\\\":\\\"Java 5.3.0\\\"}}\"\n\tmockMatDesc := \"{\\\"queryid\\\":\\\"01abc874-0406-1bf0-0000-53b10668e056\\\",\\\"smkid\\\":\\\"92019681909886\\\",\\\"key\\\":\\\"128\\\"}\"\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs/teststage/users/34/\",\n\t\tLocationType: \"GCS\",\n\t\tCreds: execResponseCredentials{\n\t\t\tGcsAccessToken: \"test-token-124456577\",\n\t\t},\n\t}\n\tmeta := fileMetadata{\n\t\tclient:    1,\n\t\tstageInfo: &info,\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn &http.Response{\n\t\t\t\t\tStatus:     \"200 OK\",\n\t\t\t\t\tStatusCode: 200,\n\t\t\t\t\tHeader: http.Header{\n\t\t\t\t\t\t\"X-Goog-Meta-Encryptiondata\": []string{mockEncDataResp},\n\t\t\t\t\t\t\"Content-Length\":             []string{\"4256\"},\n\t\t\t\t\t\t\"X-Goog-Meta-Sfc-Digest\":     []string{\"123456789abcdef\"},\n\t\t\t\t\t\t\"X-Goog-Meta-Matdesc\":        []string{mockMatDesc},\n\t\t\t\t\t},\n\t\t\t\t}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\t_, err := (&snowflakeGcsClient{cfg: &Config{}}).getFileHeader(context.Background(), &meta, \"file.txt\")\n\tif err == nil {\n\t\tt.Error(\"should have raised an error\")\n\t}\n}\n\nfunc TestUploadFileToGcsNoStatus(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     
\"gcs-blob/storage/users/456/\",\n\t\tLocationType: \"GCS\",\n\t}\n\tencMat := snowflakeFileEncryption{\n\t\tQueryStageMasterKey: \"abCdEFO0upIT36dAxGsa0w==\",\n\t\tQueryID:             \"01abc874-0406-1bf0-0000-53b10668e056\",\n\t\tSMKID:               92019681909886,\n\t}\n\tinitialParallel := int64(100)\n\tdir, err := os.Getwd()\n\tassertNilF(t, err, \"os.Getwd should not fail\")\n\n\tgcsCli, err := new(snowflakeGcsClient).createClient(&info, false, &snowflakeTelemetry{})\n\tassertNilF(t, err, \"createClient should not fail\")\n\tuploadMeta := fileMetadata{\n\t\tname:               \"data1.txt.gz\",\n\t\tstageLocationType:  \"GCS\",\n\t\tnoSleepingTime:     true,\n\t\tparallel:           initialParallel,\n\t\tclient:             gcsCli,\n\t\tsha256Digest:       \"123456789abcdef\",\n\t\tstageInfo:          &info,\n\t\tdstFileName:        \"data1.txt.gz\",\n\t\tsrcFileName:        path.Join(dir, \"/test_data/put_get_1.txt\"),\n\t\toverwrite:          true,\n\t\tdstCompressionType: compressionTypes[\"GZIP\"],\n\t\tencryptionMaterial: &encMat,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn &http.Response{\n\t\t\t\t\tStatus:     \"401 Unauthorized\",\n\t\t\t\t\tStatusCode: 401,\n\t\t\t\t\tHeader:     make(http.Header),\n\t\t\t\t\tBody:       io.NopCloser(strings.NewReader(\"\")),\n\t\t\t\t}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tuploadMeta.realSrcFileName = uploadMeta.srcFileName\n\tfi, err := os.Stat(uploadMeta.srcFileName)\n\tassertNilF(t, err, \"os.Stat should not fail\")\n\tuploadMeta.uploadSize = fi.Size()\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tassertNotNilF(t, err, \"should have raised an error\")\n}\n\nfunc TestDownloadFileFromGcsError(t *testing.T) {\n\tinfo := 
execResponseStageInfo{\n\t\tLocation:     \"gcs/teststage/users/34/\",\n\t\tLocationType: \"GCS\",\n\t}\n\tdir, err := os.Getwd()\n\tassertNilF(t, err, \"os.Getwd should not fail\")\n\n\tgcsCli, err := new(snowflakeGcsClient).createClient(&info, false, &snowflakeTelemetry{})\n\tassertNilF(t, err, \"createClient should not fail\")\n\n\tdownloadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"GCS\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            gcsCli,\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\toverwrite:         true,\n\t\tsrcFileName:       \"data1.txt.gz\",\n\t\tlocalLocation:     dir,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn &http.Response{\n\t\t\t\t\tStatus:     \"401 Unauthorized\",\n\t\t\t\t\tStatusCode: 401,\n\t\t\t\t\tHeader:     make(http.Header),\n\t\t\t\t\tBody:       io.NopCloser(strings.NewReader(\"\")),\n\t\t\t\t}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t\tresStatus: downloaded, // bypass file header request\n\t}\n\terr = new(remoteStorageUtil).downloadOneFile(context.Background(), &downloadMeta)\n\tassertNotNilF(t, err, \"should have raised an error\")\n}\n\nfunc TestDownloadFileWithBadRequest(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs/teststage/users/34/\",\n\t\tLocationType: \"GCS\",\n\t}\n\tdir, err := os.Getwd()\n\tassertNilF(t, err, \"os.Getwd should not fail\")\n\n\tgcsCli, err := new(snowflakeGcsClient).createClient(&info, false, &snowflakeTelemetry{})\n\tassertNilF(t, err, \"createClient should not fail\")\n\n\tdownloadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"GCS\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            gcsCli,\n\t\tstageInfo:         
&info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\toverwrite:         true,\n\t\tsrcFileName:       \"data1.txt.gz\",\n\t\tlocalLocation:     dir,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockGcsClient: &clientMock{\n\t\t\tDoFunc: func(req *http.Request) (*http.Response, error) {\n\t\t\t\treturn &http.Response{\n\t\t\t\t\tStatus:     \"400 Bad Request\",\n\t\t\t\t\tStatusCode: 400,\n\t\t\t\t\tHeader:     make(http.Header),\n\t\t\t\t\tBody:       io.NopCloser(strings.NewReader(\"\")),\n\t\t\t\t}, nil\n\t\t\t},\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t\tresStatus: downloaded, // bypass file header request\n\t}\n\terr = new(remoteStorageUtil).downloadOneFile(context.Background(), &downloadMeta)\n\tassertNotNilF(t, err, \"should have raised an error\")\n\n\tassertEqualF(t, downloadMeta.resStatus, renewPresignedURL, \"result status should be renewPresignedURL\")\n}\n\nfunc Test_snowflakeGcsClient_uploadFile(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs/teststage/users/34/\",\n\t\tLocationType: \"GCS\",\n\t\tCreds: execResponseCredentials{\n\t\t\tGcsAccessToken: \"test-token-124456577\",\n\t\t},\n\t}\n\tmeta := fileMetadata{\n\t\tclient:    1,\n\t\tstageInfo: &info,\n\t}\n\terr := new(snowflakeGcsClient).uploadFile(context.Background(), \"somedata\", &meta, 1, 1)\n\tassertNotNilF(t, err, \"should have raised an error\")\n}\n\nfunc Test_snowflakeGcsClient_nativeDownloadFile(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"gcs/teststage/users/34/\",\n\t\tLocationType: \"GCS\",\n\t\tCreds: execResponseCredentials{\n\t\t\tGcsAccessToken: \"test-token-124456577\",\n\t\t},\n\t}\n\tmeta := fileMetadata{\n\t\tclient:    1,\n\t\tstageInfo: &info,\n\t}\n\terr := 
new(snowflakeGcsClient).nativeDownloadFile(context.Background(), &meta, \"dummy data\", 1, multiPartThreshold)\n\tassertNotNilF(t, err, \"should have raised an error\")\n}\n\nfunc TestGetGcsCustomEndpoint(t *testing.T) {\n\ttestcases := []struct {\n\t\tdesc            string\n\t\tin              execResponseStageInfo\n\t\texpectedFileURL string\n\t}{\n\t\t{\n\t\t\tdesc: \"when the endPoint is not specified and UseRegionalURL is false\",\n\t\t\tin: execResponseStageInfo{\n\t\t\t\tUseRegionalURL: false,\n\t\t\t\tLocation:       \"my-travel-maps/mock_directory/mock_path/\",\n\t\t\t\tEndPoint:       \"\",\n\t\t\t\tRegion:         \"WEST-1\",\n\t\t\t\tUseVirtualURL:  false,\n\t\t\t},\n\t\t\texpectedFileURL: \"https://storage.googleapis.com/my-travel-maps\",\n\t\t},\n\t\t{\n\t\t\tdesc: \"when only the useRegionalURL is enabled\",\n\t\t\tin: execResponseStageInfo{\n\t\t\t\tUseRegionalURL: true,\n\t\t\t\tLocation:       \"my-travel-maps/mock_directory/mock_path/\",\n\t\t\t\tEndPoint:       \"\",\n\t\t\t\tRegion:         \"mockLocation\",\n\t\t\t\tUseVirtualURL:  false,\n\t\t\t},\n\t\t\texpectedFileURL: \"https://storage.mocklocation.rep.googleapis.com/my-travel-maps\",\n\t\t},\n\t\t{\n\t\t\tdesc: \"when the region is me-central2\",\n\t\t\tin: execResponseStageInfo{\n\t\t\t\tUseRegionalURL: false,\n\t\t\t\tLocation:       \"my-travel-maps/mock_directory/mock_path/\",\n\t\t\t\tEndPoint:       \"\",\n\t\t\t\tRegion:         \"me-central2\",\n\t\t\t\tUseVirtualURL:  false,\n\t\t\t},\n\t\t\texpectedFileURL: \"https://storage.me-central2.rep.googleapis.com/my-travel-maps\",\n\t\t},\n\t\t{\n\t\t\tdesc: \"when the region is me-central2 (mixed case)\",\n\t\t\tin: execResponseStageInfo{\n\t\t\t\tUseRegionalURL: false,\n\t\t\t\tLocation:       \"my-travel-maps/mock_directory/mock_path/\",\n\t\t\t\tEndPoint:       \"\",\n\t\t\t\tRegion:         \"ME-cEntRal2\",\n\t\t\t\tUseVirtualURL:  false,\n\t\t\t},\n\t\t\texpectedFileURL: 
\"https://storage.me-central2.rep.googleapis.com/my-travel-maps\",\n\t\t},\n\t\t{\n\t\t\tdesc: \"when the region is me-central2 (uppercase)\",\n\t\t\tin: execResponseStageInfo{\n\t\t\t\tUseRegionalURL: false,\n\t\t\t\tLocation:       \"my-travel-maps/mock_directory/mock_path/\",\n\t\t\t\tEndPoint:       \"\",\n\t\t\t\tRegion:         \"ME-CENTRAL2\",\n\t\t\t\tUseVirtualURL:  false,\n\t\t\t},\n\t\t\texpectedFileURL: \"https://storage.me-central2.rep.googleapis.com/my-travel-maps\",\n\t\t},\n\t\t{\n\t\t\tdesc: \"when the endPoint is specified\",\n\t\t\tin: execResponseStageInfo{\n\t\t\t\tUseRegionalURL: false,\n\t\t\t\tLocation:       \"my-travel-maps/mock_directory/mock_path/\",\n\t\t\t\tEndPoint:       \"storage.specialEndPoint.rep.googleapis.com\",\n\t\t\t\tRegion:         \"ME-cEntRal1\",\n\t\t\t\tUseVirtualURL:  false,\n\t\t\t},\n\t\t\texpectedFileURL: \"https://storage.specialEndPoint.rep.googleapis.com/my-travel-maps\",\n\t\t},\n\t\t{\n\t\t\tdesc: \"when both the endPoint and the useRegionalUrl are specified\",\n\t\t\tin: execResponseStageInfo{\n\t\t\t\tUseRegionalURL: true,\n\t\t\t\tLocation:       \"my-travel-maps/mock_directory/mock_path/\",\n\t\t\t\tEndPoint:       \"storage.specialEndPoint.rep.googleapis.com\",\n\t\t\t\tRegion:         \"ME-cEntRal1\",\n\t\t\t\tUseVirtualURL:  false,\n\t\t\t},\n\t\t\texpectedFileURL: \"https://storage.specialEndPoint.rep.googleapis.com/my-travel-maps\",\n\t\t},\n\t\t{\n\t\t\tdesc: \"when both the endPoint is specified and the region is me-central2\",\n\t\t\tin: execResponseStageInfo{\n\t\t\t\tUseRegionalURL: true,\n\t\t\t\tLocation:       \"my-travel-maps/mock_directory/mock_path/\",\n\t\t\t\tEndPoint:       \"storage.specialEndPoint.rep.googleapis.com\",\n\t\t\t\tRegion:         \"ME-CENTRAL2\",\n\t\t\t\tUseVirtualURL:  false,\n\t\t\t},\n\t\t\texpectedFileURL: \"https://storage.specialEndPoint.rep.googleapis.com/my-travel-maps\",\n\t\t},\n\t\t{\n\t\t\tdesc: \"when only the useVirtualUrl is enabled\",\n\t\t\tin: 
execResponseStageInfo{\n\t\t\t\tLocation:       \"my-travel-maps/mock_directory/mock_path/\",\n\t\t\t\tUseRegionalURL: false,\n\t\t\t\tEndPoint:       \"\",\n\t\t\t\tRegion:         \"WEST-1\",\n\t\t\t\tUseVirtualURL:  true,\n\t\t\t},\n\t\t\texpectedFileURL: \"https://my-travel-maps.storage.googleapis.com\",\n\t\t},\n\t\t{\n\t\t\tdesc: \"when both the useRegionalURL and useVirtualUrl are enabled\",\n\t\t\tin: execResponseStageInfo{\n\t\t\t\tLocation:       \"my-travel-maps/mock_directory/mock_path/\",\n\t\t\t\tUseRegionalURL: true,\n\t\t\t\tEndPoint:       \"\",\n\t\t\t\tRegion:         \"ME-CENTRAL2\",\n\t\t\t\tUseVirtualURL:  true,\n\t\t\t},\n\t\t\texpectedFileURL: \"https://my-travel-maps.storage.googleapis.com\",\n\t\t},\n\t\t{\n\t\t\tdesc: \"when all the options are enabled\",\n\t\t\tin: execResponseStageInfo{\n\t\t\t\tLocation:       \"my-travel-maps/mock_directory/mock_path/\",\n\t\t\t\tUseRegionalURL: true,\n\t\t\t\tEndPoint:       \"storage.specialEndPoint.rep.googleapis.com\",\n\t\t\t\tRegion:         \"ME-CENTRAL2\",\n\t\t\t\tUseVirtualURL:  true,\n\t\t\t},\n\t\t\texpectedFileURL: \"https://storage.specialEndPoint.rep.googleapis.com\",\n\t\t},\n\t}\n\n\tfor _, test := range testcases {\n\t\tt.Run(test.desc, func(t *testing.T) {\n\t\t\tgcs := new(snowflakeGcsClient)\n\t\t\tfileURL, err := gcs.generateFileURL(&test.in, \"mock_file\")\n\t\t\tassertNilF(t, err, \"Should not fail\")\n\n\t\t\texpectedURL, err := url.Parse(test.expectedFileURL + \"/\" + url.QueryEscape(\"mock_directory/mock_path/mock_file\"))\n\t\t\tassertNilF(t, err, \"Should not fail\")\n\n\t\t\tassertEqualF(t, fileURL.String(), expectedURL.String(), fmt.Sprintf(\"failed. in: %v, expected: %v, got: %v\", test.in, expectedURL.String(), fileURL.String()))\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "go.mod",
    "content": "module github.com/snowflakedb/gosnowflake/v2\n\ngo 1.24.0\n\nrequire (\n\tgithub.com/99designs/keyring v1.2.2\n\tgithub.com/Azure/azure-sdk-for-go/sdk/azcore v1.4.0\n\tgithub.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.0.0\n\tgithub.com/BurntSushi/toml v1.4.0\n\tgithub.com/apache/arrow-go/v18 v18.4.0\n\tgithub.com/aws/aws-sdk-go-v2 v1.38.1\n\tgithub.com/aws/aws-sdk-go-v2/config v1.27.11\n\tgithub.com/aws/aws-sdk-go-v2/credentials v1.17.11\n\tgithub.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.1\n\tgithub.com/aws/aws-sdk-go-v2/feature/s3/manager v1.16.15\n\tgithub.com/aws/aws-sdk-go-v2/service/s3 v1.53.1\n\tgithub.com/aws/aws-sdk-go-v2/service/sts v1.28.6\n\tgithub.com/aws/smithy-go v1.22.5\n\tgithub.com/gabriel-vasile/mimetype v1.4.7\n\tgithub.com/golang-jwt/jwt/v5 v5.2.2\n\tgithub.com/pkg/browser v0.0.0-20210911075715-681adbf594b8\n\tgo.opentelemetry.io/otel v1.40.0\n\tgo.opentelemetry.io/otel/sdk v1.40.0\n\tgolang.org/x/crypto v0.46.0\n\tgolang.org/x/net v0.48.0\n\tgolang.org/x/oauth2 v0.34.0\n\tgolang.org/x/sys v0.40.0\n)\n\nrequire (\n\tgithub.com/99designs/go-keychain v0.0.0-20191008050251-8e49817e8af4 // indirect\n\tgithub.com/Azure/azure-sdk-for-go/sdk/internal v1.1.2 // indirect\n\tgithub.com/andybalholm/brotli v1.2.0 // indirect\n\tgithub.com/apache/thrift v0.22.0 // indirect\n\tgithub.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.2 // indirect\n\tgithub.com/aws/aws-sdk-go-v2/internal/configsources v1.3.5 // indirect\n\tgithub.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.5 // indirect\n\tgithub.com/aws/aws-sdk-go-v2/internal/ini v1.8.0 // indirect\n\tgithub.com/aws/aws-sdk-go-v2/internal/v4a v1.3.5 // indirect\n\tgithub.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.11.2 // indirect\n\tgithub.com/aws/aws-sdk-go-v2/service/internal/checksum v1.3.7 // indirect\n\tgithub.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.7 // indirect\n\tgithub.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.5 // 
indirect\n\tgithub.com/aws/aws-sdk-go-v2/service/sso v1.20.5 // indirect\n\tgithub.com/aws/aws-sdk-go-v2/service/ssooidc v1.23.4 // indirect\n\tgithub.com/cespare/xxhash/v2 v2.3.0 // indirect\n\tgithub.com/danieljoos/wincred v1.2.2 // indirect\n\tgithub.com/dvsekhvalnov/jose2go v1.7.0 // indirect\n\tgithub.com/go-logr/logr v1.4.3 // indirect\n\tgithub.com/go-logr/stdr v1.2.2 // indirect\n\tgithub.com/goccy/go-json v0.10.5 // indirect\n\tgithub.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2 // indirect\n\tgithub.com/golang/snappy v1.0.0 // indirect\n\tgithub.com/google/flatbuffers v25.2.10+incompatible // indirect\n\tgithub.com/google/uuid v1.6.0 // indirect\n\tgithub.com/gsterjov/go-libsecret v0.0.0-20161001094733-a6f4afe4910c // indirect\n\tgithub.com/klauspost/asmfmt v1.3.2 // indirect\n\tgithub.com/klauspost/compress v1.18.0 // indirect\n\tgithub.com/klauspost/cpuid/v2 v2.2.11 // indirect\n\tgithub.com/minio/asm2plan9s v0.0.0-20200509001527-cdd76441f9d8 // indirect\n\tgithub.com/minio/c2goasm v0.0.0-20190812172519-36a3d3bbc4f3 // indirect\n\tgithub.com/mtibben/percent v0.2.1 // indirect\n\tgithub.com/pierrec/lz4/v4 v4.1.22 // indirect\n\tgithub.com/zeebo/xxh3 v1.0.2 // indirect\n\tgo.opentelemetry.io/auto/sdk v1.2.1 // indirect\n\tgo.opentelemetry.io/otel/metric v1.40.0 // indirect\n\tgo.opentelemetry.io/otel/trace v1.40.0 // indirect\n\tgolang.org/x/exp v0.0.0-20250408133849-7e4ce0ab07d0 // indirect\n\tgolang.org/x/mod v0.30.0 // indirect\n\tgolang.org/x/sync v0.19.0 // indirect\n\tgolang.org/x/telemetry v0.0.0-20251111182119-bc8e575c7b54 // indirect\n\tgolang.org/x/term v0.38.0 // indirect\n\tgolang.org/x/text v0.32.0 // indirect\n\tgolang.org/x/tools v0.39.0 // indirect\n\tgolang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da // indirect\n\tgoogle.golang.org/grpc v1.79.3 // indirect\n\tgoogle.golang.org/protobuf v1.36.10 // indirect\n)\n"
  },
  {
    "path": "go.sum",
    "content": "github.com/99designs/go-keychain v0.0.0-20191008050251-8e49817e8af4 h1:/vQbFIOMbk2FiG/kXiLl8BRyzTWDw7gX/Hz7Dd5eDMs=\ngithub.com/99designs/go-keychain v0.0.0-20191008050251-8e49817e8af4/go.mod h1:hN7oaIRCjzsZ2dE+yG5k+rsdt3qcwykqK6HVGcKwsw4=\ngithub.com/99designs/keyring v1.2.2 h1:pZd3neh/EmUzWONb35LxQfvuY7kiSXAq3HQd97+XBn0=\ngithub.com/99designs/keyring v1.2.2/go.mod h1:wes/FrByc8j7lFOAGLGSNEg8f/PaI3cgTBqhFkHUrPk=\ngithub.com/Azure/azure-sdk-for-go/sdk/azcore v1.4.0 h1:rTnT/Jrcm+figWlYz4Ixzt0SJVR2cMC8lvZcimipiEY=\ngithub.com/Azure/azure-sdk-for-go/sdk/azcore v1.4.0/go.mod h1:ON4tFdPTwRcgWEaVDrN3584Ef+b7GgSJaXxe5fW9t4M=\ngithub.com/Azure/azure-sdk-for-go/sdk/azidentity v1.1.0 h1:QkAcEIAKbNL4KoFr4SathZPhDhF4mVwpBMFlYjyAqy8=\ngithub.com/Azure/azure-sdk-for-go/sdk/azidentity v1.1.0/go.mod h1:bhXu1AjYL+wutSL/kpSq6s7733q2Rb0yuot9Zgfqa/0=\ngithub.com/Azure/azure-sdk-for-go/sdk/internal v1.1.2 h1:+5VZ72z0Qan5Bog5C+ZkgSqUbeVUd9wgtHOrIKuc5b8=\ngithub.com/Azure/azure-sdk-for-go/sdk/internal v1.1.2/go.mod h1:eWRD7oawr1Mu1sLCawqVc0CUiF43ia3qQMxLscsKQ9w=\ngithub.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.0.0 h1:u/LLAOFgsMv7HmNL4Qufg58y+qElGOt5qv0z1mURkRY=\ngithub.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.0.0/go.mod h1:2e8rMJtl2+2j+HXbTBwnyGpm5Nou7KhvSfxOq8JpTag=\ngithub.com/AzureAD/microsoft-authentication-library-for-go v0.5.1 h1:BWe8a+f/t+7KY7zH2mqygeUD0t8hNFXe08p1Pb3/jKE=\ngithub.com/AzureAD/microsoft-authentication-library-for-go v0.5.1/go.mod h1:Vt9sXTKwMyGcOxSmLDMnGPgqsUg7m8pe215qMLrDXw4=\ngithub.com/BurntSushi/toml v1.4.0 h1:kuoIxZQy2WRRk1pttg9asf+WVv6tWQuBNVmK8+nqPr0=\ngithub.com/BurntSushi/toml v1.4.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=\ngithub.com/andybalholm/brotli v1.2.0 h1:ukwgCxwYrmACq68yiUqwIWnGY0cTPox/M94sVwToPjQ=\ngithub.com/andybalholm/brotli v1.2.0/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=\ngithub.com/apache/arrow-go/v18 v18.4.0 
h1:/RvkGqH517iY8bZKc4FD5/kkdwXJGjxf28JIXbJ/oB0=\ngithub.com/apache/arrow-go/v18 v18.4.0/go.mod h1:Aawvwhj8x2jURIzD9Moy72cF0FyJXOpkYpdmGRHcw14=\ngithub.com/apache/thrift v0.22.0 h1:r7mTJdj51TMDe6RtcmNdQxgn9XcyfGDOzegMDRg47uc=\ngithub.com/apache/thrift v0.22.0/go.mod h1:1e7J/O1Ae6ZQMTYdy9xa3w9k+XHWPfRvdPyJeynQ+/g=\ngithub.com/aws/aws-sdk-go-v2 v1.38.1 h1:j7sc33amE74Rz0M/PoCpsZQ6OunLqys/m5antM0J+Z8=\ngithub.com/aws/aws-sdk-go-v2 v1.38.1/go.mod h1:9Q0OoGQoboYIAJyslFyF1f5K1Ryddop8gqMhWx/n4Wg=\ngithub.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.2 h1:x6xsQXGSmW6frevwDA+vi/wqhp1ct18mVXYN08/93to=\ngithub.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.2/go.mod h1:lPprDr1e6cJdyYeGXnRaJoP4Md+cDBvi2eOj00BlGmg=\ngithub.com/aws/aws-sdk-go-v2/config v1.27.11 h1:f47rANd2LQEYHda2ddSCKYId18/8BhSRM4BULGmfgNA=\ngithub.com/aws/aws-sdk-go-v2/config v1.27.11/go.mod h1:SMsV78RIOYdve1vf36z8LmnszlRWkwMQtomCAI0/mIE=\ngithub.com/aws/aws-sdk-go-v2/credentials v1.17.11 h1:YuIB1dJNf1Re822rriUOTxopaHHvIq0l/pX3fwO+Tzs=\ngithub.com/aws/aws-sdk-go-v2/credentials v1.17.11/go.mod h1:AQtFPsDH9bI2O+71anW6EKL+NcD7LG3dpKGMV4SShgo=\ngithub.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.1 h1:FVJ0r5XTHSmIHJV6KuDmdYhEpvlHpiSd38RQWhut5J4=\ngithub.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.1/go.mod h1:zusuAeqezXzAB24LGuzuekqMAEgWkVYukBec3kr3jUg=\ngithub.com/aws/aws-sdk-go-v2/feature/s3/manager v1.16.15 h1:7Zwtt/lP3KNRkeZre7soMELMGNoBrutx8nobg1jKWmo=\ngithub.com/aws/aws-sdk-go-v2/feature/s3/manager v1.16.15/go.mod h1:436h2adoHb57yd+8W+gYPrrA9U/R/SuAuOO42Ushzhw=\ngithub.com/aws/aws-sdk-go-v2/internal/configsources v1.3.5 h1:aw39xVGeRWlWx9EzGVnhOR4yOjQDHPQ6o6NmBlscyQg=\ngithub.com/aws/aws-sdk-go-v2/internal/configsources v1.3.5/go.mod h1:FSaRudD0dXiMPK2UjknVwwTYyZMRsHv3TtkabsZih5I=\ngithub.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.5 h1:PG1F3OD1szkuQPzDw3CIQsRIrtTlUC3lP84taWzHlq0=\ngithub.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.5/go.mod 
h1:jU1li6RFryMz+so64PpKtudI+QzbKoIEivqdf6LNpOc=\ngithub.com/aws/aws-sdk-go-v2/internal/ini v1.8.0 h1:hT8rVHwugYE2lEfdFE0QWVo81lF7jMrYJVDWI+f+VxU=\ngithub.com/aws/aws-sdk-go-v2/internal/ini v1.8.0/go.mod h1:8tu/lYfQfFe6IGnaOdrpVgEL2IrrDOf6/m9RQum4NkY=\ngithub.com/aws/aws-sdk-go-v2/internal/v4a v1.3.5 h1:81KE7vaZzrl7yHBYHVEzYB8sypz11NMOZ40YlWvPxsU=\ngithub.com/aws/aws-sdk-go-v2/internal/v4a v1.3.5/go.mod h1:LIt2rg7Mcgn09Ygbdh/RdIm0rQ+3BNkbP1gyVMFtRK0=\ngithub.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.11.2 h1:Ji0DY1xUsUr3I8cHps0G+XM3WWU16lP6yG8qu1GAZAs=\ngithub.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.11.2/go.mod h1:5CsjAbs3NlGQyZNFACh+zztPDI7fU6eW9QsxjfnuBKg=\ngithub.com/aws/aws-sdk-go-v2/service/internal/checksum v1.3.7 h1:ZMeFZ5yk+Ek+jNr1+uwCd2tG89t6oTS5yVWpa6yy2es=\ngithub.com/aws/aws-sdk-go-v2/service/internal/checksum v1.3.7/go.mod h1:mxV05U+4JiHqIpGqqYXOHLPKUC6bDXC44bsUhNjOEwY=\ngithub.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.7 h1:ogRAwT1/gxJBcSWDMZlgyFUM962F51A5CRhDLbxLdmo=\ngithub.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.7/go.mod h1:YCsIZhXfRPLFFCl5xxY+1T9RKzOKjCut+28JSX2DnAk=\ngithub.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.5 h1:f9RyWNtS8oH7cZlbn+/JNPpjUk5+5fLd5lM9M0i49Ys=\ngithub.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.5/go.mod h1:h5CoMZV2VF297/VLhRhO1WF+XYWOzXo+4HsObA4HjBQ=\ngithub.com/aws/aws-sdk-go-v2/service/s3 v1.53.1 h1:6cnno47Me9bRykw9AEv9zkXE+5or7jz8TsskTTccbgc=\ngithub.com/aws/aws-sdk-go-v2/service/s3 v1.53.1/go.mod h1:qmdkIIAC+GCLASF7R2whgNrJADz0QZPX+Seiw/i4S3o=\ngithub.com/aws/aws-sdk-go-v2/service/sso v1.20.5 h1:vN8hEbpRnL7+Hopy9dzmRle1xmDc7o8tmY0klsr175w=\ngithub.com/aws/aws-sdk-go-v2/service/sso v1.20.5/go.mod h1:qGzynb/msuZIE8I75DVRCUXw3o3ZyBmUvMwQ2t/BrGM=\ngithub.com/aws/aws-sdk-go-v2/service/ssooidc v1.23.4 h1:Jux+gDDyi1Lruk+KHF91tK2KCuY61kzoCpvtvJJBtOE=\ngithub.com/aws/aws-sdk-go-v2/service/ssooidc v1.23.4/go.mod 
h1:mUYPBhaF2lGiukDEjJX2BLRRKTmoUSitGDUgM4tRxak=\ngithub.com/aws/aws-sdk-go-v2/service/sts v1.28.6 h1:cwIxeBttqPN3qkaAjcEcsh8NYr8n2HZPkcKgPAi1phU=\ngithub.com/aws/aws-sdk-go-v2/service/sts v1.28.6/go.mod h1:FZf1/nKNEkHdGGJP/cI2MoIMquumuRK6ol3QQJNDxmw=\ngithub.com/aws/smithy-go v1.22.5 h1:P9ATCXPMb2mPjYBgueqJNCA5S9UfktsW0tTxi+a7eqw=\ngithub.com/aws/smithy-go v1.22.5/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp/HgceI=\ngithub.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=\ngithub.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=\ngithub.com/danieljoos/wincred v1.2.2 h1:774zMFJrqaeYCK2W57BgAem/MLi6mtSE47MB6BOJ0i0=\ngithub.com/danieljoos/wincred v1.2.2/go.mod h1:w7w4Utbrz8lqeMbDAK0lkNJUv5sAOkFi7nd/ogr0Uh8=\ngithub.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=\ngithub.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/dnaeon/go-vcr v1.1.0 h1:ReYa/UBrRyQdant9B4fNHGoCNKw6qh6P0fsdGmZpR7c=\ngithub.com/dnaeon/go-vcr v1.1.0/go.mod h1:M7tiix8f0r6mKKJ3Yq/kqU1OYf3MnfmBWVbPx/yU9ko=\ngithub.com/dvsekhvalnov/jose2go v1.7.0 h1:bnQc8+GMnidJZA8zc6lLEAb4xNrIqHwO+9TzqvtQZPo=\ngithub.com/dvsekhvalnov/jose2go v1.7.0/go.mod h1:QsHjhyTlD/lAVqn/NSbVZmSCGeDehTB/mPZadG+mhXU=\ngithub.com/gabriel-vasile/mimetype v1.4.7 h1:SKFKl7kD0RiPdbht0s7hFtjl489WcQ1VyPW8ZzUMYCA=\ngithub.com/gabriel-vasile/mimetype v1.4.7/go.mod h1:GDlAgAyIRT27BhFl53XNAFtfjzOkLaF35JdEG0P7LtU=\ngithub.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=\ngithub.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=\ngithub.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=\ngithub.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=\ngithub.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=\ngithub.com/goccy/go-json 
v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=\ngithub.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=\ngithub.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2 h1:ZpnhV/YsD2/4cESfV5+Hoeu/iUR3ruzNvZ+yQfO03a0=\ngithub.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2/go.mod h1:bBOAhwG1umN6/6ZUMtDFBMQR8jRg9O75tm9K00oMsK4=\ngithub.com/golang-jwt/jwt v3.2.1+incompatible h1:73Z+4BJcrTC+KczS6WvTPvRGOp1WmfEP4Q1lOd9Z/+c=\ngithub.com/golang-jwt/jwt v3.2.1+incompatible/go.mod h1:8pz2t5EyA70fFQQSrl6XZXzqecmYZeUEB8OUGHkxJ+I=\ngithub.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=\ngithub.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=\ngithub.com/golang/snappy v1.0.0 h1:Oy607GVXHs7RtbggtPBnr2RmDArIsAefDwvrdWvRhGs=\ngithub.com/golang/snappy v1.0.0/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=\ngithub.com/google/flatbuffers v25.2.10+incompatible h1:F3vclr7C3HpB1k9mxCGRMXq6FdUalZ6H/pNX4FP1v0Q=\ngithub.com/google/flatbuffers v25.2.10+incompatible/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=\ngithub.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=\ngithub.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=\ngithub.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=\ngithub.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=\ngithub.com/gsterjov/go-libsecret v0.0.0-20161001094733-a6f4afe4910c h1:6rhixN/i8ZofjG1Y75iExal34USq5p+wiN1tpie8IrU=\ngithub.com/gsterjov/go-libsecret v0.0.0-20161001094733-a6f4afe4910c/go.mod h1:NMPJylDgVpX0MLRlPy15sqSwOFv/U1GZ2m21JhFfek0=\ngithub.com/klauspost/asmfmt v1.3.2 h1:4Ri7ox3EwapiOjCki+hw14RyKk201CN4rzyCJRFLpK4=\ngithub.com/klauspost/asmfmt v1.3.2/go.mod h1:AG8TuvYojzulgDAMCnYn50l/5QV3Bs/tp6j0HLHbNSE=\ngithub.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=\ngithub.com/klauspost/compress 
v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=\ngithub.com/klauspost/cpuid/v2 v2.2.11 h1:0OwqZRYI2rFrjS4kvkDnqJkKHdHaRnCm68/DY4OxRzU=\ngithub.com/klauspost/cpuid/v2 v2.2.11/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=\ngithub.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=\ngithub.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=\ngithub.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=\ngithub.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=\ngithub.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=\ngithub.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=\ngithub.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=\ngithub.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=\ngithub.com/minio/asm2plan9s v0.0.0-20200509001527-cdd76441f9d8 h1:AMFGa4R4MiIpspGNG7Z948v4n35fFGB3RR3G/ry4FWs=\ngithub.com/minio/asm2plan9s v0.0.0-20200509001527-cdd76441f9d8/go.mod h1:mC1jAcsrzbxHt8iiaC+zU4b1ylILSosueou12R++wfY=\ngithub.com/minio/c2goasm v0.0.0-20190812172519-36a3d3bbc4f3 h1:+n/aFZefKZp7spd8DFdX7uMikMLXX4oubIzJF4kv/wI=\ngithub.com/minio/c2goasm v0.0.0-20190812172519-36a3d3bbc4f3/go.mod h1:RagcQ7I8IeTMnF8JTXieKnO4Z6JCsikNEzj0DwauVzE=\ngithub.com/mtibben/percent v0.2.1 h1:5gssi8Nqo8QU/r2pynCm+hBQHpkB/uNK7BJCFogWdzs=\ngithub.com/mtibben/percent v0.2.1/go.mod h1:KG9uO+SZkUp+VkRHsCdYQV3XSZrrSpR3O9ibNBTZrns=\ngithub.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=\ngithub.com/pierrec/lz4/v4 v4.1.22 h1:cKFw6uJDK+/gfw5BcDL0JL5aBsAFdsIT18eRtLj7VIU=\ngithub.com/pierrec/lz4/v4 v4.1.22/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=\ngithub.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 h1:KoWmjvw+nsYOo29YJK9vDA65RGE3NrOnUtO7a+RF9HU=\ngithub.com/pkg/browser v0.0.0-20210911075715-681adbf594b8/go.mod 
h1:HKlIX3XHQyzLZPlr7++PzdhaXEj94dEiJgZDTsxEqUI=\ngithub.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=\ngithub.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=\ngithub.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=\ngithub.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=\ngithub.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=\ngithub.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=\ngithub.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=\ngithub.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=\ngithub.com/xyproto/randomstring v1.0.5 h1:YtlWPoRdgMu3NZtP45drfy1GKoojuR7hmRcnhZqKjWU=\ngithub.com/xyproto/randomstring v1.0.5/go.mod h1:rgmS5DeNXLivK7YprL0pY+lTuhNQW3iGxZ18UQApw/E=\ngithub.com/zeebo/assert v1.3.0 h1:g7C04CbJuIDKNPFHmsk4hwZDO5O+kntRxzaUoNXj+IQ=\ngithub.com/zeebo/assert v1.3.0/go.mod h1:Pq9JiuJQpG8JLJdtkwrJESF0Foym2/D9XMU5ciN/wJ0=\ngithub.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0=\ngithub.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA=\ngo.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=\ngo.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=\ngo.opentelemetry.io/otel v1.40.0 h1:oA5YeOcpRTXq6NN7frwmwFR0Cn3RhTVZvXsP4duvCms=\ngo.opentelemetry.io/otel v1.40.0/go.mod h1:IMb+uXZUKkMXdPddhwAHm6UfOwJyh4ct1ybIlV14J0g=\ngo.opentelemetry.io/otel/metric v1.40.0 h1:rcZe317KPftE2rstWIBitCdVp89A2HqjkxR3c11+p9g=\ngo.opentelemetry.io/otel/metric v1.40.0/go.mod h1:ib/crwQH7N3r5kfiBZQbwrTge743UDc7DTFVZrrXnqc=\ngo.opentelemetry.io/otel/sdk v1.40.0 h1:KHW/jUzgo6wsPh9At46+h4upjtccTmuZCFAc9OJ71f8=\ngo.opentelemetry.io/otel/sdk v1.40.0/go.mod 
h1:Ph7EFdYvxq72Y8Li9q8KebuYUr2KoeyHx0DRMKrYBUE=\ngo.opentelemetry.io/otel/sdk/metric v1.40.0 h1:mtmdVqgQkeRxHgRv4qhyJduP3fYJRMX4AtAlbuWdCYw=\ngo.opentelemetry.io/otel/sdk/metric v1.40.0/go.mod h1:4Z2bGMf0KSK3uRjlczMOeMhKU2rhUqdWNoKcYrtcBPg=\ngo.opentelemetry.io/otel/trace v1.40.0 h1:WA4etStDttCSYuhwvEa8OP8I5EWu24lkOzp+ZYblVjw=\ngo.opentelemetry.io/otel/trace v1.40.0/go.mod h1:zeAhriXecNGP/s2SEG3+Y8X9ujcJOTqQ5RgdEJcawiA=\ngo.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=\ngo.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=\ngolang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU=\ngolang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=\ngolang.org/x/exp v0.0.0-20250408133849-7e4ce0ab07d0 h1:R84qjqJb5nVJMxqWYb3np9L5ZsaDtB+a39EqjV0JSUM=\ngolang.org/x/exp v0.0.0-20250408133849-7e4ce0ab07d0/go.mod h1:S9Xr4PYopiDyqSyp5NjCrhFrqg6A5zA2E/iPHPhqnS8=\ngolang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=\ngolang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=\ngolang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU=\ngolang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY=\ngolang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw=\ngolang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=\ngolang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=\ngolang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=\ngolang.org/x/sys v0.0.0-20210616045830-e2b7044e8c71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=\ngolang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=\ngolang.org/x/telemetry v0.0.0-20251111182119-bc8e575c7b54 h1:E2/AqCUMZGgd73TQkxUMcMla25GB9i/5HOdLr+uH7Vo=\ngolang.org/x/telemetry 
v0.0.0-20251111182119-bc8e575c7b54/go.mod h1:hKdjCMrbv9skySur+Nek8Hd0uJ0GuxJIoIX2payrIdQ=\ngolang.org/x/term v0.38.0 h1:PQ5pkm/rLO6HnxFR7N2lJHOZX6Kez5Y1gDSJla6jo7Q=\ngolang.org/x/term v0.38.0/go.mod h1:bSEAKrOT1W+VSu9TSCMtoGEOUcKxOKgl3LE5QEF/xVg=\ngolang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=\ngolang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=\ngolang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ=\ngolang.org/x/tools v0.39.0/go.mod h1:JnefbkDPyD8UU2kI5fuf8ZX4/yUeh9W877ZeBONxUqQ=\ngolang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da h1:noIWHXmPHxILtqtCOPIhSt0ABwskkZKjD3bXGnZGpNY=\ngolang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da/go.mod h1:NDW/Ps6MPRej6fsCIbMTohpP40sJ/P/vI1MoTEGwX90=\ngonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=\ngonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=\ngoogle.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 h1:gRkg/vSppuSQoDjxyiGfN4Upv/h/DQmIR10ZU8dh4Ww=\ngoogle.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=\ngoogle.golang.org/grpc v1.79.3 h1:sybAEdRIEtvcD68Gx7dmnwjZKlyfuc61Dyo9pGXXkKE=\ngoogle.golang.org/grpc v1.79.3/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=\ngoogle.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=\ngoogle.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=\ngopkg.in/check.v1 v1.0.0-20200902074654-038fdea0a05b/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=\ngopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=\ngopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=\ngopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=\ngopkg.in/yaml.v2 v2.4.0/go.mod 
h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=\ngopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=\ngopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=\n"
  },
  {
    "path": "gosnowflake.mak",
    "content": "## Setup\nSHELL := /bin/bash\nSRC = $(shell find . -type f -name '*.go' -not -path \"./vendor/*\")\n\nsetup:\n\t@which golint &> /dev/null || go install golang.org/x/lint/golint@latest\n\t@which make2help &> /dev/null || go install github.com/Songmu/make2help/cmd/make2help@latest\n\n## Install dependencies\ndeps: setup\n\tgo mod tidy\n\n## Show help\nhelp:\n\t@make2help $(MAKEFILE_LIST)\n\n# Format source codes (internally used)\ncfmt: setup\n\t@gofmt -l -w $(SRC)\n\n# Lint (internally used)\nclint: deps\n\t@echo \"Running go vet and lint\"\n\t@for pkg in $$(go list ./... | grep -v /vendor/); do \\\n\t\techo \"Verifying $$pkg\"; \\\n\t\tgo vet $$pkg; \\\n\t\tgolint -set_exit_status $$pkg || exit $$?; \\\n\tdone\n\n# Install (internally used)\ncinstall:\n\t@export GOBIN=$$GOPATH/bin; \\\n\tgo install -tags=sfdebug $(CMD_TARGET).go\n\n# Run (internally used)\ncrun: install\n\t$(CMD_TARGET)\n\n.PHONY: setup help cfmt clint cinstall crun\n"
  },
  {
    "path": "heartbeat.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"time\"\n)\n\nconst (\n\tminHeartBeatInterval     = 900 * time.Second\n\tmaxHeartBeatInterval     = 3600 * time.Second\n\tdefaultHeartBeatInterval = 3600 * time.Second\n)\n\nfunc newDefaultHeartBeat(restful *snowflakeRestful) *heartbeat {\n\treturn newHeartBeat(restful, defaultHeartBeatInterval)\n}\n\nfunc newHeartBeat(restful *snowflakeRestful, heartbeatInterval time.Duration) *heartbeat {\n\tlogger.Debugf(\"Using heartbeat with custom interval: %v\", heartbeatInterval)\n\tif heartbeatInterval < minHeartBeatInterval {\n\t\tlogger.Warnf(\"Heartbeat interval %v is less than minimum %v, using minimum\", heartbeatInterval, minHeartBeatInterval)\n\t\theartbeatInterval = minHeartBeatInterval\n\t} else if heartbeatInterval > maxHeartBeatInterval {\n\t\tlogger.Warnf(\"Heartbeat interval %v is greater than maximum %v, using maximum\", heartbeatInterval, maxHeartBeatInterval)\n\t\theartbeatInterval = maxHeartBeatInterval\n\t}\n\n\treturn &heartbeat{\n\t\trestful:           restful,\n\t\theartbeatInterval: heartbeatInterval,\n\t}\n}\n\ntype heartbeat struct {\n\trestful      *snowflakeRestful\n\tshutdownChan chan bool\n\n\theartbeatInterval time.Duration\n}\n\nfunc (hc *heartbeat) run() {\n\t_, _, sessionID := safeGetTokens(hc.restful)\n\tctx := context.WithValue(context.Background(), SFSessionIDKey, sessionID)\n\thbTicker := time.NewTicker(hc.heartbeatInterval)\n\tdefer hbTicker.Stop()\n\tfor {\n\t\tselect {\n\t\tcase <-hbTicker.C:\n\t\t\terr := hc.heartbeatMain()\n\t\t\tif err != nil {\n\t\t\t\tlogger.WithContext(ctx).Errorf(\"failed to heartbeat: %v\", err)\n\t\t\t}\n\t\tcase <-hc.shutdownChan:\n\t\t\tlogger.WithContext(ctx).Info(\"stopping heartbeat\")\n\t\t\treturn\n\t\t}\n\t}\n}\n\nfunc (hc *heartbeat) start() {\n\t_, _, sessionID := safeGetTokens(hc.restful)\n\tctx := context.WithValue(context.Background(), SFSessionIDKey, 
sessionID)\n\thc.shutdownChan = make(chan bool)\n\tgo hc.run()\n\tlogger.WithContext(ctx).Info(\"heartbeat started\")\n}\n\nfunc (hc *heartbeat) stop() {\n\t_, _, sessionID := safeGetTokens(hc.restful)\n\tctx := context.WithValue(context.Background(), SFSessionIDKey, sessionID)\n\thc.shutdownChan <- true\n\tclose(hc.shutdownChan)\n\tlogger.WithContext(ctx).Info(\"heartbeat stopped\")\n}\n\nfunc (hc *heartbeat) heartbeatMain() error {\n\tparams := &url.Values{}\n\tparams.Set(requestIDKey, NewUUID().String())\n\tparams.Set(requestGUIDKey, NewUUID().String())\n\theaders := getHeaders()\n\ttoken, _, sessionID := safeGetTokens(hc.restful)\n\tctx := context.WithValue(context.Background(), SFSessionIDKey, sessionID)\n\tlogger.WithContext(ctx).Info(\"Heartbeating!\")\n\theaders[headerAuthorizationKey] = fmt.Sprintf(headerSnowflakeToken, token)\n\n\tfullURL := hc.restful.getFullURL(heartBeatPath, params)\n\ttimeout := hc.restful.RequestTimeout\n\tresp, err := hc.restful.FuncPost(ctx, hc.restful, fullURL, headers, nil, timeout, defaultTimeProvider, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer func() {\n\t\tif err = resp.Body.Close(); err != nil {\n\t\t\tlogger.WithContext(ctx).Warnf(\"failed to close response body for %v. err: %v\", fullURL, err)\n\t\t}\n\t}()\n\tif resp.StatusCode == http.StatusOK {\n\t\tlogger.WithContext(ctx).Debugf(\"heartbeatMain: resp: %v\", resp)\n\t\tvar respd execResponse\n\t\terr = json.NewDecoder(resp.Body).Decode(&respd)\n\t\tif err != nil {\n\t\t\tlogger.WithContext(ctx).Errorf(\"failed to decode heartbeat response JSON. 
err: %v\", err)\n\t\t\treturn err\n\t\t}\n\t\tif respd.Code == sessionExpiredCode {\n\t\t\tlogger.WithContext(ctx).Info(\"Snowflake returned 'session expired', trying to renew expired token.\")\n\t\t\terr = hc.restful.renewExpiredSessionToken(ctx, timeout, token)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\treturn nil\n\t}\n\tb, err := io.ReadAll(resp.Body)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"failed to extract HTTP response body. err: %v\", err)\n\t\treturn err\n\t}\n\tlogger.WithContext(ctx).Debugf(\"HTTP: %v, URL: %v, Body: %v\", resp.StatusCode, fullURL, string(b))\n\tlogger.WithContext(ctx).Debugf(\"Header: %v\", resp.Header)\n\treturn &SnowflakeError{\n\t\tNumber:   ErrFailedToHeartbeat,\n\t\tSQLState: SQLStateConnectionFailure,\n\t\tMessage:  \"Failed to heartbeat.\",\n\t}\n}\n"
  },
  {
    "path": "heartbeat_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc TestUnitPostHeartbeat(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\t// send heartbeat call and renew expired session\n\t\tsr := &snowflakeRestful{\n\t\t\tFuncPost:         postTestRenew,\n\t\t\tFuncRenewSession: renewSessionTest,\n\t\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t\t\tRequestTimeout:   0,\n\t\t}\n\t\theartbeat := newDefaultHeartBeat(sr)\n\t\terr := heartbeat.heartbeatMain()\n\t\tassertNilF(t, err, \"failed to heartbeat and renew session\")\n\n\t\theartbeat.restful.FuncPost = postTestError\n\t\terr = heartbeat.heartbeatMain()\n\t\tassertNotNilF(t, err, \"should have failed to start heartbeat\")\n\t\tassertEqualE(t, err.Error(), \"failed to run post method\")\n\n\t\theartbeat.restful.FuncPost = postTestSuccessButInvalidJSON\n\t\terr = heartbeat.heartbeatMain()\n\t\tassertNotNilF(t, err, \"should have failed to start heartbeat\")\n\t\tassertHasPrefixE(t, err.Error(), \"invalid character\")\n\n\t\theartbeat.restful.FuncPost = postTestAppForbiddenError\n\t\terr = heartbeat.heartbeatMain()\n\t\tassertNotNilF(t, err, \"should have failed to start heartbeat\")\n\t\tdriverErr, ok := err.(*SnowflakeError)\n\t\tassertTrueF(t, ok, \"error should be a SnowflakeError\")\n\t\tassertEqualE(t, driverErr.Number, ErrFailedToHeartbeat)\n\t})\n}\n\nfunc TestHeartbeatStartAndStop(t *testing.T) {\n\tcustomDsn := dsn + \"&client_session_keep_alive=true\"\n\tconfig, err := ParseDSN(customDsn)\n\tassertNilF(t, err, \"failed to parse dsn\")\n\tdriver := SnowflakeDriver{}\n\tdb, err := driver.OpenWithConfig(context.Background(), *config)\n\tassertNilF(t, err, \"failed to open with config\")\n\n\tconn, ok := db.(*snowflakeConn)\n\tassertTrueF(t, ok, \"connection should be snowflakeConn\")\n\tassertNotNilF(t, conn.rest, \"restful client should not be nil\")\n\tassertNotNilF(t, conn.rest.HeartBeat, \"heartbeat should not be nil\")\n\n\terr = 
db.Close()\n\tassertNilF(t, err, \"should not cause error in Close\")\n\tassertNilF(t, conn.rest.HeartBeat, \"heartbeat should be nil\")\n}\n\nfunc TestHeartbeatIntervalLowerThanMin(t *testing.T) {\n\tsr := &snowflakeRestful{\n\t\tFuncPost:         postTestRenew,\n\t\tFuncRenewSession: renewSessionTest,\n\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t\tRequestTimeout:   0,\n\t}\n\theartbeat := newHeartBeat(sr, minHeartBeatInterval-1*time.Second)\n\tassertEqualF(t, heartbeat.heartbeatInterval, minHeartBeatInterval, \"heartbeat interval should be set to min\")\n}\n\nfunc TestHeartbeatIntervalHigherThanMax(t *testing.T) {\n\tsr := &snowflakeRestful{\n\t\tFuncPost:         postTestRenew,\n\t\tFuncRenewSession: renewSessionTest,\n\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t\tRequestTimeout:   0,\n\t}\n\theartbeat := newHeartBeat(sr, maxHeartBeatInterval+1*time.Second)\n\tassertEqualF(t, heartbeat.heartbeatInterval, maxHeartBeatInterval, \"heartbeat interval should be set to max\")\n}\n"
  },
  {
    "path": "htap.go",
    "content": "package gosnowflake\n\nimport (\n\t\"sort\"\n\t\"strconv\"\n\t\"sync\"\n)\n\nconst (\n\tqueryContextCacheSizeParamName = \"QUERY_CONTEXT_CACHE_SIZE\"\n\tdefaultQueryContextCacheSize   = 5\n)\n\ntype queryContext struct {\n\tEntries []queryContextEntry `json:\"entries,omitempty\"`\n}\n\ntype queryContextEntry struct {\n\tID        int    `json:\"id\"`\n\tTimestamp int64  `json:\"timestamp\"`\n\tPriority  int    `json:\"priority\"`\n\tContext   string `json:\"context,omitempty\"`\n}\n\ntype queryContextCache struct {\n\tmutex   sync.Mutex\n\tentries []queryContextEntry\n}\n\nfunc (qcc *queryContextCache) add(sc *snowflakeConn, qces ...queryContextEntry) {\n\tqcc.mutex.Lock()\n\tdefer qcc.mutex.Unlock()\n\tif len(qces) == 0 {\n\t\tqcc.prune(0)\n\t} else {\n\t\tfor _, newQce := range qces {\n\t\t\tlogger.Debugf(\"adding query context: %v\", newQce)\n\t\t\tnewQceProcessed := false\n\t\t\tfor existingQceIdx, existingQce := range qcc.entries {\n\t\t\t\tif newQce.ID == existingQce.ID {\n\t\t\t\t\tnewQceProcessed = true\n\t\t\t\t\tif newQce.Timestamp > existingQce.Timestamp {\n\t\t\t\t\t\tqcc.entries[existingQceIdx] = newQce\n\t\t\t\t\t} else if newQce.Timestamp == existingQce.Timestamp {\n\t\t\t\t\t\tif newQce.Priority != existingQce.Priority {\n\t\t\t\t\t\t\tqcc.entries[existingQceIdx] = newQce\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !newQceProcessed {\n\t\t\t\tfor existingQceIdx, existingQce := range qcc.entries {\n\t\t\t\t\tif newQce.Priority == existingQce.Priority {\n\t\t\t\t\t\tqcc.entries[existingQceIdx] = newQce\n\t\t\t\t\t\tnewQceProcessed = true\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !newQceProcessed {\n\t\t\t\tqcc.entries = append(qcc.entries, newQce)\n\t\t\t}\n\t\t}\n\t\tsort.Slice(qcc.entries, func(idx1, idx2 int) bool {\n\t\t\treturn qcc.entries[idx1].Priority < qcc.entries[idx2].Priority\n\t\t})\n\t\tqcc.prune(qcc.getQueryContextCacheSize(sc))\n\t}\n}\n\nfunc (qcc *queryContextCache) prune(size int) {\n\tif 
len(qcc.entries) > size {\n\t\tqcc.entries = qcc.entries[0:size]\n\t}\n}\n\nfunc (qcc *queryContextCache) getQueryContextCacheSize(sc *snowflakeConn) int {\n\tsizeStr, ok := sc.syncParams.get(queryContextCacheSizeParamName)\n\tif ok {\n\t\tsize, err := strconv.Atoi(*sizeStr)\n\t\tif err != nil {\n\t\t\tlogger.Warnf(\"cannot parse %v as query context cache size: %v\", *sizeStr, err)\n\t\t} else {\n\t\t\treturn size\n\t\t}\n\t}\n\treturn defaultQueryContextCacheSize\n}\n"
  },
  {
    "path": "htap_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql/driver\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net/url\"\n\t\"reflect\"\n\t\"strconv\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc TestSortingByPriority(t *testing.T) {\n\tqcc := queryContextCache{}\n\tsc := htapTestSnowflakeConn()\n\n\tqceA := queryContextEntry{ID: 12, Timestamp: 123, Priority: 7, Context: \"a\"}\n\tqceB := queryContextEntry{ID: 13, Timestamp: 124, Priority: 9, Context: \"b\"}\n\tqceC := queryContextEntry{ID: 14, Timestamp: 125, Priority: 6, Context: \"c\"}\n\tqceD := queryContextEntry{ID: 15, Timestamp: 126, Priority: 8, Context: \"d\"}\n\n\tt.Run(\"Add to empty cache\", func(t *testing.T) {\n\t\tqcc.add(sc, qceA)\n\t\tif !reflect.DeepEqual(qcc.entries, []queryContextEntry{qceA}) {\n\t\t\tt.Fatalf(\"no entries added to cache. %v\", qcc.entries)\n\t\t}\n\t})\n\tt.Run(\"Add another entry with different id, timestamp and priority - greater priority\", func(t *testing.T) {\n\t\tqcc.add(sc, qceB)\n\t\tif !reflect.DeepEqual(qcc.entries, []queryContextEntry{qceA, qceB}) {\n\t\t\tt.Fatalf(\"unexpected qcc entries. %v\", qcc.entries)\n\t\t}\n\t})\n\tt.Run(\"Add another entry with different id, timestamp and priority - lesser priority\", func(t *testing.T) {\n\t\tqcc.add(sc, qceC)\n\t\tif !reflect.DeepEqual(qcc.entries, []queryContextEntry{qceC, qceA, qceB}) {\n\t\t\tt.Fatalf(\"unexpected qcc entries. %v\", qcc.entries)\n\t\t}\n\t})\n\tt.Run(\"Add another entry with different id, timestamp and priority - priority in the middle\", func(t *testing.T) {\n\t\tqcc.add(sc, qceD)\n\t\tif !reflect.DeepEqual(qcc.entries, []queryContextEntry{qceC, qceA, qceD, qceB}) {\n\t\t\tt.Fatalf(\"unexpected qcc entries. 
%v\", qcc.entries)\n\t\t}\n\t})\n}\n\nfunc TestAddingQcesWithTheSameIdAndLaterTimestamp(t *testing.T) {\n\tqcc := queryContextCache{}\n\tsc := htapTestSnowflakeConn()\n\n\tqceA := queryContextEntry{ID: 12, Timestamp: 123, Priority: 7, Context: \"a\"}\n\tqceB := queryContextEntry{ID: 13, Timestamp: 124, Priority: 9, Context: \"b\"}\n\tqceC := queryContextEntry{ID: 12, Timestamp: 125, Priority: 6, Context: \"c\"}\n\tqceD := queryContextEntry{ID: 12, Timestamp: 126, Priority: 6, Context: \"d\"}\n\n\tt.Run(\"Add to empty cache\", func(t *testing.T) {\n\t\tqcc.add(sc, qceA)\n\t\tqcc.add(sc, qceB)\n\t\tif !reflect.DeepEqual(qcc.entries, []queryContextEntry{qceA, qceB}) {\n\t\t\tt.Fatalf(\"no entries added to cache. %v\", qcc.entries)\n\t\t}\n\t})\n\tt.Run(\"Add another entry with different priority\", func(t *testing.T) {\n\t\tqcc.add(sc, qceC)\n\t\tif !reflect.DeepEqual(qcc.entries, []queryContextEntry{qceC, qceB}) {\n\t\t\tt.Fatalf(\"unexpected qcc entries. %v\", qcc.entries)\n\t\t}\n\t})\n\tt.Run(\"Add another entry with same priority\", func(t *testing.T) {\n\t\tqcc.add(sc, qceD)\n\t\tif !reflect.DeepEqual(qcc.entries, []queryContextEntry{qceD, qceB}) {\n\t\t\tt.Fatalf(\"unexpected qcc entries. %v\", qcc.entries)\n\t\t}\n\t})\n}\n\nfunc TestAddingQcesWithTheSameIdAndSameTimestamp(t *testing.T) {\n\tqcc := queryContextCache{}\n\tsc := htapTestSnowflakeConn()\n\n\tqceA := queryContextEntry{ID: 12, Timestamp: 123, Priority: 7, Context: \"a\"}\n\tqceB := queryContextEntry{ID: 13, Timestamp: 124, Priority: 9, Context: \"b\"}\n\tqceC := queryContextEntry{ID: 12, Timestamp: 123, Priority: 6, Context: \"c\"}\n\tqceD := queryContextEntry{ID: 12, Timestamp: 123, Priority: 6, Context: \"d\"}\n\n\tt.Run(\"Add to empty cache\", func(t *testing.T) {\n\t\tqcc.add(sc, qceA)\n\t\tqcc.add(sc, qceB)\n\t\tif !reflect.DeepEqual(qcc.entries, []queryContextEntry{qceA, qceB}) {\n\t\t\tt.Fatalf(\"no entries added to cache. 
%v\", qcc.entries)\n\t\t}\n\t})\n\tt.Run(\"Add another entry with different priority\", func(t *testing.T) {\n\t\tqcc.add(sc, qceC)\n\t\tif !reflect.DeepEqual(qcc.entries, []queryContextEntry{qceC, qceB}) {\n\t\t\tt.Fatalf(\"unexpected qcc entries. %v\", qcc.entries)\n\t\t}\n\t})\n\tt.Run(\"Add another entry with same priority\", func(t *testing.T) {\n\t\tqcc.add(sc, qceD)\n\t\tif !reflect.DeepEqual(qcc.entries, []queryContextEntry{qceC, qceB}) {\n\t\t\tt.Fatalf(\"unexpected qcc entries. %v\", qcc.entries)\n\t\t}\n\t})\n}\n\nfunc TestAddingQcesWithTheSameIdAndEarlierTimestamp(t *testing.T) {\n\tqcc := queryContextCache{}\n\tsc := htapTestSnowflakeConn()\n\n\tqceA := queryContextEntry{ID: 12, Timestamp: 123, Priority: 7, Context: \"a\"}\n\tqceB := queryContextEntry{ID: 13, Timestamp: 124, Priority: 9, Context: \"b\"}\n\tqceC := queryContextEntry{ID: 12, Timestamp: 122, Priority: 6, Context: \"c\"}\n\tqceD := queryContextEntry{ID: 12, Timestamp: 122, Priority: 7, Context: \"d\"}\n\n\tt.Run(\"Add to empty cache\", func(t *testing.T) {\n\t\tqcc.add(sc, qceA)\n\t\tqcc.add(sc, qceB)\n\t\tif !reflect.DeepEqual(qcc.entries, []queryContextEntry{qceA, qceB}) {\n\t\t\tt.Fatalf(\"unexpected qcc entries. %v\", qcc.entries)\n\t\t}\n\t})\n\tt.Run(\"Add another entry with different priority\", func(t *testing.T) {\n\t\tqcc.add(sc, qceC)\n\t\tif !reflect.DeepEqual(qcc.entries, []queryContextEntry{qceA, qceB}) {\n\t\t\tt.Fatalf(\"unexpected qcc entries. %v\", qcc.entries)\n\t\t}\n\t})\n\tt.Run(\"Add another entry with same priority\", func(t *testing.T) {\n\t\tqcc.add(sc, qceD)\n\t\tif !reflect.DeepEqual(qcc.entries, []queryContextEntry{qceA, qceB}) {\n\t\t\tt.Fatalf(\"unexpected qcc entries. 
%v\", qcc.entries)\n\t\t}\n\t})\n}\n\nfunc TestAddingQcesWithDifferentId(t *testing.T) {\n\tqcc := queryContextCache{}\n\tsc := htapTestSnowflakeConn()\n\n\tqceA := queryContextEntry{ID: 12, Timestamp: 123, Priority: 7, Context: \"a\"}\n\tqceB := queryContextEntry{ID: 13, Timestamp: 124, Priority: 9, Context: \"b\"}\n\tqceC := queryContextEntry{ID: 14, Timestamp: 122, Priority: 7, Context: \"c\"}\n\tqceD := queryContextEntry{ID: 15, Timestamp: 122, Priority: 6, Context: \"d\"}\n\n\tt.Run(\"Add to empty cache\", func(t *testing.T) {\n\t\tqcc.add(sc, qceA)\n\t\tqcc.add(sc, qceB)\n\t\tif !reflect.DeepEqual(qcc.entries, []queryContextEntry{qceA, qceB}) {\n\t\t\tt.Fatalf(\"unexpected qcc entries. %v\", qcc.entries)\n\t\t}\n\t})\n\tt.Run(\"Add another entry with same priority\", func(t *testing.T) {\n\t\tqcc.add(sc, qceC)\n\t\tif !reflect.DeepEqual(qcc.entries, []queryContextEntry{qceC, qceB}) {\n\t\t\tt.Fatalf(\"unexpected qcc entries. %v\", qcc.entries)\n\t\t}\n\t})\n\tt.Run(\"Add another entry with different priority\", func(t *testing.T) {\n\t\tqcc.add(sc, qceD)\n\t\tif !reflect.DeepEqual(qcc.entries, []queryContextEntry{qceD, qceC, qceB}) {\n\t\t\tt.Fatalf(\"unexpected qcc entries. 
%v\", qcc.entries)\n\t\t}\n\t})\n}\n\nfunc TestAddingQueryContextCacheEntry(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tt.Run(\"First query (may be on empty cache)\", func(t *testing.T) {\n\t\t\tentriesBefore := make([]queryContextEntry, len(sct.sc.queryContextCache.entries))\n\t\t\tcopy(entriesBefore, sct.sc.queryContextCache.entries)\n\t\t\tsct.mustQuery(\"SELECT 1\", nil)\n\t\t\tentriesAfter := sct.sc.queryContextCache.entries\n\n\t\t\tif !containsNewEntries(entriesAfter, entriesBefore) {\n\t\t\t\tt.Error(\"no new entries added to the query context cache\")\n\t\t\t}\n\t\t})\n\n\t\tt.Run(\"Second query (cache should not be empty)\", func(t *testing.T) {\n\t\t\tentriesBefore := make([]queryContextEntry, len(sct.sc.queryContextCache.entries))\n\t\t\tcopy(entriesBefore, sct.sc.queryContextCache.entries)\n\t\t\tif len(entriesBefore) == 0 {\n\t\t\t\tt.Fatalf(\"cache should not be empty after first query\")\n\t\t\t}\n\t\t\tsct.mustQuery(\"SELECT 2\", nil)\n\t\t\tentriesAfter := sct.sc.queryContextCache.entries\n\n\t\t\tif !containsNewEntries(entriesAfter, entriesBefore) {\n\t\t\t\tt.Error(\"no new entries added to the query context cache\")\n\t\t\t}\n\t\t})\n\t})\n}\n\n// containsNewEntries reports whether entriesAfter contains at least one entry\n// that is not present in entriesBefore.\nfunc containsNewEntries(entriesAfter []queryContextEntry, entriesBefore []queryContextEntry) bool {\n\tif len(entriesAfter) > len(entriesBefore) {\n\t\treturn true\n\t}\n\n\tfor _, entryAfter := range entriesAfter {\n\t\tfound := false\n\t\tfor _, entryBefore := range entriesBefore {\n\t\t\tif reflect.DeepEqual(entryBefore, entryAfter) {\n\t\t\t\tfound = true\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tif !found {\n\t\t\treturn true\n\t\t}\n\t}\n\n\treturn false\n}\n\nfunc TestPruneBySessionValue(t *testing.T) {\n\tqce1 := queryContextEntry{1, 1, 1, \"\"}\n\tqce2 := queryContextEntry{2, 2, 2, \"\"}\n\tqce3 := queryContextEntry{3, 3, 3, \"\"}\n\n\ttestcases := []struct {\n\t\tsize     string\n\t\texpected []queryContextEntry\n\t}{\n\t\t{\n\t\t\tsize:     \"1\",\n\t\t\texpected: []queryContextEntry{qce1},\n\t\t},\n\t\t{\n\t\t\tsize:     \"2\",\n\t\t\texpected: 
[]queryContextEntry{qce1, qce2},\n\t\t},\n\t\t{\n\t\t\tsize:     \"3\",\n\t\t\texpected: []queryContextEntry{qce1, qce2, qce3},\n\t\t},\n\t\t{\n\t\t\tsize:     \"4\",\n\t\t\texpected: []queryContextEntry{qce1, qce2, qce3},\n\t\t},\n\t}\n\n\tfor _, tc := range testcases {\n\t\tt.Run(tc.size, func(t *testing.T) {\n\t\t\tparams := map[string]*string{\n\t\t\t\tqueryContextCacheSizeParamName: &tc.size,\n\t\t\t}\n\t\t\tsc := &snowflakeConn{\n\t\t\t\tcfg:        &Config{},\n\t\t\t\tsyncParams: syncParams{params: params},\n\t\t\t}\n\n\t\t\tqcc := queryContextCache{}\n\n\t\t\tqcc.add(sc, qce1)\n\t\t\tqcc.add(sc, qce2)\n\t\t\tqcc.add(sc, qce3)\n\n\t\t\tif !reflect.DeepEqual(qcc.entries, tc.expected) {\n\t\t\t\tt.Errorf(\"unexpected cache entries. expected: %v, got: %v\", tc.expected, qcc.entries)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestPruneByDefaultValue(t *testing.T) {\n\tqce1 := queryContextEntry{1, 1, 1, \"\"}\n\tqce2 := queryContextEntry{2, 2, 2, \"\"}\n\tqce3 := queryContextEntry{3, 3, 3, \"\"}\n\tqce4 := queryContextEntry{4, 4, 4, \"\"}\n\tqce5 := queryContextEntry{5, 5, 5, \"\"}\n\tqce6 := queryContextEntry{6, 6, 6, \"\"}\n\n\tsc := &snowflakeConn{\n\t\tcfg: &Config{},\n\t}\n\n\tqcc := queryContextCache{}\n\tqcc.add(sc, qce1)\n\tqcc.add(sc, qce2)\n\tqcc.add(sc, qce3)\n\tqcc.add(sc, qce4)\n\tqcc.add(sc, qce5)\n\n\tif len(qcc.entries) != 5 {\n\t\tt.Fatalf(\"Expected 5 elements, got: %v\", len(qcc.entries))\n\t}\n\n\tqcc.add(sc, qce6)\n\tif len(qcc.entries) != 5 {\n\t\tt.Fatalf(\"Expected 5 elements, got: %v\", len(qcc.entries))\n\t}\n}\n\nfunc TestNoQcesClearsCache(t *testing.T) {\n\tqce1 := queryContextEntry{1, 1, 1, \"\"}\n\n\tsc := &snowflakeConn{\n\t\tcfg: &Config{},\n\t}\n\n\tqcc := queryContextCache{}\n\tqcc.add(sc, qce1)\n\n\tif len(qcc.entries) != 1 {\n\t\tt.Fatalf(\"improperly inited cache\")\n\t}\n\n\tqcc.add(sc)\n\n\tif len(qcc.entries) != 0 {\n\t\tt.Errorf(\"after adding empty context list cache should be cleared\")\n\t}\n}\n\nfunc 
TestQCCUpdatedAfterQueryResponse(t *testing.T) {\n\t// Create initial QCC entry\n\tinitialEntry := queryContextEntry{ID: 1, Timestamp: 100, Priority: 1, Context: \"initial\"}\n\n\t// Create query context that would be returned in the response\n\tnewEntry := queryContextEntry{ID: 2, Timestamp: 200, Priority: 2, Context: \"new\"}\n\tqueryContextJSON := fmt.Sprintf(`{\"entries\":[{\"id\":%d,\"timestamp\":%d,\"priority\":%d,\"context\":\"%s\"}]}`,\n\t\tnewEntry.ID, newEntry.Timestamp, newEntry.Priority, newEntry.Context)\n\n\ttestCases := []bool{true, false}\n\n\tfor _, success := range testCases {\n\t\tt.Run(fmt.Sprintf(\"success=%v\", success), func(t *testing.T) {\n\t\t\t// Mock response with query context\n\t\t\tpostQueryMock := func(_ context.Context, _ *snowflakeRestful, _ *url.Values,\n\t\t\t\t_ map[string]string, _ []byte, _ time.Duration, _ UUID, _ *Config) (*execResponse, error) {\n\t\t\t\tcode := \"0\"\n\t\t\t\tmessage := \"\"\n\t\t\t\tif !success {\n\t\t\t\t\tcode = \"1234\"\n\t\t\t\t\tmessage = \"Query failed\"\n\t\t\t\t}\n\t\t\t\treturn &execResponse{\n\t\t\t\t\tData: execResponseData{\n\t\t\t\t\t\tQueryContext: json.RawMessage(queryContextJSON),\n\t\t\t\t\t},\n\t\t\t\t\tMessage: message,\n\t\t\t\t\tCode:    code,\n\t\t\t\t\tSuccess: success,\n\t\t\t\t}, nil\n\t\t\t}\n\n\t\t\tsr := &snowflakeRestful{\n\t\t\t\tFuncPostQuery: postQueryMock,\n\t\t\t}\n\n\t\t\tsc := &snowflakeConn{\n\t\t\t\tcfg:  &Config{},\n\t\t\t\trest: sr,\n\t\t\t}\n\t\t\tsc.queryContextCache.add(sc, initialEntry)\n\n\t\t\t// Execute query\n\t\t\t_, err := sc.ExecContext(context.Background(), \"SELECT 1\", nil)\n\t\t\tif !success {\n\t\t\t\tassertNotNilF(t, err, \"expected error for failed query\")\n\t\t\t} else {\n\t\t\t\tassertNilF(t, err, \"unexpected error for successful query\")\n\t\t\t}\n\n\t\t\t// Verify QCC WAS updated in both cases - should now contain both entries\n\t\t\tassertEqualE(t, len(sc.queryContextCache.entries), 2, \"expected 2 entries in QCC\")\n\n\t\t\t// Verify new 
entry was added (entries are sorted by priority)\n\t\t\tfound := false\n\t\t\tfor _, entry := range sc.queryContextCache.entries {\n\t\t\t\tif entry.ID == newEntry.ID {\n\t\t\t\t\tfound = true\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tassertTrueE(t, found, \"new QCC entry not found after query\")\n\t\t})\n\t}\n}\n\nfunc htapTestSnowflakeConn() *snowflakeConn {\n\treturn &snowflakeConn{\n\t\tcfg: &Config{},\n\t}\n}\n\nfunc TestQueryContextCacheDisabled(t *testing.T) {\n\tcustomDsn := dsn + \"&disableQueryContextCache=true\"\n\trunSnowflakeConnTestWithConfig(t, &testConfig{dsn: customDsn}, func(sct *SCTest) {\n\t\tsct.mustExec(\"SELECT 1\", nil)\n\t\tif len(sct.sc.queryContextCache.entries) > 0 {\n\t\t\tt.Error(\"should not contain any entries\")\n\t\t}\n\t})\n}\n\nfunc TestHybridTablesE2E(t *testing.T) {\n\tskipOnJenkins(t, \"HTAP is not enabled on environment\")\n\tif runningOnGithubAction() && !runningOnAWS() {\n\t\tt.Skip(\"HTAP is enabled only on AWS\")\n\t}\n\trunID := time.Now().UnixMilli()\n\ttestDb1 := fmt.Sprintf(\"hybrid_db_test_%v\", runID)\n\ttestDb2 := fmt.Sprintf(\"hybrid_db_test_%v_2\", runID)\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tdbQuery := sct.mustQuery(\"SELECT CURRENT_DATABASE()\", nil)\n\t\tdefer func() {\n\t\t\tassertNilF(t, dbQuery.Close())\n\t\t}()\n\t\tcurrentDb := make([]driver.Value, 1)\n\t\tassertNilF(t, dbQuery.Next(currentDb))\n\t\tdefer func() {\n\t\t\tsct.mustExec(fmt.Sprintf(\"USE DATABASE %v\", currentDb[0]), nil)\n\t\t\tsct.mustExec(fmt.Sprintf(\"DROP DATABASE IF EXISTS %v\", testDb1), nil)\n\t\t\tsct.mustExec(fmt.Sprintf(\"DROP DATABASE IF EXISTS %v\", testDb2), nil)\n\t\t}()\n\n\t\tt.Run(\"Run tests on first database\", func(t *testing.T) {\n\t\t\tsct.mustExec(fmt.Sprintf(\"CREATE DATABASE IF NOT EXISTS %v\", testDb1), nil)\n\t\t\tsct.mustExec(\"CREATE HYBRID TABLE test_hybrid_table (id INT PRIMARY KEY, text VARCHAR)\", nil)\n\n\t\t\tsct.mustExec(\"INSERT INTO test_hybrid_table VALUES (1, 'a')\", nil)\n\t\t\trows 
:= sct.mustQuery(\"SELECT * FROM test_hybrid_table\", nil)\n\t\t\tdefer func() {\n\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t}()\n\t\t\trow := make([]driver.Value, 2)\n\t\t\tassertNilF(t, rows.Next(row))\n\t\t\tif row[0] != \"1\" || row[1] != \"a\" {\n\t\t\t\tt.Errorf(\"expected 1, got %v and expected a, got %v\", row[0], row[1])\n\t\t\t}\n\n\t\t\tsct.mustExec(\"INSERT INTO test_hybrid_table VALUES (2, 'b')\", nil)\n\t\t\trows2 := sct.mustQuery(\"SELECT * FROM test_hybrid_table\", nil)\n\t\t\tdefer func() {\n\t\t\t\tassertNilF(t, rows2.Close())\n\t\t\t}()\n\t\t\tassertNilF(t, rows2.Next(row))\n\t\t\tif row[0] != \"1\" || row[1] != \"a\" {\n\t\t\t\tt.Errorf(\"expected 1, got %v and expected a, got %v\", row[0], row[1])\n\t\t\t}\n\t\t\tassertNilF(t, rows2.Next(row))\n\t\t\tif row[0] != \"2\" || row[1] != \"b\" {\n\t\t\t\tt.Errorf(\"expected 2, got %v and expected b, got %v\", row[0], row[1])\n\t\t\t}\n\t\t\tif len(sct.sc.queryContextCache.entries) != 2 {\n\t\t\t\tt.Errorf(\"expected two entries in query context cache, got: %v\", sct.sc.queryContextCache.entries)\n\t\t\t}\n\t\t})\n\t\tt.Run(\"Run tests on second database\", func(t *testing.T) {\n\t\t\tsct.mustExec(fmt.Sprintf(\"CREATE DATABASE IF NOT EXISTS %v\", testDb2), nil)\n\t\t\tsct.mustExec(\"CREATE HYBRID TABLE test_hybrid_table_2 (id INT PRIMARY KEY, text VARCHAR)\", nil)\n\t\t\tsct.mustExec(\"INSERT INTO test_hybrid_table_2 VALUES (3, 'c')\", nil)\n\n\t\t\trows := sct.mustQuery(\"SELECT * FROM test_hybrid_table_2\", nil)\n\t\t\tdefer func() {\n\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t}()\n\t\t\trow := make([]driver.Value, 2)\n\t\t\tassertNilF(t, rows.Next(row))\n\t\t\tif row[0] != \"3\" || row[1] != \"c\" {\n\t\t\t\tt.Errorf(\"expected 3, got %v and expected c, got %v\", row[0], row[1])\n\t\t\t}\n\t\t\tif len(sct.sc.queryContextCache.entries) != 3 {\n\t\t\t\tt.Errorf(\"expected three entries in query context cache, got: %v\", sct.sc.queryContextCache.entries)\n\t\t\t}\n\t\t})\n\t\tt.Run(\"Run tests on 
first database again\", func(t *testing.T) {\n\t\t\tsct.mustExec(fmt.Sprintf(\"USE DATABASE %v\", testDb1), nil)\n\n\t\t\tsct.mustExec(\"INSERT INTO test_hybrid_table VALUES (4, 'd')\", nil)\n\n\t\t\trows := sct.mustQuery(\"SELECT * FROM test_hybrid_table\", nil)\n\t\t\tdefer func() {\n\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t}()\n\t\t\tif len(sct.sc.queryContextCache.entries) != 3 {\n\t\t\t\tt.Errorf(\"expected three entries in query context cache, got: %v\", sct.sc.queryContextCache.entries)\n\t\t\t}\n\t\t})\n\t})\n}\n\nfunc TestHTAPOptimizations(t *testing.T) {\n\tif runningOnGithubAction() {\n\t\tt.Skip(\"insufficient permissions\")\n\t}\n\tfor _, useHtapOptimizations := range []bool{true, false} {\n\t\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\t\tt.Run(\"useHtapOptimizations=\"+strconv.FormatBool(useHtapOptimizations), func(t *testing.T) {\n\t\t\t\tif useHtapOptimizations {\n\t\t\t\t\tsct.mustExec(\"ALTER SESSION SET ENABLE_SNOW_654741_FOR_TESTING = true\", nil)\n\t\t\t\t}\n\t\t\t\trunID := time.Now().UnixMilli()\n\t\t\t\tt.Run(\"Schema\", func(t *testing.T) {\n\t\t\t\t\tnewSchema := fmt.Sprintf(\"test_schema_%v\", runID)\n\t\t\t\t\tif strings.EqualFold(sct.sc.cfg.Schema, newSchema) {\n\t\t\t\t\t\tt.Errorf(\"schema should not be switched\")\n\t\t\t\t\t}\n\n\t\t\t\t\tsct.mustExec(fmt.Sprintf(\"CREATE SCHEMA %v\", newSchema), nil)\n\t\t\t\t\tdefer sct.mustExec(fmt.Sprintf(\"DROP SCHEMA %v\", newSchema), nil)\n\n\t\t\t\t\tif !strings.EqualFold(sct.sc.cfg.Schema, newSchema) {\n\t\t\t\t\t\tt.Errorf(\"schema should be switched, expected %v, got %v\", newSchema, sct.sc.cfg.Schema)\n\t\t\t\t\t}\n\n\t\t\t\t\tquery := sct.mustQuery(\"SELECT 1\", nil)\n\t\t\t\t\tquery.Close()\n\n\t\t\t\t\tif !strings.EqualFold(sct.sc.cfg.Schema, newSchema) {\n\t\t\t\t\t\tt.Errorf(\"schema should be switched, expected %v, got %v\", newSchema, sct.sc.cfg.Schema)\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t\tt.Run(\"Database\", func(t *testing.T) {\n\t\t\t\t\tnewDatabase := 
fmt.Sprintf(\"test_database_%v\", runID)\n\t\t\t\t\tif strings.EqualFold(sct.sc.cfg.Database, newDatabase) {\n\t\t\t\t\t\tt.Errorf(\"database should not be switched\")\n\t\t\t\t\t}\n\n\t\t\t\t\tsct.mustExec(fmt.Sprintf(\"CREATE DATABASE %v\", newDatabase), nil)\n\t\t\t\t\tdefer sct.mustExec(fmt.Sprintf(\"DROP DATABASE %v\", newDatabase), nil)\n\n\t\t\t\t\tif !strings.EqualFold(sct.sc.cfg.Database, newDatabase) {\n\t\t\t\t\t\tt.Errorf(\"database should be switched, expected %v, got %v\", newDatabase, sct.sc.cfg.Database)\n\t\t\t\t\t}\n\n\t\t\t\t\tquery := sct.mustQuery(\"SELECT 1\", nil)\n\t\t\t\t\tquery.Close()\n\n\t\t\t\t\tif !strings.EqualFold(sct.sc.cfg.Database, newDatabase) {\n\t\t\t\t\t\tt.Errorf(\"database should be switched, expected %v, got %v\", newDatabase, sct.sc.cfg.Database)\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t\tt.Run(\"Warehouse\", func(t *testing.T) {\n\t\t\t\t\tnewWarehouse := fmt.Sprintf(\"test_warehouse_%v\", runID)\n\t\t\t\t\tif strings.EqualFold(sct.sc.cfg.Warehouse, newWarehouse) {\n\t\t\t\t\t\tt.Errorf(\"warehouse should not be switched\")\n\t\t\t\t\t}\n\n\t\t\t\t\tsct.mustExec(fmt.Sprintf(\"CREATE WAREHOUSE %v\", newWarehouse), nil)\n\t\t\t\t\tdefer sct.mustExec(fmt.Sprintf(\"DROP WAREHOUSE %v\", newWarehouse), nil)\n\n\t\t\t\t\tif !strings.EqualFold(sct.sc.cfg.Warehouse, newWarehouse) {\n\t\t\t\t\t\tt.Errorf(\"warehouse should be switched, expected %v, got %v\", newWarehouse, sct.sc.cfg.Warehouse)\n\t\t\t\t\t}\n\n\t\t\t\t\tquery := sct.mustQuery(\"SELECT 1\", nil)\n\t\t\t\t\tquery.Close()\n\n\t\t\t\t\tif !strings.EqualFold(sct.sc.cfg.Warehouse, newWarehouse) {\n\t\t\t\t\t\tt.Errorf(\"warehouse should be switched, expected %v, got %v\", newWarehouse, sct.sc.cfg.Warehouse)\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t\tt.Run(\"Role\", func(t *testing.T) {\n\t\t\t\t\tif strings.EqualFold(sct.sc.cfg.Role, \"PUBLIC\") {\n\t\t\t\t\t\tt.Errorf(\"role should not be public for this test\")\n\t\t\t\t\t}\n\n\t\t\t\t\tsct.mustExec(\"USE ROLE public\", 
nil)\n\n\t\t\t\t\tassertTrueE(t, strings.EqualFold(sct.sc.cfg.Role, \"PUBLIC\"), fmt.Sprintf(\"role should be switched, expected public, got %v\", sct.sc.cfg.Role))\n\n\t\t\t\t\tquery := sct.mustQuery(\"SELECT 1\", nil)\n\t\t\t\t\tquery.Close()\n\n\t\t\t\t\tassertTrueE(t, strings.EqualFold(sct.sc.cfg.Role, \"PUBLIC\"), fmt.Sprintf(\"role should be switched, expected public, got %v\", sct.sc.cfg.Role))\n\t\t\t\t})\n\t\t\t\tt.Run(\"Session param - DATE_OUTPUT_FORMAT\", func(t *testing.T) {\n\t\t\t\t\tdateFormat, _ := sct.sc.syncParams.get(\"date_output_format\")\n\t\t\t\t\tassertTrueE(t, strings.EqualFold(*dateFormat, \"YYYY-MM-DD\"), fmt.Sprintf(\"should use default date_output_format, but got: %v\", *dateFormat))\n\n\t\t\t\t\tsct.mustExec(\"ALTER SESSION SET DATE_OUTPUT_FORMAT = 'DD-MM-YYYY'\", nil)\n\t\t\t\t\tdefer sct.mustExec(\"ALTER SESSION SET DATE_OUTPUT_FORMAT = 'YYYY-MM-DD'\", nil)\n\n\t\t\t\t\tdateFormat, _ = sct.sc.syncParams.get(\"date_output_format\")\n\t\t\t\t\tassertTrueE(t, strings.EqualFold(*dateFormat, \"DD-MM-YYYY\"), fmt.Sprintf(\"date output format should be switched, expected DD-MM-YYYY, got %v\", *dateFormat))\n\n\t\t\t\t\tquery := sct.mustQuery(\"SELECT 1\", nil)\n\t\t\t\t\tquery.Close()\n\n\t\t\t\t\tdateFormat, _ = sct.sc.syncParams.get(\"date_output_format\")\n\t\t\t\t\tassertTrueE(t, strings.EqualFold(*dateFormat, \"DD-MM-YYYY\"), fmt.Sprintf(\"date output format should be switched, expected DD-MM-YYYY, got %v\", *dateFormat))\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t}\n}\n\nfunc TestConnIsCleanAfterClose(t *testing.T) {\n\t// We create a new db here so that we don't use the default pool, as we may leave it in a dirty state.\n\tt.Skip(\"Fails because the connection is returned to the pool dirty\")\n\tctx := context.Background()\n\trunID := time.Now().UnixMilli()\n\n\tdb := openDB(t)\n\tdefer db.Close()\n\tdb.SetMaxOpenConns(1)\n\n\tconn, err := db.Conn(ctx)\n\tassertNilF(t, err)\n\tdefer conn.Close()\n\n\tdbt := DBTest{t, conn}\n\n\tdbt.mustExec(forceJSON)\n\n\tvar dbName string\n\trows1 := dbt.mustQuery(\"SELECT CURRENT_DATABASE()\")\n\trows1.Next()\n\tassertNilF(t, rows1.Scan(&dbName))\n\n\tnewDbName := fmt.Sprintf(\"test_database_%v\", runID)\n\tdbt.mustExec(\"CREATE DATABASE \" + newDbName)\n\n\tassertNilF(t, rows1.Close())\n\tassertNilF(t, conn.Close())\n\n\tconn2, err := db.Conn(ctx)\n\tassertNilF(t, err)\n\n\tdbt2 := DBTest{t, conn2}\n\n\tvar dbName2 string\n\trows2 := dbt2.mustQuery(\"SELECT CURRENT_DATABASE()\")\n\tdefer func() {\n\t\tassertNilF(t, rows2.Close())\n\t}()\n\trows2.Next()\n\tassertNilF(t, rows2.Scan(&dbName2))\n\n\tassertTrueE(t, strings.EqualFold(dbName, dbName2), \"fresh connection from pool should have original database\")\n}\n"
  },
  {
    "path": "internal/arrow/arrow.go",
    "content": "package arrow\n\nimport (\n\t\"context\"\n\t\"time\"\n\n\t\"github.com/apache/arrow-go/v18/arrow\"\n\t\"github.com/apache/arrow-go/v18/arrow/memory\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/query\"\n)\n\n// contextKey is a private type for context keys used by this package.\ntype contextKey string\n\n// Context keys for arrow batches configuration.\nconst (\n\tctxArrowBatches             contextKey = \"ARROW_BATCHES\"\n\tctxArrowBatchesTimestampOpt contextKey = \"ARROW_BATCHES_TIMESTAMP_OPTION\"\n\tctxArrowBatchesUtf8Validate contextKey = \"ENABLE_ARROW_BATCHES_UTF8_VALIDATION\"\n\tctxHigherPrecision          contextKey = \"ENABLE_HIGHER_PRECISION\"\n)\n\n// --- Timestamp option ---\n\n// TimestampOption controls how Snowflake timestamps are converted in arrow batches.\ntype TimestampOption int\n\nconst (\n\t// UseNanosecondTimestamp converts Snowflake timestamps to arrow timestamps with nanosecond precision.\n\tUseNanosecondTimestamp TimestampOption = iota\n\t// UseMicrosecondTimestamp converts Snowflake timestamps to arrow timestamps with microsecond precision.\n\tUseMicrosecondTimestamp\n\t// UseMillisecondTimestamp converts Snowflake timestamps to arrow timestamps with millisecond precision.\n\tUseMillisecondTimestamp\n\t// UseSecondTimestamp converts Snowflake timestamps to arrow timestamps with second precision.\n\tUseSecondTimestamp\n\t// UseOriginalTimestamp leaves Snowflake timestamps in their original format without conversion.\n\tUseOriginalTimestamp\n)\n\n// --- Context accessors ---\n\n// EnableArrowBatches sets the arrow batches mode flag in the context.\nfunc EnableArrowBatches(ctx context.Context) context.Context {\n\treturn context.WithValue(ctx, ctxArrowBatches, true)\n}\n\n// BatchesEnabled checks if arrow batches mode is enabled.\nfunc BatchesEnabled(ctx context.Context) bool {\n\tv := ctx.Value(ctxArrowBatches)\n\tif v == nil {\n\t\treturn false\n\t}\n\td, ok := v.(bool)\n\treturn ok && d\n}\n\n// WithTimestampOption 
sets the arrow batches timestamp option in the context.\nfunc WithTimestampOption(ctx context.Context, option TimestampOption) context.Context {\n\treturn context.WithValue(ctx, ctxArrowBatchesTimestampOpt, option)\n}\n\n// GetTimestampOption returns the timestamp option from the context.\nfunc GetTimestampOption(ctx context.Context) TimestampOption {\n\tv := ctx.Value(ctxArrowBatchesTimestampOpt)\n\tif v == nil {\n\t\treturn UseNanosecondTimestamp\n\t}\n\to, ok := v.(TimestampOption)\n\tif !ok {\n\t\treturn UseNanosecondTimestamp\n\t}\n\treturn o\n}\n\n// EnableUtf8Validation enables UTF-8 validation for arrow batch string columns.\nfunc EnableUtf8Validation(ctx context.Context) context.Context {\n\treturn context.WithValue(ctx, ctxArrowBatchesUtf8Validate, true)\n}\n\n// Utf8ValidationEnabled checks if UTF-8 validation is enabled.\nfunc Utf8ValidationEnabled(ctx context.Context) bool {\n\tv := ctx.Value(ctxArrowBatchesUtf8Validate)\n\tif v == nil {\n\t\treturn false\n\t}\n\td, ok := v.(bool)\n\treturn ok && d\n}\n\n// WithHigherPrecision enables higher precision mode in the context.\nfunc WithHigherPrecision(ctx context.Context) context.Context {\n\treturn context.WithValue(ctx, ctxHigherPrecision, true)\n}\n\n// HigherPrecisionEnabled checks if higher precision is enabled.\nfunc HigherPrecisionEnabled(ctx context.Context) bool {\n\tv := ctx.Value(ctxHigherPrecision)\n\tif v == nil {\n\t\treturn false\n\t}\n\td, ok := v.(bool)\n\treturn ok && d\n}\n\n// BatchRaw holds raw (untransformed) arrow records for a single batch.\ntype BatchRaw struct {\n\tRecords  *[]arrow.Record\n\tIndex    int\n\tRowCount int\n\tLocation *time.Location\n\tDownload func(ctx context.Context) (*[]arrow.Record, int, error)\n}\n\n// BatchDataInfo contains all information needed to build arrow batches.\ntype BatchDataInfo struct {\n\tBatches   []BatchRaw\n\tRowTypes  []query.ExecResponseRowType\n\tAllocator memory.Allocator\n\tCtx       context.Context\n\tQueryID   string\n}\n\n// 
BatchDataProvider is implemented by SnowflakeRows to expose raw arrow batch data.\ntype BatchDataProvider interface {\n\tGetArrowBatches() (*BatchDataInfo, error)\n}\n"
  },
  {
    "path": "internal/compilation/cgo_disabled.go",
    "content": "//go:build !cgo\n\npackage compilation\n\n// CgoEnabled is set to false if CGO is disabled.\nvar CgoEnabled = false\n"
  },
  {
    "path": "internal/compilation/cgo_enabled.go",
    "content": "//go:build cgo\n\npackage compilation\n\n// CgoEnabled is set to true if CGO is enabled.\nvar CgoEnabled = true\n"
  },
  {
    "path": "internal/compilation/linking_mode.go",
    "content": "package compilation\n\nimport (\n\t\"debug/elf\"\n\t\"fmt\"\n\t\"runtime\"\n\t\"sync\"\n)\n\n// LinkingMode describes what linking mode was detected for the current binary.\ntype LinkingMode int\n\nconst (\n\t// StaticLinking means the binary is statically linked.\n\tStaticLinking LinkingMode = iota\n\t// DynamicLinking means the binary is dynamically linked.\n\tDynamicLinking\n\t// UnknownLinking means the driver couldn't determine the linking mode or it is not relevant (it is relevant on Linux only).\n\tUnknownLinking\n)\n\nfunc (lm LinkingMode) String() string {\n\tswitch lm {\n\tcase StaticLinking:\n\t\treturn \"static\"\n\tcase DynamicLinking:\n\t\treturn \"dynamic\"\n\tdefault:\n\t\treturn \"unknown\"\n\t}\n}\n\n// CheckDynamicLinking checks whether the current binary has a dynamic linker (PT_INTERP).\n// A statically linked glibc binary will crash with SIGFPE if dlopen is called,\n// so this check allows us to skip minicore loading gracefully.\n// The result is cached so the ELF parsing only happens once.\nfunc CheckDynamicLinking() (LinkingMode, error) {\n\tlinkingModeOnce.Do(func() {\n\t\tif runtime.GOOS != \"linux\" {\n\t\t\tlinkingModeCached = UnknownLinking\n\t\t\treturn\n\t\t}\n\t\tf, err := elf.Open(\"/proc/self/exe\")\n\t\tif err != nil {\n\t\t\tlinkingModeCached = UnknownLinking\n\t\t\tlinkingModeErr = fmt.Errorf(\"cannot open /proc/self/exe: %v\", err)\n\t\t\treturn\n\t\t}\n\t\tdefer func() {\n\t\t\t_ = f.Close()\n\t\t}()\n\t\tfor _, p := range f.Progs {\n\t\t\tif p.Type == elf.PT_INTERP {\n\t\t\t\tlinkingModeCached = DynamicLinking\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t\tlinkingModeCached = StaticLinking\n\t})\n\treturn linkingModeCached, linkingModeErr\n}\n\nvar (\n\tlinkingModeOnce   sync.Once\n\tlinkingModeCached LinkingMode\n\tlinkingModeErr    error\n)\n"
  },
  {
    "path": "internal/compilation/minicore_disabled.go",
    "content": "//go:build minicore_disabled\n\npackage compilation\n\n// MinicoreEnabled is set to false when building with -tags minicore_disabled.\n// This disables minicore at compile time, which is useful for statically linked binaries\n// that cannot use dynamic library loading (dlopen).\n//\n// Example: go build -tags minicore_disabled ./...\nvar MinicoreEnabled = false\n"
  },
  {
    "path": "internal/compilation/minicore_enabled.go",
    "content": "//go:build !minicore_disabled\n\npackage compilation\n\n// MinicoreEnabled is set to true by default. Build with -tags minicore_disabled to disable\n// minicore at compile time. This is useful when building statically linked binaries,\n// as minicore requires dynamic library loading (dlopen) which is incompatible with static linking.\n//\n// Example: go build -tags minicore_disabled ./...\nvar MinicoreEnabled = true\n"
  },
  {
    "path": "internal/config/assert_test.go",
    "content": "package config\n\nimport (\n\t\"fmt\"\n\t\"reflect\"\n\t\"slices\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\tsflogger \"github.com/snowflakedb/gosnowflake/v2/internal/logger\"\n)\n\n// TODO temporary - move this to a common test utils package when we have one\nfunc maskSecrets(text string) string {\n\treturn sflogger.MaskSecrets(text)\n}\n\nfunc assertNilE(t *testing.T, actual any, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateNil(actual, descriptions...))\n}\n\nfunc assertNilF(t *testing.T, actual any, descriptions ...string) {\n\tt.Helper()\n\tfatalOnNonEmpty(t, validateNil(actual, descriptions...))\n}\n\nfunc assertNotNilF(t *testing.T, actual any, descriptions ...string) {\n\tt.Helper()\n\tfatalOnNonEmpty(t, validateNotNil(actual, descriptions...))\n}\n\nfunc assertEqualE(t *testing.T, actual any, expected any, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateEqual(actual, expected, descriptions...))\n}\n\nfunc assertEqualF(t *testing.T, actual any, expected any, descriptions ...string) {\n\tt.Helper()\n\tfatalOnNonEmpty(t, validateEqual(actual, expected, descriptions...))\n}\n\nfunc assertTrueE(t *testing.T, actual bool, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateEqual(actual, true, descriptions...))\n}\n\nfunc assertTrueF(t *testing.T, actual bool, descriptions ...string) {\n\tt.Helper()\n\tfatalOnNonEmpty(t, validateEqual(actual, true, descriptions...))\n}\n\nfunc assertFalseE(t *testing.T, actual bool, descriptions ...string) {\n\tt.Helper()\n\terrorOnNonEmpty(t, validateEqual(actual, false, descriptions...))\n}\n\nfunc fatalOnNonEmpty(t *testing.T, errMsg string) {\n\tif errMsg != \"\" {\n\t\tt.Helper()\n\t\tt.Fatal(formatErrorMessage(errMsg))\n\t}\n}\n\nfunc errorOnNonEmpty(t *testing.T, errMsg string) {\n\tif errMsg != \"\" {\n\t\tt.Helper()\n\t\tt.Error(formatErrorMessage(errMsg))\n\t}\n}\n\nfunc formatErrorMessage(errMsg string) string {\n\treturn 
fmt.Sprintf(\"[%s] %s\", time.Now().Format(time.RFC3339Nano), maskSecrets(errMsg))\n}\n\nfunc validateNil(actual any, descriptions ...string) string {\n\tif isNil(actual) {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\treturn fmt.Sprintf(\"expected \\\"%s\\\" to be nil but was not. %s\", maskSecrets(fmt.Sprintf(\"%v\", actual)), desc)\n}\n\nfunc validateNotNil(actual any, descriptions ...string) string {\n\tif !isNil(actual) {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\treturn fmt.Sprintf(\"expected to be not nil but was nil. %s\", desc)\n}\n\nfunc validateEqual(actual any, expected any, descriptions ...string) string {\n\tif expected == actual {\n\t\treturn \"\"\n\t}\n\tdesc := joinDescriptions(descriptions...)\n\treturn fmt.Sprintf(\"expected \\\"%s\\\" to be equal to \\\"%s\\\" but was not. %s\",\n\t\tmaskSecrets(fmt.Sprintf(\"%v\", actual)),\n\t\tmaskSecrets(fmt.Sprintf(\"%v\", expected)),\n\t\tdesc)\n}\n\nfunc joinDescriptions(descriptions ...string) string {\n\treturn strings.Join(descriptions, \" \")\n}\n\nfunc isNil(value any) bool {\n\tif value == nil {\n\t\treturn true\n\t}\n\tval := reflect.ValueOf(value)\n\treturn slices.Contains([]reflect.Kind{reflect.Pointer, reflect.Slice, reflect.Map, reflect.Interface, reflect.Func}, val.Kind()) && val.IsNil()\n}\n"
  },
  {
    "path": "internal/config/auth_type.go",
    "content": "package config\n\nimport (\n\t\"net/url\"\n\t\"strings\"\n\n\tsferrors \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n)\n\n// AuthType indicates the type of authentication in Snowflake\ntype AuthType int\n\nconst (\n\t// AuthTypeSnowflake is the general username and password authentication\n\tAuthTypeSnowflake AuthType = iota\n\t// AuthTypeOAuth is the OAuth authentication\n\tAuthTypeOAuth\n\t// AuthTypeExternalBrowser is to use a browser to access a federated identity provider and perform SSO authentication\n\tAuthTypeExternalBrowser\n\t// AuthTypeOkta is to use a native Okta URL to perform SSO authentication on Okta\n\tAuthTypeOkta\n\t// AuthTypeJwt is to use a JWT to perform authentication\n\tAuthTypeJwt\n\t// AuthTypeTokenAccessor is to use the provided token accessor and bypass authentication\n\tAuthTypeTokenAccessor\n\t// AuthTypeUsernamePasswordMFA is to use username and password with MFA\n\tAuthTypeUsernamePasswordMFA\n\t// AuthTypePat is to use a programmatic access token\n\tAuthTypePat\n\t// AuthTypeOAuthAuthorizationCode is to use the browser-based OAuth2 flow\n\tAuthTypeOAuthAuthorizationCode\n\t// AuthTypeOAuthClientCredentials is to use the non-interactive OAuth2 flow\n\tAuthTypeOAuthClientCredentials\n\t// AuthTypeWorkloadIdentityFederation is to use a CSP identity for authentication\n\tAuthTypeWorkloadIdentityFederation\n)\n\nfunc (authType AuthType) String() string {\n\tswitch authType {\n\tcase AuthTypeSnowflake:\n\t\treturn \"SNOWFLAKE\"\n\tcase AuthTypeOAuth:\n\t\treturn \"OAUTH\"\n\tcase AuthTypeExternalBrowser:\n\t\treturn \"EXTERNALBROWSER\"\n\tcase AuthTypeOkta:\n\t\treturn \"OKTA\"\n\tcase AuthTypeJwt:\n\t\treturn \"SNOWFLAKE_JWT\"\n\tcase AuthTypeTokenAccessor:\n\t\treturn \"TOKENACCESSOR\"\n\tcase AuthTypeUsernamePasswordMFA:\n\t\treturn \"USERNAME_PASSWORD_MFA\"\n\tcase AuthTypePat:\n\t\treturn \"PROGRAMMATIC_ACCESS_TOKEN\"\n\tcase AuthTypeOAuthAuthorizationCode:\n\t\treturn \"OAUTH_AUTHORIZATION_CODE\"\n\tcase 
AuthTypeOAuthClientCredentials:\n\t\treturn \"OAUTH_CLIENT_CREDENTIALS\"\n\tcase AuthTypeWorkloadIdentityFederation:\n\t\treturn \"WORKLOAD_IDENTITY\"\n\tdefault:\n\t\treturn \"UNKNOWN\"\n\t}\n}\n\n// DetermineAuthenticatorType parses the authenticator string and sets the Config.Authenticator field.\nfunc DetermineAuthenticatorType(cfg *Config, value string) error {\n\tupperCaseValue := strings.ToUpper(value)\n\tlowerCaseValue := strings.ToLower(value)\n\tif strings.Trim(value, \" \") == \"\" || upperCaseValue == AuthTypeSnowflake.String() {\n\t\tcfg.Authenticator = AuthTypeSnowflake\n\t\treturn nil\n\t} else if upperCaseValue == AuthTypeOAuth.String() {\n\t\tcfg.Authenticator = AuthTypeOAuth\n\t\treturn nil\n\t} else if upperCaseValue == AuthTypeJwt.String() {\n\t\tcfg.Authenticator = AuthTypeJwt\n\t\treturn nil\n\t} else if upperCaseValue == AuthTypeExternalBrowser.String() {\n\t\tcfg.Authenticator = AuthTypeExternalBrowser\n\t\treturn nil\n\t} else if upperCaseValue == AuthTypeUsernamePasswordMFA.String() {\n\t\tcfg.Authenticator = AuthTypeUsernamePasswordMFA\n\t\treturn nil\n\t} else if upperCaseValue == AuthTypeTokenAccessor.String() {\n\t\tcfg.Authenticator = AuthTypeTokenAccessor\n\t\treturn nil\n\t} else if upperCaseValue == AuthTypePat.String() {\n\t\tcfg.Authenticator = AuthTypePat\n\t\treturn nil\n\t} else if upperCaseValue == AuthTypeOAuthAuthorizationCode.String() {\n\t\tcfg.Authenticator = AuthTypeOAuthAuthorizationCode\n\t\treturn nil\n\t} else if upperCaseValue == AuthTypeOAuthClientCredentials.String() {\n\t\tcfg.Authenticator = AuthTypeOAuthClientCredentials\n\t\treturn nil\n\t} else if upperCaseValue == AuthTypeWorkloadIdentityFederation.String() {\n\t\tcfg.Authenticator = AuthTypeWorkloadIdentityFederation\n\t\treturn nil\n\t} else {\n\t\t// possibly Okta case\n\t\toktaURLString, err := url.QueryUnescape(lowerCaseValue)\n\t\tif err != nil {\n\t\t\treturn &sferrors.SnowflakeError{\n\t\t\t\tNumber:      
sferrors.ErrCodeFailedToParseAuthenticator,\n\t\t\t\tMessage:     sferrors.ErrMsgFailedToParseAuthenticator,\n\t\t\t\tMessageArgs: []any{lowerCaseValue},\n\t\t\t}\n\t\t}\n\n\t\toktaURL, err := url.Parse(oktaURLString)\n\t\tif err != nil {\n\t\t\treturn &sferrors.SnowflakeError{\n\t\t\t\tNumber:      sferrors.ErrCodeFailedToParseAuthenticator,\n\t\t\t\tMessage:     sferrors.ErrMsgFailedToParseAuthenticator,\n\t\t\t\tMessageArgs: []any{oktaURLString},\n\t\t\t}\n\t\t}\n\n\t\tif oktaURL.Scheme != \"https\" {\n\t\t\treturn &sferrors.SnowflakeError{\n\t\t\t\tNumber:      sferrors.ErrCodeFailedToParseAuthenticator,\n\t\t\t\tMessage:     sferrors.ErrMsgFailedToParseAuthenticator,\n\t\t\t\tMessageArgs: []any{oktaURLString},\n\t\t\t}\n\t\t}\n\t\tcfg.OktaURL = oktaURL\n\t\tcfg.Authenticator = AuthTypeOkta\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "internal/config/config.go",
    "content": "// Package config provides the Config struct which contains all configuration parameters for the driver and a Validate method to check if the configuration is correct.\npackage config\n\nimport (\n\t\"crypto/rsa\"\n\t\"errors\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"strings\"\n\t\"time\"\n)\n\n// Config is a set of configuration parameters\ntype Config struct {\n\tAccount   string // Account name\n\tUser      string // Username\n\tPassword  string // Password (requires User)\n\tDatabase  string // Database name\n\tSchema    string // Schema\n\tWarehouse string // Warehouse\n\tRole      string // Role\n\tRegion    string // Region\n\n\tOauthClientID                string // Client ID for OAuth2 external IdP\n\tOauthClientSecret            string // Client secret for OAuth2 external IdP\n\tOauthAuthorizationURL        string // Authorization URL of the OAuth2 external IdP\n\tOauthTokenRequestURL         string // Token request URL of the OAuth2 external IdP\n\tOauthRedirectURI             string // Redirect URI registered in IdP. The default is http://127.0.0.1:<random port>\n\tOauthScope                   string // Comma-separated list of scopes. 
If empty, it is derived from the role.\n\tEnableSingleUseRefreshTokens bool   // Enables single use refresh tokens for Snowflake IdP\n\n\t// ValidateDefaultParameters disables the validation checks for Database, Schema, Warehouse and Role\n\t// at the time a connection is established\n\tValidateDefaultParameters Bool\n\n\tParams map[string]*string // other connection parameters\n\n\tProtocol string // http or https (optional)\n\tHost     string // hostname (optional)\n\tPort     int    // port (optional)\n\n\tAuthenticator              AuthType // The authenticator type\n\tSingleAuthenticationPrompt Bool     // If enabled, prompting for authentication will only occur for the first authentication challenge\n\n\tPasscode           string\n\tPasscodeInPassword bool\n\n\tOktaURL *url.URL\n\n\t// Deprecated: timeouts may be reorganized in a future release.\n\tLoginTimeout time.Duration // Login retry timeout EXCLUDING network roundtrip and read out http response\n\t// Deprecated: timeouts may be reorganized in a future release.\n\tRequestTimeout time.Duration // request retry timeout EXCLUDING network roundtrip and read out http response\n\t// Deprecated: timeouts may be reorganized in a future release.\n\tJWTExpireTimeout time.Duration // JWT expire after timeout\n\t// Deprecated: timeouts may be reorganized in a future release.\n\tClientTimeout time.Duration // Timeout for network round trip + read out http response\n\t// Deprecated: timeouts may be reorganized in a future release.\n\tJWTClientTimeout time.Duration // Timeout for network round trip + read out http response used when JWT token auth is taking place\n\t// Deprecated: timeouts may be reorganized in a future release.\n\tExternalBrowserTimeout time.Duration // Timeout for external browser login\n\t// Deprecated: timeouts may be reorganized in a future release.\n\tCloudStorageTimeout time.Duration // Timeout for a single call to a cloud storage provider\n\tMaxRetryCount       int           // Specifies how many 
times a non-periodic HTTP request can be retried\n\n\tApplication       string           // application name.\n\tDisableOCSPChecks bool             // driver doesn't check certificate revocation status\n\tOCSPFailOpen      OCSPFailOpenMode // OCSP Fail Open\n\n\tToken                  string        // Token to use for OAuth and other forms of token-based auth\n\tTokenFilePath          string        // TokenFilePath defines a file to read the token from\n\tTokenAccessor          TokenAccessor // TokenAccessor Optional token accessor to use\n\tServerSessionKeepAlive bool          // ServerSessionKeepAlive enables the session to persist even after the driver connection is closed\n\n\tPrivateKey *rsa.PrivateKey // Private key used to sign JWT\n\n\tTransporter http.RoundTripper // RoundTripper to intercept HTTP requests and responses\n\n\tTLSConfigName string // Name of the TLS config to use\n\n\t// Deprecated: may be removed in a future release with logging reorganization.\n\tTracing            string // sets logging level\n\tLogQueryText       bool   // indicates whether query text should be logged.\n\tLogQueryParameters bool   // indicates whether query parameters should be logged.\n\n\tTmpDirPath string // sets temporary directory used by a driver for operations like encrypting, compressing etc\n\n\tClientRequestMfaToken          Bool // When true the MFA token is cached in the credential manager. True by default in Windows/OSX. False for Linux.\n\tClientStoreTemporaryCredential Bool // When true the ID token is cached in the credential manager. True by default in Windows/OSX. 
False for Linux.\n\n\tDisableQueryContextCache bool // Should HTAP query context cache be disabled\n\n\tIncludeRetryReason Bool // Should retried request contain retry reason\n\n\tClientConfigFile string // File path to the client configuration json file\n\n\tDisableConsoleLogin Bool // Indicates whether console login should be disabled\n\n\tDisableSamlURLCheck Bool // Indicates whether the SAML URL check should be disabled\n\n\tWorkloadIdentityProvider          string   // The workload identity provider to use for WIF authentication\n\tWorkloadIdentityEntraResource     string   // The resource to use for WIF authentication on Azure environment\n\tWorkloadIdentityImpersonationPath []string // The components to use for WIF impersonation.\n\n\tCertRevocationCheckMode           CertRevocationCheckMode // revocation check mode for CRLs\n\tCrlAllowCertificatesWithoutCrlURL Bool                    // Allow certificates (not short-lived) without CRL DP included to be treated as correct ones\n\tCrlInMemoryCacheDisabled          bool                    // Should the in-memory cache be disabled\n\tCrlOnDiskCacheDisabled            bool                    // Should the on-disk cache be disabled\n\tCrlDownloadMaxSize                int                     // Max size in bytes of CRL to download. 0 means use default (20MB).\n\tCrlHTTPClientTimeout              time.Duration           // Timeout for HTTP client used to download CRL\n\n\tConnectionDiagnosticsEnabled       bool   // Indicates whether connection diagnostics should be enabled\n\tConnectionDiagnosticsAllowlistFile string // File path to the allowlist file for connection diagnostics. 
If not specified, the allowlist.json file in the current directory will be used.\n\n\tProxyHost     string // Proxy host\n\tProxyPort     int    // Proxy port\n\tProxyUser     string // Proxy user\n\tProxyPassword string // Proxy password\n\tProxyProtocol string // Proxy protocol (http or https)\n\tNoProxy       string // No proxy for this host list\n}\n\nvar errTokenConfigConflict = errors.New(\"token and tokenFilePath cannot be specified at the same time\")\n\n// Validate checks if the config is correct.\n// A driver client may call it manually, but it is also called when the first connection is opened.\nfunc (c *Config) Validate() error {\n\tif c.TmpDirPath != \"\" {\n\t\tif _, err := os.Stat(c.TmpDirPath); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\tif strings.EqualFold(c.WorkloadIdentityProvider, \"azure\") && len(c.WorkloadIdentityImpersonationPath) > 0 {\n\t\treturn errors.New(\"WorkloadIdentityImpersonationPath is not supported for Azure\")\n\t}\n\tif c.Token != \"\" && c.TokenFilePath != \"\" {\n\t\treturn errTokenConfigConflict\n\t}\n\treturn nil\n}\n\n// Param binds Config field names to environment variable names.\ntype Param struct {\n\tName          string\n\tEnvName       string\n\tFailOnMissing bool\n}\n"
  },
  {
    "path": "internal/config/config_bool.go",
    "content": "package config\n\n// Bool is a type to represent true or false in the Config\ntype Bool uint8\n\nconst (\n\t// BoolNotSet represents the default value for the config field which is not set\n\tBoolNotSet Bool = iota // Reserved for unset to let default value fall into this category\n\t// BoolTrue represents true for the config field\n\tBoolTrue\n\t// BoolFalse represents false for the config field\n\tBoolFalse\n)\n\nfunc (cb Bool) String() string {\n\tswitch cb {\n\tcase BoolTrue:\n\t\treturn \"true\"\n\tcase BoolFalse:\n\t\treturn \"false\"\n\tdefault:\n\t\treturn \"not set\"\n\t}\n}\n"
  },
  {
    "path": "internal/config/connection_configuration.go",
    "content": "package config\n\nimport (\n\t\"encoding/base64\"\n\t\"errors\"\n\t\"os\"\n\tpath \"path/filepath\"\n\t\"runtime\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/BurntSushi/toml\"\n\tsferrors \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n)\n\nconst (\n\tsnowflakeConnectionName = \"SNOWFLAKE_DEFAULT_CONNECTION_NAME\"\n\tsnowflakeHome           = \"SNOWFLAKE_HOME\"\n\tdefaultTokenPath        = \"/snowflake/session/token\"\n\n\tothersCanReadFilePermission  = os.FileMode(0044)\n\tothersCanWriteFilePermission = os.FileMode(0022)\n\texecutableFilePermission     = os.FileMode(0111)\n\n\tskipWarningForReadPermissionsEnv = \"SF_SKIP_WARNING_FOR_READ_PERMISSIONS_ON_CONFIG_FILE\"\n)\n\n// LoadConnectionConfig returns the connection config loaded from the TOML file.\n// By default, SNOWFLAKE_HOME (the TOML file directory) is ~/.snowflake\n// and SNOWFLAKE_DEFAULT_CONNECTION_NAME (the DSN) is 'default'.\nfunc LoadConnectionConfig() (*Config, error) {\n\tlogger.Trace(\"Loading connection configuration from the local files.\")\n\tcfg := &Config{\n\t\tParams:        make(map[string]*string),\n\t\tAuthenticator: AuthTypeSnowflake, // Default to snowflake\n\t}\n\tdsn := getConnectionDSN(os.Getenv(snowflakeConnectionName))\n\tsnowflakeConfigDir, err := GetTomlFilePath(os.Getenv(snowflakeHome))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tlogger.Debugf(\"Looking for connection file in directory %v\", snowflakeConfigDir)\n\ttomlFilePath := path.Join(snowflakeConfigDir, \"connections.toml\")\n\terr = ValidateFilePermission(tomlFilePath)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\ttomlInfo := make(map[string]any)\n\t_, err = toml.DecodeFile(tomlFilePath, &tomlInfo)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdsnMap, exist := tomlInfo[dsn]\n\tif !exist {\n\t\treturn nil, &sferrors.SnowflakeError{\n\t\t\tNumber:  sferrors.ErrCodeFailedToFindDSNInToml,\n\t\t\tMessage: sferrors.ErrMsgFailedToFindDSNInTomlFile,\n\t\t}\n\t}\n\tconnectionConfig, 
ok := dsnMap.(map[string]any)\n\tif !ok {\n\t\treturn nil, errors.New(\"connection entry in the TOML file is not a table\")\n\t}\n\tlogger.Trace(\"Trying to parse the config file\")\n\terr = ParseToml(cfg, connectionConfig)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\terr = FillMissingConfigParameters(cfg)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn cfg, nil\n}\n\n// ParseToml parses a TOML connection map into a Config.\nfunc ParseToml(cfg *Config, connectionMap map[string]any) error {\n\tfor key, value := range connectionMap {\n\t\tif err := HandleSingleParam(cfg, key, value); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\n// HandleSingleParam processes a single TOML parameter into a Config.\nfunc HandleSingleParam(cfg *Config, key string, value any) error {\n\tvar err error\n\n\t// We normalize the key to handle both snake_case and camelCase.\n\tnormalizedKey := strings.ReplaceAll(strings.ToLower(key), \"_\", \"\")\n\n\t// the cases in the switch statement must be lowercase with underscores removed\n\tswitch normalizedKey {\n\tcase \"user\", \"username\":\n\t\tcfg.User, err = parseString(value)\n\tcase \"password\":\n\t\tcfg.Password, err = parseString(value)\n\tcase \"host\":\n\t\tcfg.Host, err = parseString(value)\n\tcase \"account\":\n\t\tcfg.Account, err = parseString(value)\n\tcase \"warehouse\":\n\t\tcfg.Warehouse, err = parseString(value)\n\tcase \"database\":\n\t\tcfg.Database, err = parseString(value)\n\tcase \"schema\":\n\t\tcfg.Schema, err = parseString(value)\n\tcase \"role\":\n\t\tcfg.Role, err = parseString(value)\n\tcase \"region\":\n\t\tcfg.Region, err = parseString(value)\n\tcase \"protocol\":\n\t\tcfg.Protocol, err = parseString(value)\n\tcase \"passcode\":\n\t\tcfg.Passcode, err = parseString(value)\n\tcase \"port\":\n\t\tcfg.Port, err = ParseInt(value)\n\tcase \"passcodeinpassword\":\n\t\tcfg.PasscodeInPassword, err = ParseBool(value)\n\tcase \"clienttimeout\":\n\t\tcfg.ClientTimeout, err = ParseDuration(value)\n\tcase \"jwtclienttimeout\":\n\t\tcfg.JWTClientTimeout, err = 
ParseDuration(value)\n\tcase \"logintimeout\":\n\t\tcfg.LoginTimeout, err = ParseDuration(value)\n\tcase \"requesttimeout\":\n\t\tcfg.RequestTimeout, err = ParseDuration(value)\n\tcase \"jwttimeout\":\n\t\tcfg.JWTExpireTimeout, err = ParseDuration(value)\n\tcase \"externalbrowsertimeout\":\n\t\tcfg.ExternalBrowserTimeout, err = ParseDuration(value)\n\tcase \"maxretrycount\":\n\t\tcfg.MaxRetryCount, err = ParseInt(value)\n\tcase \"application\":\n\t\tcfg.Application, err = parseString(value)\n\tcase \"authenticator\":\n\t\tvar v string\n\t\tv, err = parseString(value)\n\t\tif err = checkParsingError(err, key, value); err != nil {\n\t\t\treturn err\n\t\t}\n\t\terr = DetermineAuthenticatorType(cfg, v)\n\tcase \"disableocspchecks\":\n\t\tcfg.DisableOCSPChecks, err = ParseBool(value)\n\tcase \"ocspfailopen\":\n\t\tvar vv Bool\n\t\tvv, err = parseConfigBool(value)\n\t\tif err := checkParsingError(err, key, value); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tcfg.OCSPFailOpen = OCSPFailOpenMode(vv)\n\tcase \"token\":\n\t\tcfg.Token, err = parseString(value)\n\tcase \"privatekey\":\n\t\tvar v string\n\t\tv, err = parseString(value)\n\t\tif err = checkParsingError(err, key, value); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tblock, decodeErr := base64.URLEncoding.DecodeString(v)\n\t\tif decodeErr != nil {\n\t\t\treturn &sferrors.SnowflakeError{\n\t\t\t\tNumber:  sferrors.ErrCodePrivateKeyParseError,\n\t\t\t\tMessage: \"Base64 decode failed\",\n\t\t\t}\n\t\t}\n\t\tcfg.PrivateKey, err = ParsePKCS8PrivateKey(block)\n\tcase \"validatedefaultparameters\":\n\t\tcfg.ValidateDefaultParameters, err = parseConfigBool(value)\n\tcase \"clientrequestmfatoken\":\n\t\tcfg.ClientRequestMfaToken, err = parseConfigBool(value)\n\tcase \"clientstoretemporarycredential\":\n\t\tcfg.ClientStoreTemporaryCredential, err = parseConfigBool(value)\n\tcase \"tracing\":\n\t\tcfg.Tracing, err = parseString(value)\n\tcase \"logquerytext\":\n\t\tcfg.LogQueryText, err = ParseBool(value)\n\tcase 
\"logqueryparameters\":\n\t\tcfg.LogQueryParameters, err = ParseBool(value)\n\tcase \"tmpdirpath\":\n\t\tcfg.TmpDirPath, err = parseString(value)\n\tcase \"disablequerycontextcache\":\n\t\tcfg.DisableQueryContextCache, err = ParseBool(value)\n\tcase \"includeretryreason\":\n\t\tcfg.IncludeRetryReason, err = parseConfigBool(value)\n\tcase \"clientconfigfile\":\n\t\tcfg.ClientConfigFile, err = parseString(value)\n\tcase \"disableconsolelogin\":\n\t\tcfg.DisableConsoleLogin, err = parseConfigBool(value)\n\tcase \"disablesamlurlcheck\":\n\t\tcfg.DisableSamlURLCheck, err = parseConfigBool(value)\n\tcase \"oauthauthorizationurl\":\n\t\tcfg.OauthAuthorizationURL, err = parseString(value)\n\tcase \"oauthclientid\":\n\t\tcfg.OauthClientID, err = parseString(value)\n\tcase \"oauthclientsecret\":\n\t\tcfg.OauthClientSecret, err = parseString(value)\n\tcase \"oauthtokenrequesturl\":\n\t\tcfg.OauthTokenRequestURL, err = parseString(value)\n\tcase \"oauthredirecturi\":\n\t\tcfg.OauthRedirectURI, err = parseString(value)\n\tcase \"oauthscope\":\n\t\tcfg.OauthScope, err = parseString(value)\n\tcase \"workloadidentityprovider\":\n\t\tcfg.WorkloadIdentityProvider, err = parseString(value)\n\tcase \"workloadidentityentraresource\":\n\t\tcfg.WorkloadIdentityEntraResource, err = parseString(value)\n\tcase \"workloadidentityimpersonationpath\":\n\t\tcfg.WorkloadIdentityImpersonationPath, err = parseStrings(value)\n\tcase \"tokenfilepath\":\n\t\tcfg.TokenFilePath, err = parseString(value)\n\t\tif err = checkParsingError(err, key, value); err != nil {\n\t\t\treturn err\n\t\t}\n\tcase \"connectiondiagnosticsenabled\":\n\t\tcfg.ConnectionDiagnosticsEnabled, err = ParseBool(value)\n\tcase \"connectiondiagnosticsallowlistfile\":\n\t\tcfg.ConnectionDiagnosticsAllowlistFile, err = parseString(value)\n\tcase \"proxyhost\":\n\t\tcfg.ProxyHost, err = parseString(value)\n\tcase \"proxyport\":\n\t\tcfg.ProxyPort, err = ParseInt(value)\n\tcase \"proxyuser\":\n\t\tcfg.ProxyUser, err = 
parseString(value)\n\tcase \"proxypassword\":\n\t\tcfg.ProxyPassword, err = parseString(value)\n\tcase \"proxyprotocol\":\n\t\tcfg.ProxyProtocol, err = parseString(value)\n\tcase \"noproxy\":\n\t\tcfg.NoProxy, err = parseString(value)\n\tdefault:\n\t\tparam, err := parseString(value)\n\t\tif err = checkParsingError(err, key, value); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tcfg.Params[urlDecodeIfNeeded(key)] = &param\n\t}\n\treturn checkParsingError(err, key, value)\n}\n\nfunc checkParsingError(err error, key string, value any) error {\n\tif err != nil {\n\t\tlogger.Errorf(\"failed to parse value %v for key %s in the connection config\", value, key)\n\t\treturn &sferrors.SnowflakeError{\n\t\t\tNumber:      sferrors.ErrCodeTomlFileParsingFailed,\n\t\t\tMessage:     sferrors.ErrMsgFailedToParseTomlFile,\n\t\t\tMessageArgs: []any{key, value},\n\t\t}\n\t}\n\treturn nil\n}\n\n// ParseInt parses an interface value to int.\nfunc ParseInt(i any) (int, error) {\n\tv, ok := i.(string)\n\tif !ok {\n\t\tnum, ok := i.(int)\n\t\tif !ok {\n\t\t\treturn 0, errors.New(\"failed to parse the value to integer\")\n\t\t}\n\t\treturn num, nil\n\t}\n\treturn strconv.Atoi(v)\n}\n\n// ParseBool parses an interface value to bool.\nfunc ParseBool(i any) (bool, error) {\n\tv, ok := i.(string)\n\tif !ok {\n\t\tvv, ok := i.(bool)\n\t\tif !ok {\n\t\t\treturn false, errors.New(\"failed to parse the value to boolean\")\n\t\t}\n\t\treturn vv, nil\n\t}\n\treturn strconv.ParseBool(v)\n}\n\nfunc parseConfigBool(i any) (Bool, error) {\n\tvv, err := ParseBool(i)\n\tif err != nil {\n\t\treturn BoolFalse, err\n\t}\n\tif vv {\n\t\treturn BoolTrue, nil\n\t}\n\treturn BoolFalse, nil\n}\n\n// ParseDuration parses an interface value to time.Duration.\nfunc ParseDuration(i any) (time.Duration, error) {\n\tv, ok := i.(string)\n\tif !ok {\n\t\tnum, err := ParseInt(i)\n\t\tif err != nil {\n\t\t\treturn 
time.Duration(0), err\n\t\t}\n\t\tt := int64(num)\n\t\treturn time.Duration(t * int64(time.Second)), nil\n\t}\n\treturn parseTimeout(v)\n}\n\n// ReadToken reads a token from the given path (or default path if empty).\nfunc ReadToken(tokenPath string) (string, error) {\n\tif tokenPath == \"\" {\n\t\ttokenPath = defaultTokenPath\n\t}\n\tif !path.IsAbs(tokenPath) {\n\t\tvar err error\n\t\ttokenPath, err = path.Abs(tokenPath)\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t}\n\terr := ValidateFilePermission(tokenPath)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\ttoken, err := os.ReadFile(tokenPath)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn string(token), nil\n}\n\nfunc parseString(i any) (string, error) {\n\tv, ok := i.(string)\n\tif !ok {\n\t\treturn \"\", errors.New(\"failed to convert the value to string\")\n\t}\n\treturn v, nil\n}\n\nfunc parseStrings(i any) ([]string, error) {\n\ts, ok := i.(string)\n\tif !ok {\n\t\treturn nil, errors.New(\"failed to convert the value to string\")\n\t}\n\treturn strings.Split(s, \",\"), nil\n}\n\n// GetTomlFilePath returns the path to the TOML file directory.\nfunc GetTomlFilePath(filePath string) (string, error) {\n\tif len(filePath) == 0 {\n\t\thomeDir, err := os.UserHomeDir()\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t\tfilePath = path.Join(homeDir, \".snowflake\")\n\t}\n\tabsDir, err := path.Abs(filePath)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn absDir, nil\n}\n\nfunc getConnectionDSN(dsn string) string {\n\tif len(dsn) != 0 {\n\t\treturn dsn\n\t}\n\treturn \"default\"\n}\n\n// ValidateFilePermission checks that a file does not have overly permissive permissions.\nfunc ValidateFilePermission(filePath string) error {\n\tif runtime.GOOS == \"windows\" {\n\t\treturn nil\n\t}\n\n\tfileInfo, err := os.Stat(filePath)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tpermission := fileInfo.Mode().Perm()\n\n\tif !shouldSkipWarningForReadPermissions() && 
permission&othersCanReadFilePermission != 0 {\n\t\tlogger.Warnf(\"file '%v' is readable by someone other than the owner. Your Permission: %v. If you want \"+\n\t\t\t\"to disable this warning, either remove read permissions from group and others or set the environment \"+\n\t\t\t\"variable %v to true\", filePath, permission, skipWarningForReadPermissionsEnv)\n\t}\n\n\tif permission&executableFilePermission != 0 {\n\t\treturn &sferrors.SnowflakeError{\n\t\t\tNumber:      sferrors.ErrCodeInvalidFilePermission,\n\t\t\tMessage:     sferrors.ErrMsgInvalidExecutablePermissionToFile,\n\t\t\tMessageArgs: []any{filePath, permission},\n\t\t}\n\t}\n\n\tif permission&othersCanWriteFilePermission != 0 {\n\t\treturn &sferrors.SnowflakeError{\n\t\t\tNumber:      sferrors.ErrCodeInvalidFilePermission,\n\t\t\tMessage:     sferrors.ErrMsgInvalidWritablePermissionToFile,\n\t\t\tMessageArgs: []any{filePath, permission},\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc shouldSkipWarningForReadPermissions() bool {\n\treturn os.Getenv(skipWarningForReadPermissionsEnv) != \"\"\n}\n"
  },
  {
    "path": "internal/config/connection_configuration_test.go",
    "content": "package config\n\nimport (\n\t\"bytes\"\n\t\"crypto/rand\"\n\t\"crypto/rsa\"\n\t\"crypto/x509\"\n\t\"encoding/base64\"\n\t\"fmt\"\n\t\"os\"\n\tpath \"path/filepath\"\n\t\"runtime\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\tsferrors \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\tsflogger \"github.com/snowflakedb/gosnowflake/v2/internal/logger\"\n)\n\nfunc TestTokenFilePermission(t *testing.T) {\n\tif runtime.GOOS == \"windows\" {\n\t\treturn\n\t}\n\tos.Setenv(snowflakeHome, \"../../test_data\")\n\n\tconnectionsStat, err := os.Stat(\"../../test_data/connections.toml\")\n\tif err != nil {\n\t\tt.Fatalf(\"Failed to stat connections.toml file: %v\", err)\n\t}\n\n\ttokenStat, err := os.Stat(\"../../test_data/snowflake/session/token\")\n\tif err != nil {\n\t\tt.Fatalf(\"Failed to stat token file: %v\", err)\n\t}\n\n\tdefer func() {\n\t\terr = os.Chmod(\"../../test_data/connections.toml\", connectionsStat.Mode())\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed to restore connections.toml file permission: %v\", err)\n\t\t}\n\n\t\terr = os.Chmod(\"../../test_data/snowflake/session/token\", tokenStat.Mode())\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed to restore token file permission: %v\", err)\n\t\t}\n\t}()\n\n\tt.Run(\"test warning logger for readable outside owner\", func(t *testing.T) {\n\t\toriginalGlobalLogger := sflogger.GetLogger()\n\t\tnewLogger := sflogger.CreateDefaultLogger()\n\t\tsflogger.SetLogger(newLogger)\n\t\tbuf := &bytes.Buffer{}\n\t\tsflogger.GetLogger().SetOutput(buf)\n\n\t\tdefer func() {\n\t\t\tsflogger.SetLogger(originalGlobalLogger)\n\t\t}()\n\n\t\terr = os.Chmod(\"../../test_data/connections.toml\", 0644)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to change connections.toml file permission: %v\", err)\n\t\t}\n\n\t\t_, err = LoadConnectionConfig()\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to load connection config: %v\", err)\n\t\t}\n\n\t\tconnectionsAbsolutePath, err := 
path.Abs(\"../../test_data/connections.toml\")\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to get absolute path of connections.toml file: %v\", err)\n\t\t}\n\n\t\texpectedWarn := fmt.Sprintf(\"msg=\\\"file '%v' is readable by someone other than the owner. \"+\n\t\t\t\"Your Permission: -rw-r--r--. If you want to disable this warning, either remove read permissions from group \"+\n\t\t\t\"and others or set the environment variable SF_SKIP_WARNING_FOR_READ_PERMISSIONS_ON_CONFIG_FILE to true\\\"\", connectionsAbsolutePath)\n\t\tif !strings.Contains(buf.String(), expectedWarn) {\n\t\t\tt.Errorf(\"Expected warning message not found in logs.\\nGot: %v\\nWant substring: %v\", buf.String(), expectedWarn)\n\t\t}\n\t})\n\n\tt.Run(\"test warning skipped logger for readable outside owner\", func(t *testing.T) {\n\t\tos.Setenv(skipWarningForReadPermissionsEnv, \"true\")\n\t\tdefer func() {\n\t\t\tos.Unsetenv(skipWarningForReadPermissionsEnv)\n\t\t}()\n\n\t\toriginalGlobalLogger := sflogger.GetLogger()\n\t\tnewLogger := sflogger.CreateDefaultLogger()\n\t\tsflogger.SetLogger(newLogger)\n\t\tbuf := &bytes.Buffer{}\n\t\tsflogger.GetLogger().SetOutput(buf)\n\n\t\tdefer func() {\n\t\t\tsflogger.SetLogger(originalGlobalLogger)\n\t\t}()\n\n\t\terr = os.Chmod(\"../../test_data/connections.toml\", 0644)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to change connections.toml file permission: %v\", err)\n\t\t}\n\n\t\t_, err = LoadConnectionConfig()\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to load connection config: %v\", err)\n\t\t}\n\t})\n\n\tt.Run(\"test writable connection file other than owner\", func(t *testing.T) {\n\t\terr = os.Chmod(\"../../test_data/connections.toml\", 0666)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"The error occurred because you cannot change the file permission: %v\", err)\n\t\t}\n\t\t_, err := LoadConnectionConfig()\n\t\tif err == nil {\n\t\t\tt.Fatal(\"The error should occur because the file is writable by anyone but the owner\")\n\t\t}\n\t\tdriverErr, ok 
:= err.(*sferrors.SnowflakeError)\n\t\tif !ok {\n\t\t\tt.Fatalf(\"This should be a Snowflake Error, got: %T\", err)\n\t\t}\n\t\tif driverErr.Number != sferrors.ErrCodeInvalidFilePermission {\n\t\t\tt.Fatalf(\"Expected error code %d, got %d\", sferrors.ErrCodeInvalidFilePermission, driverErr.Number)\n\t\t}\n\t})\n\n\tt.Run(\"test writable token file other than owner\", func(t *testing.T) {\n\t\terr = os.Chmod(\"../../test_data/snowflake/session/token\", 0666)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"The error occurred because you cannot change the file permission: %v\", err)\n\t\t}\n\t\t_, err := ReadToken(\"../../test_data/snowflake/session/token\")\n\t\tif err == nil {\n\t\t\tt.Fatal(\"The error should occur because the file is writable by anyone but the owner\")\n\t\t}\n\t\tdriverErr, ok := err.(*sferrors.SnowflakeError)\n\t\tif !ok {\n\t\t\tt.Fatalf(\"This should be a Snowflake Error, got: %T\", err)\n\t\t}\n\t\tif driverErr.Number != sferrors.ErrCodeInvalidFilePermission {\n\t\t\tt.Fatalf(\"Expected error code %d, got %d\", sferrors.ErrCodeInvalidFilePermission, driverErr.Number)\n\t\t}\n\t})\n\n\tt.Run(\"test executable connection file\", func(t *testing.T) {\n\t\terr = os.Chmod(\"../../test_data/connections.toml\", 0100)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"The error occurred because you cannot change the file permission: %v\", err)\n\t\t}\n\t\t_, err := LoadConnectionConfig()\n\t\tif err == nil {\n\t\t\tt.Fatal(\"The error should occur because the file is executable\")\n\t\t}\n\t\tdriverErr, ok := err.(*sferrors.SnowflakeError)\n\t\tif !ok {\n\t\t\tt.Fatalf(\"This should be a Snowflake Error, got: %T\", err)\n\t\t}\n\t\tif driverErr.Number != sferrors.ErrCodeInvalidFilePermission {\n\t\t\tt.Fatalf(\"Expected error code %d, got %d\", sferrors.ErrCodeInvalidFilePermission, driverErr.Number)\n\t\t}\n\t})\n\n\tt.Run(\"test executable token file\", func(t *testing.T) {\n\t\terr = os.Chmod(\"../../test_data/snowflake/session/token\", 0010)\n\t\tif err != nil 
{\n\t\t\tt.Fatalf(\"The error occurred because you cannot change the file permission: %v\", err)\n\t\t}\n\t\t_, err := ReadToken(\"../../test_data/snowflake/session/token\")\n\t\tif err == nil {\n\t\t\tt.Fatal(\"The error should occur because the file is executable\")\n\t\t}\n\t\tdriverErr, ok := err.(*sferrors.SnowflakeError)\n\t\tif !ok {\n\t\t\tt.Fatalf(\"This should be a Snowflake Error, got: %T\", err)\n\t\t}\n\t\tif driverErr.Number != sferrors.ErrCodeInvalidFilePermission {\n\t\t\tt.Fatalf(\"Expected error code %d, got %d\", sferrors.ErrCodeInvalidFilePermission, driverErr.Number)\n\t\t}\n\t})\n\n\tt.Run(\"test valid file permission for connection config and token file\", func(t *testing.T) {\n\t\terr = os.Chmod(\"../../test_data/connections.toml\", 0600)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"The error occurred because you cannot change the file permission: %v\", err)\n\t\t}\n\n\t\terr = os.Chmod(\"../../test_data/snowflake/session/token\", 0600)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"The error occurred because you cannot change the file permission: %v\", err)\n\t\t}\n\n\t\t_, err := LoadConnectionConfig()\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"The error occurred because the permission is not 0600: %v\", err)\n\t\t}\n\n\t\t_, err = ReadToken(\"../../test_data/snowflake/session/token\")\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"The error occurred because the permission is not 0600: %v\", err)\n\t\t}\n\t})\n}\n\nfunc TestLoadConnectionConfigForStandardAuth(t *testing.T) {\n\terr := os.Chmod(\"../../test_data/connections.toml\", 0600)\n\tif err != nil {\n\t\tt.Fatalf(\"The error occurred because you cannot change the file permission: %v\", err)\n\t}\n\n\tos.Setenv(snowflakeHome, \"../../test_data\")\n\n\tcfg, err := LoadConnectionConfig()\n\tif err != nil {\n\t\tt.Fatalf(\"The error should not occur: %v\", err)\n\t}\n\tassertEqual(t, cfg.Account, \"snowdriverswarsaw.us-west-2.aws\")\n\tassertEqual(t, cfg.User, \"test_default_user\")\n\tassertEqual(t, 
cfg.Password, \"test_default_pass\")\n\tassertEqual(t, cfg.Warehouse, \"testw_default\")\n\tassertEqual(t, cfg.Database, \"test_default_db\")\n\tassertEqual(t, cfg.Schema, \"test_default_go\")\n\tassertEqual(t, cfg.Protocol, \"https\")\n\tif cfg.Port != 300 {\n\t\tt.Fatalf(\"Expected port 300, got %d\", cfg.Port)\n\t}\n}\n\nfunc TestLoadConnectionConfigForOAuth(t *testing.T) {\n\terr := os.Chmod(\"../../test_data/connections.toml\", 0600)\n\tif err != nil {\n\t\tt.Fatalf(\"The error occurred because you cannot change the file permission: %v\", err)\n\t}\n\n\tos.Setenv(snowflakeHome, \"../../test_data\")\n\tos.Setenv(snowflakeConnectionName, \"aws-oauth\")\n\n\tcfg, err := LoadConnectionConfig()\n\tif err != nil {\n\t\tt.Fatalf(\"The error should not occur: %v\", err)\n\t}\n\tassertEqual(t, cfg.Account, \"snowdriverswarsaw.us-west-2.aws\")\n\tassertEqual(t, cfg.User, \"test_oauth_user\")\n\tassertEqual(t, cfg.Password, \"test_oauth_pass\")\n\tassertEqual(t, cfg.Warehouse, \"testw_oauth\")\n\tassertEqual(t, cfg.Database, \"test_oauth_db\")\n\tassertEqual(t, cfg.Schema, \"test_oauth_go\")\n\tassertEqual(t, cfg.Protocol, \"https\")\n\tif cfg.Authenticator != AuthTypeOAuth {\n\t\tt.Fatalf(\"Expected authenticator %v, got %v\", AuthTypeOAuth, cfg.Authenticator)\n\t}\n\tassertEqual(t, cfg.Token, \"token_value\")\n\tif cfg.Port != 443 {\n\t\tt.Fatalf(\"Expected port 443, got %d\", cfg.Port)\n\t}\n\tif cfg.DisableOCSPChecks != true {\n\t\tt.Fatalf(\"Expected DisableOCSPChecks true, got %v\", cfg.DisableOCSPChecks)\n\t}\n}\n\nfunc TestLoadConnectionConfigForSnakeCaseConfiguration(t *testing.T) {\n\terr := os.Chmod(\"../../test_data/connections.toml\", 0600)\n\tif err != nil {\n\t\tt.Fatalf(\"The error occurred because you cannot change the file permission: %v\", err)\n\t}\n\n\tos.Setenv(snowflakeHome, \"../../test_data\")\n\tos.Setenv(snowflakeConnectionName, \"snake-case\")\n\n\tcfg, err := LoadConnectionConfig()\n\tif err != nil {\n\t\tt.Fatalf(\"The error should not 
occur: %v\", err)\n\t}\n\tif cfg.OCSPFailOpen != OCSPFailOpenTrue {\n\t\tt.Fatalf(\"Expected OCSPFailOpen %v, got %v\", OCSPFailOpenTrue, cfg.OCSPFailOpen)\n\t}\n}\n\nfunc TestReadTokenValueWithTokenFilePath(t *testing.T) {\n\terr := os.Chmod(\"../../test_data/connections.toml\", 0600)\n\tif err != nil {\n\t\tt.Fatalf(\"The error occurred because you cannot change the file permission: %v\", err)\n\t}\n\n\terr = os.Chmod(\"../../test_data/snowflake/session/token\", 0600)\n\tif err != nil {\n\t\tt.Fatalf(\"The error occurred because you cannot change the file permission: %v\", err)\n\t}\n\n\tos.Setenv(snowflakeHome, \"../../test_data\")\n\tos.Setenv(snowflakeConnectionName, \"read-token\")\n\n\tcfg, err := LoadConnectionConfig()\n\tif err != nil {\n\t\tt.Fatalf(\"The error should not occur: %v\", err)\n\t}\n\tif cfg.Authenticator != AuthTypeOAuth {\n\t\tt.Fatalf(\"Expected authenticator %v, got %v\", AuthTypeOAuth, cfg.Authenticator)\n\t}\n\t// The token_file_path in the TOML is relative (\"./test_data/snowflake/session/token\"),\n\t// so GetToken resolves it relative to CWD. 
Use an absolute path instead.\n\tabsTokenPath, err := path.Abs(\"../../test_data/snowflake/session/token\")\n\tif err != nil {\n\t\tt.Fatalf(\"Failed to get absolute path: %v\", err)\n\t}\n\tcfg.TokenFilePath = absTokenPath\n\ttoken, err := GetToken(cfg)\n\tif err != nil {\n\t\tt.Fatalf(\"Failed to get token: %v\", err)\n\t}\n\tassertEqual(t, token, \"mock_token123456\")\n\tif cfg.DisableOCSPChecks != true {\n\t\tt.Fatalf(\"Expected DisableOCSPChecks true, got %v\", cfg.DisableOCSPChecks)\n\t}\n}\n\nfunc TestLoadConnectionConfigWithNonExistingDSN(t *testing.T) {\n\terr := os.Chmod(\"../../test_data/connections.toml\", 0600)\n\tif err != nil {\n\t\tt.Fatalf(\"Failed to change the file permission: %v\", err)\n\t}\n\n\tos.Setenv(snowflakeHome, \"../../test_data\")\n\tos.Setenv(snowflakeConnectionName, \"unavailableDSN\")\n\n\t_, err = LoadConnectionConfig()\n\tif err == nil {\n\t\tt.Fatal(\"Expected an error for a non-existing DSN\")\n\t}\n\n\tdriverErr, ok := err.(*sferrors.SnowflakeError)\n\tif !ok {\n\t\tt.Fatalf(\"This should be a Snowflake Error, got: %T\", err)\n\t}\n\tif driverErr.Number != sferrors.ErrCodeFailedToFindDSNInToml {\n\t\tt.Fatalf(\"Expected error code %d, got %d\", sferrors.ErrCodeFailedToFindDSNInToml, driverErr.Number)\n\t}\n}\n\nfunc TestParseInt(t *testing.T) {\n\tvar i any\n\n\ti = 20\n\tnum, err := ParseInt(i)\n\tif err != nil {\n\t\tt.Fatalf(\"This value should be parsed: %v\", err)\n\t}\n\tif num != 20 {\n\t\tt.Fatalf(\"Expected 20, got %d\", num)\n\t}\n\n\ti = \"40\"\n\tnum, err = ParseInt(i)\n\tif err != nil {\n\t\tt.Fatalf(\"This value should be parsed: %v\", err)\n\t}\n\tif num != 40 {\n\t\tt.Fatalf(\"Expected 40, got %d\", num)\n\t}\n\n\ti = \"wrong_num\"\n\t_, err = ParseInt(i)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed\")\n\t}\n}\n\nfunc TestParseBool(t *testing.T) {\n\tvar i any\n\n\ti = true\n\tb, err := ParseBool(i)\n\tif err != nil {\n\t\tt.Fatalf(\"This value should be parsed: %v\", err)\n\t}\n\tif b != true 
{\n\t\tt.Fatalf(\"Expected true, got %v\", b)\n\t}\n\n\ti = \"false\"\n\tb, err = ParseBool(i)\n\tif err != nil {\n\t\tt.Fatalf(\"This value should be parsed: %v\", err)\n\t}\n\tif b != false {\n\t\tt.Fatalf(\"Expected false, got %v\", b)\n\t}\n\n\ti = \"wrong_bool\"\n\t_, err = ParseBool(i)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed\")\n\t}\n}\n\nfunc TestParseDuration(t *testing.T) {\n\tvar i any\n\n\ti = 300\n\tdur, err := ParseDuration(i)\n\tif err != nil {\n\t\tt.Fatalf(\"This value should be parsed: %v\", err)\n\t}\n\tif dur != time.Duration(300*int64(time.Second)) {\n\t\tt.Fatalf(\"Expected %v, got %v\", time.Duration(300*int64(time.Second)), dur)\n\t}\n\n\ti = \"30\"\n\tdur, err = ParseDuration(i)\n\tif err != nil {\n\t\tt.Fatalf(\"This value should be parsed: %v\", err)\n\t}\n\tif dur != time.Duration(int64(time.Minute)/2) {\n\t\tt.Fatalf(\"Expected %v, got %v\", time.Duration(int64(time.Minute)/2), dur)\n\t}\n\n\ti = false\n\t_, err = ParseDuration(i)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed\")\n\t}\n}\n\ntype paramList struct {\n\ttestParams []string\n\tvalues     []any\n}\n\nfunc testGeneratePKCS8String(key *rsa.PrivateKey) string {\n\ttmpBytes, _ := x509.MarshalPKCS8PrivateKey(key)\n\treturn base64.URLEncoding.EncodeToString(tmpBytes)\n}\n\nfunc TestParseToml(t *testing.T) {\n\tlocalTestKey, err := rsa.GenerateKey(rand.Reader, 2048)\n\tif err != nil {\n\t\tt.Fatalf(\"Failed to generate test private key: %s\", err.Error())\n\t}\n\n\ttestCases := []paramList{\n\t\t{\n\t\t\ttestParams: []string{\"user\", \"password\", \"host\", \"account\", \"warehouse\", \"database\",\n\t\t\t\t\"schema\", \"role\", \"region\", \"protocol\", \"passcode\", \"application\", \"token\",\n\t\t\t\t\"tracing\", \"tmpDirPath\", \"tmp_dir_path\", \"clientConfigFile\", \"client_config_file\", \"oauth_authorization_url\", \"oauth_client_id\",\n\t\t\t\t\"oauth_client_secret\", \"oauth_token_request_url\", \"oauth_redirect_uri\", 
\"oauth_scope\",\n\t\t\t\t\"workload_identity_provider\", \"workload_identity_entra_resource\", \"proxyHost\", \"noProxy\", \"proxyUser\", \"proxyPassword\", \"proxyProtocol\"},\n\t\t\tvalues: []any{\"value\"},\n\t\t},\n\t\t{\n\t\t\ttestParams: []string{\"privatekey\", \"private_key\"},\n\t\t\tvalues:     []any{testGeneratePKCS8String(localTestKey)},\n\t\t},\n\t\t{\n\t\t\ttestParams: []string{\"port\", \"maxRetryCount\", \"max_retry_count\", \"clientTimeout\", \"client_timeout\", \"jwtClientTimeout\", \"jwt_client_timeout\", \"loginTimeout\",\n\t\t\t\t\"login_timeout\", \"requestTimeout\", \"request_timeout\", \"jwtTimeout\", \"jwt_timeout\", \"externalBrowserTimeout\", \"external_browser_timeout\", \"proxyPort\"},\n\t\t\tvalues: []any{\"300\", 500},\n\t\t},\n\t\t{\n\t\t\ttestParams: []string{\"ocspFailOpen\", \"ocsp_fail_open\", \"PasscodeInPassword\", \"passcode_in_password\", \"validateDEFAULTParameters\", \"validate_default_parameters\",\n\t\t\t\t\"clientRequestMFAtoken\", \"client_request_mfa_token\", \"clientStoreTemporaryCredential\", \"client_store_temporary_credential\", \"disableQueryContextCache\", \"disable_query_context_cache\", \"disable_ocsp_checks\",\n\t\t\t\t\"includeRetryReason\", \"include_retry_reason\", \"disableConsoleLogin\", \"disable_console_login\", \"disableSamlUrlCheck\", \"disable_saml_url_check\"},\n\t\t\tvalues: []any{true, \"true\", false, \"false\"},\n\t\t},\n\t\t{\n\t\t\ttestParams: []string{\"connectionDiagnosticsEnabled\", \"connection_diagnostics_enabled\"},\n\t\t\tvalues:     []any{true, false},\n\t\t},\n\t\t{\n\t\t\ttestParams: []string{\"connectionDiagnosticsAllowlistFile\", \"connection_diagnostics_allowlist_file\"},\n\t\t\tvalues:     []any{\"myallowlist.json\"},\n\t\t},\n\t}\n\n\tfor _, testCase := range testCases {\n\t\tfor _, param := range testCase.testParams {\n\t\t\tfor _, value := range testCase.values {\n\t\t\t\tt.Run(param, func(t *testing.T) {\n\t\t\t\t\tcfg := &Config{}\n\t\t\t\t\tconnectionMap := 
make(map[string]any)\n\t\t\t\t\tconnectionMap[param] = value\n\t\t\t\t\terr := ParseToml(cfg, connectionMap)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tt.Fatalf(\"The value should be parsed: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc TestParseTomlWithWrongValue(t *testing.T) {\n\ttestCases := []paramList{\n\t\t{\n\t\t\ttestParams: []string{\"user\", \"password\", \"host\", \"account\", \"warehouse\", \"database\",\n\t\t\t\t\"schema\", \"role\", \"region\", \"protocol\", \"passcode\", \"application\", \"token\", \"privateKey\",\n\t\t\t\t\"tracing\", \"tmpDirPath\", \"clientConfigFile\", \"wrongParams\", \"token_file_path\", \"proxyhost\", \"noproxy\", \"proxyUser\", \"proxyPassword\", \"proxyProtocol\"},\n\t\t\tvalues: []any{1, false},\n\t\t},\n\t\t{\n\t\t\ttestParams: []string{\"port\", \"maxRetryCount\", \"clientTimeout\", \"jwtClientTimeout\", \"loginTimeout\",\n\t\t\t\t\"requestTimeout\", \"jwtTimeout\", \"externalBrowserTimeout\", \"authenticator\"},\n\t\t\tvalues: []any{\"wrong_value\", false},\n\t\t},\n\t\t{\n\t\t\ttestParams: []string{\"ocspFailOpen\", \"PasscodeInPassword\", \"validateDEFAULTParameters\", \"clientRequestMFAtoken\",\n\t\t\t\t\"clientStoreTemporaryCredential\", \"disableQueryContextCache\", \"includeRetryReason\", \"disableConsoleLogin\", \"disableSamlUrlCheck\"},\n\t\t\tvalues: []any{\"wrong_value\", 1},\n\t\t},\n\t}\n\n\tfor _, testCase := range testCases {\n\t\tfor _, param := range testCase.testParams {\n\t\t\tfor _, value := range testCase.values {\n\t\t\t\tt.Run(param, func(t *testing.T) {\n\t\t\t\t\tcfg := &Config{}\n\t\t\t\t\tconnectionMap := make(map[string]any)\n\t\t\t\t\tconnectionMap[param] = value\n\t\t\t\t\terr := ParseToml(cfg, connectionMap)\n\t\t\t\t\tif err == nil {\n\t\t\t\t\t\tt.Fatal(\"should have failed\")\n\t\t\t\t\t}\n\t\t\t\t\tdriverErr, ok := err.(*sferrors.SnowflakeError)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\tt.Fatalf(\"This should be a Snowflake Error, got: %T\", err)\n\t\t\t\t\t}\n\t\t\t\t\tif 
driverErr.Number != sferrors.ErrCodeTomlFileParsingFailed {\n\t\t\t\t\t\tt.Fatalf(\"Expected error code %d, got %d\", sferrors.ErrCodeTomlFileParsingFailed, driverErr.Number)\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc TestGetTomlFilePath(t *testing.T) {\n\tif (runtime.GOOS == \"linux\" || runtime.GOOS == \"darwin\") && os.Getenv(\"HOME\") == \"\" {\n\t\tt.Skip(\"skipping on missing HOME environment variable\")\n\t}\n\tdir, err := GetTomlFilePath(\"\")\n\tif err != nil {\n\t\tt.Fatalf(\"should not have failed: %v\", err)\n\t}\n\thomeDir, err := os.UserHomeDir()\n\tif err != nil {\n\t\tt.Fatalf(\"Failed to find the user home directory: %v\", err)\n\t}\n\tassertEqual(t, dir, path.Join(homeDir, \".snowflake\"))\n\n\tlocation := \"../user//somelocation///b\"\n\tdir, err = GetTomlFilePath(location)\n\tif err != nil {\n\t\tt.Fatalf(\"should not have failed: %v\", err)\n\t}\n\tresult, err := path.Abs(location)\n\tif err != nil {\n\t\tt.Fatalf(\"should not have failed: %v\", err)\n\t}\n\tassertEqual(t, dir, result)\n\n\t// Absolute paths on Windows can vary depending on which disk the driver is located on.\n\t// As a result, this test runs only on non-Windows machines.\n\tif runtime.GOOS != \"windows\" {\n\t\tresult = \"/user/somelocation/b\"\n\t\tlocation = \"/user//somelocation///b\"\n\t\tdir, err = GetTomlFilePath(location)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"should not have failed: %v\", err)\n\t\t}\n\t\tassertEqual(t, dir, result)\n\t}\n}\n\n// assertEqual is a minimal generic test helper for equality checks.\nfunc assertEqual[T comparable](t *testing.T, got, want T) {\n\tt.Helper()\n\tif got != want {\n\t\tt.Fatalf(\"got %v, want %v\", got, want)\n\t}\n}\n"
  },
  {
    "path": "internal/config/crl_mode.go",
    "content": "package config\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n)\n\n// CertRevocationCheckMode defines the modes for certificate revocation checks.\ntype CertRevocationCheckMode int\n\nconst (\n\t// CertRevocationCheckDisabled means that certificate revocation checks are disabled.\n\tCertRevocationCheckDisabled CertRevocationCheckMode = iota\n\t// CertRevocationCheckAdvisory means that certificate revocation checks are advisory: the driver will not fail if a check ends with an error (revocation status cannot be verified).\n\t// The driver will fail only if a certificate is revoked.\n\tCertRevocationCheckAdvisory\n\t// CertRevocationCheckEnabled means that every certificate revocation check must pass, otherwise the driver will fail.\n\tCertRevocationCheckEnabled\n)\n\nfunc (m CertRevocationCheckMode) String() string {\n\tswitch m {\n\tcase CertRevocationCheckDisabled:\n\t\treturn \"DISABLED\"\n\tcase CertRevocationCheckAdvisory:\n\t\treturn \"ADVISORY\"\n\tcase CertRevocationCheckEnabled:\n\t\treturn \"ENABLED\"\n\tdefault:\n\t\treturn fmt.Sprintf(\"unknown CertRevocationCheckMode: %d\", m)\n\t}\n}\n\n// ParseCertRevocationCheckMode parses a string into a CertRevocationCheckMode.\nfunc ParseCertRevocationCheckMode(s string) (CertRevocationCheckMode, error) {\n\tswitch strings.ToLower(s) {\n\tcase \"disabled\":\n\t\treturn CertRevocationCheckDisabled, nil\n\tcase \"advisory\":\n\t\treturn CertRevocationCheckAdvisory, nil\n\tcase \"enabled\":\n\t\treturn CertRevocationCheckEnabled, nil\n\t}\n\treturn 0, fmt.Errorf(\"unknown CertRevocationCheckMode: %s\", s)\n}\n"
  },
  {
    "path": "internal/config/dsn.go",
    "content": "package config\n\nimport (\n\t\"crypto/rsa\"\n\t\"crypto/x509\"\n\t\"encoding/base64\"\n\t\"encoding/pem\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net/url\"\n\t\"os\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\tsferrors \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\tloggerinternal \"github.com/snowflakedb/gosnowflake/v2/internal/logger\"\n)\n\nvar logger = loggerinternal.NewLoggerProxy()\n\nconst (\n\t// DefaultClientTimeout is the timeout for network round trip + read out http response\n\tDefaultClientTimeout = 900 * time.Second\n\t// DefaultJWTClientTimeout is the timeout for network round trip + read out http response but used for JWT auth\n\tDefaultJWTClientTimeout = 10 * time.Second\n\t// DefaultLoginTimeout is the timeout for retry for login EXCLUDING clientTimeout\n\tDefaultLoginTimeout = 300 * time.Second\n\t// DefaultRequestTimeout is the timeout for retry for request EXCLUDING clientTimeout\n\tDefaultRequestTimeout = 0 * time.Second\n\t// DefaultJWTTimeout is the timeout for JWT token expiration\n\tDefaultJWTTimeout = 60 * time.Second\n\t// DefaultExternalBrowserTimeout is the timeout for external browser login\n\tDefaultExternalBrowserTimeout = 120 * time.Second\n\tdefaultCloudStorageTimeout    = -1 // Timeout for calling cloud storage.\n\tdefaultMaxRetryCount          = 7  // specifies maximum number of subsequent retries\n\t// DefaultDomain is the default domain for Snowflake accounts\n\tDefaultDomain = \".snowflakecomputing.com\"\n\t// CnDomain is the domain for Snowflake accounts in China\n\tCnDomain             = \".snowflakecomputing.cn\"\n\ttopLevelDomainPrefix = \".snowflakecomputing.\" // used to extract the domain from host\n)\n\nconst clientType = \"Go\"\n\n// GetFromEnv retrieves the value of an environment variable.\n// If failOnMissing is true and the variable is not set, an error is returned.\nfunc GetFromEnv(name string, failOnMissing bool) (string, error) {\n\tif value := os.Getenv(name); value != \"\" 
{\n\t\treturn value, nil\n\t}\n\tif failOnMissing {\n\t\treturn \"\", fmt.Errorf(\"%v environment variable is not set\", name)\n\t}\n\treturn \"\", nil\n}\n\n// DSN constructs a DSN for Snowflake db.\nfunc DSN(cfg *Config) (dsn string, err error) {\n\tif strings.ToLower(cfg.Region) == \"us-west-2\" {\n\t\tcfg.Region = \"\"\n\t}\n\t// in case account includes region\n\tregion, posDot := extractRegionFromAccount(cfg.Account)\n\tif strings.ToLower(region) == \"us-west-2\" {\n\t\tregion = \"\"\n\t\tcfg.Account = cfg.Account[:posDot]\n\t\tlogger.Info(\"Ignoring default region .us-west-2 in DSN from Account configuration.\")\n\t}\n\tif region != \"\" {\n\t\tif cfg.Region != \"\" {\n\t\t\treturn \"\", sferrors.ErrRegionConflict()\n\t\t}\n\t\tcfg.Region = region\n\t\tcfg.Account = cfg.Account[:posDot]\n\t}\n\thasHost := true\n\tif cfg.Host == \"\" {\n\t\thasHost = false\n\t\tif cfg.Region == \"\" {\n\t\t\tcfg.Host = cfg.Account + DefaultDomain\n\t\t} else {\n\t\t\tcfg.Host = buildHostFromAccountAndRegion(cfg.Account, cfg.Region)\n\t\t}\n\t}\n\terr = FillMissingConfigParameters(cfg)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tparams := &url.Values{}\n\tif hasHost && cfg.Account != \"\" {\n\t\t// account may not be included in a Host string\n\t\tparams.Add(\"account\", cfg.Account)\n\t}\n\tif cfg.Database != \"\" {\n\t\tparams.Add(\"database\", cfg.Database)\n\t}\n\tif cfg.Schema != \"\" {\n\t\tparams.Add(\"schema\", cfg.Schema)\n\t}\n\tif cfg.Warehouse != \"\" {\n\t\tparams.Add(\"warehouse\", cfg.Warehouse)\n\t}\n\tif cfg.Role != \"\" {\n\t\tparams.Add(\"role\", cfg.Role)\n\t}\n\tif cfg.Region != \"\" {\n\t\tparams.Add(\"region\", cfg.Region)\n\t}\n\tif cfg.OauthClientID != \"\" {\n\t\tparams.Add(\"oauthClientId\", cfg.OauthClientID)\n\t}\n\tif cfg.OauthClientSecret != \"\" {\n\t\tparams.Add(\"oauthClientSecret\", cfg.OauthClientSecret)\n\t}\n\tif cfg.OauthAuthorizationURL != \"\" {\n\t\tparams.Add(\"oauthAuthorizationUrl\", cfg.OauthAuthorizationURL)\n\t}\n\tif 
cfg.OauthTokenRequestURL != \"\" {\n\t\tparams.Add(\"oauthTokenRequestUrl\", cfg.OauthTokenRequestURL)\n\t}\n\tif cfg.OauthRedirectURI != \"\" {\n\t\tparams.Add(\"oauthRedirectUri\", cfg.OauthRedirectURI)\n\t}\n\tif cfg.OauthScope != \"\" {\n\t\tparams.Add(\"oauthScope\", cfg.OauthScope)\n\t}\n\tif cfg.EnableSingleUseRefreshTokens {\n\t\tparams.Add(\"enableSingleUseRefreshTokens\", strconv.FormatBool(cfg.EnableSingleUseRefreshTokens))\n\t}\n\tif cfg.WorkloadIdentityProvider != \"\" {\n\t\tparams.Add(\"workloadIdentityProvider\", cfg.WorkloadIdentityProvider)\n\t}\n\tif cfg.WorkloadIdentityEntraResource != \"\" {\n\t\tparams.Add(\"workloadIdentityEntraResource\", cfg.WorkloadIdentityEntraResource)\n\t}\n\tif len(cfg.WorkloadIdentityImpersonationPath) > 0 {\n\t\tparams.Add(\"workloadIdentityImpersonationPath\", strings.Join(cfg.WorkloadIdentityImpersonationPath, \",\"))\n\t}\n\tif cfg.Authenticator != AuthTypeSnowflake {\n\t\tif cfg.Authenticator == AuthTypeOkta {\n\t\t\tparams.Add(\"authenticator\", strings.ToLower(cfg.OktaURL.String()))\n\t\t} else {\n\t\t\tparams.Add(\"authenticator\", strings.ToLower(cfg.Authenticator.String()))\n\t\t}\n\t}\n\tif cfg.SingleAuthenticationPrompt != BoolNotSet {\n\t\tif cfg.SingleAuthenticationPrompt == BoolTrue {\n\t\t\tparams.Add(\"singleAuthenticationPrompt\", \"true\")\n\t\t} else {\n\t\t\tparams.Add(\"singleAuthenticationPrompt\", \"false\")\n\t\t}\n\t}\n\tif cfg.Passcode != \"\" {\n\t\tparams.Add(\"passcode\", cfg.Passcode)\n\t}\n\tif cfg.PasscodeInPassword {\n\t\tparams.Add(\"passcodeInPassword\", strconv.FormatBool(cfg.PasscodeInPassword))\n\t}\n\tif cfg.ClientTimeout != DefaultClientTimeout {\n\t\tparams.Add(\"clientTimeout\", strconv.FormatInt(int64(cfg.ClientTimeout/time.Second), 10))\n\t}\n\tif cfg.JWTClientTimeout != DefaultJWTClientTimeout {\n\t\tparams.Add(\"jwtClientTimeout\", strconv.FormatInt(int64(cfg.JWTClientTimeout/time.Second), 10))\n\t}\n\tif cfg.LoginTimeout != DefaultLoginTimeout 
{\n\t\tparams.Add(\"loginTimeout\", strconv.FormatInt(int64(cfg.LoginTimeout/time.Second), 10))\n\t}\n\tif cfg.RequestTimeout != DefaultRequestTimeout {\n\t\tparams.Add(\"requestTimeout\", strconv.FormatInt(int64(cfg.RequestTimeout/time.Second), 10))\n\t}\n\tif cfg.JWTExpireTimeout != DefaultJWTTimeout {\n\t\tparams.Add(\"jwtTimeout\", strconv.FormatInt(int64(cfg.JWTExpireTimeout/time.Second), 10))\n\t}\n\tif cfg.ExternalBrowserTimeout != DefaultExternalBrowserTimeout {\n\t\tparams.Add(\"externalBrowserTimeout\", strconv.FormatInt(int64(cfg.ExternalBrowserTimeout/time.Second), 10))\n\t}\n\tif cfg.CloudStorageTimeout != defaultCloudStorageTimeout {\n\t\tparams.Add(\"cloudStorageTimeout\", strconv.FormatInt(int64(cfg.CloudStorageTimeout/time.Second), 10))\n\t}\n\tif cfg.MaxRetryCount != defaultMaxRetryCount {\n\t\tparams.Add(\"maxRetryCount\", strconv.Itoa(cfg.MaxRetryCount))\n\t}\n\tif cfg.Application != clientType {\n\t\tparams.Add(\"application\", cfg.Application)\n\t}\n\tif cfg.Protocol != \"\" && cfg.Protocol != \"https\" {\n\t\tparams.Add(\"protocol\", cfg.Protocol)\n\t}\n\tif cfg.Token != \"\" {\n\t\tparams.Add(\"token\", cfg.Token)\n\t}\n\tif cfg.TokenFilePath != \"\" {\n\t\tparams.Add(\"tokenFilePath\", cfg.TokenFilePath)\n\t}\n\tif cfg.CertRevocationCheckMode != CertRevocationCheckDisabled {\n\t\tparams.Add(\"certRevocationCheckMode\", cfg.CertRevocationCheckMode.String())\n\t}\n\tif cfg.CrlAllowCertificatesWithoutCrlURL == BoolTrue {\n\t\tparams.Add(\"crlAllowCertificatesWithoutCrlURL\", \"true\")\n\t}\n\tif cfg.CrlInMemoryCacheDisabled {\n\t\tparams.Add(\"crlInMemoryCacheDisabled\", \"true\")\n\t}\n\tif cfg.CrlOnDiskCacheDisabled {\n\t\tparams.Add(\"crlOnDiskCacheDisabled\", \"true\")\n\t}\n\tif cfg.CrlDownloadMaxSize != 0 {\n\t\tparams.Add(\"crlDownloadMaxSize\", strconv.Itoa(cfg.CrlDownloadMaxSize))\n\t}\n\tif cfg.CrlHTTPClientTimeout != 0 {\n\t\tparams.Add(\"crlHttpClientTimeout\", strconv.FormatInt(int64(cfg.CrlHTTPClientTimeout/time.Second), 
10))\n\t}\n\tif cfg.Params != nil {\n\t\tfor k, v := range cfg.Params {\n\t\t\tparams.Add(k, *v)\n\t\t}\n\t}\n\tif cfg.PrivateKey != nil {\n\t\tprivateKeyInBytes, err := MarshalPKCS8PrivateKey(cfg.PrivateKey)\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t\tkeyBase64 := base64.URLEncoding.EncodeToString(privateKeyInBytes)\n\t\tparams.Add(\"privateKey\", keyBase64)\n\t}\n\tif cfg.DisableOCSPChecks {\n\t\tparams.Add(\"disableOCSPChecks\", strconv.FormatBool(cfg.DisableOCSPChecks))\n\t}\n\tif cfg.Tracing != \"\" {\n\t\tparams.Add(\"tracing\", cfg.Tracing)\n\t}\n\tif cfg.LogQueryText {\n\t\tparams.Add(\"logQueryText\", strconv.FormatBool(cfg.LogQueryText))\n\t}\n\tif cfg.LogQueryParameters {\n\t\tparams.Add(\"logQueryParameters\", strconv.FormatBool(cfg.LogQueryParameters))\n\t}\n\tif cfg.TmpDirPath != \"\" {\n\t\tparams.Add(\"tmpDirPath\", cfg.TmpDirPath)\n\t}\n\tif cfg.DisableQueryContextCache {\n\t\tparams.Add(\"disableQueryContextCache\", \"true\")\n\t}\n\tif cfg.IncludeRetryReason == BoolFalse {\n\t\tparams.Add(\"includeRetryReason\", \"false\")\n\t}\n\tif cfg.ServerSessionKeepAlive {\n\t\tparams.Add(\"serverSessionKeepAlive\", \"true\")\n\t}\n\n\tparams.Add(\"ocspFailOpen\", strconv.FormatBool(cfg.OCSPFailOpen != OCSPFailOpenFalse))\n\n\tparams.Add(\"validateDefaultParameters\", strconv.FormatBool(cfg.ValidateDefaultParameters != BoolFalse))\n\n\tif cfg.ClientRequestMfaToken != BoolNotSet {\n\t\tparams.Add(\"clientRequestMfaToken\", strconv.FormatBool(cfg.ClientRequestMfaToken != BoolFalse))\n\t}\n\n\tif cfg.ClientStoreTemporaryCredential != BoolNotSet {\n\t\tparams.Add(\"clientStoreTemporaryCredential\", strconv.FormatBool(cfg.ClientStoreTemporaryCredential != BoolFalse))\n\t}\n\tif cfg.ClientConfigFile != \"\" {\n\t\tparams.Add(\"clientConfigFile\", cfg.ClientConfigFile)\n\t}\n\tif cfg.DisableConsoleLogin != BoolNotSet {\n\t\tparams.Add(\"disableConsoleLogin\", strconv.FormatBool(cfg.DisableConsoleLogin != BoolFalse))\n\t}\n\tif cfg.DisableSamlURLCheck 
!= BoolNotSet {\n\t\tparams.Add(\"disableSamlURLCheck\", strconv.FormatBool(cfg.DisableSamlURLCheck != BoolFalse))\n\t}\n\tif cfg.ConnectionDiagnosticsEnabled {\n\t\tparams.Add(\"connectionDiagnosticsEnabled\", strconv.FormatBool(cfg.ConnectionDiagnosticsEnabled))\n\t}\n\tif cfg.ConnectionDiagnosticsAllowlistFile != \"\" {\n\t\tparams.Add(\"connectionDiagnosticsAllowlistFile\", cfg.ConnectionDiagnosticsAllowlistFile)\n\t}\n\tif cfg.TLSConfigName != \"\" {\n\t\tparams.Add(\"tlsConfigName\", cfg.TLSConfigName)\n\t}\n\tif cfg.ProxyHost != \"\" {\n\t\tparams.Add(\"proxyHost\", cfg.ProxyHost)\n\t}\n\tif cfg.ProxyPort != 0 {\n\t\tparams.Add(\"proxyPort\", strconv.Itoa(cfg.ProxyPort))\n\t}\n\tif cfg.ProxyProtocol != \"\" {\n\t\tparams.Add(\"proxyProtocol\", cfg.ProxyProtocol)\n\t}\n\tif cfg.ProxyUser != \"\" {\n\t\tparams.Add(\"proxyUser\", cfg.ProxyUser)\n\t}\n\tif cfg.ProxyPassword != \"\" {\n\t\tparams.Add(\"proxyPassword\", cfg.ProxyPassword)\n\t}\n\tif cfg.NoProxy != \"\" {\n\t\tparams.Add(\"noProxy\", cfg.NoProxy)\n\t}\n\n\tdsn = fmt.Sprintf(\"%v:%v@%v:%v\", url.QueryEscape(cfg.User), url.QueryEscape(cfg.Password), cfg.Host, cfg.Port)\n\tif params.Encode() != \"\" {\n\t\tdsn += \"?\" + params.Encode()\n\t}\n\treturn\n}\n\n// ParseDSN parses the DSN string to a Config.\nfunc ParseDSN(dsn string) (cfg *Config, err error) {\n\t// New config with some default values\n\tcfg = &Config{\n\t\tParams:        make(map[string]*string),\n\t\tAuthenticator: AuthTypeSnowflake, // Default to snowflake\n\t}\n\n\t// user[:password]@account/database/schema[?param1=value1&paramN=valueN]\n\t// or\n\t// user[:password]@account/database[?param1=value1&paramN=valueN]\n\t// or\n\t// user[:password]@host:port/database/schema?account=user_account[?param1=value1&paramN=valueN]\n\t// or\n\t// host:port/database/schema?account=user_account[?param1=value1&paramN=valueN]\n\n\tfoundSlash := false\n\tsecondSlash := false\n\tdone := false\n\tvar i int\n\tposQuestion := len(dsn)\n\tfor i = len(dsn) - 
1; i >= 0; i-- {\n\t\tswitch dsn[i] {\n\t\tcase '/':\n\t\t\tfoundSlash = true\n\n\t\t\t// left part is empty if i <= 0\n\t\t\tvar j int\n\t\t\tposSecondSlash := i\n\t\t\tif i > 0 {\n\t\t\t\tfor j = i - 1; j >= 0; j-- {\n\t\t\t\t\tswitch dsn[j] {\n\t\t\t\t\tcase '/':\n\t\t\t\t\t\t// second slash\n\t\t\t\t\t\tsecondSlash = true\n\t\t\t\t\t\tposSecondSlash = j\n\t\t\t\t\tcase '@':\n\t\t\t\t\t\t// username[:password]@...\n\t\t\t\t\t\tcfg.User, cfg.Password = parseUserPassword(j, dsn)\n\t\t\t\t\t}\n\t\t\t\t\tif dsn[j] == '@' {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t// account or host:port\n\t\t\t\terr = parseAccountHostPort(cfg, j, posSecondSlash, dsn)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t}\n\t\t\t// [?param1=value1&...&paramN=valueN]\n\t\t\t// Find the first '?' in dsn[i+1:]\n\t\t\terr = parseParams(cfg, i, dsn)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif secondSlash {\n\t\t\t\tcfg.Database = dsn[posSecondSlash+1 : i]\n\t\t\t\tcfg.Schema = dsn[i+1 : posQuestion]\n\t\t\t} else {\n\t\t\t\tcfg.Database = dsn[posSecondSlash+1 : posQuestion]\n\t\t\t}\n\t\t\tdone = true\n\t\tcase '?':\n\t\t\tposQuestion = i\n\t\t}\n\t\tif done {\n\t\t\tbreak\n\t\t}\n\t}\n\tif !foundSlash {\n\t\t// no db or schema is specified\n\t\tvar j int\n\t\tfor j = len(dsn) - 1; j >= 0; j-- {\n\t\t\tswitch dsn[j] {\n\t\t\tcase '@':\n\t\t\t\tcfg.User, cfg.Password = parseUserPassword(j, dsn)\n\t\t\tcase '?':\n\t\t\t\tposQuestion = j\n\t\t\t}\n\t\t\tif dsn[j] == '@' {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\terr = parseAccountHostPort(cfg, j, posQuestion, dsn)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\terr = parseParams(cfg, posQuestion-1, dsn)\n\t\tif err != nil {\n\t\t\treturn\n\t\t}\n\t}\n\tif posDot := strings.Index(cfg.Account, \".\"); posDot >= 0 {\n\t\tcfg.Account = cfg.Account[:posDot]\n\t}\n\n\terr = FillMissingConfigParameters(cfg)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// unescape parameters\n\tvar s string\n\ts, 
err = url.QueryUnescape(cfg.User)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tcfg.User = s\n\ts, err = url.QueryUnescape(cfg.Password)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tcfg.Password = s\n\ts, err = url.QueryUnescape(cfg.Database)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tcfg.Database = s\n\ts, err = url.QueryUnescape(cfg.Schema)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tcfg.Schema = s\n\ts, err = url.QueryUnescape(cfg.Role)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tcfg.Role = s\n\ts, err = url.QueryUnescape(cfg.Warehouse)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tcfg.Warehouse = s\n\treturn cfg, nil\n}\n\n// applyAccountFromHostIfMissing sets Account to the first DNS label of Host when Account is empty\n// and Host matches the Snowflake hostname heuristic (hostIncludesTopLevelDomain). FillMissingConfigParameters\n// invokes this so programmatic Config (e.g. database/sql.Connector) matches behavior that DSN users\n// already got via ParseDSN plus FillMissingConfigParameters. 
ParseDSN still truncates dotted account\n// values from parameters before FillMissingConfigParameters; that step does not apply to non-empty\n// Account set directly on a programmatic Config.\nfunc applyAccountFromHostIfMissing(cfg *Config) {\n\tif strings.TrimSpace(cfg.Account) != \"\" {\n\t\treturn\n\t}\n\tif !hostIncludesTopLevelDomain(cfg.Host) {\n\t\treturn\n\t}\n\tposDot := strings.Index(cfg.Host, \".\")\n\tif posDot <= 0 {\n\t\treturn\n\t}\n\tcfg.Account = cfg.Host[:posDot]\n}\n\n// FillMissingConfigParameters fills in default values for missing config parameters.\nfunc FillMissingConfigParameters(cfg *Config) error {\n\tapplyAccountFromHostIfMissing(cfg)\n\n\tposDash := strings.LastIndex(cfg.Account, \"-\")\n\tif posDash > 0 {\n\t\tif strings.Contains(strings.ToLower(cfg.Host), \".global.\") {\n\t\t\tcfg.Account = cfg.Account[:posDash]\n\t\t}\n\t}\n\tif strings.Trim(cfg.Account, \" \") == \"\" {\n\t\treturn sferrors.ErrEmptyAccount()\n\t}\n\n\tif authRequiresUser(cfg) && strings.TrimSpace(cfg.User) == \"\" {\n\t\treturn sferrors.ErrEmptyUsername()\n\t}\n\n\tif authRequiresPassword(cfg) && strings.TrimSpace(cfg.Password) == \"\" {\n\t\treturn sferrors.ErrEmptyPassword()\n\t}\n\n\tif authRequiresEitherPasswordOrToken(cfg) && strings.TrimSpace(cfg.Password) == \"\" && strings.TrimSpace(cfg.Token) == \"\" {\n\t\treturn sferrors.ErrEmptyPasswordAndToken()\n\t}\n\n\tif authRequiresClientIDAndSecret(cfg) && (strings.TrimSpace(cfg.OauthClientID) == \"\" || strings.TrimSpace(cfg.OauthClientSecret) == \"\") {\n\t\treturn sferrors.ErrEmptyOAuthParameters()\n\t}\n\tif strings.Trim(cfg.Protocol, \" \") == \"\" {\n\t\tcfg.Protocol = \"https\"\n\t}\n\tif cfg.Port == 0 {\n\t\tcfg.Port = 443\n\t}\n\n\tcfg.Region = strings.Trim(cfg.Region, \" \")\n\tif cfg.Region != \"\" {\n\t\t// region is specified but not included in Host\n\t\tdomain, i := extractDomainFromHost(cfg.Host)\n\t\tif i >= 1 {\n\t\t\thostPrefix := cfg.Host[0:i]\n\t\t\tif !strings.HasSuffix(hostPrefix, 
cfg.Region) {\n\t\t\t\tcfg.Host = fmt.Sprintf(\"%v.%v%v\", hostPrefix, cfg.Region, domain)\n\t\t\t}\n\t\t}\n\t}\n\tif cfg.Host == \"\" {\n\t\tif cfg.Region != \"\" {\n\t\t\tcfg.Host = cfg.Account + \".\" + cfg.Region + getDomainBasedOnRegion(cfg.Region)\n\t\t} else {\n\t\t\tregion, _ := extractRegionFromAccount(cfg.Account)\n\t\t\tif region != \"\" {\n\t\t\t\tcfg.Host = cfg.Account + getDomainBasedOnRegion(region)\n\t\t\t} else {\n\t\t\t\tcfg.Host = cfg.Account + DefaultDomain\n\t\t\t}\n\t\t}\n\t}\n\tif cfg.LoginTimeout == 0 {\n\t\tcfg.LoginTimeout = DefaultLoginTimeout\n\t}\n\tif cfg.RequestTimeout == 0 {\n\t\tcfg.RequestTimeout = DefaultRequestTimeout\n\t}\n\tif cfg.JWTExpireTimeout == 0 {\n\t\tcfg.JWTExpireTimeout = DefaultJWTTimeout\n\t}\n\tif cfg.ClientTimeout == 0 {\n\t\tcfg.ClientTimeout = DefaultClientTimeout\n\t}\n\tif cfg.JWTClientTimeout == 0 {\n\t\tcfg.JWTClientTimeout = DefaultJWTClientTimeout\n\t}\n\tif cfg.ExternalBrowserTimeout == 0 {\n\t\tcfg.ExternalBrowserTimeout = DefaultExternalBrowserTimeout\n\t}\n\tif cfg.CloudStorageTimeout == 0 {\n\t\tcfg.CloudStorageTimeout = defaultCloudStorageTimeout\n\t}\n\tif cfg.MaxRetryCount == 0 {\n\t\tcfg.MaxRetryCount = defaultMaxRetryCount\n\t}\n\tif strings.Trim(cfg.Application, \" \") == \"\" {\n\t\tcfg.Application = clientType\n\t}\n\n\tif cfg.OCSPFailOpen == OCSPFailOpenNotSet {\n\t\tcfg.OCSPFailOpen = OCSPFailOpenTrue\n\t}\n\n\tif cfg.ValidateDefaultParameters == BoolNotSet {\n\t\tcfg.ValidateDefaultParameters = BoolTrue\n\t}\n\n\tif cfg.IncludeRetryReason == BoolNotSet {\n\t\tcfg.IncludeRetryReason = BoolTrue\n\t}\n\n\tif cfg.ProxyHost != \"\" && cfg.ProxyProtocol == \"\" {\n\t\tcfg.ProxyProtocol = \"http\" // Default to http if not specified\n\t}\n\n\tdomain, _ := extractDomainFromHost(cfg.Host)\n\tif len(cfg.Host) == len(domain) {\n\t\treturn &sferrors.SnowflakeError{\n\t\t\tNumber:      sferrors.ErrCodeFailedToParseHost,\n\t\t\tMessage:     sferrors.ErrMsgFailedToParseHost,\n\t\t\tMessageArgs: 
[]any{cfg.Host},\n\t\t}\n\t}\n\tif cfg.TLSConfigName != \"\" {\n\t\tif _, ok := GetTLSConfig(cfg.TLSConfigName); !ok {\n\t\t\treturn &sferrors.SnowflakeError{\n\t\t\t\tNumber:  sferrors.ErrCodeMissingTLSConfig,\n\t\t\t\tMessage: fmt.Sprintf(sferrors.ErrMsgMissingTLSConfig, cfg.TLSConfigName),\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc extractDomainFromHost(host string) (domain string, index int) {\n\ti := strings.LastIndex(strings.ToLower(host), topLevelDomainPrefix)\n\tif i >= 1 {\n\t\tdomain = host[i:]\n\t\treturn domain, i\n\t}\n\treturn \"\", i\n}\n\nfunc getDomainBasedOnRegion(region string) string {\n\tif strings.HasPrefix(strings.ToLower(region), \"cn-\") {\n\t\treturn CnDomain\n\t}\n\treturn DefaultDomain\n}\n\nfunc extractRegionFromAccount(account string) (region string, posDot int) {\n\tposDot = strings.Index(strings.ToLower(account), \".\")\n\tif posDot > 0 {\n\t\treturn account[posDot+1:], posDot\n\t}\n\treturn \"\", posDot\n}\n\nfunc hostIncludesTopLevelDomain(host string) bool {\n\treturn strings.Contains(strings.ToLower(host), topLevelDomainPrefix)\n}\n\nfunc buildHostFromAccountAndRegion(account, region string) string {\n\treturn account + \".\" + region + getDomainBasedOnRegion(region)\n}\n\nfunc authRequiresUser(cfg *Config) bool {\n\treturn cfg.Authenticator != AuthTypeOAuth &&\n\t\tcfg.Authenticator != AuthTypeTokenAccessor &&\n\t\tcfg.Authenticator != AuthTypeExternalBrowser &&\n\t\tcfg.Authenticator != AuthTypePat &&\n\t\tcfg.Authenticator != AuthTypeOAuthAuthorizationCode &&\n\t\tcfg.Authenticator != AuthTypeOAuthClientCredentials &&\n\t\tcfg.Authenticator != AuthTypeWorkloadIdentityFederation\n}\n\nfunc authRequiresPassword(cfg *Config) bool {\n\treturn cfg.Authenticator != AuthTypeOAuth &&\n\t\tcfg.Authenticator != AuthTypeTokenAccessor &&\n\t\tcfg.Authenticator != AuthTypeExternalBrowser &&\n\t\tcfg.Authenticator != AuthTypeJwt &&\n\t\tcfg.Authenticator != AuthTypePat &&\n\t\tcfg.Authenticator != AuthTypeOAuthAuthorizationCode 
&&\n\t\tcfg.Authenticator != AuthTypeOAuthClientCredentials &&\n\t\tcfg.Authenticator != AuthTypeWorkloadIdentityFederation\n}\n\nfunc authRequiresEitherPasswordOrToken(cfg *Config) bool {\n\treturn cfg.Authenticator == AuthTypePat\n}\n\nfunc authRequiresClientIDAndSecret(cfg *Config) bool {\n\treturn cfg.Authenticator == AuthTypeOAuthAuthorizationCode\n}\n\n// transformAccountToHost transforms account to host\nfunc transformAccountToHost(cfg *Config) (err error) {\n\tif cfg.Port == 0 && cfg.Host != \"\" && !hostIncludesTopLevelDomain(cfg.Host) {\n\t\t// account name is specified instead of host:port\n\t\tcfg.Account = cfg.Host\n\t\tregion, posDot := extractRegionFromAccount(cfg.Account)\n\t\tif strings.ToLower(region) == \"us-west-2\" {\n\t\t\tregion = \"\"\n\t\t\tcfg.Account = cfg.Account[:posDot]\n\t\t\tlogger.Info(\"Ignoring default region .us-west-2 from Account configuration.\")\n\t\t}\n\t\tif region != \"\" {\n\t\t\tcfg.Region = region\n\t\t\tcfg.Account = cfg.Account[:posDot]\n\t\t\tcfg.Host = buildHostFromAccountAndRegion(cfg.Account, cfg.Region)\n\t\t} else {\n\t\t\tcfg.Host = cfg.Account + DefaultDomain\n\t\t}\n\t\tcfg.Port = 443\n\t}\n\treturn nil\n}\n\n// parseAccountHostPort parses the DSN string to attempt to get account or host and port.\nfunc parseAccountHostPort(cfg *Config, posAt, posSlash int, dsn string) (err error) {\n\t// account or host:port\n\tvar k int\n\tfor k = posAt + 1; k < posSlash; k++ {\n\t\tif dsn[k] == ':' {\n\t\t\tcfg.Port, err = strconv.Atoi(dsn[k+1 : posSlash])\n\t\t\tif err != nil {\n\t\t\t\terr = &sferrors.SnowflakeError{\n\t\t\t\t\tNumber:      sferrors.ErrCodeFailedToParsePort,\n\t\t\t\t\tMessage:     sferrors.ErrMsgFailedToParsePort,\n\t\t\t\t\tMessageArgs: []any{dsn[k+1 : posSlash]},\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\t}\n\tcfg.Host = dsn[posAt+1 : k]\n\treturn transformAccountToHost(cfg)\n}\n\n// parseUserPassword parses the DSN string for username and password\nfunc parseUserPassword(posAt int, 
dsn string) (user, password string) {\n\tvar k int\n\tfor k = 0; k < posAt; k++ {\n\t\tif dsn[k] == ':' {\n\t\t\tpassword = dsn[k+1 : posAt]\n\t\t\tbreak\n\t\t}\n\t}\n\tuser = dsn[:k]\n\treturn\n}\n\n// parseParams parses the DSN parameter section.\nfunc parseParams(cfg *Config, posQuestion int, dsn string) (err error) {\n\tfor j := posQuestion + 1; j < len(dsn); j++ {\n\t\tif dsn[j] == '?' {\n\t\t\tif err = parseDSNParams(cfg, dsn[j+1:]); err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\t}\n\treturn\n}\n\n// parseDSNParams parses the DSN \"query string\". Values must be url.QueryEscape'ed\nfunc parseDSNParams(cfg *Config, params string) (err error) {\n\tlogger.Infof(\"Query String: %v\\n\", params)\n\tparamsSlice := strings.SplitSeq(params, \"&\")\n\tfor v := range paramsSlice {\n\t\tparam := strings.SplitN(v, \"=\", 2)\n\t\tif len(param) != 2 {\n\t\t\tcontinue\n\t\t}\n\t\tvar value string\n\t\tvalue, err = url.QueryUnescape(param[1])\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tswitch param[0] {\n\t\tcase \"account\":\n\t\t\tcfg.Account = value\n\t\tcase \"warehouse\":\n\t\t\tcfg.Warehouse = value\n\t\tcase \"database\":\n\t\t\tcfg.Database = value\n\t\tcase \"schema\":\n\t\t\tcfg.Schema = value\n\t\tcase \"role\":\n\t\t\tcfg.Role = value\n\t\tcase \"region\":\n\t\t\tcfg.Region = value\n\t\tcase \"protocol\":\n\t\t\tcfg.Protocol = value\n\t\tcase \"singleAuthenticationPrompt\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif vv {\n\t\t\t\tcfg.SingleAuthenticationPrompt = BoolTrue\n\t\t\t} else {\n\t\t\t\tcfg.SingleAuthenticationPrompt = BoolFalse\n\t\t\t}\n\t\tcase \"passcode\":\n\t\t\tcfg.Passcode = value\n\t\tcase \"oauthClientId\":\n\t\t\tcfg.OauthClientID = value\n\t\tcase \"oauthClientSecret\":\n\t\t\tcfg.OauthClientSecret = value\n\t\tcase \"oauthAuthorizationUrl\":\n\t\t\tcfg.OauthAuthorizationURL = value\n\t\tcase 
\"oauthTokenRequestUrl\":\n\t\t\tcfg.OauthTokenRequestURL = value\n\t\tcase \"oauthRedirectUri\":\n\t\t\tcfg.OauthRedirectURI = value\n\t\tcase \"oauthScope\":\n\t\t\tcfg.OauthScope = value\n\t\tcase \"enableSingleUseRefreshTokens\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tcfg.EnableSingleUseRefreshTokens = vv\n\t\tcase \"passcodeInPassword\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tcfg.PasscodeInPassword = vv\n\t\tcase \"clientTimeout\":\n\t\t\tcfg.ClientTimeout, err = parseTimeout(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\tcase \"jwtClientTimeout\":\n\t\t\tcfg.JWTClientTimeout, err = parseTimeout(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\tcase \"loginTimeout\":\n\t\t\tcfg.LoginTimeout, err = parseTimeout(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\tcase \"requestTimeout\":\n\t\t\tcfg.RequestTimeout, err = parseTimeout(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\tcase \"jwtTimeout\":\n\t\t\tcfg.JWTExpireTimeout, err = parseTimeout(value)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\tcase \"externalBrowserTimeout\":\n\t\t\tcfg.ExternalBrowserTimeout, err = parseTimeout(value)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\tcase \"cloudStorageTimeout\":\n\t\t\tcfg.CloudStorageTimeout, err = parseTimeout(value)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\tcase \"maxRetryCount\":\n\t\t\tcfg.MaxRetryCount, err = strconv.Atoi(value)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\tcase \"serverSessionKeepAlive\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tcfg.ServerSessionKeepAlive = vv\n\t\tcase \"application\":\n\t\t\tcfg.Application = value\n\t\tcase \"authenticator\":\n\t\t\terr := DetermineAuthenticatorType(cfg, value)\n\t\t\tif err != nil 
{\n\t\t\t\treturn err\n\t\t\t}\n\t\tcase \"disableOCSPChecks\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tcfg.DisableOCSPChecks = vv\n\t\tcase \"ocspFailOpen\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif vv {\n\t\t\t\tcfg.OCSPFailOpen = OCSPFailOpenTrue\n\t\t\t} else {\n\t\t\t\tcfg.OCSPFailOpen = OCSPFailOpenFalse\n\t\t\t}\n\n\t\tcase \"token\":\n\t\t\tcfg.Token = value\n\t\tcase \"tokenFilePath\":\n\t\t\tcfg.TokenFilePath = value\n\t\tcase \"tlsConfigName\":\n\t\t\tcfg.TLSConfigName = value\n\t\tcase \"workloadIdentityProvider\":\n\t\t\tcfg.WorkloadIdentityProvider = value\n\t\tcase \"workloadIdentityEntraResource\":\n\t\t\tcfg.WorkloadIdentityEntraResource = value\n\t\tcase \"workloadIdentityImpersonationPath\":\n\t\t\tcfg.WorkloadIdentityImpersonationPath = strings.Split(value, \",\")\n\t\tcase \"privateKey\":\n\t\t\tvar decodeErr error\n\t\t\tblock, decodeErr := base64.URLEncoding.DecodeString(value)\n\t\t\tif decodeErr != nil {\n\t\t\t\terr = &sferrors.SnowflakeError{\n\t\t\t\t\tNumber:  sferrors.ErrCodePrivateKeyParseError,\n\t\t\t\t\tMessage: \"Base64 decode failed\",\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\t\t\tcfg.PrivateKey, err = ParsePKCS8PrivateKey(block)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\tcase \"validateDefaultParameters\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif vv {\n\t\t\t\tcfg.ValidateDefaultParameters = BoolTrue\n\t\t\t} else {\n\t\t\t\tcfg.ValidateDefaultParameters = BoolFalse\n\t\t\t}\n\t\tcase \"clientRequestMfaToken\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif vv {\n\t\t\t\tcfg.ClientRequestMfaToken = BoolTrue\n\t\t\t} else {\n\t\t\t\tcfg.ClientRequestMfaToken = BoolFalse\n\t\t\t}\n\t\tcase 
\"clientStoreTemporaryCredential\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif vv {\n\t\t\t\tcfg.ClientStoreTemporaryCredential = BoolTrue\n\t\t\t} else {\n\t\t\t\tcfg.ClientStoreTemporaryCredential = BoolFalse\n\t\t\t}\n\t\tcase \"tracing\":\n\t\t\tcfg.Tracing = value\n\t\tcase \"logQueryText\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tcfg.LogQueryText = vv\n\t\tcase \"logQueryParameters\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tcfg.LogQueryParameters = vv\n\t\tcase \"tmpDirPath\":\n\t\t\tcfg.TmpDirPath = value\n\t\tcase \"disableQueryContextCache\":\n\t\t\tvar b bool\n\t\t\tb, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tcfg.DisableQueryContextCache = b\n\t\tcase \"includeRetryReason\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif vv {\n\t\t\t\tcfg.IncludeRetryReason = BoolTrue\n\t\t\t} else {\n\t\t\t\tcfg.IncludeRetryReason = BoolFalse\n\t\t\t}\n\t\tcase \"clientConfigFile\":\n\t\t\tcfg.ClientConfigFile = value\n\t\tcase \"disableConsoleLogin\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif vv {\n\t\t\t\tcfg.DisableConsoleLogin = BoolTrue\n\t\t\t} else {\n\t\t\t\tcfg.DisableConsoleLogin = BoolFalse\n\t\t\t}\n\t\tcase \"disableSamlURLCheck\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif vv {\n\t\t\t\tcfg.DisableSamlURLCheck = BoolTrue\n\t\t\t} else {\n\t\t\t\tcfg.DisableSamlURLCheck = BoolFalse\n\t\t\t}\n\t\tcase \"certRevocationCheckMode\":\n\t\t\tvar certRevocationCheckMode CertRevocationCheckMode\n\t\t\tcertRevocationCheckMode, err = ParseCertRevocationCheckMode(value)\n\t\t\tif 
err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tcfg.CertRevocationCheckMode = certRevocationCheckMode\n\t\tcase \"crlAllowCertificatesWithoutCrlURL\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif vv {\n\t\t\t\tcfg.CrlAllowCertificatesWithoutCrlURL = BoolTrue\n\t\t\t} else {\n\t\t\t\tcfg.CrlAllowCertificatesWithoutCrlURL = BoolFalse\n\t\t\t}\n\t\tcase \"crlInMemoryCacheDisabled\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tcfg.CrlInMemoryCacheDisabled = vv\n\t\tcase \"crlOnDiskCacheDisabled\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tcfg.CrlOnDiskCacheDisabled = vv\n\t\tcase \"crlDownloadMaxSize\":\n\t\t\tcfg.CrlDownloadMaxSize, err = strconv.Atoi(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\tcase \"crlHttpClientTimeout\":\n\t\t\tcfg.CrlHTTPClientTimeout, err = parseTimeout(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\tcase \"connectionDiagnosticsEnabled\":\n\t\t\tvar vv bool\n\t\t\tvv, err = strconv.ParseBool(value)\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tcfg.ConnectionDiagnosticsEnabled = vv\n\t\tcase \"connectionDiagnosticsAllowlistFile\":\n\t\t\tcfg.ConnectionDiagnosticsAllowlistFile = value\n\t\tcase \"proxyHost\":\n\t\t\tcfg.ProxyHost, err = parseString(value)\n\t\tcase \"proxyPort\":\n\t\t\tcfg.ProxyPort, err = ParseInt(value)\n\t\tcase \"proxyUser\":\n\t\t\tcfg.ProxyUser, err = parseString(value)\n\t\tcase \"proxyPassword\":\n\t\t\tcfg.ProxyPassword, err = parseString(value)\n\t\tcase \"noProxy\":\n\t\t\tcfg.NoProxy, err = parseString(value)\n\t\tcase 
\"proxyProtocol\":\n\t\t\tcfg.ProxyProtocol, err = parseString(value)\n\t\tdefault:\n\t\t\tif cfg.Params == nil {\n\t\t\t\tcfg.Params = make(map[string]*string)\n\t\t\t}\n\t\t\t// handle session variables $variable=value\n\t\t\tcfg.Params[urlDecodeIfNeeded(param[0])] = &value\n\t\t}\n\t}\n\treturn\n}\n\nfunc parseTimeout(value string) (time.Duration, error) {\n\tvar vv int64\n\tvar err error\n\tvv, err = strconv.ParseInt(value, 10, 64)\n\tif err != nil {\n\t\treturn time.Duration(0), err\n\t}\n\treturn time.Duration(vv * int64(time.Second)), nil\n}\n\n// GetConfigFromEnv is used to parse the environment variable values to specific fields of the Config\nfunc GetConfigFromEnv(properties []*Param) (*Config, error) {\n\tvar account, user, password, token, tokenFilePath, role, host, portStr, protocol, warehouse, database, schema, region, passcode, application string\n\tvar oauthClientID, oauthClientSecret, oauthAuthorizationURL, oauthTokenRequestURL, oauthRedirectURI, oauthScope string\n\tvar privateKey *rsa.PrivateKey\n\tvar err error\n\tif len(properties) == 0 || properties == nil {\n\t\treturn nil, errors.New(\"missing configuration parameters for the connection\")\n\t}\n\tfor _, prop := range properties {\n\t\tvalue, err := GetFromEnv(prop.EnvName, prop.FailOnMissing)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tswitch prop.Name {\n\t\tcase \"Account\":\n\t\t\taccount = value\n\t\tcase \"User\":\n\t\t\tuser = value\n\t\tcase \"Password\":\n\t\t\tpassword = value\n\t\tcase \"Token\":\n\t\t\ttoken = value\n\t\tcase \"TokenFilePath\":\n\t\t\ttokenFilePath = value\n\t\tcase \"Role\":\n\t\t\trole = value\n\t\tcase \"Host\":\n\t\t\thost = value\n\t\tcase \"Port\":\n\t\t\tportStr = value\n\t\tcase \"Protocol\":\n\t\t\tprotocol = value\n\t\tcase \"Warehouse\":\n\t\t\twarehouse = value\n\t\tcase \"Database\":\n\t\t\tdatabase = value\n\t\tcase \"Region\":\n\t\t\tregion = value\n\t\tcase \"Passcode\":\n\t\t\tpasscode = value\n\t\tcase \"Schema\":\n\t\t\tschema = 
value\n\t\tcase \"Application\":\n\t\t\tapplication = value\n\t\tcase \"PrivateKey\":\n\t\t\tprivateKey, err = parsePrivateKeyFromFile(value)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\tcase \"OAuthClientId\":\n\t\t\toauthClientID = value\n\t\tcase \"OAuthClientSecret\":\n\t\t\toauthClientSecret = value\n\t\tcase \"OAuthAuthorizationURL\":\n\t\t\toauthAuthorizationURL = value\n\t\tcase \"OAuthTokenRequestURL\":\n\t\t\toauthTokenRequestURL = value\n\t\tcase \"OAuthRedirectURI\":\n\t\t\toauthRedirectURI = value\n\t\tcase \"OAuthScope\":\n\t\t\toauthScope = value\n\t\tdefault:\n\t\t\treturn nil, errors.New(\"unknown property: \" + prop.Name)\n\t\t}\n\t}\n\n\tport := 443 // snowflake default port\n\tif len(portStr) > 0 {\n\t\tport, err = strconv.Atoi(portStr)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tcfg := &Config{\n\t\tAccount:               account,\n\t\tUser:                  user,\n\t\tPassword:              password,\n\t\tToken:                 token,\n\t\tTokenFilePath:         tokenFilePath,\n\t\tRole:                  role,\n\t\tHost:                  host,\n\t\tPort:                  port,\n\t\tProtocol:              protocol,\n\t\tWarehouse:             warehouse,\n\t\tDatabase:              database,\n\t\tSchema:                schema,\n\t\tPrivateKey:            privateKey,\n\t\tRegion:                region,\n\t\tPasscode:              passcode,\n\t\tApplication:           application,\n\t\tOauthClientID:         oauthClientID,\n\t\tOauthClientSecret:     oauthClientSecret,\n\t\tOauthAuthorizationURL: oauthAuthorizationURL,\n\t\tOauthTokenRequestURL:  oauthTokenRequestURL,\n\t\tOauthRedirectURI:      oauthRedirectURI,\n\t\tOauthScope:            oauthScope,\n\t\tParams:                map[string]*string{},\n\t}\n\treturn cfg, nil\n}\n\nfunc parsePrivateKeyFromFile(path string) (*rsa.PrivateKey, error) {\n\tbytes, err := os.ReadFile(path)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tblock, _ := 
pem.Decode(bytes)\n\tif block == nil {\n\t\treturn nil, errors.New(\"failed to parse PEM block containing the private key\")\n\t}\n\tprivateKey, err := x509.ParsePKCS8PrivateKey(block.Bytes)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tpk, ok := privateKey.(*rsa.PrivateKey)\n\tif !ok {\n\t\treturn nil, fmt.Errorf(\"interface conversion: expected type *rsa.PrivateKey, but got %T\", privateKey)\n\t}\n\treturn pk, nil\n}\n\n// ExtractAccountName extracts an account name from a raw account.\nfunc ExtractAccountName(rawAccount string) string {\n\tposDot := strings.Index(rawAccount, \".\")\n\tif posDot > 0 {\n\t\treturn strings.ToUpper(rawAccount[:posDot])\n\t}\n\treturn strings.ToUpper(rawAccount)\n}\n\nfunc urlDecodeIfNeeded(param string) (decodedParam string) {\n\tunescaped, err := url.QueryUnescape(param)\n\tif err != nil {\n\t\treturn param\n\t}\n\treturn unescaped\n}\n\n// GetToken retrieves the token from the Config, reading from file if TokenFilePath is set.\nfunc GetToken(c *Config) (string, error) {\n\tif c.TokenFilePath != \"\" {\n\t\treturn ReadToken(c.TokenFilePath)\n\t}\n\treturn c.Token, nil\n}\n\n// DescribeIdentityAttributes returns a string describing the identity attributes of the Config.\nfunc DescribeIdentityAttributes(c *Config) string {\n\treturn fmt.Sprintf(\"host: %v, account: %v, user: %v, password existed: %v, role: %v, database: %v, schema: %v, warehouse: %v, %v\",\n\t\tc.Host, c.Account, c.User, (c.Password != \"\"), c.Role, c.Database, c.Schema, c.Warehouse, DescribeProxy(c))\n}\n\n// DescribeProxy returns a string describing the proxy configuration.\nfunc DescribeProxy(c *Config) string {\n\tif c.ProxyHost != \"\" {\n\t\treturn fmt.Sprintf(\"proxyHost: %v, proxyPort: %v, proxyUser: %v, proxyPassword: %v, proxyProtocol: %v, noProxy: %v\", c.ProxyHost, c.ProxyPort, c.ProxyUser, c.ProxyPassword != \"\", c.ProxyProtocol, c.NoProxy)\n\t}\n\treturn \"proxy was not configured\"\n}\n"
  },
  {
    "path": "internal/config/dsn_test.go",
    "content": "package config\n\nimport (\n\t\"crypto/ecdsa\"\n\t\"crypto/elliptic\"\n\tcr \"crypto/rand\"\n\t\"crypto/rsa\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"encoding/base64\"\n\t\"encoding/pem\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net/url\"\n\t\"os\"\n\t\"reflect\"\n\t\"strconv\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/aws/smithy-go/rand\"\n\tsferrors \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n)\n\ntype tcParseDSN struct {\n\tdsn      string\n\tconfig   *Config\n\tocspMode string\n\terr      error\n}\n\nfunc TestParseDSN(t *testing.T) {\n\ttestPrivKey, _ := rsa.GenerateKey(cr.Reader, 2048)\n\tprivKeyPKCS8 := generatePKCS8StringSupress(testPrivKey)\n\tprivKeyPKCS1 := generatePKCS1String(testPrivKey)\n\ttestcases := []tcParseDSN{\n\t\t{\n\t\t\tdsn: \"user:pass@ac-1-laksdnflaf.global/db/schema\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac-1\", User: \"user\", Password: \"pass\", Region: \"global\",\n\t\t\t\tProtocol: \"https\", Host: \"ac-1-laksdnflaf.global.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"schema\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:pass@ac-laksdnflaf.global/db/schema\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"user\", Password: \"pass\", Region: \"global\",\n\t\t\t\tProtocol: \"https\", Host: \"ac-laksdnflaf.global.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"schema\",\n\t\t\t\tOCSPFailOpen:              
OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@asnowflakecomputing.com/db/pa?account=a&protocol=https&role=r&timezone=UTC&aehouse=w\",\n\t\t\tconfig: &Config{Account: \"a\", User: \"u\", Password: \"p\", Database: \"db\", Schema: \"pa\",\n\t\t\t\tProtocol: \"https\", Role: \"r\", Host: \"asnowflakecomputing.com.snowflakecomputing.com\", Port: 443, Region: \"com\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@/db?account=ac\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"u\", Password: \"p\", Database: \"db\",\n\t\t\t\tProtocol: \"https\", Host: \"ac.snowflakecomputing.com\", Port: 443,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        
BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@/db?account=ac&workloadIdentityEntraResource=https%3A%2F%2Fexample.com%2F.default&workloadIdentityProvider=azure&workloadIdentityImpersonationPath=%2Fdefault,%2Fdefault2\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"u\", Password: \"p\", Database: \"db\",\n\t\t\t\tProtocol: \"https\", Host: \"ac.snowflakecomputing.com\", Port: 443,\n\t\t\t\tWorkloadIdentityProvider: \"azure\", WorkloadIdentityEntraResource: \"https://example.com/.default\", WorkloadIdentityImpersonationPath: []string{\"/default\", \"/default2\"},\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@/db?account=ac&region=cn-region\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"u\", Password: \"p\", Database: \"db\", Region: \"cn-region\",\n\t\t\t\tProtocol: \"https\", Host: \"ac.cn-region.snowflakecomputing.cn\", Port: 443,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: 
\"user:pass@account-hfdw89q748ew9gqf48w9qgf.global/db/s\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"account\", User: \"user\", Password: \"pass\", Region: \"global\",\n\t\t\t\tProtocol: \"https\", Host: \"account-hfdw89q748ew9gqf48w9qgf.global.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"s\",\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:pass@account-hfdw89q748ew9gqf48w9qgf/db/s\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"account-hfdw89q748ew9gqf48w9qgf\", User: \"user\", Password: \"pass\", Region: \"\",\n\t\t\t\tProtocol: \"https\", Host: \"account-hfdw89q748ew9gqf48w9qgf.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"s\",\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:pass@account\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"account\", User: \"user\", Password: \"pass\", Region: \"\",\n\t\t\t\tProtocol: \"https\", Host: \"account.snowflakecomputing.com\", Port: 443,\n\t\t\t\tOCSPFailOpen:              
OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:pass@account.cn-region\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"account\", User: \"user\", Password: \"pass\", Region: \"cn-region\",\n\t\t\t\tProtocol: \"https\", Host: \"account.cn-region.snowflakecomputing.cn\", Port: 443,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:pass@account.eu-faraway\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"account\", User: \"user\", Password: \"pass\", Region: \"eu-faraway\",\n\t\t\t\tProtocol: \"https\", Host: \"account.eu-faraway.snowflakecomputing.com\", Port: 443,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: 
ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:pass@account?region=eu-faraway\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"account\", User: \"user\", Password: \"pass\", Region: \"eu-faraway\",\n\t\t\t\tProtocol: \"https\", Host: \"account.eu-faraway.snowflakecomputing.com\", Port: 443,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:pass@account/db\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"account\", User: \"user\", Password: \"pass\",\n\t\t\t\tProtocol: \"https\", Host: \"account.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase:                  \"db\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:pass@account?oauthRedirectUri=http:%2F%2Flocalhost:8001%2Fsome-path&oauthClientId=testClientId&oauthClientSecret=testClientSecret&oauthAuthorizationUrl=http:%2F%2Fsomehost.com&oauthTokenRequestUrl=https:%2F%2Fsomehost2.com%2Fsomepath&oauthScope=test+scope\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"account\", User: \"user\", Password: 
\"pass\",\n\t\t\t\tProtocol: \"https\", Host: \"account.snowflakecomputing.com\", Port: 443,\n\t\t\t\tOauthClientID: \"testClientId\", OauthClientSecret: \"testClientSecret\", OauthAuthorizationURL: \"http://somehost.com\", OauthTokenRequestURL: \"https://somehost2.com/somepath\", OauthRedirectURI: \"http://localhost:8001/some-path\", OauthScope: \"test scope\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:pass@account?oauthRedirectUri=http:%2F%2Flocalhost:8001%2Fsome-path&oauthClientId=testClientId&oauthClientSecret=testClientSecret&oauthAuthorizationUrl=http:%2F%2Fsomehost.com&oauthTokenRequestUrl=https:%2F%2Fsomehost2.com%2Fsomepath&oauthScope=test+scope&enableSingleUseRefreshTokens=true\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"account\", User: \"user\", Password: \"pass\",\n\t\t\t\tProtocol: \"https\", Host: \"account.snowflakecomputing.com\", Port: 443,\n\t\t\t\tOauthClientID: \"testClientId\", OauthClientSecret: \"testClientSecret\", OauthAuthorizationURL: \"http://somehost.com\", OauthTokenRequestURL: \"https://somehost2.com/somepath\", OauthRedirectURI: \"http://localhost:8001/some-path\", OauthScope: \"test scope\",\n\t\t\t\tEnableSingleUseRefreshTokens: true,\n\t\t\t\tOCSPFailOpen:                 OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters:    BoolTrue,\n\t\t\t\tClientTimeout:                time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:             time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:       
time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:          defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:           BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:pass@host:123/db/schema?account=ac&protocol=http\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"user\", Password: \"pass\",\n\t\t\t\tProtocol: \"http\", Host: \"host\", Port: 123,\n\t\t\t\tDatabase: \"db\", Schema: \"schema\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"user@host:123/db/schema?account=ac&protocol=http\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"user\", Password: \"pass\",\n\t\t\t\tProtocol: \"http\", Host: \"host\", Port: 123,\n\t\t\t\tDatabase: \"db\", Schema: \"schema\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      sferrors.ErrEmptyPassword(),\n\t\t},\n\t\t{\n\t\t\tdsn: \"@host:123/db/schema?account=ac&protocol=http\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"user\", Password: \"pass\",\n\t\t\t\tProtocol: 
\"http\", Host: \"host\", Port: 123,\n\t\t\t\tDatabase: \"db\", Schema: \"schema\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      sferrors.ErrEmptyUsername(),\n\t\t},\n\t\t{\n\t\t\tdsn: \"@host:123/db/schema?account=ac&protocol=http&authenticator=oauth_authorization_code\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"user\", Password: \"pass\",\n\t\t\t\tProtocol: \"http\", Host: \"host\", Port: 123,\n\t\t\t\tDatabase: \"db\", Schema: \"schema\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      sferrors.ErrEmptyOAuthParameters(),\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:pass@host:123/db/schema?protocol=http\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"user\", Password: \"pass\",\n\t\t\t\tProtocol: \"http\", Host: \"host\", Port: 123,\n\t\t\t\tDatabase: \"db\", Schema: \"schema\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          
time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      sferrors.ErrEmptyAccount(),\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:@host:123/db/schema?protocol=http&authenticator=programmatic_access_token&account=ac\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"user\", Password: \"pass\",\n\t\t\t\tProtocol: \"http\", Host: \"host\", Port: 123,\n\t\t\t\tDatabase: \"db\", Schema: \"schema\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      sferrors.ErrEmptyPasswordAndToken(),\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.snowflakecomputing.com/db/pa?account=a&protocol=https&role=r&timezone=UTC&warehouse=w\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"https\", Host: \"a.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"pa\", Role: \"r\", Warehouse: \"w\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: 
ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.snowflakecomputing.mil/db/pa?account=a\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\", Region: \"\",\n\t\t\t\tProtocol: \"https\", Host: \"a.snowflakecomputing.mil\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"pa\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.eu-faraway.snowflakecomputing.mil/db/pa?account=a&region=eu-faraway\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\", Region: \"eu-faraway\",\n\t\t\t\tProtocol: \"https\", Host: \"a.eu-faraway.snowflakecomputing.mil\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"pa\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.snowflakecomputing.gov.pl/db/pa?account=a\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\", Region: \"\",\n\t\t\t\tProtocol: \"https\", Host: \"a.snowflakecomputing.gov.pl\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: 
\"pa\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.snowflakecomputing.cn/db/pa?account=a\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\", Region: \"\",\n\t\t\t\tProtocol: \"https\", Host: \"a.snowflakecomputing.cn\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"pa\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.cn-region.snowflakecomputing.mil/db/pa?account=a&region=cn-region\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\", Region: \"cn-region\",\n\t\t\t\tProtocol: \"https\", Host: \"a.cn-region.snowflakecomputing.mil\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"pa\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    
time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.cn-region.snowflakecomputing.cn/db/pa?account=a&region=cn-region&protocol=https&role=r&timezone=UTC&warehouse=w\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\", Region: \"cn-region\",\n\t\t\t\tProtocol: \"https\", Host: \"a.cn-region.snowflakecomputing.cn\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"pa\", Role: \"r\", Warehouse: \"w\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@snowflake.local:9876?account=a&protocol=http\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"http\", Host: \"snowflake.local\", Port: 9876,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"snowflake.local:9876?account=a&protocol=http&authenticator=OAUTH\",\n\t\t\tconfig: 
&Config{\n\t\t\t\tAccount: \"a\", Authenticator: AuthTypeOAuth,\n\t\t\t\tProtocol: \"http\", Host: \"snowflake.local\", Port: 9876,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"snowflake.local:9876?account=a&protocol=http&authenticator=OAUTH_AUTHORIZATION_CODE&oauthClientId=testClientId&oauthClientSecret=testClientSecret\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", Authenticator: AuthTypeOAuthAuthorizationCode,\n\t\t\t\tProtocol: \"http\", Host: \"snowflake.local\", Port: 9876,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t\tOauthClientID:             \"testClientId\",\n\t\t\t\tOauthClientSecret:         \"testClientSecret\",\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"snowflake.local:9876?account=a&protocol=http&authenticator=OAUTH_CLIENT_CREDENTIALS\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", Authenticator: AuthTypeOAuthClientCredentials,\n\t\t\t\tProtocol: \"http\", Host: \"snowflake.local\", Port: 9876,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: 
BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:@a.snowflake.local:9876?account=a&protocol=http&authenticator=SNOWFLAKE_JWT\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Authenticator: AuthTypeJwt,\n\t\t\t\tProtocol: \"http\", Host: \"a.snowflake.local\", Port: 9876,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\n\t\t{\n\t\t\tdsn: \"u:p@a?database=d&jwtTimeout=20\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"https\", Host: \"a.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"d\", Schema: \"\",\n\t\t\t\tJWTExpireTimeout:          20 * time.Second,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: 
ocspModeFailOpen,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a?database=d&externalBrowserTimeout=20&cloudStorageTimeout=7\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"https\", Host: \"a.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"d\", Schema: \"\",\n\t\t\t\tExternalBrowserTimeout:    20 * time.Second,\n\t\t\t\tCloudStorageTimeout:       7 * time.Second,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t\tMaxRetryCount:             defaultMaxRetryCount,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a?database=d&maxRetryCount=20\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"https\", Host: \"a.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"d\", Schema: \"\",\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t\tMaxRetryCount:             20,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a?database=d\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"https\", Host: \"a.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"d\", Schema: \"\",\n\t\t\t\tJWTExpireTimeout:          time.Duration(DefaultJWTTimeout),\n\t\t\t\tOCSPFailOpen:              
OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@snowflake.local:NNNN?account=a&protocol=http\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"http\", Host: \"snowflake.local\", Port: 9876,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr: &sferrors.SnowflakeError{\n\t\t\t\tMessage:     sferrors.ErrMsgFailedToParsePort,\n\t\t\t\tMessageArgs: []any{\"NNNN\"},\n\t\t\t\tNumber:      sferrors.ErrCodeFailedToParsePort,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a?database=d&schema=s&role=r&application=aa&authenticator=snowflake&disableOCSPChecks=true&passcode=pp&passcodeInPassword=true\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"https\", Host: \"a.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"d\", Schema: \"s\", Role: \"r\", Authenticator: AuthTypeSnowflake, Application: \"aa\",\n\t\t\t\tDisableOCSPChecks: true, Passcode: \"pp\", PasscodeInPassword: true,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:       
      time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeDisabled,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\t// schema should be ignored as no value is specified.\n\t\t\tdsn: \"u:p@a?database=d&schema\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"https\", Host: \"a.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"d\", Schema: \"\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    
time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn:    \"u:p@a?database= %Sd\",\n\t\t\tconfig: &Config{},\n\t\t\terr:    url.EscapeError(`invalid URL escape`),\n\t\t},\n\t\t{\n\t\t\tdsn:    \"u:p@a?schema= %Sd\",\n\t\t\tconfig: &Config{},\n\t\t\terr:    url.EscapeError(`invalid URL escape`),\n\t\t},\n\t\t{\n\t\t\tdsn:    \"u:p@a?warehouse= %Sd\",\n\t\t\tconfig: &Config{},\n\t\t\terr:    url.EscapeError(`invalid URL escape`),\n\t\t},\n\t\t{\n\t\t\tdsn:    \"u:p@a?role= %Sd\",\n\t\t\tconfig: &Config{},\n\t\t\terr:    url.EscapeError(`invalid URL escape`),\n\t\t},\n\t\t{\n\t\t\tdsn:    \":/\",\n\t\t\tconfig: &Config{},\n\t\t\terr: &sferrors.SnowflakeError{\n\t\t\t\tNumber: sferrors.ErrCodeFailedToParsePort,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tdsn:    \"u:u@/+/+?account=+&=0\",\n\t\t\tconfig: &Config{},\n\t\t\terr:    sferrors.ErrEmptyAccount(),\n\t\t},\n\t\t{\n\t\t\tdsn:    \"u:u@/+/+?account=+&=+&=+\",\n\t\t\tconfig: &Config{},\n\t\t\terr:    sferrors.ErrEmptyAccount(),\n\t\t},\n\t\t{\n\t\t\tdsn: \"user%40%2F1:p%3A%40s@/db%2F?account=ac\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"user@/1\", Password: \"p:@s\", Database: \"db/\",\n\t\t\t\tProtocol: \"https\", Host: \"ac.snowflakecomputing.com\", Port: 443,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: 
fmt.Sprintf(\"u:p@ac.snowflake.local:9876?account=ac&protocol=http&authenticator=SNOWFLAKE_JWT&privateKey=%v\", privKeyPKCS8),\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"u\", Password: \"p\",\n\t\t\t\tAuthenticator: AuthTypeJwt, PrivateKey: testPrivKey,\n\t\t\t\tProtocol: \"http\", Host: \"ac.snowflake.local\", Port: 9876,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: fmt.Sprintf(\"u:p@ac.snowflake.local:9876?account=ac&protocol=http&authenticator=%v\", url.QueryEscape(\"https://ac.okta.com\")),\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"u\", Password: \"p\",\n\t\t\t\tAuthenticator: AuthTypeOkta,\n\t\t\t\tOktaURL: &url.URL{\n\t\t\t\t\tScheme: \"https\",\n\t\t\t\t\tHost:   \"ac.okta.com\",\n\t\t\t\t},\n\t\t\t\tPrivateKey: testPrivKey,\n\t\t\t\tProtocol:   \"http\", Host: \"ac.snowflake.local\", Port: 9876,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: fmt.Sprintf(\"u:p@ac.snowflake.local:9876?account=ac&protocol=http&authenticator=%v\", 
url.QueryEscape(\"https://ac.some-host.com/custom-okta-url\")),\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"u\", Password: \"p\",\n\t\t\t\tAuthenticator: AuthTypeOkta,\n\t\t\t\tOktaURL: &url.URL{\n\t\t\t\t\tScheme: \"https\",\n\t\t\t\t\tHost:   \"ac.some-host.com\",\n\t\t\t\t\tPath:   \"/custom-okta-url\",\n\t\t\t\t},\n\t\t\t\tPrivateKey: testPrivKey,\n\t\t\t\tProtocol:   \"http\", Host: \"ac.snowflake.local\", Port: 9876,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: fmt.Sprintf(\"u:p@a.snowflake.local:9876?account=a&protocol=http&authenticator=SNOWFLAKE_JWT&privateKey=%v\", privKeyPKCS1),\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tAuthenticator: AuthTypeJwt, PrivateKey: testPrivKey,\n\t\t\t\tProtocol: \"http\", Host: \"a.snowflake.local\", Port: 9876,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      &sferrors.SnowflakeError{Number: sferrors.ErrCodePrivateKeyParseError},\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:pass@account/db/s?ocspFailOpen=true\",\n\t\t\tconfig: 
&Config{\n\t\t\t\tAccount: \"account\", User: \"user\", Password: \"pass\",\n\t\t\t\tProtocol: \"https\", Host: \"account.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"s\", OCSPFailOpen: OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:pass@account/db/s?ocspFailOpen=false\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"account\", User: \"user\", Password: \"pass\",\n\t\t\t\tProtocol: \"https\", Host: \"account.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"s\", OCSPFailOpen: OCSPFailOpenFalse,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailClosed,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:pass@account/db/s?validateDefaultParameters=true\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"account\", User: \"user\", Password: \"pass\",\n\t\t\t\tProtocol: \"https\", Host: \"account.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"s\", ValidateDefaultParameters: BoolTrue, OCSPFailOpen: OCSPFailOpenTrue,\n\t\t\t\tClientTimeout:          time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:       
time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout: time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:    defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:     BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:pass@account/db/s?validateDefaultParameters=false\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"account\", User: \"user\", Password: \"pass\",\n\t\t\t\tProtocol: \"https\", Host: \"account.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"s\", ValidateDefaultParameters: BoolFalse, OCSPFailOpen: OCSPFailOpenTrue,\n\t\t\t\tClientTimeout:          time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:       time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout: time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:    defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:     BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.r.c.snowflakecomputing.com/db/s?account=a.r.c&validateDefaultParameters=false\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"https\", Host: \"a.r.c.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"s\", ValidateDefaultParameters: BoolFalse, OCSPFailOpen: OCSPFailOpenTrue,\n\t\t\t\tClientTimeout:          time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:       time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout: time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:    defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:     BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.r.c.snowflakecomputing.com/db/s?account=a.r.c&clientTimeout=300&jwtClientTimeout=45&includeRetryReason=false\",\n\t\t\tconfig: 
&Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"https\", Host: \"a.r.c.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"s\", ValidateDefaultParameters: BoolTrue, OCSPFailOpen: OCSPFailOpenTrue,\n\t\t\t\tClientTimeout:            300 * time.Second,\n\t\t\t\tJWTClientTimeout:         45 * time.Second,\n\t\t\t\tExternalBrowserTimeout:   time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:      defaultCloudStorageTimeout,\n\t\t\t\tDisableQueryContextCache: false,\n\t\t\t\tIncludeRetryReason:       BoolFalse,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.r.c.snowflakecomputing.com/db/s?account=a.r.c&serverSessionKeepAlive=false\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"https\", Host: \"a.r.c.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"s\", ValidateDefaultParameters: BoolTrue, OCSPFailOpen: OCSPFailOpenTrue,\n\t\t\t\tClientTimeout:          time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:       time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout: time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:    defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:     BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.r.c.snowflakecomputing.com/db/s?account=a.r.c&serverSessionKeepAlive=true\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"https\", Host: \"a.r.c.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"s\", ValidateDefaultParameters: BoolTrue, OCSPFailOpen: OCSPFailOpenTrue,\n\t\t\t\tServerSessionKeepAlive: true,\n\t\t\t\tClientTimeout:          time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:       
time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout: time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:    defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:     BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.r.c.snowflakecomputing.com/db/s?account=a.r.c&tmpDirPath=%2Ftmp\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"https\", Host: \"a.r.c.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"s\", ValidateDefaultParameters: BoolTrue, OCSPFailOpen: OCSPFailOpenTrue,\n\t\t\t\tClientTimeout:          time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:       time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout: time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:    defaultCloudStorageTimeout,\n\t\t\t\tTmpDirPath:             \"/tmp\",\n\t\t\t\tIncludeRetryReason:     BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.r.c.snowflakecomputing.com/db/s?account=a.r.c&disableQueryContextCache=true\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"https\", Host: \"a.r.c.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"s\", ValidateDefaultParameters: BoolTrue, OCSPFailOpen: OCSPFailOpenTrue,\n\t\t\t\tClientTimeout:            time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:         time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:   time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:      defaultCloudStorageTimeout,\n\t\t\t\tDisableQueryContextCache: true,\n\t\t\t\tIncludeRetryReason:       BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: 
\"u:p@a.r.c.snowflakecomputing.com/db/s?account=a.r.c&includeRetryReason=true\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"https\", Host: \"a.r.c.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"s\", ValidateDefaultParameters: BoolTrue, OCSPFailOpen: OCSPFailOpenTrue,\n\t\t\t\tClientTimeout:          time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:       time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout: time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:    defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:     BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.r.c.snowflakecomputing.com/db/s?account=a.r.c&includeRetryReason=true&clientConfigFile=%2FUsers%2Fuser%2Fconfig.json\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"https\", Host: \"a.r.c.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"s\", ValidateDefaultParameters: BoolTrue, OCSPFailOpen: OCSPFailOpenTrue,\n\t\t\t\tClientTimeout:          time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:       time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout: time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:    defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:     BoolTrue,\n\t\t\t\tClientConfigFile:       \"/Users/user/config.json\",\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.r.c.snowflakecomputing.com/db/s?account=a.r.c&includeRetryReason=true&clientConfigFile=c%3A%5CUsers%5Cuser%5Cconfig.json\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tProtocol: \"https\", Host: \"a.r.c.snowflakecomputing.com\", Port: 443,\n\t\t\t\tDatabase: \"db\", Schema: \"s\", 
ValidateDefaultParameters: BoolTrue, OCSPFailOpen: OCSPFailOpenTrue,\n\t\t\t\tClientTimeout:          time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:       time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout: time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:    defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:     BoolTrue,\n\t\t\t\tClientConfigFile:       \"c:\\\\Users\\\\user\\\\config.json\",\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.snowflakecomputing.com:443?authenticator=http%3A%2F%2Fsc.okta.com&ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t\terr: sferrors.ErrFailedToParseAuthenticator(),\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.snowflake.local:9876?account=a&protocol=http&authenticator=EXTERNALBROWSER&disableConsoleLogin=true\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tAuthenticator: AuthTypeExternalBrowser,\n\t\t\t\tProtocol:      \"http\", Host: \"a.snowflake.local\", Port: 9876,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t\tDisableConsoleLogin:       BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.snowflake.local:9876?account=a&protocol=http&authenticator=EXTERNALBROWSER&disableConsoleLogin=false\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tAuthenticator: AuthTypeExternalBrowser,\n\t\t\t\tProtocol:      \"http\", Host: \"a.snowflake.local\", Port: 
9876,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t\tDisableConsoleLogin:       BoolFalse,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.snowflake.local:9876?account=a&protocol=http&authenticator=EXTERNALBROWSER&disableSamlURLCheck=true\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tAuthenticator: AuthTypeExternalBrowser,\n\t\t\t\tProtocol:      \"http\", Host: \"a.snowflake.local\", Port: 9876,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t\tDisableSamlURLCheck:       BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.snowflake.local:9876?account=a&protocol=http&authenticator=EXTERNALBROWSER&disableSamlURLCheck=false\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tAuthenticator: AuthTypeExternalBrowser,\n\t\t\t\tProtocol:      \"http\", Host: \"a.snowflake.local\", Port: 9876,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:   
       time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t\tDisableSamlURLCheck:       BoolFalse,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.snowflake.local:9876?account=a&protocol=http&authenticator=PROGRAMMATIC_ACCESS_TOKEN&disableSamlURLCheck=false&token=t\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tAuthenticator: AuthTypePat,\n\t\t\t\tProtocol:      \"http\", Host: \"a.snowflake.local\", Port: 9876,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t\tDisableSamlURLCheck:       BoolFalse,\n\t\t\t\tToken:                     \"t\",\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.snowflake.local:9876?account=a&protocol=http&authenticator=PROGRAMMATIC_ACCESS_TOKEN&disableSamlURLCheck=false&tokenFilePath=..%2F..%2Ftest_data%2Fsnowflake%2Fsession%2Ftoken\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tAuthenticator: AuthTypePat,\n\t\t\t\tProtocol:      \"http\", Host: \"a.snowflake.local\", Port: 9876,\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    
time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t\tDisableSamlURLCheck:       BoolFalse,\n\t\t\t\tTokenFilePath:             \"../../test_data/snowflake/session/token\",\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.snowflake.local:9876?account=a&certRevocationCheckMode=enabled&crlAllowCertificatesWithoutCrlURL=true&crlInMemoryCacheDisabled=true&crlOnDiskCacheDisabled=true&crlDownloadMaxSize=10&crlHttpClientTimeout=10\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tHost: \"a.snowflake.local\", Port: 9876,\n\t\t\t\tProtocol:                          \"https\",\n\t\t\t\tOCSPFailOpen:                      OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters:         BoolTrue,\n\t\t\t\tClientTimeout:                     time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:                  time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:            time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:               defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:                BoolTrue,\n\t\t\t\tCertRevocationCheckMode:           CertRevocationCheckEnabled,\n\t\t\t\tCrlAllowCertificatesWithoutCrlURL: BoolTrue,\n\t\t\t\tCrlInMemoryCacheDisabled:          true,\n\t\t\t\tCrlOnDiskCacheDisabled:            true,\n\t\t\t\tCrlDownloadMaxSize:                10,\n\t\t\t\tCrlHTTPClientTimeout:              10 * time.Second,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t},\n\t\t{\n\t\t\tdsn: \"user:pass@account/db?tlsConfigName=custom\",\n\t\t\terr: &sferrors.SnowflakeError{\n\t\t\t\tNumber:  sferrors.ErrCodeMissingTLSConfig,\n\t\t\t\tMessage: fmt.Sprintf(sferrors.ErrMsgMissingTLSConfig, \"custom\"),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tdsn: 
\"u:p@a.snowflake.local:9876?account=a&&singleAuthenticationPrompt=true\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tHost: \"a.snowflake.local\", Port: 9876,\n\t\t\t\tProtocol:                   \"https\",\n\t\t\t\tOCSPFailOpen:               OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters:  BoolTrue,\n\t\t\t\tClientTimeout:              time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:           time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:     time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:        defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:         BoolTrue,\n\t\t\t\tSingleAuthenticationPrompt: BoolTrue,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.snowflake.local:9876?account=a&&singleAuthenticationPrompt=false\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tHost: \"a.snowflake.local\", Port: 9876,\n\t\t\t\tProtocol:                   \"https\",\n\t\t\t\tOCSPFailOpen:               OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters:  BoolTrue,\n\t\t\t\tClientTimeout:              time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:           time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:     time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:        defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:         BoolTrue,\n\t\t\t\tSingleAuthenticationPrompt: BoolFalse,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tdsn: \"u:p@a.snowflake.local:9876?account=a&tracing=debug&logQueryText=true&logQueryParameters=true\",\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"a\", User: \"u\", Password: \"p\",\n\t\t\t\tHost: \"a.snowflake.local\", Port: 9876,\n\t\t\t\tProtocol:                  \"https\",\n\t\t\t\tOCSPFailOpen:              
OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tTracing:                   \"debug\",\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t\tLogQueryText:              true,\n\t\t\t\tLogQueryParameters:        true,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t},\n\t}\n\n\tfor _, at := range []AuthType{AuthTypeExternalBrowser, AuthTypeOAuth} {\n\t\ttestcases = append(testcases, tcParseDSN{\n\t\t\tdsn: fmt.Sprintf(\"@host:777/db/schema?account=ac&protocol=http&authenticator=%v\", strings.ToLower(at.String())),\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"\", Password: \"\",\n\t\t\t\tProtocol: \"http\", Host: \"host\", Port: 777,\n\t\t\t\tDatabase: \"db\", Schema: \"schema\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t\tAuthenticator:             at,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      nil,\n\t\t})\n\t}\n\n\tfor _, at := range []AuthType{AuthTypeSnowflake, AuthTypeUsernamePasswordMFA, AuthTypeJwt} {\n\t\ttestcases = append(testcases, tcParseDSN{\n\t\t\tdsn: fmt.Sprintf(\"@host:888/db/schema?account=ac&protocol=http&authenticator=%v\", strings.ToLower(at.String())),\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"\", Password: \"\",\n\t\t\t\tProtocol: \"http\", Host: \"host\", Port: 
888,\n\t\t\t\tDatabase: \"db\", Schema: \"schema\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t\tAuthenticator:             at,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      sferrors.ErrEmptyUsername(),\n\t\t})\n\t}\n\n\tfor _, at := range []AuthType{AuthTypeSnowflake, AuthTypeUsernamePasswordMFA} {\n\t\ttestcases = append(testcases, tcParseDSN{\n\t\t\tdsn: fmt.Sprintf(\"user@host:888/db/schema?account=ac&protocol=http&authenticator=%v\", strings.ToLower(at.String())),\n\t\t\tconfig: &Config{\n\t\t\t\tAccount: \"ac\", User: \"user\", Password: \"\",\n\t\t\t\tProtocol: \"http\", Host: \"host\", Port: 888,\n\t\t\t\tDatabase: \"db\", Schema: \"schema\",\n\t\t\t\tOCSPFailOpen:              OCSPFailOpenTrue,\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t\tClientTimeout:             time.Duration(DefaultClientTimeout),\n\t\t\t\tJWTClientTimeout:          time.Duration(DefaultJWTClientTimeout),\n\t\t\t\tExternalBrowserTimeout:    time.Duration(DefaultExternalBrowserTimeout),\n\t\t\t\tCloudStorageTimeout:       defaultCloudStorageTimeout,\n\t\t\t\tIncludeRetryReason:        BoolTrue,\n\t\t\t\tAuthenticator:             at,\n\t\t\t},\n\t\t\tocspMode: ocspModeFailOpen,\n\t\t\terr:      sferrors.ErrEmptyPassword(),\n\t\t})\n\t}\n\n\tfor i, test := range testcases {\n\t\tt.Run(maskSecrets(test.dsn), func(t *testing.T) {\n\t\t\tcfg, err := ParseDSN(test.dsn)\n\t\t\tswitch {\n\t\t\tcase test.err == nil:\n\t\t\t\tassertNilF(t, err, fmt.Sprintf(\"%d: Failed to parse the DSN. 
dsn: %v\", i, test.dsn))\n\t\t\t\tassertEqualE(t, cfg.Host, test.config.Host, fmt.Sprintf(\"Test %d: Host mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.Account, test.config.Account, fmt.Sprintf(\"Test %d: Account mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.User, test.config.User, fmt.Sprintf(\"Test %d: User mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.Password, test.config.Password, fmt.Sprintf(\"Test %d: Password mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.Database, test.config.Database, fmt.Sprintf(\"Test %d: Database mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.Schema, test.config.Schema, fmt.Sprintf(\"Test %d: Schema mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.Warehouse, test.config.Warehouse, fmt.Sprintf(\"Test %d: Warehouse mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.Role, test.config.Role, fmt.Sprintf(\"Test %d: Role mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.Region, test.config.Region, fmt.Sprintf(\"Test %d: Region mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.Protocol, test.config.Protocol, fmt.Sprintf(\"Test %d: Protocol mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.Passcode, test.config.Passcode, fmt.Sprintf(\"Test %d: Passcode mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.PasscodeInPassword, test.config.PasscodeInPassword, fmt.Sprintf(\"Test %d: PasscodeInPassword mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.Authenticator, test.config.Authenticator, fmt.Sprintf(\"Test %d: Authenticator mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.SingleAuthenticationPrompt, test.config.SingleAuthenticationPrompt, fmt.Sprintf(\"Test %d: SingleAuthenticationPrompt mismatch\", i))\n\t\t\t\tif test.config.Authenticator == AuthTypeOkta {\n\t\t\t\t\tassertEqualE(t, *cfg.OktaURL, *test.config.OktaURL, fmt.Sprintf(\"Test %d: OktaURL mismatch\", i))\n\t\t\t\t}\n\t\t\t\tassertEqualE(t, cfg.OCSPFailOpen, test.config.OCSPFailOpen, fmt.Sprintf(\"Test %d: OCSPFailOpen mismatch\", i))\n\t\t\t\tassertEqualE(t, OcspMode(cfg), test.ocspMode, fmt.Sprintf(\"Test %d: OCSPMode mismatch\", 
i))\n\t\t\t\tassertEqualE(t, cfg.ValidateDefaultParameters, test.config.ValidateDefaultParameters, fmt.Sprintf(\"Test %d: ValidateDefaultParameters mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.ClientTimeout, test.config.ClientTimeout, fmt.Sprintf(\"Test %d: ClientTimeout mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.JWTClientTimeout, test.config.JWTClientTimeout, fmt.Sprintf(\"Test %d: JWTClientTimeout mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.ExternalBrowserTimeout, test.config.ExternalBrowserTimeout, fmt.Sprintf(\"Test %d: ExternalBrowserTimeout mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.CloudStorageTimeout, test.config.CloudStorageTimeout, fmt.Sprintf(\"Test %d: CloudStorageTimeout mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.TmpDirPath, test.config.TmpDirPath, fmt.Sprintf(\"Test %d: TmpDirPath mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.DisableQueryContextCache, test.config.DisableQueryContextCache, fmt.Sprintf(\"Test %d: DisableQueryContextCache mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.IncludeRetryReason, test.config.IncludeRetryReason, fmt.Sprintf(\"Test %d: IncludeRetryReason mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.ServerSessionKeepAlive, test.config.ServerSessionKeepAlive, fmt.Sprintf(\"Test %d: ServerSessionKeepAlive mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.DisableConsoleLogin, test.config.DisableConsoleLogin, fmt.Sprintf(\"Test %d: DisableConsoleLogin mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.DisableSamlURLCheck, test.config.DisableSamlURLCheck, fmt.Sprintf(\"Test %d: DisableSamlURLCheck mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.OauthClientID, test.config.OauthClientID, fmt.Sprintf(\"Test %d: OauthClientID mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.OauthClientSecret, test.config.OauthClientSecret, fmt.Sprintf(\"Test %d: OauthClientSecret mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.OauthAuthorizationURL, test.config.OauthAuthorizationURL, fmt.Sprintf(\"Test %d: OauthAuthorizationURL mismatch\", i))\n\t\t\t\tassertEqualE(t, 
cfg.OauthTokenRequestURL, test.config.OauthTokenRequestURL, fmt.Sprintf(\"Test %d: OauthTokenRequestURL mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.OauthRedirectURI, test.config.OauthRedirectURI, fmt.Sprintf(\"Test %d: OauthRedirectURI mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.OauthScope, test.config.OauthScope, fmt.Sprintf(\"Test %d: OauthScope mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.EnableSingleUseRefreshTokens, test.config.EnableSingleUseRefreshTokens, fmt.Sprintf(\"Test %d: EnableSingleUseRefreshTokens mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.Token, test.config.Token, fmt.Sprintf(\"Test %d: Token mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.ClientConfigFile, test.config.ClientConfigFile, fmt.Sprintf(\"Test %d: ClientConfigFile mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.CertRevocationCheckMode, test.config.CertRevocationCheckMode, fmt.Sprintf(\"Test %d: CertRevocationCheckMode mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.CrlAllowCertificatesWithoutCrlURL, test.config.CrlAllowCertificatesWithoutCrlURL, fmt.Sprintf(\"Test %d: CrlAllowCertificatesWithoutCrlURL mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.CrlInMemoryCacheDisabled, test.config.CrlInMemoryCacheDisabled, fmt.Sprintf(\"Test %d: CrlInMemoryCacheDisabled mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.CrlOnDiskCacheDisabled, test.config.CrlOnDiskCacheDisabled, fmt.Sprintf(\"Test %d: CrlOnDiskCacheDisabled mismatch\", i))\n\t\t\t\tassertEqualE(t, cfg.CrlHTTPClientTimeout, test.config.CrlHTTPClientTimeout, fmt.Sprintf(\"Test %d: CrlHTTPClientTimeout mismatch\", i))\n\t\t\tcase test.err != nil:\n\t\t\t\tdriverErrE, okE := test.err.(*sferrors.SnowflakeError)\n\t\t\t\tdriverErrG, okG := err.(*sferrors.SnowflakeError)\n\t\t\t\tassertEqualF(t, okG, okE, fmt.Sprintf(\"%d: Wrong error. expected: %v, got: %v\", i, test.err, err))\n\t\t\t\tif okE && okG {\n\t\t\t\t\tassertEqualF(t, driverErrG.Number, driverErrE.Number, fmt.Sprintf(\"%d: Wrong error number. 
expected: %v, got: %v\", i, driverErrE.Number, driverErrG.Number))\n\t\t\t\t} else {\n\t\t\t\t\tassertEqualF(t, reflect.TypeOf(err), reflect.TypeOf(test.err), fmt.Sprintf(\"%d: Wrong error. expected: %T:%v, got: %T:%v\", i, test.err, test.err, err, err))\n\t\t\t\t}\n\t\t\t}\n\n\t\t})\n\t}\n}\n\ntype tcDSN struct {\n\tcfg *Config\n\tdsn string\n\terr error\n}\n\nfunc TestDSN(t *testing.T) {\n\ttmfmt := \"MM-DD-YYYY\"\n\ttestcases := []tcDSN{\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a-aofnadsf.somewhere.azure\",\n\t\t\t},\n\t\t\tdsn: \"u:p@a-aofnadsf.somewhere.azure.snowflakecomputing.com:443?ocspFailOpen=true&region=somewhere.azure&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a-aofnadsf.global\",\n\t\t\t},\n\t\t\tdsn: \"u:p@a-aofnadsf.global.snowflakecomputing.com:443?ocspFailOpen=true&region=global&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a-aofnadsf.global\",\n\t\t\t\tRegion:   \"us-west-2\",\n\t\t\t},\n\t\t\tdsn: \"u:p@a-aofnadsf.global.snowflakecomputing.com:443?ocspFailOpen=true&region=global&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"account-name\",\n\t\t\t\tRegion:   \"cn-region\",\n\t\t\t},\n\t\t\tdsn: \"u:p@account-name.cn-region.snowflakecomputing.cn:443?ocspFailOpen=true&region=cn-region&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"account-name.cn-region\",\n\t\t\t},\n\t\t\tdsn: \"u:p@account-name.cn-region.snowflakecomputing.cn:443?ocspFailOpen=true&region=cn-region&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"account-name.cn-region\",\n\t\t\t\tHost:     \"account-name.cn-region.snowflakecomputing.cn\",\n\t\t\t},\n\t\t\tdsn: 
\"u:p@account-name.cn-region.snowflakecomputing.cn:443?account=account-name&ocspFailOpen=true&region=cn-region&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"account.us-west-2\",\n\t\t\t},\n\t\t\tdsn: \"u:p@account.snowflakecomputing.com:443?ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"account_us-west-2\",\n\t\t\t},\n\t\t\tdsn: \"u:p@account_us-west-2.snowflakecomputing.com:443?ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"account-name\",\n\t\t\t\tHost:     \"account-name.snowflakecomputing.mil\",\n\t\t\t},\n\t\t\tdsn: \"u:p@account-name.snowflakecomputing.mil:443?account=account-name&ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"account-name\",\n\t\t\t\tHost:     \"account-name.snowflakecomputing.gov.pl\",\n\t\t\t},\n\t\t\tdsn: \"u:p@account-name.snowflakecomputing.gov.pl:443?account=account-name&ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a-aofnadsf.global\",\n\t\t\t\tRegion:   \"r\",\n\t\t\t},\n\t\t\terr: sferrors.ErrRegionConflict(),\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a\",\n\t\t\t},\n\t\t\tdsn: \"u:p@a.snowflakecomputing.com:443?ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a\",\n\t\t\t\tRegion:   \"us-west-2\",\n\t\t\t},\n\t\t\tdsn: 
\"u:p@a.snowflakecomputing.com:443?ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a\",\n\t\t\t\tRegion:   \"r\",\n\t\t\t},\n\t\t\tdsn: \"u:p@a.r.snowflakecomputing.com:443?ocspFailOpen=true&region=r&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                  \"u\",\n\t\t\t\tPassword:              \"p\",\n\t\t\t\tAccount:               \"a\",\n\t\t\t\tRegion:                \"r\",\n\t\t\t\tOauthClientID:         \"testClientId\",\n\t\t\t\tOauthClientSecret:     \"testClientSecret\",\n\t\t\t\tOauthAuthorizationURL: \"http://somehost.com\",\n\t\t\t\tOauthTokenRequestURL:  \"https://somehost2.com/somepath\",\n\t\t\t\tOauthRedirectURI:      \"http://localhost:8001/some-path\",\n\t\t\t\tOauthScope:            \"test scope\",\n\t\t\t},\n\t\t\tdsn: \"u:p@a.r.snowflakecomputing.com:443?oauthAuthorizationUrl=http%3A%2F%2Fsomehost.com&oauthClientId=testClientId&oauthClientSecret=testClientSecret&oauthRedirectUri=http%3A%2F%2Flocalhost%3A8001%2Fsome-path&oauthScope=test+scope&oauthTokenRequestUrl=https%3A%2F%2Fsomehost2.com%2Fsomepath&ocspFailOpen=true&region=r&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                         \"u\",\n\t\t\t\tPassword:                     \"p\",\n\t\t\t\tAccount:                      \"a\",\n\t\t\t\tRegion:                       \"r\",\n\t\t\t\tOauthClientID:                \"testClientId\",\n\t\t\t\tOauthClientSecret:            \"testClientSecret\",\n\t\t\t\tOauthAuthorizationURL:        \"http://somehost.com\",\n\t\t\t\tOauthTokenRequestURL:         \"https://somehost2.com/somepath\",\n\t\t\t\tOauthRedirectURI:             \"http://localhost:8001/some-path\",\n\t\t\t\tOauthScope:                   \"test scope\",\n\t\t\t\tEnableSingleUseRefreshTokens: true,\n\t\t\t},\n\t\t\tdsn: 
\"u:p@a.r.snowflakecomputing.com:443?enableSingleUseRefreshTokens=true&oauthAuthorizationUrl=http%3A%2F%2Fsomehost.com&oauthClientId=testClientId&oauthClientSecret=testClientSecret&oauthRedirectUri=http%3A%2F%2Flocalhost%3A8001%2Fsome-path&oauthScope=test+scope&oauthTokenRequestUrl=https%3A%2F%2Fsomehost2.com%2Fsomepath&ocspFailOpen=true&region=r&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                   \"u\",\n\t\t\t\tPassword:               \"p\",\n\t\t\t\tAccount:                \"a\",\n\t\t\t\tRegion:                 \"r\",\n\t\t\t\tExternalBrowserTimeout: 20 * time.Second,\n\t\t\t\tCloudStorageTimeout:    7 * time.Second,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.r.snowflakecomputing.com:443?cloudStorageTimeout=7&externalBrowserTimeout=20&ocspFailOpen=true&region=r&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a\",\n\t\t\t},\n\t\t\terr: sferrors.ErrEmptyUsername(),\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"\",\n\t\t\t\tAccount:  \"a\",\n\t\t\t},\n\t\t\terr: sferrors.ErrEmptyPassword(),\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"\",\n\t\t\t},\n\t\t\terr: sferrors.ErrEmptyAccount(),\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:          \"u\",\n\t\t\t\tPassword:      \"p\",\n\t\t\t\tAccount:       \"ac\",\n\t\t\t\tAuthenticator: AuthTypeOAuthAuthorizationCode,\n\t\t\t},\n\t\t\terr: sferrors.ErrEmptyOAuthParameters(),\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a.e\",\n\t\t\t},\n\t\t\tdsn: \"u:p@a.e.snowflakecomputing.com:443?ocspFailOpen=true&region=e&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a.e\",\n\t\t\t\tRegion:   \"us-west-2\",\n\t\t\t},\n\t\t\tdsn: 
\"u:p@a.e.snowflakecomputing.com:443?ocspFailOpen=true&region=e&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a.e\",\n\t\t\t\tRegion:   \"r\",\n\t\t\t},\n\t\t\terr: sferrors.ErrRegionConflict(),\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:               \"u\",\n\t\t\t\tPassword:           \"p\",\n\t\t\t\tAccount:            \"a\",\n\t\t\t\tDatabase:           \"db\",\n\t\t\t\tSchema:             \"sc\",\n\t\t\t\tRole:               \"ro\",\n\t\t\t\tRegion:             \"b\",\n\t\t\t\tAuthenticator:      AuthTypeSnowflake,\n\t\t\t\tPasscode:           \"db\",\n\t\t\t\tPasscodeInPassword: true,\n\t\t\t\tLoginTimeout:       10 * time.Second,\n\t\t\t\tRequestTimeout:     300 * time.Second,\n\t\t\t\tApplication:        \"special go\",\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.snowflakecomputing.com:443?application=special+go&database=db&loginTimeout=10&ocspFailOpen=true&passcode=db&passcodeInPassword=true&region=b&requestTimeout=300&role=ro&schema=sc&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tAccount:                           \"ac\",\n\t\t\t\tUser:                              \"u\",\n\t\t\t\tPassword:                          \"p\",\n\t\t\t\tDatabase:                          \"db\",\n\t\t\t\tAuthenticator:                     AuthTypeWorkloadIdentityFederation,\n\t\t\t\tHost:                              \"ac.snowflakecomputing.com\",\n\t\t\t\tWorkloadIdentityProvider:          \"azure\",\n\t\t\t\tWorkloadIdentityEntraResource:     \"https://example.com/default\",\n\t\t\t\tWorkloadIdentityImpersonationPath: []string{\"/default\", \"/default2\"},\n\t\t\t},\n\t\t\tdsn: 
\"u:p@ac.snowflakecomputing.com:443?account=ac&authenticator=workload_identity&database=db&ocspFailOpen=true&validateDefaultParameters=true&workloadIdentityEntraResource=https%3A%2F%2Fexample.com%2Fdefault&workloadIdentityImpersonationPath=%2Fdefault%2C%2Fdefault2&workloadIdentityProvider=azure\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                           \"u\",\n\t\t\t\tPassword:                       \"p\",\n\t\t\t\tAccount:                        \"a\",\n\t\t\t\tAuthenticator:                  AuthTypeExternalBrowser,\n\t\t\t\tClientStoreTemporaryCredential: BoolTrue,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.snowflakecomputing.com:443?authenticator=externalbrowser&clientStoreTemporaryCredential=true&ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                           \"u\",\n\t\t\t\tPassword:                       \"p\",\n\t\t\t\tAccount:                        \"a\",\n\t\t\t\tAuthenticator:                  AuthTypeExternalBrowser,\n\t\t\t\tClientStoreTemporaryCredential: BoolFalse,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.snowflakecomputing.com:443?authenticator=externalbrowser&clientStoreTemporaryCredential=false&ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                           \"u\",\n\t\t\t\tPassword:                       \"p\",\n\t\t\t\tAccount:                        \"a\",\n\t\t\t\tToken:                          \"t\",\n\t\t\t\tAuthenticator:                  AuthTypePat,\n\t\t\t\tClientStoreTemporaryCredential: BoolFalse,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.snowflakecomputing.com:443?authenticator=programmatic_access_token&clientStoreTemporaryCredential=false&ocspFailOpen=true&token=t&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                           \"u\",\n\t\t\t\tPassword:                       \"p\",\n\t\t\t\tAccount:                        \"a\",\n\t\t\t\tTokenFilePath:                  
\"../../test_data/snowflake/session/token\",\n\t\t\t\tAuthenticator:                  AuthTypePat,\n\t\t\t\tClientStoreTemporaryCredential: BoolFalse,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.snowflakecomputing.com:443?authenticator=programmatic_access_token&clientStoreTemporaryCredential=false&ocspFailOpen=true&tokenFilePath=..%2F..%2Ftest_data%2Fsnowflake%2Fsession%2Ftoken&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                           \"u\",\n\t\t\t\tPassword:                       \"p\",\n\t\t\t\tAccount:                        \"a\",\n\t\t\t\tAuthenticator:                  AuthTypeOAuthAuthorizationCode,\n\t\t\t\tOauthClientID:                  \"testClientId\",\n\t\t\t\tOauthClientSecret:              \"testClientSecret\",\n\t\t\t\tClientStoreTemporaryCredential: BoolFalse,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.snowflakecomputing.com:443?authenticator=oauth_authorization_code&clientStoreTemporaryCredential=false&oauthClientId=testClientId&oauthClientSecret=testClientSecret&ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                           \"u\",\n\t\t\t\tPassword:                       \"p\",\n\t\t\t\tAccount:                        \"a\",\n\t\t\t\tAuthenticator:                  AuthTypeOAuthClientCredentials,\n\t\t\t\tClientStoreTemporaryCredential: BoolFalse,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.snowflakecomputing.com:443?authenticator=oauth_client_credentials&clientStoreTemporaryCredential=false&ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:          \"u\",\n\t\t\t\tPassword:      \"p\",\n\t\t\t\tAccount:       \"a\",\n\t\t\t\tAuthenticator: AuthTypeOkta,\n\t\t\t\tOktaURL: &url.URL{\n\t\t\t\t\tScheme: \"https\",\n\t\t\t\t\tHost:   \"sc.okta.com\",\n\t\t\t\t},\n\t\t\t},\n\t\t\tdsn: 
\"u:p@a.snowflakecomputing.com:443?authenticator=https%3A%2F%2Fsc.okta.com&ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a.e\",\n\t\t\t\tParams: map[string]*string{\n\t\t\t\t\t\"TIMESTAMP_OUTPUT_FORMAT\": &tmfmt,\n\t\t\t\t},\n\t\t\t},\n\t\t\tdsn: \"u:p@a.e.snowflakecomputing.com:443?TIMESTAMP_OUTPUT_FORMAT=MM-DD-YYYY&ocspFailOpen=true&region=e&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \":@abc\",\n\t\t\t\tAccount:  \"a.e\",\n\t\t\t\tParams: map[string]*string{\n\t\t\t\t\t\"TIMESTAMP_OUTPUT_FORMAT\": &tmfmt,\n\t\t\t\t},\n\t\t\t},\n\t\t\tdsn: \"u:%3A%40abc@a.e.snowflakecomputing.com:443?TIMESTAMP_OUTPUT_FORMAT=MM-DD-YYYY&ocspFailOpen=true&region=e&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:         \"u\",\n\t\t\t\tPassword:     \"p\",\n\t\t\t\tAccount:      \"a\",\n\t\t\t\tOCSPFailOpen: OCSPFailOpenTrue,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.snowflakecomputing.com:443?ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:         \"u\",\n\t\t\t\tPassword:     \"p\",\n\t\t\t\tAccount:      \"a\",\n\t\t\t\tOCSPFailOpen: OCSPFailOpenFalse,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.snowflakecomputing.com:443?ocspFailOpen=false&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                      \"u\",\n\t\t\t\tPassword:                  \"p\",\n\t\t\t\tAccount:                   \"a\",\n\t\t\t\tValidateDefaultParameters: BoolFalse,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.snowflakecomputing.com:443?ocspFailOpen=true&validateDefaultParameters=false\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                      \"u\",\n\t\t\t\tPassword:                  \"p\",\n\t\t\t\tAccount:                   \"a\",\n\t\t\t\tValidateDefaultParameters: BoolTrue,\n\t\t\t},\n\t\t\tdsn: 
\"u:p@a.snowflakecomputing.com:443?ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:              \"u\",\n\t\t\t\tPassword:          \"p\",\n\t\t\t\tAccount:           \"a\",\n\t\t\t\tDisableOCSPChecks: true,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.snowflakecomputing.com:443?disableOCSPChecks=true&ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:              \"u\",\n\t\t\t\tPassword:          \"p\",\n\t\t\t\tAccount:           \"a\",\n\t\t\t\tDisableOCSPChecks: true,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.snowflakecomputing.com:443?disableOCSPChecks=true&ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                         \"u\",\n\t\t\t\tPassword:                     \"p\",\n\t\t\t\tAccount:                      \"a\",\n\t\t\t\tDisableOCSPChecks:            true,\n\t\t\t\tConnectionDiagnosticsEnabled: true,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.snowflakecomputing.com:443?connectionDiagnosticsEnabled=true&disableOCSPChecks=true&ocspFailOpen=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a.b.c\",\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"account.snowflakecomputing.com\",\n\t\t\t},\n\t\t\tdsn: \"u:p@account.snowflakecomputing.com.snowflakecomputing.com:443?ocspFailOpen=true&region=snowflakecomputing.com&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a.b.c\",\n\t\t\t\tRegion:   \"us-west-2\",\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: 
&Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a.b.c\",\n\t\t\t\tRegion:   \"r\",\n\t\t\t},\n\t\t\terr: sferrors.ErrRegionConflict(),\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:             \"u\",\n\t\t\t\tPassword:         \"p\",\n\t\t\t\tAccount:          \"a.b.c\",\n\t\t\t\tClientTimeout:    400 * time.Second,\n\t\t\t\tJWTClientTimeout: 60 * time.Second,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?clientTimeout=400&jwtClientTimeout=60&ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:             \"u\",\n\t\t\t\tPassword:         \"p\",\n\t\t\t\tAccount:          \"a.b.c\",\n\t\t\t\tClientTimeout:    400 * time.Second,\n\t\t\t\tJWTExpireTimeout: 30 * time.Second,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?clientTimeout=400&jwtTimeout=30&ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a.b.c\",\n\t\t\t\tProtocol: \"http\",\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?ocspFailOpen=true&protocol=http&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:               \"u\",\n\t\t\t\tPassword:           \"p\",\n\t\t\t\tAccount:            \"a.b.c\",\n\t\t\t\tTracing:            \"debug\",\n\t\t\t\tLogQueryText:       true,\n\t\t\t\tLogQueryParameters: true,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?logQueryParameters=true&logQueryText=true&ocspFailOpen=true&region=b.c&tracing=debug&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                  \"u\",\n\t\t\t\tPassword:              \"p\",\n\t\t\t\tAccount:               \"a.b.c\",\n\t\t\t\tAuthenticator:         AuthTypeUsernamePasswordMFA,\n\t\t\t\tClientRequestMfaToken: BoolTrue,\n\t\t\t},\n\t\t\tdsn: 
\"u:p@a.b.c.snowflakecomputing.com:443?authenticator=username_password_mfa&clientRequestMfaToken=true&ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                  \"u\",\n\t\t\t\tPassword:              \"p\",\n\t\t\t\tAccount:               \"a.b.c\",\n\t\t\t\tAuthenticator:         AuthTypeUsernamePasswordMFA,\n\t\t\t\tClientRequestMfaToken: BoolFalse,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?authenticator=username_password_mfa&clientRequestMfaToken=false&ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:      \"u\",\n\t\t\t\tPassword:  \"p\",\n\t\t\t\tAccount:   \"a.b.c\",\n\t\t\t\tWarehouse: \"wh\",\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?ocspFailOpen=true&region=b.c&validateDefaultParameters=true&warehouse=wh\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:     \"u\",\n\t\t\t\tPassword: \"p\",\n\t\t\t\tAccount:  \"a.b.c\",\n\t\t\t\tToken:    \"t\",\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?ocspFailOpen=true&region=b.c&token=t&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:          \"u\",\n\t\t\t\tPassword:      \"p\",\n\t\t\t\tAccount:       \"a.b.c\",\n\t\t\t\tAuthenticator: AuthTypeTokenAccessor,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?authenticator=tokenaccessor&ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:       \"u\",\n\t\t\t\tPassword:   \"p\",\n\t\t\t\tAccount:    \"a.b.c\",\n\t\t\t\tTmpDirPath: \"/tmp\",\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?ocspFailOpen=true&region=b.c&tmpDirPath=%2Ftmp&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:               \"u\",\n\t\t\t\tPassword:           \"p\",\n\t\t\t\tAccount:            \"a.b.c\",\n\t\t\t\tIncludeRetryReason: 
BoolFalse,\n\t\t\t\tMaxRetryCount:      30,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?includeRetryReason=false&maxRetryCount=30&ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                   \"u\",\n\t\t\t\tPassword:               \"p\",\n\t\t\t\tAccount:                \"a.b.c\",\n\t\t\t\tServerSessionKeepAlive: true,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?ocspFailOpen=true&region=b.c&serverSessionKeepAlive=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                     \"u\",\n\t\t\t\tPassword:                 \"p\",\n\t\t\t\tAccount:                  \"a.b.c\",\n\t\t\t\tDisableQueryContextCache: true,\n\t\t\t\tIncludeRetryReason:       BoolTrue,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?disableQueryContextCache=true&ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:               \"u\",\n\t\t\t\tPassword:           \"p\",\n\t\t\t\tAccount:            \"a.b.c\",\n\t\t\t\tIncludeRetryReason: BoolFalse,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?includeRetryReason=false&ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:               \"u\",\n\t\t\t\tPassword:           \"p\",\n\t\t\t\tAccount:            \"a.b.c\",\n\t\t\t\tIncludeRetryReason: BoolTrue,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:               \"u\",\n\t\t\t\tPassword:           \"p\",\n\t\t\t\tAccount:            \"a.b.c\",\n\t\t\t\tIncludeRetryReason: BoolTrue,\n\t\t\t\tClientConfigFile:   \"/Users/user/config.json\",\n\t\t\t},\n\t\t\tdsn: 
\"u:p@a.b.c.snowflakecomputing.com:443?clientConfigFile=%2FUsers%2Fuser%2Fconfig.json&ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:               \"u\",\n\t\t\t\tPassword:           \"p\",\n\t\t\t\tAccount:            \"a.b.c\",\n\t\t\t\tIncludeRetryReason: BoolTrue,\n\t\t\t\tClientConfigFile:   \"c:\\\\Users\\\\user\\\\config.json\",\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?clientConfigFile=c%3A%5CUsers%5Cuser%5Cconfig.json&ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                \"u\",\n\t\t\t\tPassword:            \"p\",\n\t\t\t\tAccount:             \"a.b.c\",\n\t\t\t\tAuthenticator:       AuthTypeExternalBrowser,\n\t\t\t\tDisableConsoleLogin: BoolTrue,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?authenticator=externalbrowser&disableConsoleLogin=true&ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                \"u\",\n\t\t\t\tPassword:            \"p\",\n\t\t\t\tAccount:             \"a.b.c\",\n\t\t\t\tAuthenticator:       AuthTypeExternalBrowser,\n\t\t\t\tDisableConsoleLogin: BoolFalse,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?authenticator=externalbrowser&disableConsoleLogin=false&ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                \"u\",\n\t\t\t\tPassword:            \"p\",\n\t\t\t\tAccount:             \"a.b.c\",\n\t\t\t\tAuthenticator:       AuthTypeExternalBrowser,\n\t\t\t\tDisableSamlURLCheck: BoolTrue,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?authenticator=externalbrowser&disableSamlURLCheck=true&ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                \"u\",\n\t\t\t\tPassword:            \"p\",\n\t\t\t\tAccount:             
\"a.b.c\",\n\t\t\t\tAuthenticator:       AuthTypeExternalBrowser,\n\t\t\t\tDisableSamlURLCheck: BoolFalse,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?authenticator=externalbrowser&disableSamlURLCheck=false&ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                              \"u\",\n\t\t\t\tPassword:                          \"p\",\n\t\t\t\tAccount:                           \"a.b.c\",\n\t\t\t\tCertRevocationCheckMode:           CertRevocationCheckEnabled,\n\t\t\t\tCrlAllowCertificatesWithoutCrlURL: BoolTrue,\n\t\t\t\tCrlInMemoryCacheDisabled:          true,\n\t\t\t\tCrlOnDiskCacheDisabled:            true,\n\t\t\t\tCrlDownloadMaxSize:                10,\n\t\t\t\tCrlHTTPClientTimeout:              5 * time.Second,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?certRevocationCheckMode=ENABLED&crlAllowCertificatesWithoutCrlURL=true&crlDownloadMaxSize=10&crlHttpClientTimeout=5&crlInMemoryCacheDisabled=true&crlOnDiskCacheDisabled=true&ocspFailOpen=true&region=b.c&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:          \"u\",\n\t\t\t\tPassword:      \"p\",\n\t\t\t\tAccount:       \"a.b.c\",\n\t\t\t\tTLSConfigName: \"custom\",\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?ocspFailOpen=true&region=b.c&tlsConfigName=custom&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                       \"u\",\n\t\t\t\tPassword:                   \"p\",\n\t\t\t\tAccount:                    \"a.b.c\",\n\t\t\t\tSingleAuthenticationPrompt: BoolTrue,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?ocspFailOpen=true&region=b.c&singleAuthenticationPrompt=true&validateDefaultParameters=true\",\n\t\t},\n\t\t{\n\t\t\tcfg: &Config{\n\t\t\t\tUser:                       \"u\",\n\t\t\t\tPassword:                   \"p\",\n\t\t\t\tAccount:                    
\"a.b.c\",\n\t\t\t\tSingleAuthenticationPrompt: BoolFalse,\n\t\t\t},\n\t\t\tdsn: \"u:p@a.b.c.snowflakecomputing.com:443?ocspFailOpen=true&region=b.c&singleAuthenticationPrompt=false&validateDefaultParameters=true\",\n\t\t},\n\t}\n\tfor _, test := range testcases {\n\t\tt.Run(maskSecrets(test.dsn), func(t *testing.T) {\n\t\t\tif test.cfg.TLSConfigName != \"\" && test.err == nil {\n\t\t\t\terr := RegisterTLSConfig(test.cfg.TLSConfigName, &tls.Config{})\n\t\t\t\tassertNilF(t, err, \"Failed to register test TLS config\")\n\t\t\t\tdefer func() {\n\t\t\t\t\t_ = DeregisterTLSConfig(test.cfg.TLSConfigName)\n\t\t\t\t}()\n\t\t\t}\n\t\t\tdsn, err := DSN(test.cfg)\n\t\t\tif test.err == nil && err == nil {\n\t\t\t\tassertEqualF(t, dsn, test.dsn, fmt.Sprintf(\"failed to get DSN. expected: %v, got:\\n %v\", maskSecrets(test.dsn), maskSecrets(dsn)))\n\t\t\t\t_, err := ParseDSN(dsn)\n\t\t\t\tassertNilF(t, err, \"failed to parse DSN. dsn:\", dsn)\n\t\t\t}\n\t\t\tif test.err != nil {\n\t\t\t\tassertNotNilF(t, err, fmt.Sprintf(\"expected error. 
dsn: %v, expected err: %v\", maskSecrets(test.dsn), maskSecrets(test.err.Error())))\n\t\t\t}\n\t\t\tif test.err == nil {\n\t\t\t\tassertNilF(t, err, \"failed to match\")\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestParsePrivateKeyFromFileMissingFile(t *testing.T) {\n\t_, err := parsePrivateKeyFromFile(\"nonexistent\")\n\tassertNotNilF(t, err, \"should report error for nonexistent file\")\n}\n\nfunc TestParsePrivateKeyFromFileIncorrectData(t *testing.T) {\n\tpemFile := createTmpFile(t, \"exampleKey.pem\", []byte(\"gibberish\"))\n\t_, err := parsePrivateKeyFromFile(pemFile)\n\tassertNotNilF(t, err, \"should report error for wrong data in file\")\n}\n\nfunc TestParsePrivateKeyFromFileNotRSAPrivateKey(t *testing.T) {\n\t// Generate an ECDSA private key for testing\n\tecdsaPrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n\tassertNilF(t, err, \"failed to generate ECDSA private key\")\n\n\tecdsaPrivateKeyBytes, err := x509.MarshalECPrivateKey(ecdsaPrivateKey)\n\tassertNilF(t, err, \"failed to marshal ECDSA private key\")\n\tpemBlock := &pem.Block{\n\t\tType:  \"EC PRIVATE KEY\",\n\t\tBytes: ecdsaPrivateKeyBytes,\n\t}\n\tpemData := pem.EncodeToMemory(pemBlock)\n\n\t// Write the PEM data to a temporary file\n\tpemFile := createTmpFile(t, \"ecdsaKey.pem\", pemData)\n\n\t// Attempt to parse the private key\n\t_, err = parsePrivateKeyFromFile(pemFile)\n\tassertNotNilF(t, err, \"expected an error when trying to parse an ECDSA private key as RSA\")\n}\n\nfunc TestParsePrivateKeyFromFile(t *testing.T) {\n\tgeneratedKey, err := rsa.GenerateKey(cr.Reader, 1024)\n\tassertNilF(t, err, \"failed to generate RSA private key\")\n\tpemKey, err := x509.MarshalPKCS8PrivateKey(generatedKey)\n\tassertNilF(t, err, \"failed to marshal RSA private key\")\n\tpemData := pem.EncodeToMemory(\n\t\t&pem.Block{\n\t\t\tType:  \"RSA PRIVATE KEY\",\n\t\t\tBytes: pemKey,\n\t\t},\n\t)\n\tkeyFile := createTmpFile(t, \"exampleKey.pem\", pemData)\n\tdefer os.Remove(keyFile)\n\n\tparsedKey, err := parsePrivateKeyFromFile(keyFile)\n\tif 
err != nil {\n\t\tt.Errorf(\"unable to parse PEM file from path: %v, err: %v\", keyFile, err)\n\t} else if !parsedKey.Equal(generatedKey) {\n\t\tt.Errorf(\"generated key does not equal parsed key from file\\ngeneratedKey=%v\\nparsedKey=%v\",\n\t\t\tgeneratedKey, parsedKey)\n\t}\n}\n\nfunc createTmpFile(t *testing.T, fileName string, content []byte) string {\n\ttempFile, err := os.CreateTemp(\"\", fileName)\n\tassertNilF(t, err)\n\t_, err = tempFile.Write(content)\n\tassertNilF(t, err)\n\tabsolutePath := tempFile.Name()\n\treturn absolutePath\n}\n\ntype configParamToValue struct {\n\tconfigParam string\n\tvalue       string\n}\n\nfunc TestGetConfigFromEnv(t *testing.T) {\n\tenvMap := map[string]configParamToValue{\n\t\t\"SF_TEST_ACCOUNT\":     {\"Account\", \"account\"},\n\t\t\"SF_TEST_USER\":        {\"User\", \"user\"},\n\t\t\"SF_TEST_PASSWORD\":    {\"Password\", \"password\"},\n\t\t\"SF_TEST_ROLE\":        {\"Role\", \"role\"},\n\t\t\"SF_TEST_HOST\":        {\"Host\", \"host\"},\n\t\t\"SF_TEST_PORT\":        {\"Port\", \"8080\"},\n\t\t\"SF_TEST_PROTOCOL\":    {\"Protocol\", \"http\"},\n\t\t\"SF_TEST_WAREHOUSE\":   {\"Warehouse\", \"warehouse\"},\n\t\t\"SF_TEST_DATABASE\":    {\"Database\", \"database\"},\n\t\t\"SF_TEST_REGION\":      {\"Region\", \"region\"},\n\t\t\"SF_TEST_PASSCODE\":    {\"Passcode\", \"passcode\"},\n\t\t\"SF_TEST_SCHEMA\":      {\"Schema\", \"schema\"},\n\t\t\"SF_TEST_APPLICATION\": {\"Application\", \"application\"},\n\t}\n\tvar properties = make([]*Param, len(envMap))\n\ti := 0\n\tfor key, ctv := range envMap {\n\t\tos.Setenv(key, ctv.value)\n\t\tcfgParam := Param{Name: ctv.configParam, EnvName: key, FailOnMissing: true}\n\t\tproperties[i] = &cfgParam\n\t\ti++\n\t}\n\tdefer func() {\n\t\tfor key := range envMap {\n\t\t\tos.Unsetenv(key)\n\t\t}\n\t}()\n\n\tcfg, err := GetConfigFromEnv(properties)\n\tassertNilF(t, err, \"unable to parse env variables to Config\")\n\n\terr = checkConfig(*cfg, envMap)\n\tif err != nil 
{\n\t\tt.Error(err)\n\t}\n}\n\nfunc checkConfig(cfg Config, envMap map[string]configParamToValue) error {\n\tappendError := func(errArray []string, envName string, expected string, received string) []string {\n\t\terrArray = append(errArray, fmt.Sprintf(\"field %v expected value: %v, received value: %v\", envName, expected, received))\n\t\treturn errArray\n\t}\n\n\tvalue := reflect.ValueOf(cfg)\n\ttypeOfCfg := value.Type()\n\tcfgValues := make(map[string]any, value.NumField())\n\tfor i := 0; i < value.NumField(); i++ {\n\t\tif value.Field(i).CanInterface() {\n\t\t\tcfgValues[typeOfCfg.Field(i).Name] = value.Field(i).Interface()\n\t\t}\n\t}\n\n\tvar errArray []string\n\tfor key, ctv := range envMap {\n\t\tif ctv.configParam == \"Port\" {\n\t\t\t// Port is an int; compare its string form and report it directly\n\t\t\t// (asserting it to string would panic).\n\t\t\tif portStr := strconv.Itoa(cfgValues[ctv.configParam].(int)); portStr != ctv.value {\n\t\t\t\terrArray = appendError(errArray, key, ctv.value, portStr)\n\t\t\t}\n\t\t} else if cfgValues[ctv.configParam] != ctv.value {\n\t\t\terrArray = appendError(errArray, key, ctv.value, cfgValues[ctv.configParam].(string))\n\t\t}\n\t}\n\n\tif errArray != nil {\n\t\treturn errors.New(strings.Join(errArray, \"\\n\"))\n\t}\n\n\treturn nil\n}\n\nfunc TestConfigValidateTmpDirPath(t *testing.T) {\n\tcfg := &Config{\n\t\tTmpDirPath: \"/not/existing\",\n\t}\n\tassertNotNilF(t, cfg.Validate(), \"should fail on a non-existent TmpDirPath\")\n}\n\nfunc TestExtractAccountName(t *testing.T) {\n\ttestcases := map[string]string{\n\t\t\"myaccount\":                          \"MYACCOUNT\",\n\t\t\"myaccount.eu-central-1\":             \"MYACCOUNT\",\n\t\t\"myaccount.eu-central-1.privatelink\": \"MYACCOUNT\",\n\t\t\"myorg-myaccount\":                    \"MYORG-MYACCOUNT\",\n\t\t\"myorg-myaccount.privatelink\":        \"MYORG-MYACCOUNT\",\n\t\t\"myorg-my-account\":                   \"MYORG-MY-ACCOUNT\",\n\t\t\"myorg-my-account.privatelink\":       \"MYORG-MY-ACCOUNT\",\n\t\t\"myorg-my_account\":              
     \"MYORG-MY_ACCOUNT\",\n\t\t\"myorg-my_account.privatelink\":       \"MYORG-MY_ACCOUNT\",\n\t}\n\n\tfor account, expected := range testcases {\n\t\tt.Run(account, func(t *testing.T) {\n\t\t\taccountPart := ExtractAccountName(account)\n\t\t\tassertEqualF(t, accountPart, expected, \"ExtractAccountName returned unexpected response\")\n\t\t})\n\t}\n}\n\nfunc TestUrlDecodeIfNeeded(t *testing.T) {\n\ttestcases := map[string]string{\n\t\t\"query_tag\":             \"query_tag\",\n\t\t\"%24my_custom_variable\": \"$my_custom_variable\",\n\t}\n\tfor param, expected := range testcases {\n\t\tt.Run(param, func(t *testing.T) {\n\t\t\tdecodedParam := urlDecodeIfNeeded(param)\n\t\t\tassertEqualE(t, decodedParam, expected)\n\t\t})\n\t}\n}\n\nfunc TestDSNParsingWithTLSConfig(t *testing.T) {\n\t// Clean up any existing registry\n\tResetTLSConfigRegistry()\n\n\t// Register test TLS config\n\ttestTLSConfig := tls.Config{\n\t\tInsecureSkipVerify: true,\n\t\tServerName:         \"custom.test.com\",\n\t}\n\terr := RegisterTLSConfig(\"custom\", &testTLSConfig)\n\tassertNilF(t, err, \"Failed to register test TLS config\")\n\tdefer func() {\n\t\terr := DeregisterTLSConfig(\"custom\")\n\t\tassertNilF(t, err, \"Failed to deregister test TLS config\")\n\t}()\n\n\ttestCases := []struct {\n\t\tname     string\n\t\tdsn      string\n\t\texpected string\n\t\terr      bool\n\t}{\n\t\t{\n\t\t\tname:     \"Basic TLS config parameter\",\n\t\t\tdsn:      \"user:pass@account/db?tlsConfigName=custom\",\n\t\t\texpected: \"custom\",\n\t\t\terr:      false,\n\t\t},\n\t\t{\n\t\t\tname:     \"TLS config with other parameters\",\n\t\t\tdsn:      \"user:pass@account/db?tlsConfigName=custom&warehouse=wh&role=admin\",\n\t\t\texpected: \"custom\",\n\t\t\terr:      false,\n\t\t},\n\t\t{\n\t\t\tname: \"No TLS config parameter\",\n\t\t\tdsn:  \"user:pass@account/db?warehouse=wh\",\n\t\t\terr:  false,\n\t\t},\n\t\t{\n\t\t\tname: \"Nonexistent TLS 
config\",\n\t\t\tdsn:  \"user:pass@account/db?tlsConfigName=nonexistent\",\n\t\t\terr:  true,\n\t\t},\n\t}\n\n\tfor _, tc := range testCases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tcfg, err := ParseDSN(tc.dsn)\n\t\t\tif tc.err {\n\t\t\t\tassertNotNilF(t, err, \"ParseDSN should have failed but did not\")\n\t\t\t} else {\n\t\t\t\tassertNilF(t, err, \"ParseDSN failed\")\n\t\t\t\t// For DSN parsing, the TLS config should be resolved and set directly\n\t\t\t\tassertEqualF(t, cfg.TLSConfigName, tc.expected, \"TLSConfigName mismatch\")\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestTokenAndTokenFilePathValidation(t *testing.T) {\n\tcfg := &Config{\n\t\tAccount:       \"a\",\n\t\tUser:          \"u\",\n\t\tPassword:      \"p\",\n\t\tToken:         \"direct-token\",\n\t\tTokenFilePath: \"test_data/snowflake/session/token\",\n\t}\n\tassertTrueE(t, errors.Is(cfg.Validate(), errTokenConfigConflict), \"Expected validation error when both Token and TokenFilePath are set\")\n\n\tcfg.TokenFilePath = \"\"\n\tassertNilE(t, cfg.Validate(), \"Should have accepted Token on its own\")\n\n\tcfg.Token = \"\"\n\tcfg.TokenFilePath = \"test_data/snowflake/session/token\"\n\tassertNilE(t, cfg.Validate(), \"Should have accepted TokenFilePath on its own\")\n}\n\nfunc TestFillMissingConfigParametersDerivesAccountFromHost(t *testing.T) {\n\tcfg := &Config{\n\t\tUser:          \"u\",\n\t\tPassword:      \"p\",\n\t\tHost:          \"myacct.us-east-1.snowflakecomputing.com\",\n\t\tPort:          443,\n\t\tAccount:       \"\",\n\t\tAuthenticator: AuthTypeSnowflake,\n\t}\n\tassertNilE(t, FillMissingConfigParameters(cfg), \"FillMissingConfigParameters\")\n\tassertEqualF(t, cfg.Account, \"myacct\", \"Account should be derived from host\")\n}\n\nfunc TestFillMissingConfigParametersDerivesAccountFromCNHost(t *testing.T) {\n\tcfg := &Config{\n\t\tUser:          \"u\",\n\t\tPassword:      \"p\",\n\t\tHost:          \"myacct.cn-north-1.snowflakecomputing.cn\",\n\t\tPort:  
        443,\n\t\tAccount:       \"\",\n\t\tAuthenticator: AuthTypeSnowflake,\n\t}\n\tassertNilE(t, FillMissingConfigParameters(cfg), \"FillMissingConfigParameters\")\n\tassertEqualF(t, cfg.Account, \"myacct\", \"Account should be derived from host\")\n}\n\nfunc TestFillMissingConfigParametersNonSnowflakeHostRequiresAccount(t *testing.T) {\n\tcfg := &Config{\n\t\tUser:          \"u\",\n\t\tPassword:      \"p\",\n\t\tHost:          \"snowflake.internal.example.com\",\n\t\tPort:          443,\n\t\tAccount:       \"\",\n\t\tAuthenticator: AuthTypeSnowflake,\n\t}\n\terr := FillMissingConfigParameters(cfg)\n\tassertNotNilF(t, err, \"expected error for empty Account with non-Snowflake host\")\n\tsfErr, ok := err.(*sferrors.SnowflakeError)\n\tassertTrueF(t, ok, \"expected SnowflakeError\")\n\tassertEqualE(t, sfErr.Number, sferrors.ErrCodeEmptyAccountCode, \"error number\")\n}\n\n// helper function to generate a PKCS8-encoded base64 string of a private key\nfunc generatePKCS8StringSupress(key *rsa.PrivateKey) string {\n\t// An error would only be thrown when the private key type is not supported;\n\t// we are safe as long as we use rsa.PrivateKey\n\ttmpBytes, _ := x509.MarshalPKCS8PrivateKey(key)\n\tprivKeyPKCS8 := base64.URLEncoding.EncodeToString(tmpBytes)\n\treturn privKeyPKCS8\n}\n\n// helper function to generate a PKCS1-encoded base64 string of a private key\nfunc generatePKCS1String(key *rsa.PrivateKey) string {\n\ttmpBytes := x509.MarshalPKCS1PrivateKey(key)\n\tprivKeyPKCS1 := base64.URLEncoding.EncodeToString(tmpBytes)\n\treturn privKeyPKCS1\n}\n"
  },
  {
    "path": "internal/config/ocsp_mode.go",
    "content": "package config\n\n// OCSPFailOpenMode is the OCSP fail-open mode. It is OCSPFailOpenTrue by default\n// and may be set to OCSPFailOpenFalse for fail-closed mode.\ntype OCSPFailOpenMode uint32\n\nconst (\n\t// OCSPFailOpenNotSet represents that OCSP fail open mode is not set, which is the default value.\n\tOCSPFailOpenNotSet OCSPFailOpenMode = iota\n\t// OCSPFailOpenTrue represents OCSP fail open mode.\n\tOCSPFailOpenTrue\n\t// OCSPFailOpenFalse represents OCSP fail closed mode.\n\tOCSPFailOpenFalse\n)\n\nconst (\n\tocspModeFailOpen   = \"FAIL_OPEN\"\n\tocspModeFailClosed = \"FAIL_CLOSED\"\n\tocspModeDisabled   = \"INSECURE\"\n)\n\n// OcspMode returns the OCSP mode as a string: INSECURE, FAIL_OPEN, or FAIL_CLOSED.\nfunc OcspMode(c *Config) string {\n\tif c.DisableOCSPChecks {\n\t\treturn ocspModeDisabled\n\t} else if c.OCSPFailOpen == OCSPFailOpenNotSet || c.OCSPFailOpen == OCSPFailOpenTrue {\n\t\t// by default or set to true\n\t\treturn ocspModeFailOpen\n\t}\n\treturn ocspModeFailClosed\n}\n"
  },
  {
    "path": "internal/config/priv_key.go",
    "content": "package config\n\nimport (\n\t\"crypto/rsa\"\n\t\"crypto/x509\"\n\n\tsferrors \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n)\n\n// ParsePKCS8PrivateKey parses a PKCS8 encoded private key.\nfunc ParsePKCS8PrivateKey(block []byte) (*rsa.PrivateKey, error) {\n\tprivKey, err := x509.ParsePKCS8PrivateKey(block)\n\tif err != nil {\n\t\treturn nil, &sferrors.SnowflakeError{\n\t\t\tNumber:  sferrors.ErrCodePrivateKeyParseError,\n\t\t\tMessage: \"Error decoding private key using PKCS8.\",\n\t\t}\n\t}\n\trsaKey, ok := privKey.(*rsa.PrivateKey)\n\tif !ok {\n\t\t// PKCS8 can wrap non-RSA keys (e.g. ECDSA); reject them instead of\n\t\t// panicking on an unchecked type assertion.\n\t\treturn nil, &sferrors.SnowflakeError{\n\t\t\tNumber:  sferrors.ErrCodePrivateKeyParseError,\n\t\t\tMessage: \"Private key is not an RSA private key.\",\n\t\t}\n\t}\n\treturn rsaKey, nil\n}\n\n// MarshalPKCS8PrivateKey marshals a private key to PKCS8 format.\nfunc MarshalPKCS8PrivateKey(key *rsa.PrivateKey) ([]byte, error) {\n\tkeyInBytes, err := x509.MarshalPKCS8PrivateKey(key)\n\tif err != nil {\n\t\treturn nil, &sferrors.SnowflakeError{\n\t\t\tNumber:  sferrors.ErrCodePrivateKeyParseError,\n\t\t\tMessage: \"Error encoding private key using PKCS8.\",\n\t\t}\n\t}\n\treturn keyInBytes, nil\n}\n"
  },
  {
    "path": "internal/config/tls_config.go",
    "content": "package config\n\nimport (\n\t\"crypto/tls\"\n\t\"sync\"\n)\n\nvar (\n\ttlsConfigLock     sync.RWMutex\n\ttlsConfigRegistry = make(map[string]*tls.Config)\n)\n\n// ResetTLSConfigRegistry clears the TLS config registry. Used in tests.\nfunc ResetTLSConfigRegistry() {\n\ttlsConfigLock.Lock()\n\ttlsConfigRegistry = make(map[string]*tls.Config)\n\ttlsConfigLock.Unlock()\n}\n\n// RegisterTLSConfig registers the tls.Config in configs registry.\n// Use the key as a value in the DSN where tlsConfigName=value.\nfunc RegisterTLSConfig(key string, config *tls.Config) error {\n\ttlsConfigLock.Lock()\n\tlogger.Infof(\"Registering TLS config for key: %s\", key)\n\ttlsConfigRegistry[key] = config.Clone()\n\ttlsConfigLock.Unlock()\n\treturn nil\n}\n\n// DeregisterTLSConfig removes the tls.Config associated with key.\nfunc DeregisterTLSConfig(key string) error {\n\ttlsConfigLock.Lock()\n\tlogger.Infof(\"Deregistering TLS config for key: %s\", key)\n\tdelete(tlsConfigRegistry, key)\n\ttlsConfigLock.Unlock()\n\treturn nil\n}\n\n// GetTLSConfig returns a TLS config from the registry.\nfunc GetTLSConfig(key string) (*tls.Config, bool) {\n\ttlsConfigLock.RLock()\n\ttlsConfig, ok := tlsConfigRegistry[key]\n\ttlsConfigLock.RUnlock()\n\tif !ok {\n\t\treturn nil, false\n\t}\n\treturn tlsConfig.Clone(), true\n}\n"
  },
  {
    "path": "internal/config/tls_config_test.go",
    "content": "package config\n\nimport (\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"testing\"\n)\n\nfunc TestRegisterTLSConfig(t *testing.T) {\n\t// Clean up any existing configs after testing\n\tdefer ResetTLSConfigRegistry()\n\n\ttestConfig := tls.Config{\n\t\tInsecureSkipVerify: true,\n\t\tServerName:         \"test-server\",\n\t}\n\n\t// Test successful registration\n\terr := RegisterTLSConfig(\"test\", &testConfig)\n\tassertNilE(t, err, \"RegisterTLSConfig failed\")\n\n\t// Verify config was registered\n\tretrieved, exists := GetTLSConfig(\"test\")\n\tassertTrueE(t, exists, \"TLS config was not registered\")\n\n\t// Verify the retrieved config matches the original\n\tassertEqualE(t, retrieved.InsecureSkipVerify, testConfig.InsecureSkipVerify, \"InsecureSkipVerify mismatch\")\n\tassertEqualE(t, retrieved.ServerName, testConfig.ServerName, \"ServerName mismatch\")\n}\n\nfunc TestDeregisterTLSConfig(t *testing.T) {\n\t// Clean up any existing configs after testing\n\tdefer ResetTLSConfigRegistry()\n\n\ttestConfig := tls.Config{\n\t\tInsecureSkipVerify: true,\n\t\tServerName:         \"test-server\",\n\t}\n\n\t// Register a config\n\terr := RegisterTLSConfig(\"test\", &testConfig)\n\tassertNilE(t, err, \"RegisterTLSConfig failed\")\n\n\t// Verify it exists\n\t_, exists := GetTLSConfig(\"test\")\n\tassertTrueE(t, exists, \"TLS config should exist after registration\")\n\n\t// Deregister it\n\terr = DeregisterTLSConfig(\"test\")\n\tassertNilE(t, err, \"DeregisterTLSConfig failed\")\n\n\t// Verify it's gone\n\t_, exists = GetTLSConfig(\"test\")\n\tassertFalseE(t, exists, \"TLS config should not exist after deregistration\")\n}\n\nfunc TestGetTLSConfigNonExistent(t *testing.T) {\n\t_, exists := GetTLSConfig(\"nonexistent\")\n\tassertFalseE(t, exists, \"getTLSConfig should return false for non-existent config\")\n}\n\nfunc TestRegisterTLSConfigWithCustomRootCAs(t *testing.T) {\n\t// Clean up any existing configs after testing\n\tdefer ResetTLSConfigRegistry()\n\n\t// 
Create a test cert pool\n\tcertPool := x509.NewCertPool()\n\n\ttestConfig := tls.Config{\n\t\tRootCAs:            certPool,\n\t\tInsecureSkipVerify: false,\n\t}\n\n\terr := RegisterTLSConfig(\"custom-ca\", &testConfig)\n\tassertNilE(t, err, \"RegisterTLSConfig failed\")\n\n\t// Retrieve and verify\n\tretrieved, exists := GetTLSConfig(\"custom-ca\")\n\tassertTrueE(t, exists, \"TLS config should exist\")\n\n\t// The retrieved should have the same certificates as the original\n\tassertTrueE(t, retrieved.RootCAs.Equal(testConfig.RootCAs), \"RootCAs should match\")\n}\n\nfunc TestMultipleTLSConfigs(t *testing.T) {\n\t// Clean up any existing configs after testing\n\tdefer ResetTLSConfigRegistry()\n\n\tconfigs := map[string]*tls.Config{\n\t\t\"insecure\": {InsecureSkipVerify: true},\n\t\t\"secure\":   {InsecureSkipVerify: false, ServerName: \"secure.example.com\"},\n\t}\n\n\t// Register multiple configs\n\tfor name, config := range configs {\n\t\terr := RegisterTLSConfig(name, config)\n\t\tassertNilE(t, err, \"RegisterTLSConfig failed for \"+name)\n\t}\n\n\t// Verify all can be retrieved\n\tfor name, original := range configs {\n\t\tretrieved, exists := GetTLSConfig(name)\n\t\tassertTrueE(t, exists, \"Config \"+name+\" should exist\")\n\t\tassertEqualE(t, retrieved.InsecureSkipVerify, original.InsecureSkipVerify, name+\" InsecureSkipVerify mismatch\")\n\t\tassertEqualE(t, retrieved.ServerName, original.ServerName, name+\" ServerName mismatch\")\n\t}\n\n\t// Test overwriting\n\tnewConfig := tls.Config{InsecureSkipVerify: false, ServerName: \"new.example.com\"}\n\terr := RegisterTLSConfig(\"insecure\", &newConfig)\n\tassertNilE(t, err, \"RegisterTLSConfig should allow overwriting\")\n\n\tretrieved, _ := GetTLSConfig(\"insecure\")\n\tassertEqualE(t, retrieved.ServerName, \"new.example.com\", \"Config should have been overwritten\")\n}\n"
  },
  {
    "path": "internal/config/token_accessor.go",
    "content": "package config\n\n// TokenAccessor manages the session token and master token\ntype TokenAccessor interface {\n\tGetTokens() (token string, masterToken string, sessionID int64)\n\tSetTokens(token string, masterToken string, sessionID int64)\n\tLock() error\n\tUnlock()\n}\n"
  },
  {
    "path": "internal/errors/errors.go",
    "content": "// Package errors defines error types and error codes for the Snowflake driver.\n// It includes both errors returned by the Snowflake server and errors generated by the driver itself.\n// The SnowflakeError type includes various fields to capture detailed information about an error, such as the error number,\n// SQL state, query ID, and a message with optional arguments for formatting. The package also defines a set of constants\n// for common error codes and message templates for consistent error reporting throughout the driver.\npackage errors\n\nimport \"fmt\"\n\n// SnowflakeError is an error type including various Snowflake specific information.\ntype SnowflakeError struct {\n\tNumber         int\n\tSQLState       string\n\tQueryID        string\n\tMessage        string\n\tMessageArgs    []any\n\tIncludeQueryID bool // TODO: populate this in connection\n}\n\nfunc (se *SnowflakeError) Error() string {\n\tmessage := se.Message\n\tif len(se.MessageArgs) > 0 {\n\t\tmessage = fmt.Sprintf(se.Message, se.MessageArgs...)\n\t}\n\tif se.SQLState != \"\" {\n\t\tif se.IncludeQueryID {\n\t\t\treturn fmt.Sprintf(\"%06d (%s): %s: %s\", se.Number, se.SQLState, se.QueryID, message)\n\t\t}\n\t\treturn fmt.Sprintf(\"%06d (%s): %s\", se.Number, se.SQLState, message)\n\t}\n\tif se.IncludeQueryID {\n\t\treturn fmt.Sprintf(\"%06d: %s: %s\", se.Number, se.QueryID, message)\n\t}\n\treturn fmt.Sprintf(\"%06d: %s\", se.Number, message)\n}\n\n// Snowflake Server Error code\nconst (\n\tQueryNotExecutingCode       = \"000605\"\n\tQueryInProgressCode         = \"333333\"\n\tQueryInProgressAsyncCode    = \"333334\"\n\tSessionExpiredCode          = \"390112\"\n\tInvalidOAuthAccessTokenCode = \"390303\"\n\tExpiredOAuthAccessTokenCode = \"390318\"\n)\n\n// Driver return errors\nconst (\n\t/* connection */\n\n\t// ErrCodeEmptyAccountCode is an error code for the case where a DSN doesn't include account parameter\n\tErrCodeEmptyAccountCode = 260000\n\t// ErrCodeEmptyUsernameCode is 
an error code for the case where a DSN doesn't include user parameter\n\tErrCodeEmptyUsernameCode = 260001\n\t// ErrCodeEmptyPasswordCode is an error code for the case where a DSN doesn't include password parameter\n\tErrCodeEmptyPasswordCode = 260002\n\t// ErrCodeFailedToParseHost is an error code for the case where a DSN includes an invalid host name\n\tErrCodeFailedToParseHost = 260003\n\t// ErrCodeFailedToParsePort is an error code for the case where a DSN includes an invalid port number\n\tErrCodeFailedToParsePort = 260004\n\t// ErrCodeIdpConnectionError is an error code for the case where an IDP connection failed\n\tErrCodeIdpConnectionError = 260005\n\t// ErrCodeSSOURLNotMatch is an error code for the case where an SSO URL doesn't match\n\tErrCodeSSOURLNotMatch = 260006\n\t// ErrCodeServiceUnavailable is an error code for the case where service is unavailable.\n\tErrCodeServiceUnavailable = 260007\n\t// ErrCodeFailedToConnect is an error code for the case where a DB connection failed due to a wrong account name\n\tErrCodeFailedToConnect = 260008\n\t// ErrCodeRegionOverlap is an error code for the case where a region is specified despite an account region being present\n\tErrCodeRegionOverlap = 260009\n\t// ErrCodePrivateKeyParseError is an error code for the case where the private key is not parsed correctly\n\tErrCodePrivateKeyParseError = 260010\n\t// ErrCodeFailedToParseAuthenticator is an error code for the case where a DSN includes an invalid authenticator\n\tErrCodeFailedToParseAuthenticator = 260011\n\t// ErrCodeClientConfigFailed is an error code for the case where clientConfigFile is invalid or applying client configuration fails\n\tErrCodeClientConfigFailed = 260012\n\t// ErrCodeTomlFileParsingFailed is an error code for the case where parsing the toml file failed because of an invalid value.\n\tErrCodeTomlFileParsingFailed = 260013\n\t// ErrCodeFailedToFindDSNInToml is an error code for the case where the DSN does not exist in the toml 
file.\n\tErrCodeFailedToFindDSNInToml = 260014\n\t// ErrCodeInvalidFilePermission is an error code for the case where the user does not have 0600 permission to the toml file.\n\tErrCodeInvalidFilePermission = 260015\n\t// ErrCodeEmptyPasswordAndToken is an error code for the case where a DSN includes neither password nor token\n\tErrCodeEmptyPasswordAndToken = 260016\n\t// ErrCodeEmptyOAuthParameters is an error code for the case where the client ID or client secret is not provided for OAuth flows.\n\tErrCodeEmptyOAuthParameters = 260017\n\t// ErrMissingAccessATokenButRefreshTokenPresent is an error code for the case when the access token is not found in the cache, but the refresh token is present.\n\tErrMissingAccessATokenButRefreshTokenPresent = 260018\n\t// ErrCodeMissingTLSConfig is an error code for the case where the TLS config is missing.\n\tErrCodeMissingTLSConfig = 260019\n\n\t/* network */\n\n\t// ErrFailedToPostQuery is an error code for the case where HTTP POST failed.\n\tErrFailedToPostQuery = 261000\n\t// ErrFailedToRenewSession is an error code for the case where session renewal failed.\n\tErrFailedToRenewSession = 261001\n\t// ErrFailedToCancelQuery is an error code for the case where cancel query failed.\n\tErrFailedToCancelQuery = 261002\n\t// ErrFailedToCloseSession is an error code for the case where close session failed.\n\tErrFailedToCloseSession = 261003\n\t// ErrFailedToAuth is an error code for the case where authentication failed for unknown reason.\n\tErrFailedToAuth = 261004\n\t// ErrFailedToAuthSAML is an error code for the case where authentication via SAML failed for unknown reason.\n\tErrFailedToAuthSAML = 261005\n\t// ErrFailedToAuthOKTA is an error code for the case where authentication via OKTA failed for unknown reason.\n\tErrFailedToAuthOKTA = 261006\n\t// ErrFailedToGetSSO is an error code for the case where getting the SSO URL via OKTA failed for unknown reason.\n\tErrFailedToGetSSO = 261007\n\t// ErrFailedToParseResponse is an error 
code for when we cannot parse an external browser response from Snowflake.\n\tErrFailedToParseResponse = 261008\n\t// ErrFailedToGetExternalBrowserResponse is an error code for when there's an error reading from the open socket.\n\tErrFailedToGetExternalBrowserResponse = 261009\n\t// ErrFailedToHeartbeat is an error code when a heartbeat fails.\n\tErrFailedToHeartbeat = 261010\n\n\t/* rows */\n\n\t// ErrFailedToGetChunk is an error code for the case where it failed to get chunk of result set\n\tErrFailedToGetChunk = 262000\n\t// ErrNonArrowResponseInArrowBatches is an error code for case where ArrowBatches mode is enabled, but response is not Arrow-based\n\tErrNonArrowResponseInArrowBatches = 262001\n\n\t/* transaction*/\n\n\t// ErrNoReadOnlyTransaction is an error code for the case where readonly mode is specified.\n\tErrNoReadOnlyTransaction = 263000\n\t// ErrNoDefaultTransactionIsolationLevel is an error code for the case where non default isolation level is specified.\n\tErrNoDefaultTransactionIsolationLevel = 263001\n\n\t/* file transfer */\n\n\t// ErrInvalidStageFs is an error code denoting an invalid stage in the file system\n\tErrInvalidStageFs = 264001\n\t// ErrFailedToDownloadFromStage is an error code denoting the failure to download a file from the stage\n\tErrFailedToDownloadFromStage = 264002\n\t// ErrFailedToUploadToStage is an error code denoting the failure to upload a file to the stage\n\tErrFailedToUploadToStage = 264003\n\t// ErrInvalidStageLocation is an error code denoting an invalid stage location\n\tErrInvalidStageLocation = 264004\n\t// ErrLocalPathNotDirectory is an error code denoting a local path that is not a directory\n\tErrLocalPathNotDirectory = 264005\n\t// ErrFileNotExists is an error code denoting the file to be transferred does not exist\n\tErrFileNotExists = 264006\n\t// ErrCompressionNotSupported is an error code denoting the user specified compression type is not supported\n\tErrCompressionNotSupported = 264007\n\t// 
ErrInternalNotMatchEncryptMaterial is an error code denoting the encryption material specified does not match\n\tErrInternalNotMatchEncryptMaterial = 264008\n\t// ErrCommandNotRecognized is an error code denoting the PUT/GET command was not recognized\n\tErrCommandNotRecognized = 264009\n\t// ErrFailedToConvertToS3Client is an error code denoting the failure of an interface to s3.Client conversion\n\tErrFailedToConvertToS3Client = 264010\n\t// ErrNotImplemented is an error code denoting the file transfer feature is not implemented\n\tErrNotImplemented = 264011\n\t// ErrInvalidPadding is an error code denoting the invalid padding of decryption key\n\tErrInvalidPadding = 264012\n\n\t/* binding */\n\n\t// ErrBindSerialization is an error code for a failed serialization of bind variables\n\tErrBindSerialization = 265001\n\t// ErrBindUpload is an error code for the uploading process of bind elements to the stage\n\tErrBindUpload = 265002\n\n\t/* async */\n\n\t// ErrAsync is an error code for an unknown async error\n\tErrAsync = 266001\n\n\t/* multi-statement */\n\n\t// ErrNoResultIDs is an error code for empty result IDs for multi statement queries\n\tErrNoResultIDs = 267001\n\n\t/* converter */\n\n\t// ErrInvalidTimestampTz is an error code for the case where a returned TIMESTAMP_TZ internal value is invalid\n\tErrInvalidTimestampTz = 268000\n\t// ErrInvalidOffsetStr is an error code for the case where an offset string is invalid. 
The input string must\n\t// consist of sHHMI: one sign character '+'/'-' followed by zero-filled hours and minutes\n\tErrInvalidOffsetStr = 268001\n\t// ErrInvalidBinaryHexForm is an error code for the case where binary data in hex form is invalid.\n\tErrInvalidBinaryHexForm = 268002\n\t// ErrTooHighTimestampPrecision is an error code for the case where a Snowflake timestamp cannot be converted to arrow.Timestamp\n\tErrTooHighTimestampPrecision = 268003\n\t// ErrNullValueInArray is an error code for the case where there are null values in an array without arrayValuesNullable set to true\n\tErrNullValueInArray = 268004\n\t// ErrNullValueInMap is an error code for the case where there are null values in a map without mapValuesNullable set to true\n\tErrNullValueInMap = 268005\n\n\t/* OCSP */\n\n\t// ErrOCSPStatusRevoked is an error code for the case where the certificate is revoked.\n\tErrOCSPStatusRevoked = 269001\n\t// ErrOCSPStatusUnknown is an error code for the case where the certificate revocation status is unknown.\n\tErrOCSPStatusUnknown = 269002\n\t// ErrOCSPInvalidValidity is an error code for the case where the OCSP response validity is invalid.\n\tErrOCSPInvalidValidity = 269003\n\t// ErrOCSPNoOCSPResponderURL is an error code for the case where the OCSP responder URL is not attached.\n\tErrOCSPNoOCSPResponderURL = 269004\n\n\t/* query status */\n\n\t// ErrQueryStatus is an error code for when checking the status of a query returns an error or no status\n\tErrQueryStatus = 279001\n\t// ErrQueryIDFormat is an error code for when the query ID given to fetch its result is not valid\n\tErrQueryIDFormat = 279101\n\t// ErrQueryReportedError is an error code for when the server side reports the query failed with an error\n\tErrQueryReportedError = 279201\n\t// ErrQueryIsRunning is an error code for when the query is still running\n\tErrQueryIsRunning = 279301\n\n\t/* GS error code */\n\n\t// ErrSessionGone is a GS error code for the case that the session is already closed\n\tErrSessionGone = 390111\n\t// ErrRoleNotExist is a GS error code for the case that the role specified 
does not exist\n\tErrRoleNotExist = 390189\n\t// ErrObjectNotExistOrAuthorized is a GS error code for the case that the server-side object specified does not exist\n\tErrObjectNotExistOrAuthorized = 390201\n)\n\n// Error message templates\nconst (\n\tErrMsgFailedToParseHost                  = \"failed to parse a host name. host: %v\"\n\tErrMsgFailedToParsePort                  = \"failed to parse a port number. port: %v\"\n\tErrMsgFailedToParseAuthenticator         = \"failed to parse an authenticator: %v\"\n\tErrMsgInvalidOffsetStr                   = \"offset must be a string consist of sHHMI where one sign character '+'/'-' followed by zero filled hours and minutes: %v\"\n\tErrMsgInvalidByteArray                   = \"invalid byte array: %v\"\n\tErrMsgIdpConnectionError                 = \"failed to verify URLs. authenticator: %v, token URL:%v, SSO URL:%v\"\n\tErrMsgSSOURLNotMatch                     = \"SSO URL didn't match. expected: %v, got: %v\"\n\tErrMsgFailedToGetChunk                   = \"failed to get a chunk of result sets. idx: %v\"\n\tErrMsgFailedToPostQuery                  = \"failed to POST. HTTP: %v, URL: %v\"\n\tErrMsgFailedToRenew                      = \"failed to renew session. HTTP: %v, URL: %v\"\n\tErrMsgFailedToCancelQuery                = \"failed to cancel query. HTTP: %v, URL: %v\"\n\tErrMsgFailedToCloseSession               = \"failed to close session. HTTP: %v, URL: %v\"\n\tErrMsgFailedToAuth                       = \"failed to auth for unknown reason. HTTP: %v, URL: %v\"\n\tErrMsgFailedToAuthSAML                   = \"failed to auth via SAML for unknown reason. HTTP: %v, URL: %v\"\n\tErrMsgFailedToAuthOKTA                   = \"failed to auth via OKTA for unknown reason. HTTP: %v, URL: %v\"\n\tErrMsgFailedToGetSSO                     = \"failed to auth via OKTA for unknown reason. HTTP: %v, URL: %v\"\n\tErrMsgFailedToParseResponse              = \"failed to parse a response from Snowflake. 
Response: %v\"\n\tErrMsgFailedToGetExternalBrowserResponse = \"failed to get an external browser response from Snowflake, err: %s\"\n\tErrMsgNoReadOnlyTransaction              = \"no readonly mode is supported\"\n\tErrMsgNoDefaultTransactionIsolationLevel = \"no default isolation transaction level is supported\"\n\tErrMsgServiceUnavailable                 = \"service is unavailable. check your connectivity. you may need a proxy server. HTTP: %v, URL: %v\"\n\tErrMsgFailedToConnect                    = \"failed to connect to db. verify account name is correct. HTTP: %v, URL: %v\"\n\tErrMsgOCSPStatusRevoked                  = \"OCSP revoked: reason:%v, at:%v\"\n\tErrMsgOCSPStatusUnknown                  = \"OCSP unknown\"\n\tErrMsgOCSPInvalidValidity                = \"invalid validity: producedAt: %v, thisUpdate: %v, nextUpdate: %v\"\n\tErrMsgOCSPNoOCSPResponderURL             = \"no OCSP server is attached to the certificate. %v\"\n\tErrMsgBindColumnMismatch                 = \"column %v has a different number of binds (%v) than column 1 (%v)\"\n\tErrMsgNotImplemented                     = \"not implemented\"\n\tErrMsgFeatureNotSupported                = \"feature is not supported: %v\"\n\tErrMsgCommandNotRecognized               = \"%v command not recognized\"\n\tErrMsgLocalPathNotDirectory              = \"the local path is not a directory: %v\"\n\tErrMsgFileNotExists                      = \"file does not exist: %v\"\n\tErrMsgFailToReadDataFromBuffer           = \"failed to read data from buffer. err: %v\"\n\tErrMsgInvalidStageFs                     = \"destination location type is not valid: %v\"\n\tErrMsgInternalNotMatchEncryptMaterial    = \"number of downloading files doesn't match the encryption materials. 
files=%v, encmat=%v\"\n\tErrMsgFailedToConvertToS3Client          = \"failed to convert interface to s3 client\"\n\tErrMsgNoResultIDs                        = \"no result IDs returned with the multi-statement query\"\n\tErrMsgQueryStatus                        = \"server ErrorCode=%s, ErrorMessage=%s\"\n\tErrMsgInvalidPadding                     = \"invalid padding on input\"\n\tErrMsgClientConfigFailed                 = \"client configuration failed: %v\"\n\tErrMsgNullValueInArray                   = \"for handling null values in arrays use WithArrayValuesNullable(ctx)\"\n\tErrMsgNullValueInMap                     = \"for handling null values in maps use WithMapValuesNullable(ctx)\"\n\tErrMsgFailedToParseTomlFile              = \"failed to parse toml file. the params %v occurred error with value %v\"\n\tErrMsgFailedToFindDSNInTomlFile          = \"failed to find DSN in toml file.\"\n\tErrMsgInvalidWritablePermissionToFile    = \"file '%v' is writable by group or others — this poses a security risk because it allows unauthorized users to modify sensitive settings. Your Permission: %v\"\n\tErrMsgInvalidExecutablePermissionToFile  = \"file '%v' is executable — this poses a security risk because the file could be misused as a script or executed unintentionally. 
Your Permission: %v\"\n\tErrMsgNonArrowResponseInArrowBatches     = \"arrow batches enabled, but the response is not Arrow based\"\n\tErrMsgMissingTLSConfig                   = \"TLS config not found: %v\"\n)\n\n// ErrEmptyAccount is returned if a DSN doesn't include account parameter.\nfunc ErrEmptyAccount() *SnowflakeError {\n\treturn &SnowflakeError{\n\t\tNumber:  ErrCodeEmptyAccountCode,\n\t\tMessage: \"account is empty\",\n\t}\n}\n\n// ErrEmptyUsername is returned if a DSN doesn't include user parameter.\nfunc ErrEmptyUsername() *SnowflakeError {\n\treturn &SnowflakeError{\n\t\tNumber:  ErrCodeEmptyUsernameCode,\n\t\tMessage: \"user is empty\",\n\t}\n}\n\n// ErrEmptyPassword is returned if a DSN doesn't include password parameter.\nfunc ErrEmptyPassword() *SnowflakeError {\n\treturn &SnowflakeError{\n\t\tNumber:  ErrCodeEmptyPasswordCode,\n\t\tMessage: \"password is empty\",\n\t}\n}\n\n// ErrEmptyPasswordAndToken is returned if a DSN includes neither password nor token.\nfunc ErrEmptyPasswordAndToken() *SnowflakeError {\n\treturn &SnowflakeError{\n\t\tNumber:  ErrCodeEmptyPasswordAndToken,\n\t\tMessage: \"both password and token are empty\",\n\t}\n}\n\n// ErrEmptyOAuthParameters is returned if OAuth is used but required fields are missing.\nfunc ErrEmptyOAuthParameters() *SnowflakeError {\n\treturn &SnowflakeError{\n\t\tNumber:  ErrCodeEmptyOAuthParameters,\n\t\tMessage: \"client ID or client secret are empty\",\n\t}\n}\n\n// ErrRegionConflict is returned if a DSN's implicit and explicit region parameters conflict.\nfunc ErrRegionConflict() *SnowflakeError {\n\treturn &SnowflakeError{\n\t\tNumber:  ErrCodeRegionOverlap,\n\t\tMessage: \"two regions specified\",\n\t}\n}\n\n// ErrFailedToParseAuthenticator is returned if a DSN includes an invalid authenticator.\nfunc ErrFailedToParseAuthenticator() *SnowflakeError {\n\treturn &SnowflakeError{\n\t\tNumber:  ErrCodeFailedToParseAuthenticator,\n\t\tMessage: \"failed to parse an authenticator\",\n\t}\n}\n\n// 
ErrUnknownError is returned if the server side returns an error without meaningful message.\nfunc ErrUnknownError() *SnowflakeError {\n\treturn &SnowflakeError{\n\t\tNumber:   -1,\n\t\tSQLState: \"-1\",\n\t\tMessage:  \"an unknown server side error occurred\",\n\t\tQueryID:  \"-1\",\n\t}\n}\n\n// ErrNullValueInArrayError is returned for null values in array without arrayValuesNullable.\nfunc ErrNullValueInArrayError() *SnowflakeError {\n\treturn &SnowflakeError{\n\t\tNumber:  ErrNullValueInArray,\n\t\tMessage: ErrMsgNullValueInArray,\n\t}\n}\n\n// ErrNullValueInMapError is returned for null values in map without mapValuesNullable.\nfunc ErrNullValueInMapError() *SnowflakeError {\n\treturn &SnowflakeError{\n\t\tNumber:  ErrNullValueInMap,\n\t\tMessage: ErrMsgNullValueInMap,\n\t}\n}\n\n// ErrNonArrowResponseForArrowBatches is returned when arrow batches mode is enabled but response is not Arrow-based.\nfunc ErrNonArrowResponseForArrowBatches(queryID string) *SnowflakeError {\n\treturn &SnowflakeError{\n\t\tQueryID: queryID,\n\t\tNumber:  ErrNonArrowResponseInArrowBatches,\n\t\tMessage: ErrMsgNonArrowResponseInArrowBatches,\n\t}\n}\n"
  },
  {
    "path": "internal/logger/accessor.go",
    "content": "package logger\n\nimport (\n\t\"errors\"\n\t\"log\"\n\t\"sync\"\n\n\t\"github.com/snowflakedb/gosnowflake/v2/sflog\"\n)\n\n// LoggerAccessor allows internal packages to access the global logger\n// without importing the main gosnowflake package (avoiding circular dependencies)\nvar (\n\tloggerAccessorMu sync.Mutex\n\t// globalLogger is the actual logger that provides all features (secret masking, level filtering, etc.)\n\tglobalLogger sflog.SFLogger\n)\n\n// GetLogger returns the global logger for use by internal packages\nfunc GetLogger() sflog.SFLogger {\n\tloggerAccessorMu.Lock()\n\tdefer loggerAccessorMu.Unlock()\n\n\treturn globalLogger\n}\n\n// SetLogger sets the raw (base) logger implementation and wraps it with the standard protection layers.\n// This function ALWAYS wraps the provided logger with:\n//  1. Secret masking (to protect sensitive data)\n//  2. Level filtering (for performance optimization)\n//\n// There is no way to bypass these protective layers. The globalLogger structure is:\n//\n//\tglobalLogger = levelFilteringLogger → secretMaskingLogger → rawLogger\n//\n// If the provided logger is already wrapped (e.g., from CreateDefaultLogger), this function\n// automatically extracts the raw logger to prevent double-wrapping.\n//\n// Internal wrapper types that would cause issues are rejected:\n//   - Proxy (would cause infinite recursion)\nfunc SetLogger(providedLogger SFLogger) error {\n\tloggerAccessorMu.Lock()\n\tdefer loggerAccessorMu.Unlock()\n\n\t// Reject Proxy to prevent infinite recursion\n\tif _, isProxy := providedLogger.(*Proxy); isProxy {\n\t\treturn errors.New(\"cannot set Proxy as raw logger - it would create infinite recursion\")\n\t}\n\n\t// Unwrap if the logger is one of our own wrapper types\n\t// This allows SetLogger to accept both raw loggers and fully-wrapped loggers\n\trawLogger := providedLogger\n\n\t// If it's a level filtering logger, unwrap to get the secret masking layer\n\tif levelFiltering, ok := 
rawLogger.(*levelFilteringLogger); ok {\n\t\trawLogger = levelFiltering.inner\n\t}\n\n\t// If it's a secret masking logger, unwrap to get the raw logger\n\tif secretMasking, ok := rawLogger.(*secretMaskingLogger); ok {\n\t\trawLogger = secretMasking.inner\n\t}\n\n\t// Build the standard protection chain: levelFiltering → secretMasking → rawLogger\n\tmasked := newSecretMaskingLogger(rawLogger)\n\tfiltered := newLevelFilteringLogger(masked)\n\n\tglobalLogger = filtered\n\treturn nil\n}\n\nfunc init() {\n\trawLogger := newRawLogger()\n\tif err := SetLogger(rawLogger); err != nil {\n\t\tlog.Panicf(\"cannot set default logger. %v\", err)\n\t}\n}\n\n// CreateDefaultLogger function creates a new instance of the default logger with the standard protection layers.\nfunc CreateDefaultLogger() sflog.SFLogger {\n\treturn newLevelFilteringLogger(newSecretMaskingLogger(newRawLogger()))\n}\n"
  },
  {
    "path": "internal/logger/accessor_test.go",
    "content": "package logger_test\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/logger\"\n)\n\n// TestLoggerConfiguration verifies configuration methods work\nfunc TestLoggerConfiguration(t *testing.T) {\n\tlog := logger.CreateDefaultLogger()\n\n\t// Get current level\n\tlevel := log.GetLogLevel()\n\tif level == \"\" {\n\t\tt.Error(\"Expected non-empty log level\")\n\t}\n\tt.Logf(\"Current log level: %s\", level)\n\n\t// Set log level\n\terr := log.SetLogLevel(\"debug\")\n\tif err != nil {\n\t\tt.Errorf(\"SetLogLevel failed: %v\", err)\n\t}\n\n\t// Verify it changed\n\tnewLevel := log.GetLogLevel()\n\tif newLevel != \"DEBUG\" {\n\t\tt.Errorf(\"Expected 'debug', got '%s'\", newLevel)\n\t}\n}\n\n// TestLoggerSecretMasking verifies secret masking works\nfunc TestLoggerSecretMasking(t *testing.T) {\n\tlog := logger.CreateDefaultLogger()\n\n\tvar buf bytes.Buffer\n\tlog.SetOutput(&buf)\n\t// Reset log level to ensure info is logged\n\t_ = log.SetLogLevel(\"info\")\n\n\t// Log a secret\n\tlog.Infof(\"password=%s\", \"secret12345\")\n\n\toutput := buf.String()\n\tt.Logf(\"Output: %s\", output) // Debug output\n\n\t// The output should have a masked secret\n\tif strings.Contains(output, \"secret12345\") {\n\t\tt.Errorf(\"Secret masking FAILED: secret leaked in: %s\", output)\n\t}\n\n\t// Verify the message was logged (check for \"password=\")\n\tif !strings.Contains(output, \"password=\") {\n\t\tt.Errorf(\"Message not logged: %s\", output)\n\t}\n\n\tt.Log(\"Secret masking works with GetLogger\")\n}\n\n// TestLoggerAllMethods verifies all logging methods are available and produce output\nfunc TestLoggerAllMethods(t *testing.T) {\n\tlog := logger.CreateDefaultLogger()\n\n\tvar buf bytes.Buffer\n\tlog.SetOutput(&buf)\n\t_ = log.SetLogLevel(\"trace\")\n\n\t// Test all formatted methods\n\tlog.Tracef(\"trace %s\", \"formatted\")\n\tlog.Debugf(\"debug %s\", \"formatted\")\n\tlog.Infof(\"info %s\", 
\"formatted\")\n\tlog.Warnf(\"warn %s\", \"formatted\")\n\tlog.Errorf(\"error %s\", \"formatted\")\n\t// Fatalf would exit, so skip in test\n\n\t// Test all direct methods\n\tlog.Trace(\"trace direct\")\n\tlog.Debug(\"debug direct\")\n\tlog.Info(\"info direct\")\n\tlog.Warn(\"warn direct\")\n\tlog.Error(\"error direct\")\n\t// Fatal would exit, so skip in test\n\n\toutput := buf.String()\n\n\t// Verify all messages appear in output\n\texpectedMessages := []string{\n\t\t\"trace formatted\", \"debug formatted\", \"info formatted\",\n\t\t\"warn formatted\", \"error formatted\",\n\t\t\"trace direct\", \"debug direct\", \"info direct\",\n\t\t\"warn direct\", \"error direct\",\n\t}\n\n\tfor _, msg := range expectedMessages {\n\t\tif !strings.Contains(output, msg) {\n\t\t\tt.Errorf(\"Expected output to contain '%s', got: %s\", msg, output)\n\t\t}\n\t}\n}\n\n// TestLoggerLevelFiltering verifies log level filtering works correctly\nfunc TestLoggerLevelFiltering(t *testing.T) {\n\tlog := logger.CreateDefaultLogger()\n\n\tvar buf bytes.Buffer\n\tlog.SetOutput(&buf)\n\n\t// Set to INFO level\n\t_ = log.SetLogLevel(\"info\")\n\n\t// Log at different levels\n\tlog.Debug(\"this should not appear\")\n\tlog.Info(\"this should appear\")\n\tlog.Warn(\"this should also appear\")\n\n\toutput := buf.String()\n\n\t// Debug should not appear\n\tif strings.Contains(output, \"this should not appear\") {\n\t\tt.Errorf(\"Debug message appeared when log level is INFO: %s\", output)\n\t}\n\n\t// Info and Warn should appear\n\tif !strings.Contains(output, \"this should appear\") {\n\t\tt.Errorf(\"Info message did not appear: %s\", output)\n\t}\n\tif !strings.Contains(output, \"this should also appear\") {\n\t\tt.Errorf(\"Warn message did not appear: %s\", output)\n\t}\n\n\tt.Log(\"Log level filtering works correctly\")\n}\n\n// TestLogEntry verifies log entry methods and field inclusion\nfunc TestLogEntry(t *testing.T) {\n\tlog := logger.CreateDefaultLogger()\n\n\tvar buf 
bytes.Buffer\n\tlog.SetOutput(&buf)\n\t_ = log.SetLogLevel(\"info\")\n\n\t// Get entry with field\n\tentry := log.WithField(\"module\", \"test\")\n\n\t// Log with the entry\n\tentry.Infof(\"info with field %s\", \"formatted\")\n\tentry.Info(\"info with field direct\")\n\n\toutput := buf.String()\n\n\t// Verify messages appear\n\tif !strings.Contains(output, \"info with field formatted\") {\n\t\tt.Errorf(\"Expected formatted message in output: %s\", output)\n\t}\n\tif !strings.Contains(output, \"info with field direct\") {\n\t\tt.Errorf(\"Expected direct message in output: %s\", output)\n\t}\n\n\t// Verify field appears in output\n\tif !strings.Contains(output, \"module\") || !strings.Contains(output, \"test\") {\n\t\tt.Errorf(\"Expected field 'module=test' in output: %s\", output)\n\t}\n\n\tt.Log(\"LogEntry methods work correctly\")\n}\n\n// TestLogEntryWithFields verifies WithFields works correctly\nfunc TestLogEntryWithFields(t *testing.T) {\n\tlog := logger.CreateDefaultLogger()\n\n\tvar buf bytes.Buffer\n\tlog.SetOutput(&buf)\n\t_ = log.SetLogLevel(\"info\")\n\n\t// Get entry with multiple fields\n\tentry := log.WithFields(map[string]any{\n\t\t\"requestId\": \"123-456\",\n\t\t\"userId\":    42,\n\t})\n\n\tentry.Info(\"processing request\")\n\n\toutput := buf.String()\n\n\t// Verify message appears\n\tif !strings.Contains(output, \"processing request\") {\n\t\tt.Errorf(\"Expected message in output: %s\", output)\n\t}\n\n\t// Verify both fields appear\n\tif !strings.Contains(output, \"requestId\") {\n\t\tt.Errorf(\"Expected 'requestId' field in output: %s\", output)\n\t}\n\tif !strings.Contains(output, \"123-456\") {\n\t\tt.Errorf(\"Expected '123-456' value in output: %s\", output)\n\t}\n\tif !strings.Contains(output, \"userId\") {\n\t\tt.Errorf(\"Expected 'userId' field in output: %s\", output)\n\t}\n\n\tt.Log(\"WithFields works correctly\")\n}\n\n// TestSetOutput verifies output redirection works correctly\nfunc TestSetOutput(t *testing.T) {\n\tlog := 
logger.CreateDefaultLogger()\n\n\t// Test with first buffer\n\tvar buf1 bytes.Buffer\n\tlog.SetOutput(&buf1)\n\t_ = log.SetLogLevel(\"info\")\n\n\tlog.Info(\"message to buffer 1\")\n\n\tif !strings.Contains(buf1.String(), \"message to buffer 1\") {\n\t\tt.Errorf(\"Expected message in buffer 1: %s\", buf1.String())\n\t}\n\n\t// Switch to second buffer\n\tvar buf2 bytes.Buffer\n\tlog.SetOutput(&buf2)\n\n\tlog.Info(\"message to buffer 2\")\n\n\t// Should appear only in buf2\n\tif !strings.Contains(buf2.String(), \"message to buffer 2\") {\n\t\tt.Errorf(\"Expected message in buffer 2: %s\", buf2.String())\n\t}\n\n\t// Should NOT appear in buf1\n\tif strings.Contains(buf1.String(), \"message to buffer 2\") {\n\t\tt.Errorf(\"Message should not appear in buffer 1: %s\", buf1.String())\n\t}\n\n\tt.Log(\"SetOutput correctly redirects log output\")\n}\n\n// TestLogEntryWithContext verifies WithContext works correctly\nfunc TestLogEntryWithContext(t *testing.T) {\n\tlog := logger.CreateDefaultLogger()\n\n\tvar buf bytes.Buffer\n\tlog.SetOutput(&buf)\n\t_ = log.SetLogLevel(\"info\")\n\n\t// Create type to avoid collisions\n\ttype contextKey string\n\n\t// Create context with values\n\tctx := context.WithValue(context.Background(), contextKey(\"traceId\"), \"trace-123\")\n\n\t// Get entry with context\n\tentry := log.WithContext(ctx)\n\n\tentry.Info(\"message with context\")\n\n\toutput := buf.String()\n\n\t// Verify message appears\n\tif !strings.Contains(output, \"message with context\") {\n\t\tt.Errorf(\"Expected message in output: %s\", output)\n\t}\n}\n"
  },
  {
    "path": "internal/logger/context.go",
    "content": "package logger\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"log/slog\"\n\t\"maps\"\n\t\"sync\"\n)\n\n// Storage for log keys and hooks (single source of truth)\nvar (\n\tcontextConfigMu       sync.RWMutex\n\tlogKeys               []any\n\tclientLogContextHooks map[string]ClientLogContextHook\n)\n\n// SetLogKeys sets the context keys to be extracted from context\n// This function is thread-safe and can be called at runtime.\nfunc SetLogKeys(keys []any) {\n\tcontextConfigMu.Lock()\n\tdefer contextConfigMu.Unlock()\n\n\tlogKeys = make([]any, len(keys))\n\tcopy(logKeys, keys)\n}\n\n// GetLogKeys returns a copy of the current log keys\nfunc GetLogKeys() []any {\n\tcontextConfigMu.RLock()\n\tdefer contextConfigMu.RUnlock()\n\n\tkeysCopy := make([]any, len(logKeys))\n\tcopy(keysCopy, logKeys)\n\treturn keysCopy\n}\n\n// RegisterLogContextHook registers a hook for extracting context fields\n// This function is thread-safe and can be called at runtime.\nfunc RegisterLogContextHook(key string, hook ClientLogContextHook) {\n\tcontextConfigMu.Lock()\n\tdefer contextConfigMu.Unlock()\n\n\tif clientLogContextHooks == nil {\n\t\tclientLogContextHooks = make(map[string]ClientLogContextHook)\n\t}\n\tclientLogContextHooks[key] = hook\n}\n\n// GetClientLogContextHooks returns a copy of registered hooks\nfunc GetClientLogContextHooks() map[string]ClientLogContextHook {\n\tcontextConfigMu.RLock()\n\tdefer contextConfigMu.RUnlock()\n\n\thooksCopy := make(map[string]ClientLogContextHook, len(clientLogContextHooks))\n\tmaps.Copy(hooksCopy, clientLogContextHooks)\n\treturn hooksCopy\n}\n\n// extractContextFields extracts log fields from context using LogKeys and ClientLogContextHooks\nfunc extractContextFields(ctx context.Context) []slog.Attr {\n\tif ctx == nil {\n\t\treturn nil\n\t}\n\n\tcontextConfigMu.RLock()\n\tdefer contextConfigMu.RUnlock()\n\n\tattrs := make([]slog.Attr, 0)\n\n\t// Built-in LogKeys\n\tfor _, key := range logKeys {\n\t\tif val := ctx.Value(key); val != 
nil {\n\t\t\tkeyStr := fmt.Sprint(key)\n\n\t\t\tif strVal, ok := val.(string); ok {\n\t\t\t\tattrs = append(attrs, slog.String(keyStr, MaskSecrets(strVal)))\n\t\t\t} else {\n\t\t\t\tmasked := MaskSecrets(fmt.Sprint(val))\n\t\t\t\tattrs = append(attrs, slog.String(keyStr, masked))\n\t\t\t}\n\t\t}\n\t}\n\n\t// Custom hooks\n\tfor key, hook := range clientLogContextHooks {\n\t\tif val := hook(ctx); val != \"\" {\n\t\t\tattrs = append(attrs, slog.String(key, MaskSecrets(val)))\n\t\t}\n\t}\n\n\treturn attrs\n}\n"
  },
  {
    "path": "internal/logger/easy_logging_support.go",
    "content": "package logger\n\nimport (\n\t\"fmt\"\n\t\"os\"\n)\n\n// CloseFileOnLoggerReplace closes a log file when the logger is replaced.\n// This is used by the easy logging feature to manage log file handles.\nfunc CloseFileOnLoggerReplace(sflog any, file *os.File) error {\n\t// Try to get the underlying default logger\n\tif ell, ok := unwrapToEasyLoggingLogger(sflog); ok {\n\t\treturn ell.CloseFileOnLoggerReplace(file)\n\t}\n\treturn fmt.Errorf(\"logger does not support closeFileOnLoggerReplace\")\n}\n\n// IsEasyLoggingLogger checks if the given logger is based on the default logger implementation.\n// This is used by easy logging to determine if reconfiguration is allowed.\nfunc IsEasyLoggingLogger(sflog any) bool {\n\t_, ok := unwrapToEasyLoggingLogger(sflog)\n\treturn ok\n}\n\n// unwrapToEasyLoggingLogger unwraps a logger to get to the underlying default logger if present\nfunc unwrapToEasyLoggingLogger(sflog any) (EasyLoggingSupport, bool) {\n\tcurrent := sflog\n\n\t// Special case: if this is a Proxy, get the actual global logger\n\tif _, isProxy := current.(*Proxy); isProxy {\n\t\tcurrent = GetLogger()\n\t}\n\n\t// Unwrap all layers\n\tfor {\n\t\tif u, ok := current.(Unwrapper); ok {\n\t\t\tcurrent = u.Unwrap()\n\t\t\tcontinue\n\t\t}\n\t\tbreak\n\t}\n\n\t// Check if it's a default logger by checking if it has EasyLoggingSupport\n\tif ell, ok := current.(EasyLoggingSupport); ok {\n\t\treturn ell, true\n\t}\n\n\treturn nil, false\n}\n"
  },
  {
    "path": "internal/logger/interfaces.go",
    "content": "package logger\n\nimport (\n\t\"github.com/snowflakedb/gosnowflake/v2/sflog\"\n)\n\n// Re-export types from sflog package to avoid circular dependencies\n// while maintaining a clean internal API\ntype (\n\t// LogEntry reexports the LogEntry interface from sflog package.\n\tLogEntry = sflog.LogEntry\n\t// SFLogger reexports the SFLogger interface from sflog package.\n\tSFLogger = sflog.SFLogger\n\t// ClientLogContextHook reexports the ClientLogContextHook type from sflog package.\n\tClientLogContextHook = sflog.ClientLogContextHook\n)\n"
  },
  {
    "path": "internal/logger/level_filtering.go",
    "content": "package logger\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"github.com/snowflakedb/gosnowflake/v2/sflog\"\n\t\"io\"\n\t\"log/slog\"\n)\n\n// levelFilteringLogger wraps any logger and filters log messages based on log level.\n// This prevents expensive operations (like secret masking and formatting) from running\n// when the message wouldn't be logged anyway.\ntype levelFilteringLogger struct {\n\tinner SFLogger\n}\n\n// Compile-time verification that levelFilteringLogger implements SFLogger\nvar _ SFLogger = (*levelFilteringLogger)(nil)\n\n// Unwrap returns the inner logger (for introspection by easy_logging)\nfunc (l *levelFilteringLogger) Unwrap() any {\n\treturn l.inner\n}\n\n// shouldLog determines if a message at messageLevel should be logged\n// given the current configured level\nfunc (l *levelFilteringLogger) shouldLog(messageLevel sflog.Level) bool {\n\treturn messageLevel >= l.inner.GetLogLevelInt()\n}\n\n// newLevelFilteringLogger creates a new level filtering wrapper around the provided logger\nfunc newLevelFilteringLogger(inner SFLogger) SFLogger {\n\tif inner == nil {\n\t\tpanic(\"inner logger cannot be nil\")\n\t}\n\treturn &levelFilteringLogger{inner: inner}\n}\n\n// Implement all formatted logging methods (*f variants)\nfunc (l *levelFilteringLogger) Tracef(format string, args ...any) {\n\tif !l.shouldLog(sflog.LevelTrace) {\n\t\treturn\n\t}\n\tl.inner.Tracef(format, args...)\n}\n\nfunc (l *levelFilteringLogger) Debugf(format string, args ...any) {\n\tif !l.shouldLog(sflog.LevelDebug) {\n\t\treturn\n\t}\n\tl.inner.Debugf(format, args...)\n}\n\nfunc (l *levelFilteringLogger) Infof(format string, args ...any) {\n\tif !l.shouldLog(sflog.LevelInfo) {\n\t\treturn\n\t}\n\tl.inner.Infof(format, args...)\n}\n\nfunc (l *levelFilteringLogger) Warnf(format string, args ...any) {\n\tif !l.shouldLog(sflog.LevelWarn) {\n\t\treturn\n\t}\n\tl.inner.Warnf(format, args...)\n}\n\nfunc (l *levelFilteringLogger) Errorf(format string, args ...any) 
{\n\tif !l.shouldLog(sflog.LevelError) {\n\t\treturn\n\t}\n\tl.inner.Errorf(format, args...)\n}\n\nfunc (l *levelFilteringLogger) Fatalf(format string, args ...any) {\n\tl.inner.Fatalf(format, args...)\n}\n\n// Implement all direct logging methods\nfunc (l *levelFilteringLogger) Trace(msg string) {\n\tif !l.shouldLog(sflog.LevelTrace) {\n\t\treturn\n\t}\n\tl.inner.Trace(msg)\n}\n\nfunc (l *levelFilteringLogger) Debug(msg string) {\n\tif !l.shouldLog(sflog.LevelDebug) {\n\t\treturn\n\t}\n\tl.inner.Debug(msg)\n}\n\nfunc (l *levelFilteringLogger) Info(msg string) {\n\tif !l.shouldLog(sflog.LevelInfo) {\n\t\treturn\n\t}\n\tl.inner.Info(msg)\n}\n\nfunc (l *levelFilteringLogger) Warn(msg string) {\n\tif !l.shouldLog(sflog.LevelWarn) {\n\t\treturn\n\t}\n\tl.inner.Warn(msg)\n}\n\nfunc (l *levelFilteringLogger) Error(msg string) {\n\tif !l.shouldLog(sflog.LevelError) {\n\t\treturn\n\t}\n\tl.inner.Error(msg)\n}\n\nfunc (l *levelFilteringLogger) Fatal(msg string) {\n\tl.inner.Fatal(msg)\n}\n\n// Implement structured logging methods - these return wrapped entries\nfunc (l *levelFilteringLogger) WithField(key string, value any) sflog.LogEntry {\n\tinnerEntry := l.inner.WithField(key, value)\n\treturn &levelFilteringEntry{\n\t\tparent: l,\n\t\tinner:  innerEntry,\n\t}\n}\n\nfunc (l *levelFilteringLogger) WithFields(fields map[string]any) sflog.LogEntry {\n\tinnerEntry := l.inner.WithFields(fields)\n\treturn &levelFilteringEntry{\n\t\tparent: l,\n\t\tinner:  innerEntry,\n\t}\n}\n\nfunc (l *levelFilteringLogger) WithContext(ctx context.Context) sflog.LogEntry {\n\tinnerEntry := l.inner.WithContext(ctx)\n\treturn &levelFilteringEntry{\n\t\tparent: l,\n\t\tinner:  innerEntry,\n\t}\n}\n\n// Delegate configuration methods to inner logger\nfunc (l *levelFilteringLogger) SetLogLevel(level string) error {\n\treturn l.inner.SetLogLevel(level)\n}\n\nfunc (l *levelFilteringLogger) SetLogLevelInt(level sflog.Level) error {\n\treturn l.inner.SetLogLevelInt(level)\n}\n\nfunc (l 
*levelFilteringLogger) GetLogLevel() string {\n\treturn l.inner.GetLogLevel()\n}\n\nfunc (l *levelFilteringLogger) GetLogLevelInt() sflog.Level {\n\treturn l.inner.GetLogLevelInt()\n}\n\nfunc (l *levelFilteringLogger) SetOutput(output io.Writer) {\n\tl.inner.SetOutput(output)\n}\n\n// SetHandler implements SFSlogLogger interface for advanced slog handler configuration\nfunc (l *levelFilteringLogger) SetHandler(handler slog.Handler) error {\n\tif sh, ok := l.inner.(sflog.SFSlogLogger); ok {\n\t\treturn sh.SetHandler(handler)\n\t}\n\treturn errors.New(\"underlying logger does not support SetHandler\")\n}\n\n// levelFilteringEntry wraps a log entry and filters by level\ntype levelFilteringEntry struct {\n\tparent *levelFilteringLogger\n\tinner  sflog.LogEntry\n}\n\n// Implement all formatted logging methods for entry\nfunc (e *levelFilteringEntry) Tracef(format string, args ...any) {\n\tif !e.parent.shouldLog(sflog.LevelTrace) {\n\t\treturn\n\t}\n\te.inner.Tracef(format, args...)\n}\n\nfunc (e *levelFilteringEntry) Debugf(format string, args ...any) {\n\tif !e.parent.shouldLog(sflog.LevelDebug) {\n\t\treturn\n\t}\n\te.inner.Debugf(format, args...)\n}\n\nfunc (e *levelFilteringEntry) Infof(format string, args ...any) {\n\tif !e.parent.shouldLog(sflog.LevelInfo) {\n\t\treturn\n\t}\n\te.inner.Infof(format, args...)\n}\n\nfunc (e *levelFilteringEntry) Warnf(format string, args ...any) {\n\tif !e.parent.shouldLog(sflog.LevelWarn) {\n\t\treturn\n\t}\n\te.inner.Warnf(format, args...)\n}\n\nfunc (e *levelFilteringEntry) Errorf(format string, args ...any) {\n\tif !e.parent.shouldLog(sflog.LevelError) {\n\t\treturn\n\t}\n\te.inner.Errorf(format, args...)\n}\n\nfunc (e *levelFilteringEntry) Fatalf(format string, args ...any) {\n\te.inner.Fatalf(format, args...)\n}\n\n// Implement all direct logging methods for entry\nfunc (e *levelFilteringEntry) Trace(msg string) {\n\tif !e.parent.shouldLog(sflog.LevelTrace) {\n\t\treturn\n\t}\n\te.inner.Trace(msg)\n}\n\nfunc (e 
*levelFilteringEntry) Debug(msg string) {\n\tif !e.parent.shouldLog(sflog.LevelDebug) {\n\t\treturn\n\t}\n\te.inner.Debug(msg)\n}\n\nfunc (e *levelFilteringEntry) Info(msg string) {\n\tif !e.parent.shouldLog(sflog.LevelInfo) {\n\t\treturn\n\t}\n\te.inner.Info(msg)\n}\n\nfunc (e *levelFilteringEntry) Warn(msg string) {\n\tif !e.parent.shouldLog(sflog.LevelWarn) {\n\t\treturn\n\t}\n\te.inner.Warn(msg)\n}\n\nfunc (e *levelFilteringEntry) Error(msg string) {\n\tif !e.parent.shouldLog(sflog.LevelError) {\n\t\treturn\n\t}\n\te.inner.Error(msg)\n}\n\nfunc (e *levelFilteringEntry) Fatal(msg string) {\n\te.inner.Fatal(msg)\n}\n"
  },
  {
    "path": "internal/logger/optional_interfaces.go",
    "content": "package logger\n\nimport \"os\"\n\n// EasyLoggingSupport is an optional interface for loggers that support easy_logging.go\n// functionality. This is used for file-based logging configuration.\ntype EasyLoggingSupport interface {\n\t// CloseFileOnLoggerReplace closes the logger's file handle when logger is replaced\n\tCloseFileOnLoggerReplace(file *os.File) error\n}\n\n// Unwrapper is a common interface for unwrapping wrapped loggers\ntype Unwrapper interface {\n\tUnwrap() any\n}\n"
  },
  {
    "path": "internal/logger/proxy.go",
    "content": "package logger\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"log/slog\"\n\n\t\"github.com/snowflakedb/gosnowflake/v2/sflog\"\n)\n\n// Proxy is a proxy that delegates all calls to the global logger.\n// This ensures a single source of truth for the current logger.\ntype Proxy struct{}\n\n// Compile-time verification that Proxy implements SFLogger\nvar _ sflog.SFLogger = (*Proxy)(nil)\n\n// Tracef implements the Tracef method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) Tracef(format string, args ...any) {\n\tGetLogger().Tracef(format, args...)\n}\n\n// Debugf implements the Debugf method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) Debugf(format string, args ...any) {\n\tGetLogger().Debugf(format, args...)\n}\n\n// Infof implements the Infof method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) Infof(format string, args ...any) {\n\tGetLogger().Infof(format, args...)\n}\n\n// Warnf implements the Warnf method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) Warnf(format string, args ...any) {\n\tGetLogger().Warnf(format, args...)\n}\n\n// Errorf implements the Errorf method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) Errorf(format string, args ...any) {\n\tGetLogger().Errorf(format, args...)\n}\n\n// Fatalf implements the Fatalf method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) Fatalf(format string, args ...any) {\n\tGetLogger().Fatalf(format, args...)\n}\n\n// Trace implements the Trace method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) Trace(msg string) {\n\tGetLogger().Trace(msg)\n}\n\n// Debug implements the Debug method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) Debug(msg string) {\n\tGetLogger().Debug(msg)\n}\n\n// Info implements the Info method of the SFLogger 
interface by delegating to the global logger.\nfunc (p *Proxy) Info(msg string) {\n\tGetLogger().Info(msg)\n}\n\n// Warn implements the Warn method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) Warn(msg string) {\n\tGetLogger().Warn(msg)\n}\n\n// Error implements the Error method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) Error(msg string) {\n\tGetLogger().Error(msg)\n}\n\n// Fatal implements the Fatal method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) Fatal(msg string) {\n\tGetLogger().Fatal(msg)\n}\n\n// WithField implements the WithField method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) WithField(key string, value any) sflog.LogEntry {\n\treturn GetLogger().WithField(key, value)\n}\n\n// WithFields implements the WithFields method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) WithFields(fields map[string]any) sflog.LogEntry {\n\treturn GetLogger().WithFields(fields)\n}\n\n// WithContext implements the WithContext method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) WithContext(ctx context.Context) sflog.LogEntry {\n\treturn GetLogger().WithContext(ctx)\n}\n\n// SetLogLevel implements the SetLogLevel method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) SetLogLevel(level string) error {\n\treturn GetLogger().SetLogLevel(level)\n}\n\n// SetLogLevelInt implements the SetLogLevelInt method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) SetLogLevelInt(level sflog.Level) error {\n\treturn GetLogger().SetLogLevelInt(level)\n}\n\n// GetLogLevel implements the GetLogLevel method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) GetLogLevel() string {\n\treturn GetLogger().GetLogLevel()\n}\n\n// GetLogLevelInt implements the GetLogLevelInt method of the SFLogger interface by 
delegating to the global logger.\nfunc (p *Proxy) GetLogLevelInt() sflog.Level {\n\treturn GetLogger().GetLogLevelInt()\n}\n\n// SetOutput implements the SetOutput method of the SFLogger interface by delegating to the global logger.\nfunc (p *Proxy) SetOutput(output io.Writer) {\n\tGetLogger().SetOutput(output)\n}\n\n// SetHandler implements SFSlogLogger interface for advanced slog handler configuration.\n// This delegates to the underlying logger if it supports SetHandler.\nfunc (p *Proxy) SetHandler(handler slog.Handler) error {\n\tlogger := GetLogger()\n\n\tif sl, ok := logger.(sflog.SFSlogLogger); ok {\n\t\treturn sl.SetHandler(handler)\n\t}\n\n\treturn fmt.Errorf(\"underlying logger does not support SetHandler\")\n}\n\n// NewLoggerProxy creates a new logger proxy that delegates all calls\n// to the global logger managed by the internal package.\nfunc NewLoggerProxy() sflog.SFLogger {\n\treturn &Proxy{}\n}\n"
  },
  {
    "path": "internal/logger/secret_detector.go",
    "content": "package logger\n\nimport (\n\t\"regexp\"\n)\n\nconst (\n\tawsKeyPattern          = `(?i)(aws_key_id|aws_secret_key|access_key_id|secret_access_key)\\s*=\\s*'([^']+)'`\n\tawsTokenPattern        = `(?i)(accessToken|tempToken|keySecret)\"\\s*:\\s*\"([a-z0-9/+]{32,}={0,2})\"`\n\tsasTokenPattern        = `(?i)(sig|signature|AWSAccessKeyId|password|passcode)=(?P<secret>[a-z0-9%/+]{16,})`\n\tprivateKeyPattern      = `(?im)-----BEGIN PRIVATE KEY-----\\\\n([a-z0-9/+=\\\\n]{32,})\\\\n-----END PRIVATE KEY-----` // pragma: allowlist secret\n\tprivateKeyDataPattern  = `(?i)\"privateKeyData\": \"([a-z0-9/+=\\\\n]{10,})\"`\n\tprivateKeyParamPattern = `(?i)privateKey=([A-Za-z0-9/+=_%-]+)(&|$|\\s)`\n\tconnectionTokenPattern = `(?i)(token|assertion content)([\\'\\\"\\s:=]+)([a-z0-9=/_\\-\\+]{8,})`\n\tpasswordPattern        = `(?i)(password|pwd)([\\'\\\"\\s:=]+)([a-z0-9!\\\"#\\$%&\\\\\\'\\(\\)\\*\\+\\,-\\./:;<=>\\?\\@\\[\\]\\^_\\{\\|\\}~]{8,})`\n\tdsnPasswordPattern     = `([^/:]+):([^@/:]{3,})@` // Matches user:password@host format in DSN strings\n\tclientSecretPattern    = `(?i)(clientSecret)([\\'\\\"\\s:= ]+)([a-z0-9!\\\"#\\$%&\\\\\\'\\(\\)\\*\\+\\,-\\./:;<=>\\?\\@\\[\\]\\^_\\{\\|\\}~]+)`\n\tjwtTokenPattern        = `(?i)(jwt|bearer)[\\s:=]*([a-zA-Z0-9_-]+\\.[a-zA-Z0-9_-]+\\.[a-zA-Z0-9_-]+)` // pragma: allowlist secret\n)\n\ntype patternAndReplace struct {\n\tregex       *regexp.Regexp\n\treplacement string\n}\n\nvar secretDetectorPatterns = []patternAndReplace{\n\t{regexp.MustCompile(awsKeyPattern), \"$1=****$2\"},\n\t{regexp.MustCompile(awsTokenPattern), \"${1}XXXX$2\"},\n\t{regexp.MustCompile(sasTokenPattern), \"${1}****$2\"},\n\t{regexp.MustCompile(privateKeyPattern), \"-----BEGIN PRIVATE KEY-----\\\\\\\\\\\\\\\\nXXXX\\\\\\\\\\\\\\\\n-----END PRIVATE KEY-----\"}, // pragma: allowlist secret\n\t{regexp.MustCompile(privateKeyDataPattern), `\"privateKeyData\": \"XXXX\"`},\n\t{regexp.MustCompile(privateKeyParamPattern), 
\"privateKey=****$2\"},\n\t{regexp.MustCompile(connectionTokenPattern), \"$1${2}****\"},\n\t{regexp.MustCompile(passwordPattern), \"$1${2}****\"},\n\t{regexp.MustCompile(dsnPasswordPattern), \"$1:****@\"},\n\t{regexp.MustCompile(clientSecretPattern), \"$1${2}****\"},\n\t{regexp.MustCompile(jwtTokenPattern), \"$1 ****\"},\n}\n\n// MaskSecrets masks secrets in text (exported for use by main package and secret masking logger)\nfunc MaskSecrets(text string) (masked string) {\n\tres := text\n\tfor _, pattern := range secretDetectorPatterns {\n\t\tres = pattern.regex.ReplaceAllString(res, pattern.replacement)\n\t}\n\treturn res\n}\n"
  },
  {
    "path": "internal/logger/secret_detector_test.go",
    "content": "package logger\n\nimport (\n\t\"fmt\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/golang-jwt/jwt/v5\"\n)\n\nconst (\n\tlongToken = \"_Y1ZNETTn5/qfUWj3Jedby7gipDzQs=UKyJH9DS=nFzzWnfZKGV+C7GopWC\" + // pragma: allowlist secret\n\t\t\"GD4LjOLLFZKOE26LXHDt3pTi4iI1qwKuSpf/FmClCMBSissVsU3Ei590FP0lPQQhcSG\" + // pragma: allowlist secret\n\t\t\"cDu69ZL_1X6e9h5z62t/iY7ZkII28n2qU=nrBJUgPRCIbtJQkVJXIuOHjX4G5yUEKjZ\" + // pragma: allowlist secret\n\t\t\"BAx4w6=_lqtt67bIA=o7D=oUSjfywsRFoloNIkBPXCwFTv+1RVUHgVA2g8A9Lw5XdJY\" + // pragma: allowlist secret\n\t\t\"uI8vhg=f0bKSq7AhQ2Bh\"\n\trandomPassword     = `Fh[+2J~AcqeqW%?`\n\tfalsePositiveToken = \"2020-04-30 23:06:04,069 - MainThread auth.py:397\" +\n\t\t\" - write_temporary_credential() - DEBUG - no ID token is given when \" +\n\t\t\"try to store temporary credential\"\n)\n\n// generateTestJWT creates a test JWT token for masking tests using the JWT library\nfunc generateTestJWT(t *testing.T) string {\n\t// Create claims for the test JWT\n\tclaims := jwt.MapClaims{\n\t\t\"sub\":  \"test123\",\n\t\t\"name\": \"Test User\",\n\t\t\"exp\":  time.Now().Add(time.Hour).Unix(),\n\t\t\"iat\":  time.Now().Unix(),\n\t}\n\n\t// Create the token with HS256 signing method\n\ttoken := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)\n\n\t// Sign the token with a test secret\n\ttestSecret := []byte(\"test-secret-for-masking-validation\")\n\ttokenString, err := token.SignedString(testSecret)\n\tif err != nil {\n\t\t// Abort the test if signing fails\n\t\tt.Fatalf(\"Failed to generate test JWT: %s\", err)\n\t}\n\n\treturn tokenString\n}\n\nfunc TestSecretsDetector(t *testing.T) {\n\ttestCases := []struct {\n\t\tname     string\n\t\tinput    string\n\t\texpected string\n\t}{\n\t\t// Token masking tests\n\t\t{\"Token with equals\", fmt.Sprintf(\"Token =%s\", longToken), \"Token =****\"},\n\t\t{\"idToken with colon space\", fmt.Sprintf(\"idToken : %s\", longToken), \"idToken : ****\"},\n\t\t{\"sessionToken with 
colon space\", fmt.Sprintf(\"sessionToken : %s\", longToken), \"sessionToken : ****\"},\n\t\t{\"masterToken with colon space\", fmt.Sprintf(\"masterToken : %s\", longToken), \"masterToken : ****\"},\n\t\t{\"accessToken with colon space\", fmt.Sprintf(\"accessToken : %s\", longToken), \"accessToken : ****\"},\n\t\t{\"refreshToken with colon space\", fmt.Sprintf(\"refreshToken : %s\", longToken), \"refreshToken : ****\"},\n\t\t{\"programmaticAccessToken with colon space\", fmt.Sprintf(\"programmaticAccessToken : %s\", longToken), \"programmaticAccessToken : ****\"},\n\t\t{\"programmatic_access_token with colon space\", fmt.Sprintf(\"programmatic_access_token : %s\", longToken), \"programmatic_access_token : ****\"},\n\t\t{\"JWT - with Bearer prefix\", fmt.Sprintf(\"Bearer %s\", generateTestJWT(t)), \"Bearer ****\"},\n\t\t{\"JWT - with JWT prefix\", fmt.Sprintf(\"JWT %s\", generateTestJWT(t)), \"JWT ****\"},\n\n\t\t// Password masking tests\n\t\t{\"password with colon\", fmt.Sprintf(\"password:%s\", randomPassword), \"password:****\"},\n\t\t{\"PASSWORD uppercase with colon\", fmt.Sprintf(\"PASSWORD:%s\", randomPassword), \"PASSWORD:****\"},\n\t\t{\"PaSsWoRd mixed case with colon\", fmt.Sprintf(\"PaSsWoRd:%s\", randomPassword), \"PaSsWoRd:****\"},\n\t\t{\"password with equals and spaces\", fmt.Sprintf(\"password = %s\", randomPassword), \"password = ****\"},\n\t\t{\"pwd with colon\", fmt.Sprintf(\"pwd:%s\", randomPassword), \"pwd:****\"},\n\n\t\t// Mixed token and password tests\n\t\t{\n\t\t\t\"token and password mixed\",\n\t\t\tfmt.Sprintf(\"token=%s foo bar baz password:%s\", longToken, randomPassword),\n\t\t\t\"token=**** foo bar baz password:****\",\n\t\t},\n\t\t{\n\t\t\t\"PWD and TOKEN mixed\",\n\t\t\tfmt.Sprintf(\"PWD = %s blah blah blah TOKEN:%s\", randomPassword, longToken),\n\t\t\t\"PWD = **** blah blah blah TOKEN:****\",\n\t\t},\n\n\t\t// Client secret tests\n\t\t{\"clientSecret with values\", \"clientSecret abc oauthClientSECRET=def\", \"clientSecret **** 
oauthClientSECRET=****\"},\n\n\t\t// False positive test\n\t\t{\"false positive should not be masked\", falsePositiveToken, falsePositiveToken},\n\t}\n\n\tfor _, tc := range testCases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tresult := MaskSecrets(tc.input)\n\t\t\tif result != tc.expected {\n\t\t\t\tt.Errorf(\"expected %q to be equal to %q but was not\", result, tc.expected)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "internal/logger/secret_masking.go",
    "content": "package logger\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"github.com/snowflakedb/gosnowflake/v2/sflog\"\n\t\"io\"\n\t\"log/slog\"\n)\n\n// secretMaskingLogger wraps any logger implementation and ensures\n// all log messages have secrets masked before being passed to the inner logger.\ntype secretMaskingLogger struct {\n\tinner SFLogger\n}\n\n// Compile-time verification that secretMaskingLogger implements SFLogger\nvar _ SFLogger = (*secretMaskingLogger)(nil)\n\n// Unwrap returns the inner logger (for introspection by easy_logging)\nfunc (l *secretMaskingLogger) Unwrap() any {\n\treturn l.inner\n}\n\n// newSecretMaskingLogger creates a new secret masking wrapper around the provided logger.\nfunc newSecretMaskingLogger(inner SFLogger) *secretMaskingLogger {\n\tif inner == nil {\n\t\tpanic(\"inner logger cannot be nil\")\n\t}\n\n\treturn &secretMaskingLogger{inner: inner}\n}\n\n// Helper methods for masking\nfunc (l *secretMaskingLogger) maskValue(value any) any {\n\tif str, ok := value.(string); ok {\n\t\treturn l.maskString(str)\n\t}\n\t// For other types, convert to string, mask, but return original type if no secrets\n\tstrVal := fmt.Sprint(value)\n\tmasked := l.maskString(strVal)\n\tif masked != strVal {\n\t\treturn masked // Secrets found and masked\n\t}\n\treturn value // No secrets, return original\n}\n\nfunc (l *secretMaskingLogger) maskString(value string) string {\n\treturn MaskSecrets(value)\n}\n\n// Implement all formatted logging methods (*f variants)\nfunc (l *secretMaskingLogger) Tracef(format string, args ...any) {\n\tmessage := fmt.Sprintf(format, args...)\n\tmaskedMessage := l.maskString(message)\n\tl.inner.Trace(maskedMessage)\n}\n\nfunc (l *secretMaskingLogger) Debugf(format string, args ...any) {\n\tmessage := fmt.Sprintf(format, args...)\n\tmaskedMessage := l.maskString(message)\n\tl.inner.Debug(maskedMessage)\n}\n\nfunc (l *secretMaskingLogger) Infof(format string, args ...any) {\n\tmessage := fmt.Sprintf(format, 
args...)\n\tmaskedMessage := l.maskString(message)\n\tl.inner.Info(maskedMessage)\n}\n\nfunc (l *secretMaskingLogger) Warnf(format string, args ...any) {\n\tmessage := fmt.Sprintf(format, args...)\n\tmaskedMessage := l.maskString(message)\n\tl.inner.Warn(maskedMessage)\n}\n\nfunc (l *secretMaskingLogger) Errorf(format string, args ...any) {\n\tmessage := fmt.Sprintf(format, args...)\n\tmaskedMessage := l.maskString(message)\n\tl.inner.Error(maskedMessage)\n}\n\nfunc (l *secretMaskingLogger) Fatalf(format string, args ...any) {\n\tmessage := fmt.Sprintf(format, args...)\n\tmaskedMessage := l.maskString(message)\n\tl.inner.Fatal(maskedMessage)\n}\n\n// Implement all direct logging methods\nfunc (l *secretMaskingLogger) Trace(msg string) {\n\tl.inner.Trace(l.maskString(msg))\n}\n\nfunc (l *secretMaskingLogger) Debug(msg string) {\n\tl.inner.Debug(l.maskString(msg))\n}\n\nfunc (l *secretMaskingLogger) Info(msg string) {\n\tl.inner.Info(l.maskString(msg))\n}\n\nfunc (l *secretMaskingLogger) Warn(msg string) {\n\tl.inner.Warn(l.maskString(msg))\n}\n\nfunc (l *secretMaskingLogger) Error(msg string) {\n\tl.inner.Error(l.maskString(msg))\n}\n\nfunc (l *secretMaskingLogger) Fatal(msg string) {\n\tl.inner.Fatal(l.maskString(msg))\n}\n\n// Implement structured logging methods\n// Note: These return LogEntry to maintain compatibility with the adapter layer\nfunc (l *secretMaskingLogger) WithField(key string, value any) LogEntry {\n\tmaskedValue := l.maskValue(value)\n\tresult := l.inner.WithField(key, maskedValue)\n\treturn &secretMaskingEntry{\n\t\tinner:  result,\n\t\tparent: l,\n\t}\n}\n\nfunc (l *secretMaskingLogger) WithFields(fields map[string]any) LogEntry {\n\tmaskedFields := make(map[string]any, len(fields))\n\tfor k, v := range fields {\n\t\tmaskedFields[k] = l.maskValue(v)\n\t}\n\tresult := l.inner.WithFields(maskedFields)\n\treturn &secretMaskingEntry{\n\t\tinner:  result,\n\t\tparent: l,\n\t}\n}\n\nfunc (l *secretMaskingLogger) WithContext(ctx context.Context) 
LogEntry {\n\tresult := l.inner.WithContext(ctx)\n\treturn &secretMaskingEntry{\n\t\tinner:  result,\n\t\tparent: l,\n\t}\n}\n\n// Delegate configuration methods\nfunc (l *secretMaskingLogger) SetLogLevel(level string) error {\n\treturn l.inner.SetLogLevel(level)\n}\n\nfunc (l *secretMaskingLogger) SetLogLevelInt(level sflog.Level) error {\n\treturn l.inner.SetLogLevelInt(level)\n}\n\nfunc (l *secretMaskingLogger) GetLogLevel() string {\n\treturn l.inner.GetLogLevel()\n}\n\nfunc (l *secretMaskingLogger) GetLogLevelInt() sflog.Level {\n\treturn l.inner.GetLogLevelInt()\n}\n\nfunc (l *secretMaskingLogger) SetOutput(output io.Writer) {\n\tl.inner.SetOutput(output)\n}\n\n// SetHandler delegates to inner logger's SetHandler (for slog handler configuration)\nfunc (l *secretMaskingLogger) SetHandler(handler slog.Handler) error {\n\tif logger, ok := l.inner.(sflog.SFSlogLogger); ok {\n\t\treturn logger.SetHandler(handler)\n\t}\n\treturn fmt.Errorf(\"inner logger does not support SetHandler\")\n}\n\n// secretMaskingEntry wraps a log entry and masks all secrets.\ntype secretMaskingEntry struct {\n\tinner  LogEntry\n\tparent *secretMaskingLogger\n}\n\n// Compile-time verification that secretMaskingEntry implements LogEntry\nvar _ LogEntry = (*secretMaskingEntry)(nil)\n\n// Implement all formatted logging methods (*f variants)\nfunc (e *secretMaskingEntry) Tracef(format string, args ...any) {\n\tmessage := fmt.Sprintf(format, args...)\n\tmaskedMessage := MaskSecrets(message)\n\te.inner.Trace(maskedMessage)\n}\n\nfunc (e *secretMaskingEntry) Debugf(format string, args ...any) {\n\tmessage := fmt.Sprintf(format, args...)\n\tmaskedMessage := MaskSecrets(message)\n\te.inner.Debug(maskedMessage)\n}\n\nfunc (e *secretMaskingEntry) Infof(format string, args ...any) {\n\tmessage := fmt.Sprintf(format, args...)\n\tmaskedMessage := MaskSecrets(message)\n\te.inner.Info(maskedMessage)\n}\n\nfunc (e *secretMaskingEntry) Warnf(format string, args ...any) {\n\tmessage := fmt.Sprintf(format, 
args...)\n\tmaskedMessage := MaskSecrets(message)\n\te.inner.Warn(maskedMessage)\n}\n\nfunc (e *secretMaskingEntry) Errorf(format string, args ...any) {\n\tmessage := fmt.Sprintf(format, args...)\n\tmaskedMessage := MaskSecrets(message)\n\te.inner.Error(maskedMessage)\n}\n\nfunc (e *secretMaskingEntry) Fatalf(format string, args ...any) {\n\tmessage := fmt.Sprintf(format, args...)\n\tmaskedMessage := MaskSecrets(message)\n\te.inner.Fatal(maskedMessage)\n}\n\n// Implement all direct logging methods\nfunc (e *secretMaskingEntry) Trace(msg string) {\n\te.inner.Trace(e.parent.maskString(msg))\n}\n\nfunc (e *secretMaskingEntry) Debug(msg string) {\n\te.inner.Debug(e.parent.maskString(msg))\n}\n\nfunc (e *secretMaskingEntry) Info(msg string) {\n\te.inner.Info(e.parent.maskString(msg))\n}\n\nfunc (e *secretMaskingEntry) Warn(msg string) {\n\te.inner.Warn(e.parent.maskString(msg))\n}\n\nfunc (e *secretMaskingEntry) Error(msg string) {\n\te.inner.Error(e.parent.maskString(msg))\n}\n\nfunc (e *secretMaskingEntry) Fatal(msg string) {\n\te.inner.Fatal(e.parent.maskString(msg))\n}\n"
  },
  {
    "path": "internal/logger/secret_masking_test.go",
    "content": "package logger\n\nimport (\n\t\"context\"\n\t\"github.com/snowflakedb/gosnowflake/v2/sflog\"\n\t\"io\"\n\t\"testing\"\n)\n\n// mockLogger is a simple logger implementation for testing\ntype mockLogger struct {\n\tlastMessage string\n}\n\nfunc (m *mockLogger) Tracef(format string, args ...any) {}\nfunc (m *mockLogger) Debugf(format string, args ...any) {}\nfunc (m *mockLogger) Infof(format string, args ...any)  {}\nfunc (m *mockLogger) Warnf(format string, args ...any)  {}\nfunc (m *mockLogger) Errorf(format string, args ...any) {}\nfunc (m *mockLogger) Fatalf(format string, args ...any) {}\n\nfunc (m *mockLogger) Trace(msg string) {}\nfunc (m *mockLogger) Debug(msg string) {}\nfunc (m *mockLogger) Info(msg string)  { m.lastMessage = msg }\nfunc (m *mockLogger) Warn(msg string)  {}\nfunc (m *mockLogger) Error(msg string) {}\nfunc (m *mockLogger) Fatal(msg string) {}\n\nfunc (m *mockLogger) WithField(key string, value any) LogEntry  { return m }\nfunc (m *mockLogger) WithFields(fields map[string]any) LogEntry { return m }\nfunc (m *mockLogger) WithContext(ctx context.Context) LogEntry  { return m }\nfunc (m *mockLogger) SetLogLevel(level string) error            { return nil }\nfunc (m *mockLogger) SetLogLevelInt(level sflog.Level) error    { return nil }\nfunc (m *mockLogger) GetLogLevel() string                       { return \"info\" }\nfunc (m *mockLogger) GetLogLevelInt() sflog.Level               { return sflog.LevelInfo }\nfunc (m *mockLogger) SetOutput(output io.Writer)                {}\n\n// Compile-time verification that mockLogger implements SFLogger\nvar _ SFLogger = (*mockLogger)(nil)\n\nfunc TestSecretMaskingLogger(t *testing.T) {\n\tmock := &mockLogger{}\n\tlogger := newSecretMaskingLogger(mock)\n\n\t// Use a real password pattern that will be masked\n\tlogger.Infof(\"test message with %s\", \"password:secret123\")\n\n\t// The secret masking logger formats the message, masks it, then passes the masked string to the inner logger\n\tif mock.lastMessage != 
\"test message with password:****\" {\n\t\tt.Errorf(\"Expected masked message 'test message with password:****', got %q\", mock.lastMessage)\n\t}\n}\n"
  },
  {
    "path": "internal/logger/slog_handler.go",
    "content": "package logger\n\nimport (\n\t\"context\"\n\t\"github.com/snowflakedb/gosnowflake/v2/sflog\"\n\t\"log/slog\"\n)\n\n// snowflakeHandler wraps slog.Handler and adds context field extraction\ntype snowflakeHandler struct {\n\tinner    slog.Handler\n\tlevelVar *slog.LevelVar\n}\n\nfunc newSnowflakeHandler(inner slog.Handler, level sflog.Level) *snowflakeHandler {\n\tlevelVar := &slog.LevelVar{}\n\tlevelVar.Set(slog.Level(level))\n\treturn &snowflakeHandler{\n\t\tinner:    inner,\n\t\tlevelVar: levelVar,\n\t}\n}\n\n// Enabled checks if the handler is enabled for the given level\nfunc (h *snowflakeHandler) Enabled(ctx context.Context, level slog.Level) bool {\n\treturn h.inner.Enabled(ctx, level)\n}\n\n// Handle processes a log record\nfunc (h *snowflakeHandler) Handle(ctx context.Context, r slog.Record) error {\n\t// NOTE: Context field extraction is NOT done here because:\n\t// - If WithContext() was used, fields are already added to the logger via .With()\n\t// - If WithContext() was not used, the context passed here is typically context.Background()\n\t//   and wouldn't have any fields anyway\n\n\t// Secret masking is already done in secretMaskingLogger wrapper\n\treturn h.inner.Handle(ctx, r)\n}\n\n// WithAttrs creates a new handler with additional attributes\nfunc (h *snowflakeHandler) WithAttrs(attrs []slog.Attr) slog.Handler {\n\treturn &snowflakeHandler{\n\t\tinner:    h.inner.WithAttrs(attrs),\n\t\tlevelVar: h.levelVar,\n\t}\n}\n\n// WithGroup creates a new handler with a group\nfunc (h *snowflakeHandler) WithGroup(name string) slog.Handler {\n\treturn &snowflakeHandler{\n\t\tinner:    h.inner.WithGroup(name),\n\t\tlevelVar: h.levelVar,\n\t}\n}\n"
  },
  {
    "path": "internal/logger/slog_logger.go",
    "content": "package logger\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"github.com/snowflakedb/gosnowflake/v2/sflog\"\n\t\"io\"\n\t\"log/slog\"\n\t\"os\"\n\t\"path\"\n\t\"runtime\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n)\n\n// formatSource formats caller information for logging\nfunc formatSource(frame *runtime.Frame) (string, string) {\n\treturn path.Base(frame.Function), fmt.Sprintf(\"%s:%d\", path.Base(frame.File), frame.Line)\n}\n\n// rawLogger implements SFLogger using slog\ntype rawLogger struct {\n\tinner   *slog.Logger\n\thandler *snowflakeHandler\n\tlevel   sflog.Level\n\tenabled bool // For OFF level support\n\tfile    *os.File\n\toutput  io.Writer\n\tmu      sync.Mutex\n}\n\n// Compile-time verification that rawLogger implements SFLogger\nvar _ SFLogger = (*rawLogger)(nil)\n\n// newRawLogger creates the internal default logger using slog\nfunc newRawLogger() SFLogger {\n\tlevel := sflog.LevelInfo\n\n\topts := createOpts(slog.Level(level))\n\n\ttextHandler := slog.NewTextHandler(os.Stderr, opts)\n\thandler := newSnowflakeHandler(textHandler, level)\n\n\tslogLogger := slog.New(handler)\n\n\treturn &rawLogger{\n\t\tinner:   slogLogger,\n\t\thandler: handler,\n\t\tlevel:   level,\n\t\tenabled: true,\n\t\toutput:  os.Stderr,\n\t}\n}\n\n// isEnabled checks if logging is enabled (for OFF level)\nfunc (log *rawLogger) isEnabled() bool {\n\tlog.mu.Lock()\n\tdefer log.mu.Unlock()\n\treturn log.enabled\n}\n\n// SetLogLevel sets the log level\nfunc (log *rawLogger) SetLogLevel(level string) error {\n\tupperLevel, err := sflog.ParseLevel(strings.ToUpper(level))\n\tif err != nil {\n\t\treturn fmt.Errorf(\"error while setting log level. 
%v\", err)\n\t}\n\n\tif upperLevel == sflog.LevelOff {\n\t\tlog.mu.Lock()\n\t\tlog.level = sflog.LevelOff\n\t\tlog.enabled = false\n\t\tlog.mu.Unlock()\n\t\treturn nil\n\t}\n\n\tlog.mu.Lock()\n\tlog.enabled = true\n\tlog.level = upperLevel\n\tlog.mu.Unlock()\n\n\treturn nil\n}\n\nfunc (log *rawLogger) SetLogLevelInt(level sflog.Level) error {\n\tlog.mu.Lock()\n\tdefer log.mu.Unlock()\n\n\t_, err := sflog.LevelToString(level)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"invalid log level: %d\", level)\n\t}\n\tlog.level = level\n\treturn nil\n}\n\n// GetLogLevel returns the current log level\nfunc (log *rawLogger) GetLogLevel() string {\n\tif levelStr, err := sflog.LevelToString(log.level); err == nil {\n\t\treturn levelStr\n\t}\n\treturn \"unknown\"\n}\n\nfunc (log *rawLogger) GetLogLevelInt() sflog.Level {\n\tlog.mu.Lock()\n\tdefer log.mu.Unlock()\n\treturn log.level\n}\n\n// SetOutput sets the output writer\nfunc (log *rawLogger) SetOutput(output io.Writer) {\n\tlog.mu.Lock()\n\tdefer log.mu.Unlock()\n\n\tlog.output = output\n\n\t// Create new handler with new output\n\topts := createOpts(slog.Level(log.level))\n\n\ttextHandler := slog.NewTextHandler(output, opts)\n\tlog.handler = newSnowflakeHandler(textHandler, log.level)\n\tlog.inner = slog.New(log.handler)\n}\n\nfunc createOpts(level slog.Level) *slog.HandlerOptions {\n\topts := &slog.HandlerOptions{\n\t\tLevel:     level,\n\t\tAddSource: true,\n\t\tReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {\n\t\t\tif a.Key == slog.TimeKey {\n\t\t\t\tif t, ok := a.Value.Any().(time.Time); ok {\n\t\t\t\t\treturn slog.String(slog.TimeKey, t.Format(time.RFC3339Nano))\n\t\t\t\t}\n\t\t\t}\n\t\t\tif a.Key == slog.SourceKey {\n\t\t\t\tif src, ok := a.Value.Any().(*slog.Source); ok {\n\t\t\t\t\tframe := &runtime.Frame{\n\t\t\t\t\t\tFile:     src.File,\n\t\t\t\t\t\tLine:     src.Line,\n\t\t\t\t\t\tFunction: src.Function,\n\t\t\t\t\t}\n\t\t\t\t\t_, location := formatSource(frame)\n\t\t\t\t\treturn 
slog.String(slog.SourceKey, location)\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn a\n\t\t},\n\t}\n\treturn opts\n}\n\n// SetHandler sets a custom slog handler (implements SFSlogLogger interface)\n// The provided handler will be wrapped with snowflakeHandler to preserve context extraction.\n// Secret masking is handled at a higher level (secretMaskingLogger wrapper).\nfunc (log *rawLogger) SetHandler(handler slog.Handler) error {\n\tlog.mu.Lock()\n\tdefer log.mu.Unlock()\n\n\t// Wrap user's handler with snowflakeHandler to preserve context extraction\n\tlog.handler = newSnowflakeHandler(handler, log.level)\n\tlog.inner = slog.New(log.handler)\n\n\treturn nil\n}\n\n// logWithSkip logs a message at the given level, skipping 'skip' frames when determining source location.\n// This is used internally to skip wrapper frames (levelFilteringLogger -> secretMaskingLogger -> rawLogger)\n// and report the actual caller's location.\nfunc (log *rawLogger) logWithSkip(skip int, level sflog.Level, msg string) {\n\tif !log.isEnabled() {\n\t\treturn\n\t}\n\tvar pcs [1]uintptr\n\t// Skip: runtime.Callers itself + logWithSkip + specified skip\n\truntime.Callers(skip+2, pcs[:])\n\tr := slog.NewRecord(time.Now(), slog.Level(level), msg, pcs[0])\n\t_ = log.handler.Handle(context.Background(), r)\n}\n\n// Implement all formatted logging methods (*f variants)\n// Skip depth = 3 assumes standard wrapper chain: levelFilteringLogger -> secretMaskingLogger -> rawLogger\n// If wrapper chain changes, update this value. 
See TestSkipDepthWarning test.\nfunc (log *rawLogger) Tracef(format string, args ...any) {\n\tlog.logWithSkip(3, sflog.LevelTrace, fmt.Sprintf(format, args...))\n}\n\nfunc (log *rawLogger) Debugf(format string, args ...any) {\n\tlog.logWithSkip(3, sflog.LevelDebug, fmt.Sprintf(format, args...))\n}\n\nfunc (log *rawLogger) Infof(format string, args ...any) {\n\tlog.logWithSkip(3, sflog.LevelInfo, fmt.Sprintf(format, args...))\n}\n\nfunc (log *rawLogger) Warnf(format string, args ...any) {\n\tlog.logWithSkip(3, sflog.LevelWarn, fmt.Sprintf(format, args...))\n}\n\nfunc (log *rawLogger) Errorf(format string, args ...any) {\n\tlog.logWithSkip(3, sflog.LevelError, fmt.Sprintf(format, args...))\n}\n\nfunc (log *rawLogger) Fatalf(format string, args ...any) {\n\tlog.logWithSkip(3, sflog.LevelFatal, fmt.Sprintf(format, args...))\n\tos.Exit(1)\n}\n\n// Implement all direct logging methods\n// Skip depth = 3 assumes standard wrapper chain: levelFilteringLogger -> secretMaskingLogger -> rawLogger\n// If wrapper chain changes, update this value. 
See TestSkipDepthWarning test.\nfunc (log *rawLogger) Trace(msg string) {\n\tlog.logWithSkip(3, sflog.LevelTrace, msg)\n}\n\nfunc (log *rawLogger) Debug(msg string) {\n\tlog.logWithSkip(3, sflog.LevelDebug, msg)\n}\n\nfunc (log *rawLogger) Info(msg string) {\n\tlog.logWithSkip(3, sflog.LevelInfo, msg)\n}\n\nfunc (log *rawLogger) Warn(msg string) {\n\tlog.logWithSkip(3, sflog.LevelWarn, msg)\n}\n\nfunc (log *rawLogger) Error(msg string) {\n\tlog.logWithSkip(3, sflog.LevelError, msg)\n}\n\nfunc (log *rawLogger) Fatal(msg string) {\n\tlog.logWithSkip(3, sflog.LevelFatal, msg)\n\tos.Exit(1)\n}\n\n// Structured logging methods\nfunc (log *rawLogger) WithField(key string, value any) LogEntry {\n\treturn &slogEntry{\n\t\tlogger:  log.inner.With(slog.Any(key, value)),\n\t\tenabled: &log.enabled,\n\t\tmu:      &log.mu,\n\t}\n}\n\nfunc (log *rawLogger) WithFields(fields map[string]any) LogEntry {\n\tattrs := make([]any, 0, len(fields)*2)\n\tfor k, v := range fields {\n\t\tattrs = append(attrs, k, v)\n\t}\n\treturn &slogEntry{\n\t\tlogger:  log.inner.With(attrs...),\n\t\tenabled: &log.enabled,\n\t\tmu:      &log.mu,\n\t}\n}\n\nfunc (log *rawLogger) WithContext(ctx context.Context) LogEntry {\n\tif ctx == nil {\n\t\treturn log\n\t}\n\n\t// Extract fields from context\n\tattrs := extractContextFields(ctx)\n\tif len(attrs) == 0 {\n\t\treturn log\n\t}\n\n\t// Convert []slog.Attr to []any for With()\n\t// slog.Logger.With() can accept slog.Attr directly\n\targs := make([]any, len(attrs))\n\tfor i, attr := range attrs {\n\t\targs[i] = attr\n\t}\n\n\tnewLogger := log.inner.With(args...)\n\n\treturn &slogEntry{\n\t\tlogger:  newLogger,\n\t\tenabled: &log.enabled,\n\t\tmu:      &log.mu,\n\t}\n}\n\n// slogEntry implements LogEntry\ntype slogEntry struct {\n\tlogger  *slog.Logger\n\tenabled *bool\n\tmu      *sync.Mutex\n}\n\n// Compile-time verification that slogEntry implements LogEntry\nvar _ LogEntry = (*slogEntry)(nil)\n\nfunc (e *slogEntry) isEnabled() bool 
{\n\te.mu.Lock()\n\tdefer e.mu.Unlock()\n\treturn *e.enabled\n}\n\n// logWithSkip logs a message at the given level, skipping 'skip' frames when determining source location.\nfunc (e *slogEntry) logWithSkip(skip int, level sflog.Level, msg string) {\n\tif !e.isEnabled() {\n\t\treturn\n\t}\n\tvar pcs [1]uintptr\n\truntime.Callers(skip+2, pcs[:]) // +2: runtime.Callers itself + logWithSkip\n\tr := slog.NewRecord(time.Now(), slog.Level(level), msg, pcs[0])\n\t_ = e.logger.Handler().Handle(context.Background(), r)\n}\n\n// Implement all formatted logging methods (*f variants)\n// Skip depth = 3 assumes standard wrapper chain: levelFilteringEntry -> secretMaskingEntry -> slogEntry\n// If wrapper chain changes, update this value. See TestSkipDepthWarning test.\nfunc (e *slogEntry) Tracef(format string, args ...any) {\n\te.logWithSkip(3, sflog.LevelTrace, fmt.Sprintf(format, args...))\n}\n\nfunc (e *slogEntry) Debugf(format string, args ...any) {\n\te.logWithSkip(3, sflog.LevelDebug, fmt.Sprintf(format, args...))\n}\n\nfunc (e *slogEntry) Infof(format string, args ...any) {\n\te.logWithSkip(3, sflog.LevelInfo, fmt.Sprintf(format, args...))\n}\n\nfunc (e *slogEntry) Warnf(format string, args ...any) {\n\te.logWithSkip(3, sflog.LevelWarn, fmt.Sprintf(format, args...))\n}\n\nfunc (e *slogEntry) Errorf(format string, args ...any) {\n\te.logWithSkip(3, sflog.LevelError, fmt.Sprintf(format, args...))\n}\n\nfunc (e *slogEntry) Fatalf(format string, args ...any) {\n\te.logWithSkip(3, sflog.LevelFatal, fmt.Sprintf(format, args...))\n\tos.Exit(1)\n}\n\n// Implement all direct logging methods\n// Skip depth = 3 assumes standard wrapper chain: levelFilteringEntry -> secretMaskingEntry -> slogEntry\n// If wrapper chain changes, update this value. 
See TestSkipDepthWarning test.\nfunc (e *slogEntry) Trace(msg string) {\n\te.logWithSkip(3, sflog.LevelTrace, msg)\n}\n\nfunc (e *slogEntry) Debug(msg string) {\n\te.logWithSkip(3, sflog.LevelDebug, msg)\n}\n\nfunc (e *slogEntry) Info(msg string) {\n\te.logWithSkip(3, sflog.LevelInfo, msg)\n}\n\nfunc (e *slogEntry) Warn(msg string) {\n\te.logWithSkip(3, sflog.LevelWarn, msg)\n}\n\nfunc (e *slogEntry) Error(msg string) {\n\te.logWithSkip(3, sflog.LevelError, msg)\n}\n\nfunc (e *slogEntry) Fatal(msg string) {\n\te.logWithSkip(3, sflog.LevelFatal, msg)\n\tos.Exit(1)\n}\n\n// Helper methods for internal use and easy_logging support\nfunc (log *rawLogger) closeFileOnLoggerReplace(file *os.File) error {\n\tlog.mu.Lock()\n\tdefer log.mu.Unlock()\n\n\tif log.file != nil && log.file != file {\n\t\treturn fmt.Errorf(\"could not set a file to close on logger replace because one was already set\")\n\t}\n\tlog.file = file\n\treturn nil\n}\n\n// CloseFileOnLoggerReplace is exported for easy_logging support\nfunc (log *rawLogger) CloseFileOnLoggerReplace(file *os.File) error {\n\treturn log.closeFileOnLoggerReplace(file)\n}\n\n// ReplaceGlobalLogger closes the current logger's file (for easy_logging support)\n// The actual global logger replacement is handled by the main package\nfunc (log *rawLogger) ReplaceGlobalLogger(newLogger any) {\n\tlog.mu.Lock()\n\tdefer log.mu.Unlock()\n\n\tif log.file != nil {\n\t\t_ = log.file.Close()\n\t}\n}\n"
  },
  {
    "path": "internal/logger/source_location_test.go",
    "content": "package logger\n\nimport (\n\t\"bytes\"\n\t\"strings\"\n\t\"testing\"\n)\n\n// IMPORTANT: The skip depth values in rawLogger and slogEntry assume the standard wrapper chain:\n// For logger methods: levelFilteringLogger -> secretMaskingLogger -> rawLogger (skip=3)\n// For entry methods: levelFilteringEntry -> secretMaskingEntry -> slogEntry (skip=3)\n//\n// These tests verify the standard configuration. If you add or remove wrapper layers, you MUST update:\n// - internal/logger/slog_logger.go: rawLogger methods (currently skip=3)\n// - internal/logger/slog_logger.go: slogEntry methods (currently skip=3)\n\n// TestSourceLocationWithLevelFiltering verifies that source location is correct\n// with the standard wrapper chain: levelFilteringLogger -> secretMaskingLogger -> rawLogger\nfunc TestSourceLocationWithLevelFiltering(t *testing.T) {\n\tinnerLogger := newRawLogger()\n\tvar buf bytes.Buffer\n\tinnerLogger.SetOutput(&buf)\n\t_ = innerLogger.SetLogLevel(\"debug\")\n\n\t// Build the standard wrapper chain\n\tmasked := newSecretMaskingLogger(innerLogger)\n\tfiltered := newLevelFilteringLogger(masked)\n\n\tfiltered.Debug(\"test message\") // Line 31 - This line should appear in source location\n\n\toutput := buf.String()\n\t// Check that the source location points to this test file, not the wrappers\n\tif !strings.Contains(output, \"source_location_test.go\") {\n\t\tt.Errorf(\"Expected source location to contain 'source_location_test.go', got: %s\", output)\n\t}\n\tif strings.Contains(output, \"level_filtering.go\") {\n\t\tt.Errorf(\"Source location should not contain 'level_filtering.go', got: %s\", output)\n\t}\n\tif strings.Contains(output, \"secret_masking.go\") {\n\t\tt.Errorf(\"Source location should not contain 'secret_masking.go', got: %s\", output)\n\t}\n}\n\n// TestSourceLocationWithDebugf verifies formatted logging also reports correct source\nfunc TestSourceLocationWithDebugf(t *testing.T) {\n\tinnerLogger := newRawLogger()\n\tvar buf 
bytes.Buffer\n\tinnerLogger.SetOutput(&buf)\n\t_ = innerLogger.SetLogLevel(\"debug\")\n\n\t// Build the standard wrapper chain\n\tmasked := newSecretMaskingLogger(innerLogger)\n\tfiltered := newLevelFilteringLogger(masked)\n\n\tfiltered.Debugf(\"formatted message: %s\", \"test\") // Line 58 - This line should appear\n\n\toutput := buf.String()\n\tif !strings.Contains(output, \"source_location_test.go\") {\n\t\tt.Errorf(\"Expected source location to contain 'source_location_test.go', got: %s\", output)\n\t}\n\tif strings.Contains(output, \"level_filtering.go\") || strings.Contains(output, \"secret_masking.go\") {\n\t\tt.Errorf(\"Source location should not contain wrapper files, got: %s\", output)\n\t}\n}\n\n// TestSourceLocationWithEntry verifies that structured logging (WithField) also works correctly\nfunc TestSourceLocationWithEntry(t *testing.T) {\n\tinnerLogger := newRawLogger()\n\tvar buf bytes.Buffer\n\tinnerLogger.SetOutput(&buf)\n\t_ = innerLogger.SetLogLevel(\"debug\")\n\n\t// Build the standard wrapper chain\n\tmasked := newSecretMaskingLogger(innerLogger)\n\tfiltered := newLevelFilteringLogger(masked)\n\n\tfiltered.WithField(\"key\", \"value\").Debug(\"entry message\") // Line 82 - This line should appear\n\n\toutput := buf.String()\n\tif !strings.Contains(output, \"source_location_test.go\") {\n\t\tt.Errorf(\"Expected source location to contain 'source_location_test.go', got: %s\", output)\n\t}\n\t// Also verify the field is present\n\tif !strings.Contains(output, \"key=value\") {\n\t\tt.Errorf(\"Expected output to contain 'key=value', got: %s\", output)\n\t}\n}\n\n// TestSkipDepthWarning documents the skip depth assumption and fails if wrappers change\n// This test intentionally checks implementation details to warn developers when skip depths need updating.\nfunc TestSkipDepthWarning(t *testing.T) {\n\tinnerLogger := newRawLogger()\n\tvar buf bytes.Buffer\n\tinnerLogger.SetOutput(&buf)\n\t_ = innerLogger.SetLogLevel(\"debug\")\n\n\t// Build the 
expected standard wrapper chain\n\tmasked := newSecretMaskingLogger(innerLogger)\n\tfiltered := newLevelFilteringLogger(masked)\n\n\t// Log from this test\n\tfiltered.Debug(\"skip depth test\") // Line 102 - This line should appear in source location\n\n\toutput := buf.String()\n\n\tif !strings.Contains(output, \"source_location_test.go:102\") {\n\t\tt.Errorf(`\nSkip depth appears incorrect!\n\nExpected source location: source_location_test.go:102\nGot: %s\n\nIf you added/removed a wrapper layer, update the skip values in:\n  - internal/logger/slog_logger.go: rawLogger methods (currently skip=3)\n  - internal/logger/slog_logger.go: slogEntry methods (currently skip=3)\n\nCurrent wrapper chain for logger methods:\n  Driver code -> levelFilteringLogger -> secretMaskingLogger -> rawLogger\n\nCurrent wrapper chain for entry methods:\n  Driver code -> levelFilteringEntry -> secretMaskingEntry -> slogEntry\n`, output)\n\t}\n}\n"
  },
  {
    "path": "internal/os/libc_info.go",
    "content": "package os\n\nimport (\n\t\"bufio\"\n\t\"debug/elf\"\n\t\"io\"\n\t\"os\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n)\n\nvar (\n\tlibcInfo     LibcInfo\n\tlibcInfoOnce sync.Once\n)\n\n// LibcInfo contains information about the C standard library in use.\ntype LibcInfo struct {\n\tFamily  string // \"glibc\", \"musl\", or \"\" if not detected\n\tVersion string // e.g., \"2.31\", \"1.2.4\", or \"\" if not determined\n}\n\n// parseProcMapsForLibc scans the contents of /proc/self/maps and returns\n// the libc family (\"glibc\" or \"musl\") and the filesystem path to the mapped library.\nfunc parseProcMapsForLibc(r io.Reader) (family string, libcPath string) {\n\tscanner := bufio.NewScanner(r)\n\tfor scanner.Scan() {\n\t\tline := scanner.Text()\n\t\t// /proc/self/maps format: addr perms offset dev inode pathname\n\t\tfields := strings.Fields(line)\n\t\tif len(fields) < 6 {\n\t\t\tcontinue\n\t\t}\n\t\tpath := fields[len(fields)-1]\n\t\tif strings.Contains(path, \"musl\") {\n\t\t\treturn \"musl\", path\n\t\t}\n\t\tif strings.Contains(path, \"libc.so.6\") {\n\t\t\treturn \"glibc\", path\n\t\t}\n\t}\n\treturn \"\", \"\"\n}\n\nvar glibcVersionPattern = regexp.MustCompile(`^GLIBC_(\\d+\\.\\d+(?:\\.\\d+)?)$`)\n\n// glibcVersionFromELF opens the given ELF file (libc.so.6) and extracts the\n// glibc version from its SHT_GNU_verdef section via DynamicVersions().\n// It returns the highest GLIBC_x.y[.z] version found.\nfunc glibcVersionFromELF(path string) string {\n\tf, err := elf.Open(path)\n\tif err != nil {\n\t\treturn \"\"\n\t}\n\tdefer func() {\n\t\t_ = f.Close()\n\t}()\n\n\tversions, err := f.DynamicVersions()\n\tif err != nil {\n\t\treturn \"\"\n\t}\n\n\tvar best string\n\tfor _, v := range versions {\n\t\tm := glibcVersionPattern.FindStringSubmatch(v.Name)\n\t\tif m != nil {\n\t\t\tif best == \"\" || compareVersions(m[1], best) > 0 {\n\t\t\t\tbest = m[1]\n\t\t\t}\n\t\t}\n\t}\n\treturn best\n}\n\nvar muslVersionPattern = 
regexp.MustCompile(`Version (\\d+\\.\\d+\\.\\d+)`)\n\n// muslVersionFromBinary reads the musl library binary and searches for the\n// embedded version string pattern \"Version X.Y.Z\".\nfunc muslVersionFromBinary(path string) string {\n\tf, err := os.Open(path)\n\tif err != nil {\n\t\treturn \"\"\n\t}\n\tdefer func() {\n\t\t_ = f.Close()\n\t}()\n\n\tbuf := make([]byte, 1<<20) // 1MB limit\n\tn, _ := io.ReadFull(f, buf)\n\tcontent := string(buf[:n])\n\n\tm := muslVersionPattern.FindStringSubmatch(content)\n\tif m != nil {\n\t\treturn m[1]\n\t}\n\treturn \"\"\n}\n\n// compareVersions compares two dotted version strings numerically.\n// Returns -1 if a < b, 0 if a == b, 1 if a > b.\nfunc compareVersions(a, b string) int {\n\tpartsA := strings.Split(a, \".\")\n\tpartsB := strings.Split(b, \".\")\n\tmaxLen := max(len(partsB), len(partsA))\n\tfor i := range maxLen {\n\t\tvar va, vb int\n\t\tif i < len(partsA) {\n\t\t\tva, _ = strconv.Atoi(partsA[i])\n\t\t}\n\t\tif i < len(partsB) {\n\t\t\tvb, _ = strconv.Atoi(partsB[i])\n\t\t}\n\t\tif va < vb {\n\t\t\treturn -1\n\t\t}\n\t\tif va > vb {\n\t\t\treturn 1\n\t\t}\n\t}\n\treturn 0\n}\n"
  },
  {
    "path": "internal/os/libc_info_linux.go",
    "content": "//go:build linux\n\npackage os\n\nimport \"os\"\n\n// GetLibcInfo returns the libc family and version on Linux.\n// The result is cached so the detection only runs once.\nfunc GetLibcInfo() LibcInfo {\n\tlibcInfoOnce.Do(func() {\n\t\tlibcInfo = detectLibcInfo()\n\t})\n\treturn libcInfo\n}\n\nfunc detectLibcInfo() LibcInfo {\n\tfd, err := os.Open(\"/proc/self/maps\")\n\tif err != nil {\n\t\treturn LibcInfo{}\n\t}\n\tdefer func() {\n\t\t_ = fd.Close()\n\t}()\n\n\tfamily, libcPath := parseProcMapsForLibc(fd)\n\tif family == \"\" {\n\t\treturn LibcInfo{}\n\t}\n\n\tvar version string\n\tswitch family {\n\tcase \"glibc\":\n\t\tversion = glibcVersionFromELF(libcPath)\n\tcase \"musl\":\n\t\tversion = muslVersionFromBinary(libcPath)\n\t}\n\n\treturn LibcInfo{Family: family, Version: version}\n}\n"
  },
  {
    "path": "internal/os/libc_info_notlinux.go",
    "content": "//go:build !linux\n\npackage os\n\n// GetLibcInfo returns an empty LibcInfo on non-Linux platforms.\nfunc GetLibcInfo() LibcInfo {\n\treturn LibcInfo{}\n}\n"
  },
  {
    "path": "internal/os/libc_info_test.go",
    "content": "package os\n\nimport (\n\t\"runtime\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestParseProcMapsGlibc(t *testing.T) {\n\tmaps := `7f1234560000-7f1234580000 r-xp 00000000 08:01 12345  /usr/lib/x86_64-linux-gnu/libc.so.6\n7f1234580000-7f1234590000 r--p 00020000 08:01 12345  /usr/lib/x86_64-linux-gnu/libc.so.6`\n\tfamily, path := parseProcMapsForLibc(strings.NewReader(maps))\n\tif family != \"glibc\" {\n\t\tt.Errorf(\"expected glibc, got %q\", family)\n\t}\n\tif path != \"/usr/lib/x86_64-linux-gnu/libc.so.6\" {\n\t\tt.Errorf(\"unexpected path: %q\", path)\n\t}\n}\n\nfunc TestParseProcMapsMusl(t *testing.T) {\n\tmaps := `7f1234560000-7f1234580000 r-xp 00000000 08:01 12345  /lib/ld-musl-x86_64.so.1`\n\tfamily, path := parseProcMapsForLibc(strings.NewReader(maps))\n\tif family != \"musl\" {\n\t\tt.Errorf(\"expected musl, got %q\", family)\n\t}\n\tif path != \"/lib/ld-musl-x86_64.so.1\" {\n\t\tt.Errorf(\"unexpected path: %q\", path)\n\t}\n}\n\nfunc TestParseProcMapsMuslLibc(t *testing.T) {\n\tmaps := `7f1234560000-7f1234580000 r-xp 00000000 08:01 12345  /lib/libc.musl-x86_64.so.1`\n\tfamily, path := parseProcMapsForLibc(strings.NewReader(maps))\n\tif family != \"musl\" {\n\t\tt.Errorf(\"expected musl, got %q\", family)\n\t}\n\tif path != \"/lib/libc.musl-x86_64.so.1\" {\n\t\tt.Errorf(\"unexpected path: %q\", path)\n\t}\n}\n\nfunc TestParseProcMapsEmpty(t *testing.T) {\n\tfamily, path := parseProcMapsForLibc(strings.NewReader(\"\"))\n\tif family != \"\" || path != \"\" {\n\t\tt.Errorf(\"expected empty, got family=%q path=%q\", family, path)\n\t}\n}\n\nfunc TestParseProcMapsNoLibc(t *testing.T) {\n\tmaps := `7f1234560000-7f1234580000 r-xp 00000000 08:01 12345  /usr/lib/libpthread.so.0\n7fff12340000-7fff12360000 rw-p 00000000 00:00 0  [stack]`\n\tfamily, path := parseProcMapsForLibc(strings.NewReader(maps))\n\tif family != \"\" || path != \"\" {\n\t\tt.Errorf(\"expected empty, got family=%q path=%q\", family, path)\n\t}\n}\n\nfunc 
TestParseProcMapsShortLines(t *testing.T) {\n\tmaps := `7f1234560000-7f1234580000 r-xp 00000000 08:01 12345\n7fff12340000-7fff12360000 rw-p 00000000 00:00 0`\n\tfamily, path := parseProcMapsForLibc(strings.NewReader(maps))\n\tif family != \"\" || path != \"\" {\n\t\tt.Errorf(\"expected empty for short lines, got family=%q path=%q\", family, path)\n\t}\n}\n\nfunc TestCompareVersions(t *testing.T) {\n\tcases := []struct {\n\t\ta, b string\n\t\twant int\n\t}{\n\t\t{\"2.31\", \"2.17\", 1},\n\t\t{\"2.17\", \"2.31\", -1},\n\t\t{\"2.31\", \"2.31\", 0},\n\t\t{\"2.31.1\", \"2.31\", 1},\n\t\t{\"2.31\", \"2.31.1\", -1},\n\t\t{\"1.2.3\", \"1.2.3\", 0},\n\t\t{\"10.0\", \"9.99\", 1},\n\t}\n\tfor _, c := range cases {\n\t\tgot := compareVersions(c.a, c.b)\n\t\tif got != c.want {\n\t\t\tt.Errorf(\"compareVersions(%q, %q) = %d, want %d\", c.a, c.b, got, c.want)\n\t\t}\n\t}\n}\n\nfunc TestGetLibcInfoNonLinux(t *testing.T) {\n\tif runtime.GOOS == \"linux\" {\n\t\tt.Skip(\"this test is for non-Linux platforms\")\n\t}\n\tinfo := GetLibcInfo()\n\tif info.Family != \"\" || info.Version != \"\" {\n\t\tt.Errorf(\"expected empty LibcInfo on non-Linux, got %+v\", info)\n\t}\n}\n"
  },
  {
    "path": "internal/os/os_details.go",
    "content": "package os\n\nimport (\n\t\"bufio\"\n\t\"os\"\n\t\"strings\"\n\t\"sync\"\n)\n\nvar (\n\tosDetails     map[string]string\n\tosDetailsOnce sync.Once\n)\n\n// allowedOsReleaseKeys defines the keys we want to extract from /etc/os-release\nvar allowedOsReleaseKeys = map[string]bool{\n\t\"NAME\":          true,\n\t\"PRETTY_NAME\":   true,\n\t\"ID\":            true,\n\t\"IMAGE_ID\":      true,\n\t\"IMAGE_VERSION\": true,\n\t\"BUILD_ID\":      true,\n\t\"VERSION\":       true,\n\t\"VERSION_ID\":    true,\n}\n\n// readOsRelease reads and parses an os-release file from the given path.\n// Returns nil on any error.\nfunc readOsRelease(filename string) map[string]string {\n\tfile, err := os.Open(filename)\n\tif err != nil {\n\t\treturn nil\n\t}\n\tdefer func() {\n\t\t_ = file.Close()\n\t}()\n\n\tresult := make(map[string]string)\n\tscanner := bufio.NewScanner(file)\n\tfor scanner.Scan() {\n\t\tline := scanner.Text()\n\t\tline = strings.TrimSpace(line)\n\n\t\t// Skip empty lines\n\t\tif line == \"\" {\n\t\t\tcontinue\n\t\t}\n\n\t\t// Parse KEY=VALUE format\n\t\tparts := strings.SplitN(line, \"=\", 2)\n\t\tif len(parts) != 2 {\n\t\t\tcontinue\n\t\t}\n\n\t\tkey := strings.TrimSpace(parts[0])\n\t\tvalue := strings.TrimSpace(parts[1])\n\n\t\t// Only include allowed keys\n\t\tif !allowedOsReleaseKeys[key] {\n\t\t\tcontinue\n\t\t}\n\n\t\tvalue = unquoteOsReleaseValue(value)\n\t\tresult[key] = value\n\t}\n\n\tif len(result) == 0 {\n\t\treturn nil\n\t}\n\n\treturn result\n}\n\n// unquoteOsReleaseValue extracts the value from a possibly quoted string.\n// If the value is wrapped in matching single or double quotes, the content\n// between the quotes is returned (ignoring anything after the closing quote).\n// Otherwise the raw value is returned.\nfunc unquoteOsReleaseValue(s string) string {\n\tif len(s) >= 2 && (s[0] == '\"' || s[0] == '\\'') {\n\t\tquote := s[0]\n\t\tif end := strings.IndexByte(s[1:], quote); end >= 0 {\n\t\t\treturn s[1 : 1+end]\n\t\t}\n\t}\n\treturn 
s\n}\n"
  },
  {
    "path": "internal/os/os_details_linux.go",
    "content": "//go:build linux\n\npackage os\n\n// GetOsDetails returns OS details from /etc/os-release on Linux.\n// The result is cached so it's only read once.\nfunc GetOsDetails() map[string]string {\n\tosDetailsOnce.Do(func() {\n\t\tosDetails = readOsRelease(\"/etc/os-release\")\n\t})\n\treturn osDetails\n}\n"
  },
  {
    "path": "internal/os/os_details_notlinux.go",
    "content": "//go:build !linux\n\npackage os\n\n// GetOsDetails returns nil on non-Linux platforms.\nfunc GetOsDetails() map[string]string {\n\treturn nil\n}\n"
  },
  {
    "path": "internal/os/os_details_test.go",
    "content": "package os\n\nimport (\n\t\"testing\"\n)\n\nfunc TestReadOsRelease(t *testing.T) {\n\tresult := readOsRelease(\"test_data/sample_os_release\")\n\tif result == nil {\n\t\tt.Fatal(\"expected non-nil result from sample_os_release\")\n\t}\n\n\t// Verify only allowed keys are parsed (8 keys expected)\n\t// Note: test file also contains lines with spaces only, spaces+tabs,\n\t// and comments - all should be ignored\n\texpectedEntries := map[string]string{\n\t\t\"NAME\":          \"Ubuntu\",\n\t\t\"PRETTY_NAME\":   \"Ubuntu 22.04.3 LTS\",\n\t\t\"ID\":            \"ubuntu\",\n\t\t\"VERSION_ID\":    \"22.04\",\n\t\t\"VERSION\":       \"22.04.3 LTS (Jammy Jellyfish)\",\n\t\t\"BUILD_ID\":      \"20231115\",\n\t\t\"IMAGE_ID\":      \"ubuntu-jammy\",\n\t\t\"IMAGE_VERSION\": \"1.0.0\",\n\t}\n\n\t// Check correct number of entries (no extra keys parsed)\n\tif len(result) != len(expectedEntries) {\n\t\tt.Errorf(\"expected %d entries, got %d. Result: %v\", len(expectedEntries), len(result), result)\n\t}\n\n\t// Verify each expected entry\n\tfor key, expectedValue := range expectedEntries {\n\t\tactualValue, exists := result[key]\n\t\tif !exists {\n\t\t\tt.Errorf(\"expected key %q not found in result\", key)\n\t\t\tcontinue\n\t\t}\n\t\tif actualValue != expectedValue {\n\t\t\tt.Errorf(\"key %q: expected %q, got %q\", key, expectedValue, actualValue)\n\t\t}\n\t}\n\n\t// Verify all keys are expected\n\tfor key := range result {\n\t\t_, exists := expectedEntries[key]\n\t\tif !exists {\n\t\t\tt.Errorf(\"expected to not contain key %v\", key)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "internal/os/test_data/sample_os_release",
    "content": "# This is a comment and should be ignored\nNAME=\"Ubuntu\"\nPRETTY_NAME='Ubuntu 22.04.3 LTS' #this is pretty name\nID=ubuntu\nVERSION_ID=\"22.04\"\nVERSION=\"22.04.3 LTS (Jammy Jellyfish)\"\nBUILD_ID=20231115\nIMAGE_ID=ubuntu-jammy\nIMAGE_VERSION=1.0.0\n\n# These keys should be ignored (not in allowed list)\nHOME_URL=\"https://www.ubuntu.com/\"\nSUPPORT_URL=https://help.ubuntu.com/\nBUG_REPORT_URL=\"https://bugs.launchpad.net/ubuntu/\"\nPRIVACY_POLICY_URL=\"https://www.ubuntu.com/legal/terms-and-policies/privacy-policy\"\nUBUNTU_CODENAME=jammy\nVARIANT=\"Server\"\n\n# Empty lines should be ignored\n\n# Line with spaces only should be ignored\n   \n# Line with spaces and tabs should be ignored\n \t \t \n# Lines without = should be ignored\nINVALID_LINE_WITHOUT_EQUALS\n"
  },
  {
    "path": "internal/query/response_types.go",
    "content": "package query\n\n// ExecResponseRowType describes column metadata from a query response.\ntype ExecResponseRowType struct {\n\tName       string          `json:\"name\"`\n\tFields     []FieldMetadata `json:\"fields\"`\n\tByteLength int64           `json:\"byteLength\"`\n\tLength     int64           `json:\"length\"`\n\tType       string          `json:\"type\"`\n\tPrecision  int64           `json:\"precision\"`\n\tScale      int64           `json:\"scale\"`\n\tNullable   bool            `json:\"nullable\"`\n}\n\n// FieldMetadata describes metadata for a field, including nested fields for complex types.\ntype FieldMetadata struct {\n\tName      string          `json:\"name,omitempty\"`\n\tType      string          `json:\"type\"`\n\tNullable  bool            `json:\"nullable\"`\n\tLength    int             `json:\"length\"`\n\tScale     int             `json:\"scale\"`\n\tPrecision int             `json:\"precision\"`\n\tFields    []FieldMetadata `json:\"fields,omitempty\"`\n}\n\n// ExecResponseChunk describes metadata for a chunk of query results, including URL and size information.\ntype ExecResponseChunk struct {\n\tURL              string `json:\"url\"`\n\tRowCount         int    `json:\"rowCount\"`\n\tUncompressedSize int64  `json:\"uncompressedSize\"`\n\tCompressedSize   int64  `json:\"compressedSize\"`\n}\n"
  },
  {
    "path": "internal/query/transform.go",
    "content": "package query\n\n// ToFieldMetadata transforms ExecResponseRowType to FieldMetadata.\nfunc (ex *ExecResponseRowType) ToFieldMetadata() FieldMetadata {\n\treturn FieldMetadata{\n\t\tex.Name,\n\t\tex.Type,\n\t\tex.Nullable,\n\t\tint(ex.Length),\n\t\tint(ex.Scale),\n\t\tint(ex.Precision),\n\t\tex.Fields,\n\t}\n}\n"
  },
  {
    "path": "internal/types/types.go",
    "content": "package types\n\nimport (\n\t\"strings\"\n)\n\n// SnowflakeType represents the various data types supported by Snowflake, including both standard and internal types used by the driver.\ntype SnowflakeType int\n\nconst (\n\t// FixedType represents the FIXED data type in Snowflake, which is a numeric type with a specified precision and scale.\n\tFixedType SnowflakeType = iota\n\t// RealType represents the REAL data type in Snowflake, which is a floating-point numeric type.\n\tRealType\n\t// DecfloatType represents the DECFLOAT data type in Snowflake, which is a decimal floating-point numeric type with high precision.\n\tDecfloatType\n\t// TextType represents the TEXT data type in Snowflake, which is a variable-length string type.\n\tTextType\n\t// DateType represents the DATE data type in Snowflake, which is used to store calendar dates (year, month, day).\n\tDateType\n\t// VariantType represents the VARIANT data type in Snowflake, which is a semi-structured data type that can store values of various types.\n\tVariantType\n\t// TimestampLtzType represents the TIMESTAMP_LTZ data type in Snowflake, which is a timestamp with local time zone information.\n\tTimestampLtzType\n\t// TimestampNtzType represents the TIMESTAMP_NTZ data type in Snowflake, which is a timestamp without time zone information.\n\tTimestampNtzType\n\t// TimestampTzType represents the TIMESTAMP_TZ data type in Snowflake, which is a timestamp with time zone information.\n\tTimestampTzType\n\t// ObjectType represents the OBJECT data type in Snowflake, which is a semi-structured data type that can store key-value pairs.\n\tObjectType\n\t// ArrayType represents the ARRAY data type in Snowflake, which is a semi-structured data type that can store ordered lists of values.\n\tArrayType\n\t// MapType represents the MAP data type in Snowflake, which is a semi-structured data type that can store key-value pairs with unique keys.\n\tMapType\n\t// BinaryType represents the BINARY data type in 
Snowflake, which is used to store binary data (byte arrays).\n\tBinaryType\n\t// TimeType represents the TIME data type in Snowflake, which is used to store time values (hour, minute, second).\n\tTimeType\n\t// BooleanType represents the BOOLEAN data type in Snowflake, which is used to store boolean values (true/false).\n\tBooleanType\n\n\t// NullType represents a null value type, used internally to represent null values in Snowflake.\n\tNullType\n\t// SliceType represents a slice type, used internally to represent slices of data in Snowflake.\n\tSliceType\n\t// ChangeType represents a change type, used internally to represent changes in data in Snowflake.\n\tChangeType\n\t// UnSupportedType represents an unsupported type, used internally to represent types that are not supported by the driver.\n\tUnSupportedType\n\t// NilObjectType represents a nil object type, used internally to represent null objects in Snowflake.\n\tNilObjectType\n\t// NilArrayType represents a nil array type, used internally to represent null arrays in Snowflake.\n\tNilArrayType\n\t// NilMapType represents a nil map type, used internally to represent null maps in Snowflake.\n\tNilMapType\n)\n\n// SnowflakeToDriverType maps Snowflake data type names (as strings) to their corresponding SnowflakeType constants used internally by the driver.\n// This mapping allows for easy conversion between the string representation of Snowflake types and the internal enumeration used by the driver for type handling.\nvar SnowflakeToDriverType = map[string]SnowflakeType{\n\t\"FIXED\":         FixedType,\n\t\"REAL\":          RealType,\n\t\"DECFLOAT\":      DecfloatType,\n\t\"TEXT\":          TextType,\n\t\"DATE\":          DateType,\n\t\"VARIANT\":       VariantType,\n\t\"TIMESTAMP_LTZ\": TimestampLtzType,\n\t\"TIMESTAMP_NTZ\": TimestampNtzType,\n\t\"TIMESTAMP_TZ\":  TimestampTzType,\n\t\"OBJECT\":        ObjectType,\n\t\"ARRAY\":         ArrayType,\n\t\"MAP\":           MapType,\n\t\"BINARY\":        
BinaryType,\n\t\"TIME\":          TimeType,\n\t\"BOOLEAN\":       BooleanType,\n\t\"NULL\":          NullType,\n\t\"SLICE\":         SliceType,\n\t\"CHANGE_TYPE\":   ChangeType,\n\t\"NOT_SUPPORTED\": UnSupportedType}\n\n// DriverTypeToSnowflake is the inverse mapping of SnowflakeToDriverType, allowing for conversion from SnowflakeType constants back to their string representations.\nvar DriverTypeToSnowflake = invertMap(SnowflakeToDriverType)\n\nfunc invertMap(m map[string]SnowflakeType) map[SnowflakeType]string {\n\tinv := make(map[SnowflakeType]string)\n\tfor k, v := range m {\n\t\tif _, ok := inv[v]; ok {\n\t\t\tpanic(\"failed to create DriverTypeToSnowflake map due to duplicated values\")\n\t\t}\n\t\tinv[v] = k\n\t}\n\treturn inv\n}\n\n// Byte returns the byte representation of the SnowflakeType, which can be used for efficient type handling and comparisons within the driver.\nfunc (st SnowflakeType) Byte() byte {\n\treturn byte(st)\n}\n\nfunc (st SnowflakeType) String() string {\n\treturn DriverTypeToSnowflake[st]\n}\n\n// GetSnowflakeType takes a string representation of a Snowflake data type and returns the corresponding SnowflakeType constant used internally by the driver.\nfunc GetSnowflakeType(typ string) SnowflakeType {\n\treturn SnowflakeToDriverType[strings.ToUpper(typ)]\n}\n"
  },
  {
    "path": "local_storage_client.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bufio\"\n\t\"cmp\"\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"strings\"\n)\n\ntype localUtil struct {\n}\n\nfunc (util *localUtil) createClient(_ *execResponseStageInfo, _ bool, _ *Config, _ *snowflakeTelemetry) (cloudClient, error) {\n\treturn nil, nil\n}\n\nfunc (util *localUtil) uploadOneFileWithRetry(_ context.Context, meta *fileMetadata) error {\n\tvar frd *bufio.Reader\n\tif meta.srcStream != nil {\n\t\tb := cmp.Or(meta.realSrcStream, meta.srcStream)\n\t\tfrd = bufio.NewReader(b)\n\t} else {\n\t\tf, err := os.Open(meta.realSrcFileName)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tdefer func() {\n\t\t\tif err = f.Close(); err != nil {\n\t\t\t\tlogger.Warnf(\"failed to close the file %v: %v\", meta.realSrcFileName, err)\n\t\t\t}\n\t\t}()\n\t\tfrd = bufio.NewReader(f)\n\t}\n\n\tuser, err := expandUser(meta.stageInfo.Location)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif !meta.overwrite {\n\t\tif _, err := os.Stat(filepath.Join(user, meta.dstFileName)); err == nil {\n\t\t\tmeta.dstFileSize = 0\n\t\t\tmeta.resStatus = skipped\n\t\t\treturn nil\n\t\t}\n\t}\n\toutput, err := os.OpenFile(filepath.Join(user, meta.dstFileName), os.O_CREATE|os.O_WRONLY, readWriteFileMode)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer func() {\n\t\tif err = output.Close(); err != nil {\n\t\t\tlogger.Warnf(\"failed to close the file %v: %v\", meta.dstFileName, err)\n\t\t}\n\t}()\n\tdata := make([]byte, meta.uploadSize)\n\tfor {\n\t\tn, err := frd.Read(data)\n\t\tif err != nil && err != io.EOF {\n\t\t\treturn err\n\t\t}\n\t\tif n == 0 {\n\t\t\tbreak\n\t\t}\n\n\t\tif _, err = output.Write(data); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\tmeta.dstFileSize = meta.uploadSize\n\tmeta.resStatus = uploaded\n\treturn nil\n}\n\nfunc (util *localUtil) downloadOneFile(_ context.Context, meta *fileMetadata) error {\n\tsrcFileName := meta.srcFileName\n\tif strings.HasPrefix(meta.srcFileName, 
fmt.Sprintf(\"%b\", os.PathSeparator)) {\n\t\tsrcFileName = srcFileName[1:]\n\t}\n\tuser, err := expandUser(meta.stageInfo.Location)\n\tif err != nil {\n\t\treturn err\n\t}\n\tfullSrcFileName := path.Join(user, srcFileName)\n\tuser, err = expandUser(meta.localLocation)\n\tif err != nil {\n\t\treturn err\n\t}\n\tfullDstFileName := path.Join(user, baseName(meta.dstFileName))\n\tbaseDir, err := getDirectory()\n\tif err != nil {\n\t\treturn err\n\t}\n\tif _, err = os.Stat(baseDir); os.IsNotExist(err) {\n\t\tif err = os.MkdirAll(baseDir, os.ModePerm); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tdata, err := os.ReadFile(fullSrcFileName)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif err = os.WriteFile(fullDstFileName, data, readWriteFileMode); err != nil {\n\t\treturn err\n\t}\n\tfi, err := os.Stat(fullDstFileName)\n\tif err != nil {\n\t\treturn err\n\t}\n\tmeta.dstFileSize = fi.Size()\n\tmeta.resStatus = downloaded\n\treturn nil\n}\n"
  },
  {
    "path": "local_storage_client_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"compress/gzip\"\n\t\"context\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"testing\"\n)\n\nfunc TestLocalUpload(t *testing.T) {\n\ttmpDir, err := os.MkdirTemp(\"\", \"local_put\")\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tdefer os.RemoveAll(tmpDir)\n\tfname := filepath.Join(tmpDir, \"test_put_get.txt.gz\")\n\toriginalContents := \"123,test1\\n456,test2\\n\"\n\n\tvar b bytes.Buffer\n\tgzw := gzip.NewWriter(&b)\n\t_, err = gzw.Write([]byte(originalContents))\n\tassertNilF(t, err)\n\tassertNilF(t, gzw.Close())\n\tif err := os.WriteFile(fname, b.Bytes(), readWriteFileMode); err != nil {\n\t\tt.Fatal(\"could not write to gzip file\")\n\t}\n\tputDir, err := os.MkdirTemp(\"\", \"put\")\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tinfo := execResponseStageInfo{\n\t\tLocation:     putDir,\n\t\tLocationType: \"LOCAL_FS\",\n\t}\n\tlocalUtil := new(localUtil)\n\tlocalCli, err := localUtil.createClient(&info, false, nil, nil)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"LOCAL_FS\",\n\t\tnoSleepingTime:    true,\n\t\tparallel:          4,\n\t\tclient:            localCli,\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\tsrcFileName:       path.Join(tmpDir, \"/test_put_get.txt.gz\"),\n\t\toverwrite:         true,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t}\n\tuploadMeta.realSrcFileName = uploadMeta.srcFileName\n\terr = localUtil.uploadOneFileWithRetry(context.Background(), &uploadMeta)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tif uploadMeta.resStatus != uploaded {\n\t\tt.Fatalf(\"failed to upload file\")\n\t}\n\n\tuploadMeta.overwrite = false\n\terr = localUtil.uploadOneFileWithRetry(context.Background(), &uploadMeta)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tif uploadMeta.resStatus != skipped 
{\n\t\tt.Fatal(\"overwrite is false. should have skipped\")\n\t}\n\tfileStream, _ := os.Open(fname)\n\tctx := WithFilePutStream(context.Background(), fileStream)\n\tuploadMeta.fileStream, err = getFileStream(ctx)\n\tassertNilF(t, err)\n\n\terr = localUtil.uploadOneFileWithRetry(context.Background(), &uploadMeta)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tif uploadMeta.resStatus != skipped {\n\t\tt.Fatalf(\"overwrite is false. should have skipped\")\n\t}\n\tuploadMeta.overwrite = true\n\terr = localUtil.uploadOneFileWithRetry(context.Background(), &uploadMeta)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tif uploadMeta.resStatus != uploaded {\n\t\tt.Fatalf(\"failed to upload file\")\n\t}\n\n\tuploadMeta.realSrcStream = uploadMeta.srcStream\n\terr = localUtil.uploadOneFileWithRetry(context.Background(), &uploadMeta)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tif uploadMeta.resStatus != uploaded {\n\t\tt.Fatalf(\"failed to upload file\")\n\t}\n}\n\nfunc TestDownloadLocalFile(t *testing.T) {\n\ttmpDir, err := os.MkdirTemp(\"\", \"local_put\")\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tdefer func() {\n\t\tassertNilF(t, os.RemoveAll(tmpDir))\n\t}()\n\tfname := filepath.Join(tmpDir, \"test_put_get.txt.gz\")\n\toriginalContents := \"123,test1\\n456,test2\\n\"\n\n\tvar b bytes.Buffer\n\tgzw := gzip.NewWriter(&b)\n\t_, err = gzw.Write([]byte(originalContents))\n\tassertNilF(t, err)\n\tassertNilF(t, gzw.Close())\n\tif err := os.WriteFile(fname, b.Bytes(), readWriteFileMode); err != nil {\n\t\tt.Fatal(\"could not write to gzip file\")\n\t}\n\tputDir, err := os.MkdirTemp(\"\", \"put\")\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tinfo := execResponseStageInfo{\n\t\tLocation:     tmpDir,\n\t\tLocationType: \"LOCAL_FS\",\n\t}\n\tlocalUtil := new(localUtil)\n\tlocalCli, err := localUtil.createClient(&info, false, nil, nil)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tdownloadMeta := fileMetadata{\n\t\tname:              \"test_put_get.txt.gz\",\n\t\tstageLocationType: 
\"LOCAL_FS\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            localCli,\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"test_put_get.txt.gz\",\n\t\toverwrite:         true,\n\t\tsrcFileName:       \"test_put_get.txt.gz\",\n\t\tlocalLocation:     putDir,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t}\n\terr = localUtil.downloadOneFile(context.Background(), &downloadMeta)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tif downloadMeta.resStatus != downloaded {\n\t\tt.Fatalf(\"failed to get file in local storage\")\n\t}\n\n\tdownloadMeta.srcFileName = \"test_put_get.txt.gz\"\n\terr = localUtil.downloadOneFile(context.Background(), &downloadMeta)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tif downloadMeta.resStatus != downloaded {\n\t\tt.Fatalf(\"failed to get file in local storage\")\n\t}\n\n\tdownloadMeta.srcFileName = \"local://test_put_get.txt.gz\"\n\terr = localUtil.downloadOneFile(context.Background(), &downloadMeta)\n\tif err == nil {\n\t\tt.Error(\"file name is invalid. should have returned an error\")\n\t}\n}\n"
  },
  {
    "path": "location.go",
    "content": "package gosnowflake\n\nimport (\n\t\"fmt\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"strconv\"\n\t\"sync\"\n\t\"time\"\n)\n\nvar (\n\ttimezones           map[int]*time.Location\n\tupdateTimezoneMutex *sync.Mutex\n)\n\n// Location returns an offset (minutes) based Location object for Snowflake database.\nfunc Location(offset int) *time.Location {\n\tupdateTimezoneMutex.Lock()\n\tdefer updateTimezoneMutex.Unlock()\n\tloc := timezones[offset]\n\tif loc != nil {\n\t\treturn loc\n\t}\n\tloc = genTimezone(offset)\n\ttimezones[offset] = loc\n\treturn loc\n}\n\n// LocationWithOffsetString returns an offset based Location object. The offset string must consist of sHHMI where one sign\n// character '+'/'-' followed by zero filled hours and minutes.\nfunc LocationWithOffsetString(offsets string) (loc *time.Location, err error) {\n\tif len(offsets) != 5 {\n\t\treturn nil, &SnowflakeError{\n\t\t\tNumber:      ErrInvalidOffsetStr,\n\t\t\tSQLState:    SQLStateInvalidDataTimeFormat,\n\t\t\tMessage:     errors.ErrMsgInvalidOffsetStr,\n\t\t\tMessageArgs: []any{offsets},\n\t\t}\n\t}\n\tif offsets[0] != '-' && offsets[0] != '+' {\n\t\treturn nil, &SnowflakeError{\n\t\t\tNumber:      ErrInvalidOffsetStr,\n\t\t\tSQLState:    SQLStateInvalidDataTimeFormat,\n\t\t\tMessage:     errors.ErrMsgInvalidOffsetStr,\n\t\t\tMessageArgs: []any{offsets},\n\t\t}\n\t}\n\ts := 1\n\tif offsets[0] == '-' {\n\t\ts = -1\n\t}\n\tvar h, m int64\n\th, err = strconv.ParseInt(offsets[1:3], 10, 64)\n\tif err != nil {\n\t\treturn\n\t}\n\tm, err = strconv.ParseInt(offsets[3:], 10, 64)\n\tif err != nil {\n\t\treturn\n\t}\n\toffset := s * (int(h)*60 + int(m))\n\tloc = Location(offset)\n\treturn\n}\n\nfunc genTimezone(offset int) *time.Location {\n\tvar offsetSign string\n\tvar toffset int\n\tif offset < 0 {\n\t\toffsetSign = \"-\"\n\t\ttoffset = -offset\n\t} else {\n\t\toffsetSign = \"+\"\n\t\ttoffset = offset\n\t}\n\tlogger.Debugf(\"offset: %v\", offset)\n\treturn 
time.FixedZone(\n\t\tfmt.Sprintf(\"%v%02d%02d\",\n\t\t\toffsetSign, toffset/60, toffset%60), offset*60)\n}\n\nfunc init() {\n\tupdateTimezoneMutex = &sync.Mutex{}\n\ttimezones = make(map[int]*time.Location, 49)\n\t// pre-generate all common timezones: 49 half-hour offsets from -12:00 to +12:00\n\tfor i := -720; i <= 720; i += 30 {\n\t\ttimezones[i] = genTimezone(i)\n\t}\n}\n\n// getCurrentLocation retrieves the current location based on the connection's timezone parameter.\nfunc getCurrentLocation(sp *syncParams) *time.Location {\n\tloc := time.Now().Location()\n\tif sp == nil {\n\t\treturn loc\n\t}\n\tvar err error\n\tif tz, ok := sp.get(\"timezone\"); ok && tz != nil {\n\t\tloc, err = time.LoadLocation(*tz)\n\t\tif err != nil {\n\t\t\tloc = time.Now().Location()\n\t\t}\n\t}\n\treturn loc\n}\n"
  },
  {
    "path": "location_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\terrors2 \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"reflect\"\n\t\"testing\"\n\t\"time\"\n)\n\ntype tcLocation struct {\n\tss  string\n\ttt  string\n\terr error\n}\n\nfunc TestWithOffsetString(t *testing.T) {\n\ttestcases := []tcLocation{\n\t\t{\n\t\t\tss:  \"+0700\",\n\t\t\ttt:  \"+0700\",\n\t\t\terr: nil,\n\t\t},\n\t\t{\n\t\t\tss:  \"-1200\",\n\t\t\ttt:  \"-1200\",\n\t\t\terr: nil,\n\t\t},\n\t\t{\n\t\t\tss:  \"+0710\",\n\t\t\ttt:  \"+0710\",\n\t\t\terr: nil,\n\t\t},\n\t\t{\n\t\t\tss: \"1200\",\n\t\t\ttt: \"\",\n\t\t\terr: &SnowflakeError{\n\t\t\t\tNumber:      ErrInvalidOffsetStr,\n\t\t\t\tMessage:     errors2.ErrMsgInvalidOffsetStr,\n\t\t\t\tMessageArgs: []any{\"1200\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tss: \"x1200\",\n\t\t\ttt: \"\",\n\t\t\terr: &SnowflakeError{\n\t\t\t\tNumber:      ErrInvalidOffsetStr,\n\t\t\t\tMessage:     errors2.ErrMsgInvalidOffsetStr,\n\t\t\t\tMessageArgs: []any{\"x1200\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tss: \"+12001\",\n\t\t\ttt: \"\",\n\t\t\terr: &SnowflakeError{\n\t\t\t\tNumber:      ErrInvalidOffsetStr,\n\t\t\t\tMessage:     errors2.ErrMsgInvalidOffsetStr,\n\t\t\t\tMessageArgs: []any{\"+12001\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tss: \"x12001\",\n\t\t\ttt: \"\",\n\t\t\terr: &SnowflakeError{\n\t\t\t\tNumber:      ErrInvalidOffsetStr,\n\t\t\t\tMessage:     errors2.ErrMsgInvalidOffsetStr,\n\t\t\t\tMessageArgs: []any{\"x12001\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tss:  \"-12CD\",\n\t\t\ttt:  \"\",\n\t\t\terr: errors.New(\"parse int error\"), // can this be more specific?\n\t\t},\n\t\t{\n\t\t\tss:  \"+ABCD\",\n\t\t\ttt:  \"\",\n\t\t\terr: errors.New(\"parse int error\"), // can this be more specific?\n\t\t},\n\t}\n\tfor _, t0 := range testcases {\n\t\tt.Run(t0.ss, func(t *testing.T) {\n\t\t\tloc, err := LocationWithOffsetString(t0.ss)\n\t\t\tif t0.err != nil {\n\t\t\t\tif t0.err != err {\n\t\t\t\t\tdriverError1, ok1 := 
t0.err.(*SnowflakeError)\n\t\t\t\t\tdriverError2, ok2 := err.(*SnowflakeError)\n\t\t\t\t\tassertNotNilF(t, err, \"expected an error\")\n\t\t\t\t\tif ok1 && ok2 && driverError1.Number != driverError2.Number {\n\t\t\t\t\t\tt.Fatalf(\"error expected: %v, got: %v\", t0.err, err)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Fatalf(\"%v\", err)\n\t\t\t\t}\n\t\t\t\tif t0.tt != loc.String() {\n\t\t\t\t\tt.Fatalf(\"location string didn't match. expected: %v, got: %v\", t0.tt, loc)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestGetCurrentLocation(t *testing.T) {\n\tspecificTz := \"Pacific/Honolulu\"\n\tspecificLoc, err := time.LoadLocation(specificTz)\n\tif err != nil {\n\t\tt.Fatalf(\"Cannot initialize specific timezone location: %v\", err)\n\t}\n\tincorrectTz := \"Not/exists\"\n\ttestcases := []struct {\n\t\tparams syncParams\n\t\tloc    *time.Location\n\t}{\n\t\t{\n\t\t\tparams: newSyncParams(map[string]*string{}),\n\t\t\tloc:    time.Now().Location(),\n\t\t},\n\t\t{\n\t\t\tparams: newSyncParams(map[string]*string{\n\t\t\t\t\"timezone\": nil,\n\t\t\t}),\n\t\t\tloc: time.Now().Location(),\n\t\t},\n\t\t{\n\t\t\tparams: newSyncParams(map[string]*string{\n\t\t\t\t\"timezone\": &specificTz,\n\t\t\t}),\n\t\t\tloc: specificLoc,\n\t\t},\n\t\t{\n\t\t\tparams: newSyncParams(map[string]*string{\n\t\t\t\t\"timezone\": &incorrectTz,\n\t\t\t}),\n\t\t\tloc: time.Now().Location(),\n\t\t},\n\t}\n\tfor i := range testcases {\n\t\ttc := &testcases[i]\n\t\tt.Run(fmt.Sprintf(\"%v\", tc.loc), func(t *testing.T) {\n\t\t\tloc := getCurrentLocation(&tc.params)\n\t\t\tif !reflect.DeepEqual(*loc, *tc.loc) {\n\t\t\t\tt.Fatalf(\"location mismatch. expected: %v, got: %v\", tc.loc, loc)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "locker.go",
    "content": "package gosnowflake\n\nimport \"sync\"\n\n// ---------- API ----------\ntype lockKeyType interface {\n\tlockID() string\n}\n\ntype locker interface {\n\tlock(lockKey lockKeyType) unlocker\n}\n\ntype unlocker interface {\n\tUnlock()\n}\n\nfunc getValueWithLock[T any](locker locker, lockKey lockKeyType, f func() (T, error)) (T, error) {\n\tunlock := locker.lock(lockKey)\n\tdefer unlock.Unlock()\n\treturn f()\n}\n\n// ---------- Locking implementation ----------\ntype exclusiveLockerType struct {\n\tm sync.Map\n}\n\nvar exclusiveLocker = newExclusiveLocker()\n\nfunc (e *exclusiveLockerType) lock(lockKey lockKeyType) unlocker {\n\tlogger.Debugf(\"Acquiring lock for %s\", lockKey.lockID())\n\t// We can skip cleaning up the map because the number of unique lockIDs is very small, and they will probably be reused during the lifetime of the app.\n\tmu, _ := e.m.LoadOrStore(lockKey.lockID(), &sync.Mutex{})\n\tmu.(*sync.Mutex).Lock()\n\treturn mu.(*sync.Mutex)\n}\n\nfunc newExclusiveLocker() *exclusiveLockerType {\n\treturn &exclusiveLockerType{}\n}\n\n// ---------- No locking implementation ----------\ntype noopLockerType struct{}\n\nvar noopLocker = &noopLockerType{}\n\ntype noopUnlocker struct{}\n\nfunc (n noopUnlocker) Unlock() {}\n\nfunc (n *noopLockerType) lock(_ lockKeyType) unlocker {\n\tlogger.Debug(\"No lock is acquired\")\n\treturn noopUnlocker{}\n}\n"
  },
  {
    "path": "log.go",
    "content": "package gosnowflake\n\nimport (\n\tloggerinternal \"github.com/snowflakedb/gosnowflake/v2/internal/logger\"\n\t\"github.com/snowflakedb/gosnowflake/v2/sflog\"\n)\n\n// SFSessionIDKey is the context key of the session ID\nconst SFSessionIDKey ContextKey = \"LOG_SESSION_ID\"\n\n// SFSessionUserKey is the context key of the user ID of a session\nconst SFSessionUserKey ContextKey = \"LOG_USER\"\n\nfunc init() {\n\t// Set default log keys in internal package\n\tSetLogKeys(SFSessionIDKey, SFSessionUserKey)\n}\n\n// Re-export types from sflog package for backward compatibility\ntype (\n\t// ClientLogContextHook is a client-defined hook that can be used to insert log\n\t// fields based on the Context.\n\tClientLogContextHook = sflog.ClientLogContextHook\n\n\t// LogEntry allows for logging using a snapshot of field values.\n\t// No implementation-specific logging details should be placed into this interface.\n\tLogEntry = sflog.LogEntry\n\n\t// SFLogger is the Snowflake logger interface, which abstracts away the underlying logging mechanism.\n\t// No implementation-specific logging details should be placed into this interface.\n\tSFLogger = sflog.SFLogger\n\n\t// SFSlogLogger is an optional interface for advanced slog handler configuration.\n\t// This interface is separate from SFLogger to maintain framework-agnostic design.\n\t// Users can type-assert the logger to check if slog handler configuration is supported.\n\tSFSlogLogger = sflog.SFSlogLogger\n\n\t// Level is the log level. Info is set to 0. 
For more details, see sflog.Level.\n\tLevel = sflog.Level\n)\n\n// SetLogKeys sets the context keys to be written to logs when logger.WithContext is used.\n// This function is thread-safe and can be called at runtime.\nfunc SetLogKeys(keys ...ContextKey) {\n\t// Convert ContextKey to []any for internal package\n\tikeys := make([]any, len(keys))\n\tfor i, k := range keys {\n\t\tikeys[i] = k\n\t}\n\tloggerinternal.SetLogKeys(ikeys)\n}\n\n// GetLogKeys returns the currently configured context keys.\nfunc GetLogKeys() []ContextKey {\n\tikeys := loggerinternal.GetLogKeys()\n\n\t// Convert []any back to []ContextKey\n\tkeys := make([]ContextKey, 0, len(ikeys))\n\tfor _, k := range ikeys {\n\t\tif ck, ok := k.(ContextKey); ok {\n\t\t\tkeys = append(keys, ck)\n\t\t}\n\t}\n\treturn keys\n}\n\n// RegisterLogContextHook registers a hook that can be used to extract fields\n// from the Context and associated with log messages using the provided key.\n// This function is thread-safe and can be called at runtime.\nfunc RegisterLogContextHook(contextKey string, ctxExtractor ClientLogContextHook) {\n\t// Delegate directly to internal package\n\tloggerinternal.RegisterLogContextHook(contextKey, ctxExtractor)\n}\n\n// GetClientLogContextHooks returns the registered log context hooks.\nfunc GetClientLogContextHooks() map[string]ClientLogContextHook {\n\treturn loggerinternal.GetClientLogContextHooks()\n}\n\n// logger is a proxy that delegates all calls to the internal global logger.\n// This ensures a single source of truth for the current logger.\n// This variable is private and should only be used internally within the main package.\nvar logger SFLogger = loggerinternal.NewLoggerProxy()\n\n// SetLogger sets a custom logger implementation for gosnowflake.\n// The provided logger will be used as the base logger and automatically wrapped with:\n//   - Secret masking (to protect sensitive data like passwords and tokens)\n//   - Level filtering (for performance optimization)\n//\n// You 
cannot bypass these protective layers. If you need to configure them, use the\n// returned logger's methods (SetLogLevel, etc.).\n//\n// Example:\n//\n//\tcustomLogger := mylogger.New()\n//\tgosnowflake.SetLogger(customLogger)\nfunc SetLogger(logger SFLogger) error {\n\treturn loggerinternal.SetLogger(logger)\n}\n\n// GetLogger returns the current global logger with all protective layers applied\n// (secret masking and level filtering). This is the actual wrapped logger instance,\n// not a proxy.\n//\n// Example:\n//\n//\tlogger := gosnowflake.GetLogger()\n//\tlogger.Info(\"message\")\nfunc GetLogger() SFLogger {\n\treturn loggerinternal.GetLogger()\n}\n\n// CreateDefaultLogger creates and returns a new instance of SFLogger with default config.\n// The returned logger is automatically wrapped with secret masking and level filtering.\n// This is a pure factory function and does NOT modify global state.\n// If you want to set it as the global logger, call SetLogger(newLogger).\n//\n// The wrapping chain is: levelFilteringLogger → secretMaskingLogger → rawLogger\nfunc CreateDefaultLogger() SFLogger {\n\treturn loggerinternal.CreateDefaultLogger()\n}\n"
  },
  {
    "path": "log_client_test.go",
    "content": "package gosnowflake_test\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"github.com/snowflakedb/gosnowflake/v2/sflog\"\n\t\"io\"\n\t\"log/slog\"\n\t\"maps\"\n\t\"strings\"\n\t\"sync\"\n\t\"testing\"\n\n\t\"github.com/snowflakedb/gosnowflake/v2\"\n)\n\n// customLogger is a simple implementation of gosnowflake.SFLogger for testing\ntype customLogger struct {\n\tbuf    *bytes.Buffer\n\tlevel  string\n\tfields map[string]any\n\tmu     sync.Mutex\n}\n\nfunc newCustomLogger() *customLogger {\n\treturn &customLogger{\n\t\tbuf:    &bytes.Buffer{},\n\t\tlevel:  \"info\",\n\t\tfields: make(map[string]any),\n\t}\n}\n\nfunc (l *customLogger) formatMessage(level, format string, args ...any) {\n\tl.mu.Lock()\n\tdefer l.mu.Unlock()\n\n\tmsg := fmt.Sprintf(format, args...)\n\n\t// Include fields if any\n\tfieldStr := \"\"\n\tif len(l.fields) > 0 {\n\t\tparts := []string{}\n\t\tfor k, v := range l.fields {\n\t\t\tparts = append(parts, fmt.Sprintf(\"%s=%v\", k, v))\n\t\t}\n\t\tfieldStr = \" \" + strings.Join(parts, \" \")\n\t}\n\n\tfmt.Fprintf(l.buf, \"%s: %s%s\\n\", level, msg, fieldStr)\n}\n\nfunc (l *customLogger) Tracef(format string, args ...any) {\n\tl.formatMessage(\"TRACE\", format, args...)\n}\n\nfunc (l *customLogger) Debugf(format string, args ...any) {\n\tl.formatMessage(\"DEBUG\", format, args...)\n}\n\nfunc (l *customLogger) Infof(format string, args ...any) {\n\tl.formatMessage(\"INFO\", format, args...)\n}\n\nfunc (l *customLogger) Warnf(format string, args ...any) {\n\tl.formatMessage(\"WARN\", format, args...)\n}\n\nfunc (l *customLogger) Errorf(format string, args ...any) {\n\tl.formatMessage(\"ERROR\", format, args...)\n}\n\nfunc (l *customLogger) Fatalf(format string, args ...any) {\n\tl.formatMessage(\"FATAL\", format, args...)\n}\n\nfunc (l *customLogger) Trace(msg string) {\n\tl.formatMessage(\"TRACE\", \"%s\", fmt.Sprint(msg))\n}\n\nfunc (l *customLogger) Debug(msg string) {\n\tl.formatMessage(\"DEBUG\", \"%s\", 
fmt.Sprint(msg))\n}\n\nfunc (l *customLogger) Info(msg string) {\n\tl.formatMessage(\"INFO\", \"%s\", fmt.Sprint(msg))\n}\n\nfunc (l *customLogger) Warn(msg string) {\n\tl.formatMessage(\"WARN\", \"%s\", fmt.Sprint(msg))\n}\n\nfunc (l *customLogger) Error(msg string) {\n\tl.formatMessage(\"ERROR\", \"%s\", fmt.Sprint(msg))\n}\n\nfunc (l *customLogger) Fatal(msg string) {\n\tl.formatMessage(\"FATAL\", \"%s\", fmt.Sprint(msg))\n}\n\nfunc (l *customLogger) WithField(key string, value any) gosnowflake.LogEntry {\n\tnewFields := make(map[string]any)\n\tmaps.Copy(newFields, l.fields)\n\tnewFields[key] = value\n\n\treturn &customLogEntry{\n\t\tlogger: l,\n\t\tfields: newFields,\n\t}\n}\n\nfunc (l *customLogger) WithFields(fields map[string]any) gosnowflake.LogEntry {\n\tnewFields := make(map[string]any)\n\tmaps.Copy(newFields, l.fields)\n\tmaps.Copy(newFields, fields)\n\n\treturn &customLogEntry{\n\t\tlogger: l,\n\t\tfields: newFields,\n\t}\n}\n\nfunc (l *customLogger) WithContext(ctx context.Context) gosnowflake.LogEntry {\n\tnewFields := make(map[string]any)\n\tmaps.Copy(newFields, l.fields)\n\n\t// Extract context fields\n\tif sessionID := ctx.Value(gosnowflake.SFSessionIDKey); sessionID != nil {\n\t\tnewFields[\"LOG_SESSION_ID\"] = sessionID\n\t}\n\tif user := ctx.Value(gosnowflake.SFSessionUserKey); user != nil {\n\t\tnewFields[\"LOG_USER\"] = user\n\t}\n\n\treturn &customLogEntry{\n\t\tlogger: l,\n\t\tfields: newFields,\n\t}\n}\n\nfunc (l *customLogger) SetLogLevel(level string) error {\n\tl.mu.Lock()\n\tdefer l.mu.Unlock()\n\tl.level = strings.ToLower(level)\n\treturn nil\n}\n\nfunc (l *customLogger) SetLogLevelInt(level gosnowflake.Level) error {\n\tl.mu.Lock()\n\tdefer l.mu.Unlock()\n\tlevelStr, err := sflog.LevelToString(level)\n\tif err != nil {\n\t\treturn err\n\t}\n\tl.level = levelStr\n\treturn nil\n}\n\nfunc (l *customLogger) GetLogLevel() string {\n\tl.mu.Lock()\n\tdefer l.mu.Unlock()\n\treturn l.level\n}\n\nfunc (l *customLogger) GetLogLevelInt() 
gosnowflake.Level {\n\tl.mu.Lock()\n\tdefer l.mu.Unlock()\n\tlevel, _ := sflog.ParseLevel(l.level)\n\treturn level\n}\n\nfunc (l *customLogger) SetOutput(output io.Writer) {\n\t// For this test logger, we keep using our internal buffer\n}\n\nfunc (l *customLogger) GetOutput() string {\n\tl.mu.Lock()\n\tdefer l.mu.Unlock()\n\treturn l.buf.String()\n}\n\nfunc (l *customLogger) Reset() {\n\tl.mu.Lock()\n\tdefer l.mu.Unlock()\n\tl.buf.Reset()\n}\n\n// customLogEntry implements gosnowflake.LogEntry\ntype customLogEntry struct {\n\tlogger *customLogger\n\tfields map[string]any\n}\n\nfunc (e *customLogEntry) formatMessage(level, format string, args ...any) {\n\te.logger.mu.Lock()\n\tdefer e.logger.mu.Unlock()\n\n\tmsg := fmt.Sprintf(format, args...)\n\n\t// Include fields\n\tfieldStr := \"\"\n\tif len(e.fields) > 0 {\n\t\tparts := []string{}\n\t\tfor k, v := range e.fields {\n\t\t\tparts = append(parts, fmt.Sprintf(\"%s=%v\", k, v))\n\t\t}\n\t\tfieldStr = \" \" + strings.Join(parts, \" \")\n\t}\n\n\tfmt.Fprintf(e.logger.buf, \"%s: %s%s\\n\", level, msg, fieldStr)\n}\n\nfunc (e *customLogEntry) Tracef(format string, args ...any) {\n\te.formatMessage(\"TRACE\", format, args...)\n}\n\nfunc (e *customLogEntry) Debugf(format string, args ...any) {\n\te.formatMessage(\"DEBUG\", format, args...)\n}\n\nfunc (e *customLogEntry) Infof(format string, args ...any) {\n\te.formatMessage(\"INFO\", format, args...)\n}\n\nfunc (e *customLogEntry) Warnf(format string, args ...any) {\n\te.formatMessage(\"WARN\", format, args...)\n}\n\nfunc (e *customLogEntry) Errorf(format string, args ...any) {\n\te.formatMessage(\"ERROR\", format, args...)\n}\n\nfunc (e *customLogEntry) Fatalf(format string, args ...any) {\n\te.formatMessage(\"FATAL\", format, args...)\n}\n\nfunc (e *customLogEntry) Trace(msg string) {\n\te.formatMessage(\"TRACE\", \"%s\", fmt.Sprint(msg))\n}\n\nfunc (e *customLogEntry) Debug(msg string) {\n\te.formatMessage(\"DEBUG\", \"%s\", fmt.Sprint(msg))\n}\n\nfunc (e 
*customLogEntry) Info(msg string) {\n\te.formatMessage(\"INFO\", \"%s\", fmt.Sprint(msg))\n}\n\nfunc (e *customLogEntry) Warn(msg string) {\n\te.formatMessage(\"WARN\", \"%s\", fmt.Sprint(msg))\n}\n\nfunc (e *customLogEntry) Error(msg string) {\n\te.formatMessage(\"ERROR\", \"%s\", fmt.Sprint(msg))\n}\n\nfunc (e *customLogEntry) Fatal(msg string) {\n\te.formatMessage(\"FATAL\", \"%s\", fmt.Sprint(msg))\n}\n\n// Helper functions\nfunc assertContains(t *testing.T, output, expected string) {\n\tt.Helper()\n\tif !strings.Contains(output, expected) {\n\t\tt.Errorf(\"Expected output to contain %q, got:\\n%s\", expected, output)\n\t}\n}\n\nfunc assertNotContains(t *testing.T, output, unexpected string) {\n\tt.Helper()\n\tif strings.Contains(output, unexpected) {\n\t\tt.Errorf(\"Expected output to NOT contain %q, got:\\n%s\", unexpected, output)\n\t}\n}\n\nfunc assertJSONFormat(t *testing.T, output string) {\n\tt.Helper()\n\tlines := strings.SplitSeq(strings.TrimSpace(output), \"\\n\")\n\tfor line := range lines {\n\t\tif line == \"\" {\n\t\t\tcontinue\n\t\t}\n\t\tvar js map[string]any\n\t\tif err := json.Unmarshal([]byte(line), &js); err != nil {\n\t\t\tt.Errorf(\"Expected valid JSON, got error: %v, line: %s\", err, line)\n\t\t}\n\t}\n}\n\nfunc TestCustomSlogHandler(t *testing.T) {\n\t// Save original logger\n\toriginalLogger := gosnowflake.GetLogger()\n\tdefer func() {\n\t\tgosnowflake.SetLogger(originalLogger)\n\t}()\n\n\t// Create a new default logger\n\tlogger := gosnowflake.CreateDefaultLogger()\n\n\t// Set it as global logger first\n\tgosnowflake.SetLogger(logger)\n\n\t// Get the logger and try to set custom handler\n\tcurrentLogger := gosnowflake.GetLogger()\n\n\t// Type assert to SFSlogLogger\n\tslogLogger, ok := currentLogger.(gosnowflake.SFSlogLogger)\n\tif !ok {\n\t\tt.Fatal(\"Logger does not implement SFSlogLogger interface\")\n\t}\n\n\t// Create custom JSON handler with buffer\n\tbuf := &bytes.Buffer{}\n\tjsonHandler := slog.NewJSONHandler(buf, 
&slog.HandlerOptions{\n\t\tLevel: slog.LevelInfo,\n\t})\n\n\t// Set the custom handler\n\terr := slogLogger.SetHandler(jsonHandler)\n\tif err != nil {\n\t\tt.Fatalf(\"Failed to set custom handler: %v\", err)\n\t}\n\n\t// Log some messages\n\t_ = currentLogger.SetLogLevel(\"info\")\n\tcurrentLogger.Info(\"Test message from custom JSON handler\")\n\tcurrentLogger.Infof(\"Formatted message: %d\", 42)\n\n\t// Verify output is in JSON format\n\toutput := buf.String()\n\tassertJSONFormat(t, output)\n\tassertContains(t, output, \"Test message from custom JSON handler\")\n\tassertContains(t, output, \"Formatted message: 42\")\n}\n\nfunc TestCustomLoggerImplementation(t *testing.T) {\n\t// Save original logger\n\toriginalLogger := gosnowflake.GetLogger()\n\tdefer func() {\n\t\tgosnowflake.SetLogger(originalLogger)\n\t}()\n\n\t// Create custom logger\n\tcustomLog := newCustomLogger()\n\tvar sfLogger gosnowflake.SFLogger = customLog\n\n\t// Set as global logger\n\tgosnowflake.SetLogger(sfLogger)\n\n\t// Get logger (should be proxied)\n\tlogger := gosnowflake.GetLogger()\n\n\t// Log various messages\n\tlogger.Info(\"Test info message\")\n\tlogger.Infof(\"Formatted: %s\", \"value\")\n\tlogger.Warn(\"Warning message\")\n\n\t// Verify output\n\toutput := customLog.GetOutput()\n\tassertContains(t, output, \"INFO: Test info message\")\n\tassertContains(t, output, \"INFO: Formatted: value\")\n\tassertContains(t, output, \"WARN: Warning message\")\n}\n\nfunc TestCustomLoggerSecretMasking(t *testing.T) {\n\t// Save original logger\n\toriginalLogger := gosnowflake.GetLogger()\n\tdefer func() {\n\t\tgosnowflake.SetLogger(originalLogger)\n\t}()\n\n\t// Create custom logger\n\tcustomLog := newCustomLogger()\n\tvar sfLogger gosnowflake.SFLogger = customLog\n\n\t// Set as global logger\n\tgosnowflake.SetLogger(sfLogger)\n\n\t// Get logger\n\tlogger := gosnowflake.GetLogger()\n\n\t// Log messages with secrets (use 8+ char secrets for detection)\n\tlogger.Infof(\"Connection string: 
password='secret123'\")\n\tlogger.Info(\"Token: idToken:abc12345678\")\n\tlogger.Infof(\"Auth: token=def12345678\")\n\n\t// Verify secrets are masked\n\toutput := customLog.GetOutput()\n\tassertContains(t, output, \"****\")\n\tassertNotContains(t, output, \"secret123\")\n\tassertNotContains(t, output, \"abc12345678\") // pragma: allowlist secret\n\tassertNotContains(t, output, \"def12345678\") // pragma: allowlist secret\n}\n\nfunc TestCustomHandlerWithContext(t *testing.T) {\n\t// Save original logger\n\toriginalLogger := gosnowflake.GetLogger()\n\tdefer func() {\n\t\tgosnowflake.SetLogger(originalLogger)\n\t}()\n\n\t// Create a new default logger with JSON handler\n\tlogger := gosnowflake.CreateDefaultLogger()\n\tgosnowflake.SetLogger(logger)\n\n\tcurrentLogger := gosnowflake.GetLogger()\n\n\t// Set custom JSON handler\n\tbuf := &bytes.Buffer{}\n\tjsonHandler := slog.NewJSONHandler(buf, &slog.HandlerOptions{\n\t\tLevel: slog.LevelInfo,\n\t})\n\n\tif slogLogger, ok := currentLogger.(gosnowflake.SFSlogLogger); ok {\n\t\t_ = slogLogger.SetHandler(jsonHandler)\n\t}\n\n\t// Create context with session info\n\tctx := context.Background()\n\tctx = context.WithValue(ctx, gosnowflake.SFSessionIDKey, \"session-123\")\n\tctx = context.WithValue(ctx, gosnowflake.SFSessionUserKey, \"test-user\")\n\n\t// Log with context\n\t_ = currentLogger.SetLogLevel(\"info\")\n\tcurrentLogger.WithContext(ctx).Info(\"Message with context\")\n\n\t// Verify context fields in JSON output\n\toutput := buf.String()\n\tassertJSONFormat(t, output)\n\tassertContains(t, output, \"session-123\")\n\tassertContains(t, output, \"test-user\")\n}\n\nfunc TestCustomLoggerWithFields(t *testing.T) {\n\t// Save original logger\n\toriginalLogger := gosnowflake.GetLogger()\n\tdefer func() {\n\t\tgosnowflake.SetLogger(originalLogger)\n\t}()\n\n\t// Create custom logger\n\tcustomLog := newCustomLogger()\n\tvar sfLogger gosnowflake.SFLogger = customLog\n\n\t// Set as global 
logger\n\tgosnowflake.SetLogger(sfLogger)\n\n\t// Get logger\n\tlogger := gosnowflake.GetLogger()\n\n\t// Use WithField\n\tlogger.WithField(\"key1\", \"value1\").Info(\"Message with field\")\n\n\t// Use WithFields\n\tlogger.WithFields(map[string]any{\n\t\t\"key2\": \"value2\",\n\t\t\"key3\": 123,\n\t}).Info(\"Message with multiple fields\")\n\n\t// Verify fields in output\n\toutput := customLog.GetOutput()\n\tassertContains(t, output, \"key1=value1\")\n\tassertContains(t, output, \"key2=value2\")\n\tassertContains(t, output, \"key3=123\")\n}\n\nfunc TestCustomLoggerLevelConfiguration(t *testing.T) {\n\t// Save original logger\n\toriginalLogger := gosnowflake.GetLogger()\n\tdefer func() {\n\t\tgosnowflake.SetLogger(originalLogger)\n\t}()\n\n\t// Create custom logger\n\tcustomLog := newCustomLogger()\n\tvar sfLogger gosnowflake.SFLogger = customLog\n\n\t// Set as global logger\n\tgosnowflake.SetLogger(sfLogger)\n\n\t// Get logger\n\tlogger := gosnowflake.GetLogger()\n\n\t// Set level to info\n\terr := logger.SetLogLevel(\"info\")\n\tif err != nil {\n\t\tt.Fatalf(\"Failed to set log level: %v\", err)\n\t}\n\n\t// Verify level\n\tif level := logger.GetLogLevel(); level != \"info\" {\n\t\tt.Errorf(\"Expected level 'info', got %q\", level)\n\t}\n\n\t// Log at different levels\n\tlogger.Debug(\"Debug message - should not appear at info level\")\n\tlogger.Info(\"Info message - should appear\")\n\n\t// Check output\n\toutput := customLog.GetOutput()\n\n\t// Note: Our custom logger doesn't implement level filtering\n\t// This test validates that the API works, actual filtering\n\t// would be implemented in a production custom logger\n\tassertContains(t, output, \"INFO: Info message\")\n}\n\nfunc TestCustomHandlerRestore(t *testing.T) {\n\t// Save original logger\n\toriginalLogger := gosnowflake.GetLogger()\n\tdefer func() {\n\t\tgosnowflake.SetLogger(originalLogger)\n\t}()\n\n\t// Create logger with JSON handler\n\tlogger1 := 
gosnowflake.CreateDefaultLogger()\n\tgosnowflake.SetLogger(logger1)\n\n\tbuf1 := &bytes.Buffer{}\n\tif slogLogger, ok := gosnowflake.GetLogger().(gosnowflake.SFSlogLogger); ok {\n\t\tjsonHandler := slog.NewJSONHandler(buf1, &slog.HandlerOptions{\n\t\t\tLevel: slog.LevelInfo,\n\t\t})\n\t\t_ = slogLogger.SetHandler(jsonHandler)\n\t}\n\n\t// Log with JSON handler\n\t_ = gosnowflake.GetLogger().SetLogLevel(\"info\")\n\tgosnowflake.GetLogger().Info(\"JSON format message\")\n\n\t// Verify JSON format\n\toutput1 := buf1.String()\n\tassertJSONFormat(t, output1)\n\tassertContains(t, output1, \"JSON format message\")\n\n\t// Create new default logger (text format)\n\tlogger2 := gosnowflake.CreateDefaultLogger()\n\tbuf2 := &bytes.Buffer{}\n\tlogger2.SetOutput(buf2)\n\tgosnowflake.SetLogger(logger2)\n\n\t// Log with default text handler\n\t_ = gosnowflake.GetLogger().SetLogLevel(\"info\")\n\tgosnowflake.GetLogger().Info(\"Text format message\")\n\n\t// Verify text format (not JSON)\n\toutput2 := buf2.String()\n\tassertContains(t, output2, \"Text format message\")\n\n\t// Text format should have \"level=\" in it\n\tassertContains(t, output2, \"level=\")\n}\n"
  },
  {
    "path": "log_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestLogLevelEnabled(t *testing.T) {\n\tlog := CreateDefaultLogger() // via the SFLogger interface.\n\terr := log.SetLogLevel(\"info\")\n\tif err != nil {\n\t\tt.Fatalf(\"log level could not be set: %v\", err)\n\t}\n\tif log.GetLogLevel() != \"INFO\" {\n\t\tt.Fatalf(\"log level should be info but is %v\", log.GetLogLevel())\n\t}\n}\n\nfunc TestSetLogLevelError(t *testing.T) {\n\tlogger := CreateDefaultLogger()\n\terr := logger.SetLogLevel(\"unknown\")\n\tif err == nil {\n\t\tt.Fatal(\"should have thrown an error\")\n\t}\n}\n\nfunc TestDefaultLogLevel(t *testing.T) {\n\tlogger := CreateDefaultLogger()\n\tbuf := &bytes.Buffer{}\n\tlogger.SetOutput(buf)\n\n\t// default logger level is info\n\tlogger.Info(\"info\")\n\tlogger.Infof(\"info%v\", \"f\")\n\n\t// debug and trace won't be written to the log since they are below the info level\n\tlogger.Debug(\"debug\")\n\tlogger.Debugf(\"debug%v\", \"f\")\n\n\tlogger.Trace(\"trace\")\n\tlogger.Tracef(\"trace%v\", \"f\")\n\n\tlogger.Warn(\"warn\")\n\tlogger.Warnf(\"warn%v\", \"f\")\n\n\tlogger.Error(\"error\")\n\tlogger.Errorf(\"error%v\", \"f\")\n\n\t// verify output\n\tvar strbuf = buf.String()\n\n\tif !strings.Contains(strbuf, \"info\") ||\n\t\t!strings.Contains(strbuf, \"warn\") ||\n\t\t!strings.Contains(strbuf, \"error\") {\n\t\tt.Fatalf(\"unexpected output in log: %v\", strbuf)\n\t}\n\tif strings.Contains(strbuf, \"debug\") ||\n\t\tstrings.Contains(strbuf, \"trace\") {\n\t\tt.Fatalf(\"debug/trace should not be in log: %v\", strbuf)\n\t}\n}\n\nfunc TestOffLogLevel(t *testing.T) {\n\tlogger := CreateDefaultLogger()\n\tbuf := &bytes.Buffer{}\n\tlogger.SetOutput(buf)\n\terr := logger.SetLogLevel(\"OFF\")\n\tassertNilF(t, err)\n\n\tlogger.Info(\"info\")\n\tlogger.Infof(\"info%v\", \"f\")\n\tlogger.Debug(\"debug\")\n\tlogger.Debugf(\"debug%v\", \"f\")\n\tlogger.Trace(\"trace\")\n\tlogger.Tracef(\"trace%v\", 
\"f\")\n\tlogger.Warn(\"warn\")\n\tlogger.Warnf(\"warn%v\", \"f\")\n\tlogger.Error(\"error\")\n\tlogger.Errorf(\"error%v\", \"f\")\n\n\tassertEqualE(t, buf.Len(), 0, \"log messages count\")\n\tassertEqualE(t, logger.GetLogLevel(), \"OFF\", \"log level\")\n}\n\nfunc TestLogSetLevel(t *testing.T) {\n\tlogger := CreateDefaultLogger()\n\tbuf := &bytes.Buffer{}\n\tlogger.SetOutput(buf)\n\t_ = logger.SetLogLevel(\"trace\")\n\n\tlogger.Trace(\"should print at trace level\")\n\tlogger.Debug(\"should print at debug level\")\n\n\tvar strbuf = buf.String()\n\n\tif !strings.Contains(strbuf, \"trace level\") ||\n\t\t!strings.Contains(strbuf, \"debug level\") {\n\t\tt.Fatalf(\"unexpected output in log: %v\", strbuf)\n\t}\n}\n\nfunc TestLowerLevelsAreSuppressed(t *testing.T) {\n\tlogger := CreateDefaultLogger()\n\tbuf := &bytes.Buffer{}\n\tlogger.SetOutput(buf)\n\t_ = logger.SetLogLevel(\"info\")\n\n\tlogger.Trace(\"should print at trace level\")\n\tlogger.Debug(\"should print at debug level\")\n\tlogger.Info(\"should print at info level\")\n\tlogger.Warn(\"should print at warn level\")\n\tlogger.Error(\"should print at error level\")\n\n\tvar strbuf = buf.String()\n\n\tif strings.Contains(strbuf, \"trace level\") ||\n\t\tstrings.Contains(strbuf, \"debug level\") {\n\t\tt.Fatalf(\"debug and trace should be suppressed at info level, but log contains: %v\", strbuf)\n\t}\n\n\tif !strings.Contains(strbuf, \"info level\") ||\n\t\t!strings.Contains(strbuf, \"warn level\") ||\n\t\t!strings.Contains(strbuf, \"error level\") {\n\t\tt.Fatalf(\"expected info, warn, error output in log: %v\", strbuf)\n\t}\n}\n\nfunc TestLogWithField(t *testing.T) {\n\tlogger := CreateDefaultLogger()\n\tbuf := &bytes.Buffer{}\n\tlogger.SetOutput(buf)\n\n\tlogger.WithField(\"field\", \"test\").Info(\"hello\")\n\tvar strbuf = buf.String()\n\tif !strings.Contains(strbuf, \"field\") || !strings.Contains(strbuf, \"test\") {\n\t\tt.Fatalf(\"expected field and test in output: %v\", strbuf)\n\t}\n}\n\ntype testRequestIDCtxKey 
struct{}\n\nfunc TestLogKeysDefault(t *testing.T) {\n\tlogger := CreateDefaultLogger()\n\tbuf := &bytes.Buffer{}\n\tlogger.SetOutput(buf)\n\n\tctx := context.Background()\n\n\t// set the sessionID on the context to see if we have it in the logs\n\tsessionIDContextValue := \"sessionID\"\n\tctx = context.WithValue(ctx, SFSessionIDKey, sessionIDContextValue)\n\n\tuserContextValue := \"madison\"\n\tctx = context.WithValue(ctx, SFSessionUserKey, userContextValue)\n\n\t// base case (not using RegisterLogContextHook to add additional keys)\n\tlogger.WithContext(ctx).Info(\"test\")\n\tvar strbuf = buf.String()\n\tif !strings.Contains(strbuf, string(SFSessionIDKey)) || !strings.Contains(strbuf, sessionIDContextValue) {\n\t\tt.Fatalf(\"expected that SFSessionIDKey would be in logs if logger.WithContext was used, but got: %v\", strbuf)\n\t}\n\tif !strings.Contains(strbuf, string(SFSessionUserKey)) || !strings.Contains(strbuf, userContextValue) {\n\t\tt.Fatalf(\"expected that SFSessionUserKey would be in logs if logger.WithContext was used, but got: %v\", strbuf)\n\t}\n}\n\nfunc TestLogKeysWithRegisterLogContextHook(t *testing.T) {\n\tlogger := CreateDefaultLogger()\n\tbuf := &bytes.Buffer{}\n\tlogger.SetOutput(buf)\n\n\tctx := context.Background()\n\n\t// set the sessionID on the context to see if we have it in the logs\n\tsessionIDContextValue := \"sessionID\"\n\tctx = context.WithValue(ctx, SFSessionIDKey, sessionIDContextValue)\n\n\tuserContextValue := \"testUser\"\n\tctx = context.WithValue(ctx, SFSessionUserKey, userContextValue)\n\n\t// test that RegisterLogContextHook works with non-string keys\n\tlogKey := \"REQUEST_ID\"\n\tcontextIntVal := 123\n\tctx = context.WithValue(ctx, testRequestIDCtxKey{}, contextIntVal)\n\n\tgetRequestKeyFunc := func(ctx context.Context) string {\n\t\tif requestContext, ok := ctx.Value(testRequestIDCtxKey{}).(int); ok {\n\t\t\treturn fmt.Sprint(requestContext)\n\t\t}\n\t\treturn 
\"\"\n\t}\n\n\tRegisterLogContextHook(logKey, getRequestKeyFunc)\n\n\t// with the hook registered, the context fields and REQUEST_ID should all appear in the logs\n\tlogger.WithContext(ctx).Info(\"test\")\n\tvar strbuf = buf.String()\n\n\tif !strings.Contains(strbuf, string(SFSessionIDKey)) || !strings.Contains(strbuf, sessionIDContextValue) {\n\t\tt.Fatalf(\"expected that SFSessionIDKey would be in logs if logger.WithContext and RegisterLogContextHook were used, but got: %v\", strbuf)\n\t}\n\tif !strings.Contains(strbuf, string(SFSessionUserKey)) || !strings.Contains(strbuf, userContextValue) {\n\t\tt.Fatalf(\"expected that SFSessionUserKey would be in logs if logger.WithContext and RegisterLogContextHook were used, but got: %v\", strbuf)\n\t}\n\tif !strings.Contains(strbuf, logKey) || !strings.Contains(strbuf, fmt.Sprint(contextIntVal)) {\n\t\tt.Fatalf(\"expected that REQUEST_ID would be in logs if logger.WithContext and RegisterLogContextHook were used, but got: %v\", strbuf)\n\t}\n}\n\nfunc TestLogMaskSecrets(t *testing.T) {\n\tlogger := CreateDefaultLogger()\n\tbuf := &bytes.Buffer{}\n\tlogger.SetOutput(buf)\n\n\tctx := context.Background()\n\tquery := \"create user testuser password='testpassword'\"\n\tlogger.WithContext(ctx).Infof(\"Query: %#v\", query)\n\n\t// verify output\n\texpected := \"create user testuser password='****\"\n\tvar strbuf = buf.String()\n\tif !strings.Contains(strbuf, expected) {\n\t\tt.Fatalf(\"expected that the password would be masked, but got: %v\", strbuf)\n\t}\n}\n"
  },
  {
    "path": "minicore.go",
    "content": "package gosnowflake\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/compilation\"\n\tinternalos \"github.com/snowflakedb/gosnowflake/v2/internal/os\"\n)\n\nconst disableMinicoreEnv = \"SF_DISABLE_MINICORE\"\n\nvar miniCoreOnce sync.Once\nvar miniCoreMutex sync.RWMutex\n\nvar miniCoreInstance miniCore\n\nvar minicoreLoadLogs = struct {\n\tmu        sync.Mutex\n\tlogs      []string\n\tstartTime time.Time\n}{}\n\ntype minicoreDirCandidate struct {\n\tdirType    string\n\tpath       string\n\tpreUseFunc func() error\n}\n\nfunc newMinicoreDirCandidate(dirType, path string) minicoreDirCandidate {\n\treturn minicoreDirCandidate{\n\t\tdirType: dirType,\n\t\tpath:    path,\n\t}\n}\n\nfunc (m minicoreDirCandidate) String() string {\n\treturn m.dirType\n}\n\n// getMiniCoreFileName returns the filename of the loaded minicore library\nfunc getMiniCoreFileName() string {\n\tminiCoreMutex.RLock()\n\tdefer miniCoreMutex.RUnlock()\n\treturn corePlatformConfig.coreLibFileName\n}\n\n// miniCoreErrorType represents the category of minicore error that occurred.\ntype miniCoreErrorType int\n\n// Error type constants for categorizing minicore failures.\nconst (\n\tminiCoreErrorTypeLoad   miniCoreErrorType = iota // Library loading failed\n\tminiCoreErrorTypeSymbol                          // Symbol lookup failed\n\tminiCoreErrorTypeCall                            // Function call failed\n\tminiCoreErrorTypeInit                            // Initialization failed\n\tminiCoreErrorTypeWrite                           // File write failed\n)\n\n// String returns a human-readable string representation of the error type.\nfunc (et miniCoreErrorType) String() string {\n\tswitch et {\n\tcase miniCoreErrorTypeLoad:\n\t\treturn \"load\"\n\tcase miniCoreErrorTypeSymbol:\n\t\treturn \"symbol\"\n\tcase miniCoreErrorTypeCall:\n\t\treturn \"call\"\n\tcase 
miniCoreErrorTypeInit:\n\t\treturn \"init\"\n\tcase miniCoreErrorTypeWrite:\n\t\treturn \"write\"\n\tdefault:\n\t\treturn \"unknown\"\n\t}\n}\n\n// miniCoreError represents a structured error from minicore operations.\n// It provides detailed context about what went wrong, where, and why.\ntype miniCoreError struct {\n\terrorType miniCoreErrorType // errorType categorizes the kind of error\n\tplatform  string            // platform identifies the OS where error occurred\n\tpath      string            // path to the library file, if applicable\n\terr       error             // err wraps the underlying error cause\n}\n\n// Error returns a formatted error message with context about the failure.\nfunc (e *miniCoreError) Error() string {\n\tif e.path != \"\" {\n\t\treturn fmt.Sprintf(\"minicore %s on %s (path: %s): %v\", e.errorType, e.platform, e.path, e.err)\n\t}\n\treturn fmt.Sprintf(\"minicore %s on %s: %v\", e.errorType, e.platform, e.err)\n}\n\n// Unwrap returns the underlying error for error chain inspection.\nfunc (e *miniCoreError) Unwrap() error {\n\treturn e.err\n}\n\n// newMiniCoreError creates a new structured minicore error with full context.\nfunc newMiniCoreError(errType miniCoreErrorType, platform, path string, err error) *miniCoreError {\n\treturn &miniCoreError{\n\t\terrorType: errType,\n\t\tplatform:  platform,\n\t\tpath:      path,\n\t\terr:       err,\n\t}\n}\n\n// corePlatformConfigType holds platform-specific minicore configuration.\ntype corePlatformConfigType struct {\n\tinitialized     bool   // initialized indicates if the platform is supported\n\tcoreLib         []byte // coreLib contains the embedded native library\n\tcoreLibFileName string // coreLibFileName is the filename from the go:embed directive\n}\n\n// corePlatformConfig holds platform-specific configuration. 
If not initialized, minicore is unsupported.\nvar corePlatformConfig = corePlatformConfigType{}\n\ntype miniCore interface {\n\t// FullVersion returns the version string from the native library.\n\tFullVersion() (string, error)\n}\n\n// erroredMiniCore implements miniCore but always returns an error.\n// It's used when minicore initialization fails.\ntype erroredMiniCore struct {\n\terr error\n}\n\n// newErroredMiniCore creates a miniCore implementation that always returns the given error.\nfunc newErroredMiniCore(err error) *erroredMiniCore {\n\tminicoreDebugf(\"minicore error: %v\", err)\n\treturn &erroredMiniCore{err: err}\n}\n\n// FullVersion always returns an empty string and the stored error.\nfunc (emc erroredMiniCore) FullVersion() (string, error) {\n\treturn \"\", emc.err\n}\n\n// miniCoreLoaderType manages the loading and initialization of the minicore native library.\ntype miniCoreLoaderType struct {\n\tsearchDirs []minicoreDirCandidate // searchDirs contains directories to search for the library\n}\n\n// newMiniCoreLoader creates a new minicore miniCoreLoaderType with platform-appropriate search directories.\nfunc newMiniCoreLoader() *miniCoreLoaderType {\n\treturn &miniCoreLoaderType{\n\t\tsearchDirs: buildMiniCoreSearchDirs(),\n\t}\n}\n\n// buildMiniCoreSearchDirs constructs the list of directories to search for the minicore library.\nfunc buildMiniCoreSearchDirs() []minicoreDirCandidate {\n\tvar dirs []minicoreDirCandidate\n\n\t// Add temp directory\n\tif tempDir, err := os.MkdirTemp(\"\", \"gosnowflake-cgo\"); err == nil && tempDir != \"\" {\n\t\tminicoreDebugf(\"created temp directory for minicore loading\")\n\t\tswitch runtime.GOOS {\n\t\tcase \"linux\", \"darwin\":\n\t\t\tif err = os.Chmod(tempDir, 0700); err == nil {\n\t\t\t\tminicoreDebugf(\"configured permissions to temp as 0700\")\n\t\t\t\tdirs = append(dirs, newMinicoreDirCandidate(\"temp\", tempDir))\n\t\t\t} else {\n\t\t\t\tminicoreDebugf(\"cannot change minicore directory permissions to 
0700\")\n\t\t\t}\n\t\tdefault:\n\t\t\tdirs = append(dirs, newMinicoreDirCandidate(\"temp\", tempDir))\n\t\t}\n\t} else {\n\t\tminicoreDebugf(\"cannot create temp directory for gosnowflakecore: %v\", err)\n\t}\n\n\t// Add platform-specific cache directory\n\tif cacheDir := getMiniCoreCacheDirInHome(); cacheDir != \"\" {\n\t\tdirCandidate := newMinicoreDirCandidate(\"home\", cacheDir)\n\t\tdirCandidate.preUseFunc = func() error {\n\t\t\tminicoreDebugf(\"using cache directory: %v\", cacheDir)\n\t\t\tif err := os.MkdirAll(cacheDir, 0700); err != nil {\n\t\t\t\tminicoreDebugf(\"cannot create %v: %v\", cacheDir, err)\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tminicoreDebugf(\"created cache directory: %v\", cacheDir)\n\t\t\tif runtime.GOOS == \"linux\" || runtime.GOOS == \"darwin\" {\n\t\t\t\tif err := os.Chmod(cacheDir, 0700); err != nil {\n\t\t\t\t\tminicoreDebugf(\"cannot change minicore cache directory permissions to 0700. %v\", err)\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t\tminicoreDebugf(\"configured cache directory permissions to 0700\")\n\t\t\treturn nil\n\t\t}\n\t\tdirs = append(dirs, dirCandidate)\n\t}\n\n\t// Add current working directory\n\tif cwd, err := os.Getwd(); err == nil {\n\t\tdirs = append(dirs, newMinicoreDirCandidate(\"cwd\", cwd))\n\t} else {\n\t\tminicoreDebugf(\"cannot get current working directory: %v\", err)\n\t}\n\n\tminicoreDebugf(\"candidate directories for minicore loading: %v\", dirs)\n\treturn dirs\n}\n\n// getMiniCoreCacheDirInHome returns the platform-specific cache directory for storing the minicore library.\nfunc getMiniCoreCacheDirInHome() string {\n\thomeDir, err := os.UserHomeDir()\n\tif err != nil {\n\t\tminicoreDebugf(\"cannot get user home directory: %v\", err)\n\t\treturn \"\"\n\t}\n\n\tswitch runtime.GOOS {\n\tcase \"windows\":\n\t\treturn filepath.Join(homeDir, \"AppData\", \"Local\", \"Snowflake\", \"Caches\", \"minicore\")\n\tcase \"darwin\":\n\t\treturn filepath.Join(homeDir, \"Library\", 
\"Caches\", \"Snowflake\", \"minicore\")\n\tdefault:\n\t\treturn filepath.Join(homeDir, \".cache\", \"snowflake\", \"minicore\")\n\t}\n}\n\n// loadCore loads and initializes the minicore native library.\nfunc (l *miniCoreLoaderType) loadCore() miniCore {\n\tif !corePlatformConfig.initialized {\n\t\treturn newErroredMiniCore(newMiniCoreError(miniCoreErrorTypeInit, runtime.GOOS, \"\",\n\t\t\tfmt.Errorf(\"minicore is not supported on %v/%v platform\", runtime.GOOS, runtime.GOARCH)))\n\t}\n\n\tif linkingMode, err := compilation.CheckDynamicLinking(); err != nil || linkingMode == compilation.UnknownLinking {\n\t\tminicoreDebugf(\"cannot determine linking mode: %v, proceeding anyway\", err)\n\t} else if linkingMode == compilation.StaticLinking {\n\t\treturn newErroredMiniCore(newMiniCoreError(miniCoreErrorTypeLoad, runtime.GOOS, \"\",\n\t\t\tfmt.Errorf(\"binary is statically linked (no dynamic linker); dlopen is unavailable\")))\n\t}\n\n\tlibDir, libPath, err := l.writeLibraryToFile()\n\tif err != nil {\n\t\treturn newErroredMiniCore(err)\n\t}\n\tdefer func(libDir minicoreDirCandidate, libPath string) {\n\t\tif err = os.Remove(libPath); err != nil {\n\t\t\tminicoreDebugf(\"cannot remove library. %v\", err)\n\t\t}\n\t\tif libDir.dirType == \"temp\" {\n\t\t\tif err = os.Remove(libDir.path); err != nil {\n\t\t\t\tminicoreDebugf(\"cannot remove temp directory. 
%v\", err)\n\t\t\t}\n\t\t}\n\t}(libDir, libPath)\n\n\tminicoreDebugf(\"Loading minicore library from: %s\", libDir)\n\treturn osSpecificLoadFromPath(libPath)\n}\n\nvar osSpecificLoadFromPath = func(libPath string) miniCore {\n\treturn newErroredMiniCore(fmt.Errorf(\"minicore loader is not available on %v/%v\", runtime.GOOS, runtime.GOARCH))\n}\n\n// writeLibraryToFile writes the embedded library to the first available directory\nfunc (l *miniCoreLoaderType) writeLibraryToFile() (minicoreDirCandidate, string, error) {\n\tvar errs []error\n\n\tfor _, dir := range l.searchDirs {\n\t\tif dir.preUseFunc != nil {\n\t\t\tif err := dir.preUseFunc(); err != nil {\n\t\t\t\tminicoreDebugf(\"Failed to prepare directory %q: %v\", dir.path, err)\n\t\t\t\terrs = append(errs, fmt.Errorf(\"failed to prepare directory %q: %v\", dir.path, err))\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\t\tlibPath := filepath.Join(dir.path, corePlatformConfig.coreLibFileName)\n\t\tif err := os.WriteFile(libPath, corePlatformConfig.coreLib, 0600); err != nil {\n\t\t\tminicoreDebugf(\"Failed to write embedded library to %q: %v\", libPath, err)\n\t\t\terrs = append(errs, fmt.Errorf(\"failed to write to %q: %v\", libPath, err))\n\t\t\tcontinue\n\t\t}\n\t\tminicoreDebugf(\"Successfully wrote embedded library to %s\", dir)\n\t\treturn dir, libPath, nil\n\t}\n\n\treturn minicoreDirCandidate{}, \"\", newMiniCoreError(miniCoreErrorTypeWrite, runtime.GOOS, \"\",\n\t\tfmt.Errorf(\"failed to write embedded library to any directory (errors: %v)\", errs))\n}\n\n// getMiniCore returns the minicore instance, loading it asynchronously if needed.\nfunc getMiniCore() miniCore {\n\tminiCoreOnce.Do(func() {\n\t\tminicoreDebugf(\"minicore enabled at compile time: %v\", compilation.MinicoreEnabled)\n\t\tminicoreDebugf(\"cgo enabled: %v\", compilation.CgoEnabled)\n\t\tif !compilation.MinicoreEnabled {\n\t\t\tlogger.Debugf(\"minicore disabled at compile time (built with -tags minicore_disabled)\")\n\t\t\treturn\n\t\t}\n\t\tif 
strings.EqualFold(os.Getenv(disableMinicoreEnv), \"true\") {\n\t\t\tlogger.Debugf(\"minicore loading disabled\")\n\t\t\treturn\n\t\t}\n\t\tgo func() {\n\t\t\tminicoreLoadLogs.mu.Lock()\n\t\t\tminicoreLoadLogs.startTime = time.Now()\n\t\t\tminicoreLoadLogs.mu.Unlock()\n\n\t\t\tminicoreDebugf(\"Starting asynchronous minicore loading\")\n\t\t\tminiCoreLoader := newMiniCoreLoader()\n\t\t\tcore := miniCoreLoader.loadCore()\n\t\t\tminiCoreMutex.Lock()\n\t\t\tminiCoreInstance = core\n\t\t\tminiCoreMutex.Unlock()\n\t\t\tif v, err := core.FullVersion(); err != nil {\n\t\t\t\tminicoreDebugf(\"Minicore version not available: %v\", err)\n\t\t\t} else {\n\t\t\t\tminicoreDebugf(\"Minicore loading completed, version: %s\", v)\n\t\t\t}\n\t\t}()\n\t})\n\n\t// Return current instance (may be nil initially)\n\tminiCoreMutex.RLock()\n\tdefer miniCoreMutex.RUnlock()\n\treturn miniCoreInstance\n}\n\nfunc init() {\n\t// Start async minicore loading but don't block initialization.\n\t// This allows the application to start quickly while minicore loads in the background.\n\tgetMiniCore()\n}\n\nfunc minicoreDebugf(format string, args ...any) {\n\tminicoreLoadLogs.mu.Lock()\n\tdefer minicoreLoadLogs.mu.Unlock()\n\tvar finalArgs []any\n\tfinalArgs = append(finalArgs, time.Since(minicoreLoadLogs.startTime))\n\tfinalArgs = append(finalArgs, args...)\n\tfinalFormat := \"[%v] \" + format\n\tlogger.Debugf(finalFormat, finalArgs...)\n\tminicoreLoadLogs.logs = append(minicoreLoadLogs.logs, maskSecrets(fmt.Sprintf(finalFormat, finalArgs...)))\n}\n\n// libcType represents the type of C library in use\ntype libcType string\n\nconst (\n\tlibcTypeGlibc   libcType = \"glibc\"\n\tlibcTypeMusl    libcType = \"musl\"\n\tlibcTypeIgnored libcType = \"\"\n)\n\n// detectLibc detects whether glibc or musl is in use\nfunc detectLibc() libcType {\n\tif runtime.GOOS != \"linux\" {\n\t\treturn libcTypeIgnored\n\t}\n\n\tinfo := internalos.GetLibcInfo()\n\n\tswitch info.Family {\n\tcase 
\"glibc\":\n\t\tminicoreDebugf(\"detected glibc environment\")\n\t\tif info.Version != \"\" {\n\t\t\tminicoreDebugf(\"glibc version: %s\", info.Version)\n\t\t}\n\t\treturn libcTypeGlibc\n\tcase \"musl\":\n\t\tminicoreDebugf(\"detected musl environment\")\n\t\tif info.Version != \"\" {\n\t\t\tminicoreDebugf(\"musl version: %s\", info.Version)\n\t\t}\n\t\treturn libcTypeMusl\n\tdefault:\n\t\tminicoreDebugf(\"Could not detect libc type, assuming glibc\")\n\t\treturn libcTypeGlibc\n\t}\n}\n"
  },
  {
    "path": "minicore_disabled_test.go",
    "content": "//go:build minicore_disabled\n\npackage gosnowflake\n\nimport (\n\t\"database/sql\"\n\t\"testing\"\n\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/compilation\"\n)\n\nfunc TestMiniCoreDisabledAtCompileTime(t *testing.T) {\n\tassertFalseF(t, compilation.MinicoreEnabled, \"MinicoreEnabled should be false when built with -tags minicore_disabled\")\n}\n\nfunc TestMiniCoreDisabledE2E(t *testing.T) {\n\twiremock.registerMappings(t, newWiremockMapping(\"minicore/auth/disabled_flow.json\"), newWiremockMapping(\"select1.json\"))\n\tcfg := wiremock.connectionConfig()\n\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\tdb := sql.OpenDB(connector)\n\trunSmokeQuery(t, db)\n}\n"
  },
  {
    "path": "minicore_posix.go",
    "content": "//go:build !windows && !minicore_disabled\n\npackage gosnowflake\n\n/*\n#cgo LDFLAGS: -ldl\n#include <dlfcn.h>\n#include <stdlib.h>\n#include <string.h>\n\nstatic void* dlOpen(const char* path) {\n    return dlopen(path, RTLD_LAZY);\n}\n\nstatic void* dlSym(void* handle, const char* name) {\n    return dlsym(handle, name);\n}\n\nstatic int dlClose(void* handle) {\n\treturn dlclose(handle);\n}\n\nstatic char* dlError() {\n\treturn dlerror();\n}\n\ntypedef const char* (*coreFullVersion)();\n\nstatic const char* callCoreFullVersion(coreFullVersion f) {\n    return f();\n}\n*/\nimport \"C\"\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"unsafe\"\n)\n\ntype posixMiniCore struct {\n\t// fullVersion holds the version string returned from Rust, just to not invoke it multiple times.\n\tfullVersion string\n\t// coreInitError holds any error that occurred during initialization.\n\tcoreInitError error\n}\n\nfunc newPosixMiniCore(fullVersion string) *posixMiniCore {\n\treturn &posixMiniCore{\n\t\tfullVersion: fullVersion,\n\t}\n}\n\nfunc (pmc *posixMiniCore) FullVersion() (string, error) {\n\treturn pmc.fullVersion, pmc.coreInitError\n}\n\nvar _ = func() any {\n\tosSpecificLoadFromPath = loadFromPath\n\treturn nil\n}()\n\nfunc loadFromPath(libPath string) miniCore {\n\tcLibPath := C.CString(libPath)\n\tdefer C.free(unsafe.Pointer(cLibPath))\n\n\t// Loading library\n\tminicoreDebugf(\"Calling dlOpen\")\n\thandle := C.dlOpen(cLibPath)\n\tminicoreDebugf(\"Calling dlOpen finished\")\n\tif handle == nil {\n\t\terr := C.dlError()\n\t\tmcErr := newMiniCoreError(miniCoreErrorTypeLoad, \"posix\", libPath, fmt.Errorf(\"failed to load shared library: %v\", C.GoString(err)))\n\t\treturn newErroredMiniCore(mcErr)\n\t}\n\n\t// Unloading library at the end\n\tdefer func() {\n\t\tminicoreDebugf(\"Calling dlClose\")\n\t\tdefer minicoreDebugf(\"Calling dlClose finished\")\n\t\tif ret := C.dlClose(handle); ret != 0 {\n\t\t\terr := C.dlError()\n\t\t\tminicoreDebugf(\"Error when closing 
dynamic library: %v\", C.GoString(err))\n\t\t}\n\t}()\n\n\t// Loading symbol\n\tsymbolName := C.CString(\"sf_core_full_version\")\n\tdefer C.free(unsafe.Pointer(symbolName))\n\tminicoreDebugf(\"Loading sf_core_full_version symbol\")\n\tcoreFullVersionSymbol := C.dlSym(handle, symbolName)\n\tminicoreDebugf(\"Loading sf_core_full_version symbol finished\")\n\tif coreFullVersionSymbol == nil {\n\t\terr := C.dlError()\n\t\tmcErr := newMiniCoreError(miniCoreErrorTypeSymbol, \"posix\", libPath, fmt.Errorf(\"symbol 'sf_core_full_version' not found: %v\", C.GoString(err)))\n\t\treturn newErroredMiniCore(mcErr)\n\t}\n\n\t// Calling minicore\n\tvar coreFullVersionFunc C.coreFullVersion = (C.coreFullVersion)(coreFullVersionSymbol)\n\tminicoreDebugf(\"Calling sf_core_full_version\")\n\tfullVersion := C.GoString(C.callCoreFullVersion(coreFullVersionFunc))\n\tminicoreDebugf(\"Calling sf_core_full_version finished\")\n\tif fullVersion == \"\" {\n\t\treturn newErroredMiniCore(newMiniCoreError(miniCoreErrorTypeCall, \"posix\", libPath, errors.New(\"failed to get version from core library function\")))\n\t}\n\treturn newPosixMiniCore(fullVersion)\n}\n"
  },
  {
    "path": "minicore_provider_darwin_amd64.go",
    "content": "//go:build !minicore_disabled\n\npackage gosnowflake\n\nimport (\n\t// embed is used only to initialize go:embed directive\n\t_ \"embed\"\n)\n\n//go:embed libsf_mini_core_darwin_amd64.dylib\nvar coreLibDarwinAmd64 []byte\n\nvar _ = initMinicoreProvider()\n\nfunc initMinicoreProvider() any {\n\tcorePlatformConfig.coreLib = coreLibDarwinAmd64\n\tcorePlatformConfig.coreLibFileName = \"libsf_mini_core_darwin_amd64.dylib\"\n\tcorePlatformConfig.initialized = true\n\treturn nil\n}\n"
  },
  {
    "path": "minicore_provider_darwin_arm64.go",
    "content": "//go:build !minicore_disabled\n\npackage gosnowflake\n\nimport (\n\t// embed is used only to initialize go:embed directive\n\t_ \"embed\"\n)\n\n//go:embed libsf_mini_core_darwin_arm64.dylib\nvar coreLibDarwinArm64 []byte\n\nvar _ = initMinicoreProvider()\n\nfunc initMinicoreProvider() any {\n\tcorePlatformConfig.coreLib = coreLibDarwinArm64\n\tcorePlatformConfig.coreLibFileName = \"libsf_mini_core_darwin_arm64.dylib\"\n\tcorePlatformConfig.initialized = true\n\treturn nil\n}\n"
  },
  {
    "path": "minicore_provider_linux_amd64.go",
    "content": "//go:build !minicore_disabled\n\npackage gosnowflake\n\nimport (\n\t// embed is used only to initialize go:embed directive\n\t_ \"embed\"\n)\n\n//go:embed libsf_mini_core_linux_amd64_glibc.so\nvar coreLibLinuxAmd64Glibc []byte\n\n//go:embed libsf_mini_core_linux_amd64_musl.so\nvar coreLibLinuxAmd64Musl []byte\n\nvar _ = initMinicoreProvider()\n\nfunc initMinicoreProvider() any {\n\tswitch detectLibc() {\n\tcase libcTypeGlibc:\n\t\tcorePlatformConfig.coreLib = coreLibLinuxAmd64Glibc\n\t\tcorePlatformConfig.coreLibFileName = \"libsf_mini_core_linux_amd64_glibc.so\"\n\tcase libcTypeMusl:\n\t\tcorePlatformConfig.coreLib = coreLibLinuxAmd64Musl\n\t\tcorePlatformConfig.coreLibFileName = \"libsf_mini_core_linux_amd64_musl.so\"\n\tdefault:\n\t\tminicoreDebugf(\"unknown libc\")\n\t\treturn nil\n\t}\n\tcorePlatformConfig.initialized = true\n\treturn nil\n}\n"
  },
  {
    "path": "minicore_provider_linux_arm64.go",
    "content": "//go:build !minicore_disabled\n\npackage gosnowflake\n\nimport (\n\t// embed is used only to initialize go:embed directive\n\t_ \"embed\"\n)\n\n//go:embed libsf_mini_core_linux_arm64_glibc.so\nvar coreLibLinuxArm64Glibc []byte\n\n//go:embed libsf_mini_core_linux_arm64_musl.so\nvar coreLibLinuxArm64Musl []byte\n\nvar _ = initMinicoreProvider()\n\nfunc initMinicoreProvider() any {\n\tswitch detectLibc() {\n\tcase libcTypeGlibc:\n\t\tcorePlatformConfig.coreLib = coreLibLinuxArm64Glibc\n\t\tcorePlatformConfig.coreLibFileName = \"libsf_mini_core_linux_arm64_glibc.so\"\n\tcase libcTypeMusl:\n\t\tcorePlatformConfig.coreLib = coreLibLinuxArm64Musl\n\t\tcorePlatformConfig.coreLibFileName = \"libsf_mini_core_linux_arm64_musl.so\"\n\tdefault:\n\t\tminicoreDebugf(\"unknown libc\")\n\t\treturn nil\n\t}\n\tcorePlatformConfig.initialized = true\n\treturn nil\n}\n"
  },
  {
    "path": "minicore_provider_windows_amd64.go",
    "content": "//go:build !minicore_disabled\n\npackage gosnowflake\n\nimport (\n\t// embed is used only to initialize go:embed directive\n\t_ \"embed\"\n)\n\n//go:embed libsf_mini_core_windows_amd64.dll\nvar coreLibWindowsAmd64Glibc []byte\n\nvar _ = initMinicoreProvider()\n\nfunc initMinicoreProvider() any {\n\tcorePlatformConfig.coreLib = coreLibWindowsAmd64Glibc\n\tcorePlatformConfig.coreLibFileName = \"libsf_mini_core_windows_amd64.dll\"\n\tcorePlatformConfig.initialized = true\n\treturn nil\n}\n"
  },
  {
    "path": "minicore_provider_windows_arm64.go",
    "content": "//go:build !minicore_disabled\n\npackage gosnowflake\n\nimport (\n\t// embed is used only to initialize go:embed directive\n\t_ \"embed\"\n)\n\n//go:embed libsf_mini_core_windows_arm64.dll\nvar coreLibWindowsArm64Glibc []byte\n\nvar _ = initMinicoreProvider()\n\nfunc initMinicoreProvider() any {\n\tcorePlatformConfig.coreLib = coreLibWindowsArm64Glibc\n\tcorePlatformConfig.coreLibFileName = \"libsf_mini_core_windows_arm64.dll\"\n\tcorePlatformConfig.initialized = true\n\treturn nil\n}\n"
  },
  {
    "path": "minicore_test.go",
    "content": "//go:build !minicore_disabled\n\npackage gosnowflake\n\nimport (\n\t\"database/sql\"\n\t\"os\"\n\t\"runtime\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/compilation\"\n)\n\nfunc TestMiniCoreLoadSuccess(t *testing.T) {\n\tmcl := newMiniCoreLoader()\n\tcheckLoadCore(t, mcl)\n}\n\nfunc checkLoadCore(t *testing.T, mcl *miniCoreLoaderType) {\n\tcore := mcl.loadCore()\n\tassertNotNilF(t, core)\n\tfullVersion, err := core.FullVersion()\n\tassertNilF(t, err)\n\tassertEqualE(t, fullVersion, \"0.0.1\")\n}\n\nfunc TestMiniCoreLoaderChoosesCorrectCandidates(t *testing.T) {\n\tskipOnMissingHome(t)\n\tassertNilF(t, os.RemoveAll(getMiniCoreCacheDirInHome()))\n\tmcl := newMiniCoreLoader()\n\tcheckAllLoadDirsAvailable(t, mcl)\n}\n\nfunc TestMiniCoreLoaderChoosesCorrectCandidatesWhenHomeCacheDirAlreadyExists(t *testing.T) {\n\tskipOnMissingHome(t)\n\tmcl := newMiniCoreLoader()\n\tcheckAllLoadDirsAvailable(t, mcl)\n\tmcl = newMiniCoreLoader()\n\tcheckAllLoadDirsAvailable(t, mcl)\n}\n\nfunc checkAllLoadDirsAvailable(t *testing.T, mcl *miniCoreLoaderType) {\n\tassertEqualF(t, len(mcl.searchDirs), 3)\n\tassertEqualE(t, mcl.searchDirs[0].dirType, \"temp\")\n\tassertEqualE(t, mcl.searchDirs[1].dirType, \"home\")\n\tassertEqualE(t, mcl.searchDirs[2].dirType, \"cwd\")\n}\n\nfunc TestMiniCoreNoFolderCandidate(t *testing.T) {\n\tmcl := newMiniCoreLoader()\n\tmcl.searchDirs = []minicoreDirCandidate{}\n\tcore := mcl.loadCore()\n\tversion, err := core.FullVersion()\n\tassertNotNilF(t, err)\n\tassertStringContainsE(t, err.Error(), \"failed to write embedded library to any directory\")\n\tassertEqualE(t, version, \"\")\n}\n\nfunc TestMiniCoreNoWritableFolder(t *testing.T) {\n\tskipOnWindows(t, \"permission system is different\")\n\ttempDir := t.TempDir()\n\terr := os.Chmod(tempDir, 0000)\n\tassertNilF(t, err)\n\tdefer os.Chmod(tempDir, 0700)\n\tmcl := newMiniCoreLoader()\n\tmcl.searchDirs = 
[]minicoreDirCandidate{newMinicoreDirCandidate(\"test\", tempDir)}\n\tcore := mcl.loadCore()\n\tassertNotNilF(t, core)\n\t_, err = core.FullVersion()\n\tassertNotNilF(t, err)\n\tassertStringContainsE(t, err.Error(), \"failed to write embedded library to any directory\")\n}\n\nfunc TestMiniCoreNoWritableFirstFolder(t *testing.T) {\n\ttempDir := t.TempDir()\n\terr := os.Chmod(tempDir, 0000)\n\tassertNilF(t, err)\n\tdefer os.Chmod(tempDir, 0700)\n\ttempDir2 := t.TempDir()\n\tmcl := newMiniCoreLoader()\n\tmcl.searchDirs = []minicoreDirCandidate{newMinicoreDirCandidate(\"test\", tempDir), newMinicoreDirCandidate(\"test\", tempDir2)}\n\tcheckLoadCore(t, mcl)\n}\n\nfunc TestMiniCoreInvalidDynamicLibrary(t *testing.T) {\n\torigCoreLib := corePlatformConfig.coreLib\n\tdefer func() {\n\t\tcorePlatformConfig.coreLib = origCoreLib\n\t}()\n\tcorePlatformConfig.coreLib = []byte(\"invalid content\")\n\tmcl := newMiniCoreLoader()\n\tcore := mcl.loadCore()\n\tassertNotNilF(t, core)\n\t_, err := core.FullVersion()\n\tassertNotNilF(t, err)\n\tassertStringContainsE(t, err.Error(), \"failed to load shared library\")\n}\n\nfunc TestMiniCoreNotInitialized(t *testing.T) {\n\tdefer func() {\n\t\tcorePlatformConfig.initialized = true\n\t}()\n\tcorePlatformConfig.initialized = false\n\tmcl := newMiniCoreLoader()\n\tcore := mcl.loadCore()\n\tassertNotNilF(t, core)\n\t_, err := core.FullVersion()\n\tassertNotNilF(t, err)\n\tassertStringContainsE(t, err.Error(), \"minicore is not supported on\")\n}\n\nfunc TestMiniCoreLoadLogsVersion(t *testing.T) {\n\tminicoreLoadLogs.mu.Lock()\n\tminicoreLoadLogs.logs = nil\n\tminicoreLoadLogs.startTime = time.Now()\n\tminicoreLoadLogs.mu.Unlock()\n\n\tmcl := newMiniCoreLoader()\n\tcore := mcl.loadCore()\n\tassertNotNilF(t, core)\n\n\tv, err := core.FullVersion()\n\tassertNilF(t, err)\n\tminicoreDebugf(\"Minicore loading completed, version: %s\", v)\n\n\tminicoreLoadLogs.mu.Lock()\n\tjoined := strings.Join(minicoreLoadLogs.logs, 
\"\\n\")\n\tminicoreLoadLogs.mu.Unlock()\n\n\tassertStringContainsE(t, joined, \"Minicore loading completed, version: 0.0.1\")\n}\n\nfunc TestIsDynamicallyLinked(t *testing.T) {\n\tlinkingMode, err := compilation.CheckDynamicLinking()\n\tif runtime.GOOS == \"linux\" {\n\t\tassertNilF(t, err, \"should be able to read /proc/self/exe\")\n\t\tassertEqualE(t, linkingMode, compilation.DynamicLinking, \"go test binaries should be dynamically linked\")\n\t} else {\n\t\tassertEqualE(t, linkingMode, compilation.UnknownLinking, \"linking mode should be unknown on non-linux OS\")\n\t}\n}\n\nfunc TestMiniCoreLoadedE2E(t *testing.T) {\n\tlogger.SetLogLevel(\"debug\")\n\tmappingFile := \"minicore/auth/successful_flow.json\"\n\tif runtime.GOOS == \"linux\" {\n\t\tmappingFile = \"minicore/auth/successful_flow_linux.json\"\n\t}\n\twiremock.registerMappings(t, newWiremockMapping(mappingFile), newWiremockMapping(\"select1.json\"))\n\tcfg := wiremock.connectionConfig()\n\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\tdb := sql.OpenDB(connector)\n\trunSmokeQuery(t, db)\n}\n"
  },
  {
    "path": "minicore_windows.go",
    "content": "//go:build windows && !minicore_disabled\n\npackage gosnowflake\n\nimport (\n\t_ \"embed\"\n\t\"fmt\"\n\t\"golang.org/x/sys/windows\"\n\t\"syscall\"\n\t\"unsafe\"\n)\n\ntype windowsMiniCore struct {\n\t// fullVersion holds the version string returned from the library\n\tfullVersion string\n\t// coreInitError holds any error that occurred during initialization\n\tcoreInitError error\n}\n\nvar _ = func() any {\n\tosSpecificLoadFromPath = loadFromPath\n\treturn nil\n}()\n\nfunc (wmc *windowsMiniCore) FullVersion() (string, error) {\n\treturn wmc.fullVersion, wmc.coreInitError\n}\n\nfunc loadFromPath(libPath string) miniCore {\n\tminicoreDebugf(\"Calling LoadLibrary\")\n\tdllHandle, err := windows.LoadLibrary(libPath)\n\tminicoreDebugf(\"Calling LoadLibrary finished\")\n\tif err != nil {\n\t\tmcErr := newMiniCoreError(miniCoreErrorTypeLoad, \"windows\", libPath, fmt.Errorf(\"failed to load shared library: %v\", err))\n\t\treturn newErroredMiniCore(mcErr)\n\t}\n\n\t// Release the DLL handle when done; the fullVersion result is cached, so the library is not needed afterwards.\n\tdefer windows.FreeLibrary(dllHandle)\n\n\t// Get the address of the function\n\tminicoreDebugf(\"getting procedure address\")\n\tprocAddr, err := windows.GetProcAddress(dllHandle, \"sf_core_full_version\")\n\tif err != nil {\n\t\tmcErr := newMiniCoreError(miniCoreErrorTypeSymbol, \"windows\", libPath, fmt.Errorf(\"procedure sf_core_full_version not found: %v\", err))\n\t\treturn newErroredMiniCore(mcErr)\n\t}\n\n\tminicoreDebugf(\"Invoking system call\")\n\t// The second return value is omitted; it is only used by syscalls that return more values.\n\tret, _, callErr := syscall.Syscall(\n\t\tprocAddr,\n\t\t0, // nargs: number of arguments is zero\n\t\t0, // a1: argument 1 (unused)\n\t\t0, // a2: argument 2 (unused)\n\t\t0, // a3: argument 3 (unused)\n\t)\n\tminicoreDebugf(\"Invoking system call finished\")\n\n\tif callErr != 0 {\n\t\tmcErr := newMiniCoreError(miniCoreErrorTypeCall, \"windows\", libPath, fmt.Errorf(\"system call 
failed with error code: %v\", callErr))\n\t\treturn newErroredMiniCore(mcErr)\n\t}\n\n\tcStrPtr := (*byte)(unsafe.Pointer(ret))\n\tif cStrPtr == nil {\n\t\tmcErr := newMiniCoreError(miniCoreErrorTypeCall, \"windows\", libPath, fmt.Errorf(\"native function returned null pointer (error code: %v)\", callErr))\n\t\treturn newErroredMiniCore(mcErr)\n\t}\n\n\tgoStr := windows.BytePtrToString(cStrPtr)\n\treturn &windowsMiniCore{\n\t\tfullVersion: goStr,\n\t}\n}\n"
  },
  {
    "path": "monitoring.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql/driver\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"net/url\"\n\t\"strconv\"\n\t\"time\"\n)\n\nconst urlQueriesResultFmt = \"/queries/%s/result\"\n\n// queryResultStatus is status returned from server\ntype queryResultStatus int\n\n// Query Status defined at server side\nconst (\n\t// Deprecated: will be unexported in the future releases.\n\tSFQueryRunning queryResultStatus = iota\n\t// Deprecated: will be unexported in the future releases.\n\tSFQueryAborting\n\t// Deprecated: will be unexported in the future releases.\n\tSFQuerySuccess\n\t// Deprecated: will be unexported in the future releases.\n\tSFQueryFailedWithError\n\t// Deprecated: will be unexported in the future releases.\n\tSFQueryAborted\n\t// Deprecated: will be unexported in the future releases.\n\tSFQueryQueued\n\t// Deprecated: will be unexported in the future releases.\n\tSFQueryFailedWithIncident\n\t// Deprecated: will be unexported in the future releases.\n\tSFQueryDisconnected\n\t// Deprecated: will be unexported in the future releases.\n\tSFQueryResumingWarehouse\n\t// SFQueryQueueRepairingWarehouse present in QueryDTO.java.\n\t// Deprecated: will be unexported in the future releases.\n\tSFQueryQueueRepairingWarehouse\n\t// Deprecated: will be unexported in the future releases.\n\tSFQueryRestarted\n\t// SFQueryBlocked is when a statement is waiting on a lock on resource held\n\t// by another statement.\n\t// Deprecated: will be unexported in the future releases.\n\tSFQueryBlocked\n\t// Deprecated: will be unexported in the future releases.\n\tSFQueryNoData\n)\n\nfunc (qs queryResultStatus) String() string {\n\treturn [...]string{\"RUNNING\", \"ABORTING\", \"SUCCESS\", \"FAILED_WITH_ERROR\",\n\t\t\"ABORTED\", \"QUEUED\", \"FAILED_WITH_INCIDENT\", \"DISCONNECTED\",\n\t\t\"RESUMING_WAREHOUSE\", \"QUEUED_REPAIRING_WAREHOUSE\", \"RESTARTED\",\n\t\t\"BLOCKED\", 
\"NO_DATA\"}[qs]\n}\n\nfunc (qs queryResultStatus) isRunning() bool {\n\tswitch qs {\n\tcase SFQueryRunning, SFQueryResumingWarehouse, SFQueryQueued,\n\t\tSFQueryQueueRepairingWarehouse, SFQueryNoData:\n\t\treturn true\n\tdefault:\n\t\treturn false\n\t}\n}\n\nfunc (qs queryResultStatus) isError() bool {\n\tswitch qs {\n\tcase SFQueryAborting, SFQueryFailedWithError, SFQueryAborted,\n\t\tSFQueryFailedWithIncident, SFQueryDisconnected, SFQueryBlocked:\n\t\treturn true\n\tdefault:\n\t\treturn false\n\t}\n}\n\nvar strQueryStatusMap = map[string]queryResultStatus{\"RUNNING\": SFQueryRunning,\n\t\"ABORTING\": SFQueryAborting, \"SUCCESS\": SFQuerySuccess,\n\t\"FAILED_WITH_ERROR\": SFQueryFailedWithError, \"ABORTED\": SFQueryAborted,\n\t\"QUEUED\": SFQueryQueued, \"FAILED_WITH_INCIDENT\": SFQueryFailedWithIncident,\n\t\"DISCONNECTED\":               SFQueryDisconnected,\n\t\"RESUMING_WAREHOUSE\":         SFQueryResumingWarehouse,\n\t\"QUEUED_REPAIRING_WAREHOUSE\": SFQueryQueueRepairingWarehouse,\n\t\"RESTARTED\":                  SFQueryRestarted,\n\t\"BLOCKED\":                    SFQueryBlocked, \"NO_DATA\": SFQueryNoData}\n\ntype retStatus struct {\n\tStatus       string   `json:\"status\"`\n\tSQLText      string   `json:\"sqlText\"`\n\tStartTime    int64    `json:\"startTime\"`\n\tEndTime      int64    `json:\"endTime\"`\n\tErrorCode    string   `json:\"errorCode\"`\n\tErrorMessage string   `json:\"errorMessage\"`\n\tStats        retStats `json:\"stats\"`\n}\n\ntype retStats struct {\n\tScanBytes    int64 `json:\"scanBytes\"`\n\tProducedRows int64 `json:\"producedRows\"`\n}\n\ntype statusResponse struct {\n\tData struct {\n\t\tQueries []retStatus `json:\"queries\"`\n\t} `json:\"data\"`\n\tMessage string `json:\"message\"`\n\tCode    string `json:\"code\"`\n\tSuccess bool   `json:\"success\"`\n}\n\nfunc strToQueryStatus(in string) queryResultStatus {\n\treturn strQueryStatusMap[in]\n}\n\n// SnowflakeQueryStatus is the query status metadata of a snowflake query\ntype 
SnowflakeQueryStatus struct {\n\tSQLText      string\n\tStartTime    int64\n\tEndTime      int64\n\tErrorCode    string\n\tErrorMessage string\n\tScanBytes    int64\n\tProducedRows int64\n}\n\n// SnowflakeConnection is a wrapper to snowflakeConn that exposes API functions\ntype SnowflakeConnection interface {\n\tGetQueryStatus(ctx context.Context, queryID string) (*SnowflakeQueryStatus, error)\n\tAddTelemetryData(ctx context.Context, eventDate time.Time, data map[string]string) error\n}\n\n// checkQueryStatus returns the status given the query ID. If successful,\n// the error will be nil, indicating there is a complete query result to fetch.\n// Other than nil, there are three error types that can be returned:\n// 1. ErrQueryStatus, if GS cannot return any kind of status due to any reason,\n// i.e. connection, permission, if a query was just submitted, etc.\n// 2. ErrQueryReportedError, if the requested query was terminated or aborted\n// and GS returned an error status for it, e.g. SFQueryFailedWithError.\n// 3. ErrQueryIsRunning, if the requested query is still running and might have\n// a complete result later; the running statuses include SFQueryRunning.\nfunc (sc *snowflakeConn) checkQueryStatus(\n\tctx context.Context,\n\tqid string) (\n\t*retStatus, error) {\n\theaders := make(map[string]string)\n\tparam := make(url.Values)\n\tparam.Set(requestGUIDKey, NewUUID().String())\n\tif tok, _, _ := sc.rest.TokenAccessor.GetTokens(); tok != \"\" {\n\t\theaders[headerAuthorizationKey] = fmt.Sprintf(headerSnowflakeToken, tok)\n\t}\n\tresultPath := fmt.Sprintf(\"%s/%s\", monitoringQueriesPath, qid)\n\turl := sc.rest.getFullURL(resultPath, &param)\n\n\tres, err := sc.rest.FuncGet(ctx, sc.rest, url, headers, sc.rest.RequestTimeout)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"failed to get response. 
err: %v\", err)\n\t\treturn nil, err\n\t}\n\tdefer func() {\n\t\tif err = res.Body.Close(); err != nil {\n\t\t\tlogger.WithContext(ctx).Warnf(\"failed to close response body. err: %v\", err)\n\t\t}\n\t}()\n\tvar statusResp = statusResponse{}\n\tif err = json.NewDecoder(res.Body).Decode(&statusResp); err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"failed to decode JSON. err: %v\", err)\n\t\treturn nil, err\n\t}\n\n\tif !statusResp.Success || len(statusResp.Data.Queries) == 0 {\n\t\tlogger.WithContext(ctx).Errorf(\"status query returned not-success or no status returned.\")\n\t\treturn nil, exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber:  ErrQueryStatus,\n\t\t\tMessage: \"status query returned not-success or no status returned. Please retry\",\n\t\t}, sc)\n\t}\n\n\tqueryRet := statusResp.Data.Queries[0]\n\tif queryRet.ErrorCode != \"\" {\n\t\treturn &queryRet, exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber:         ErrQueryStatus,\n\t\t\tMessage:        errors.ErrMsgQueryStatus,\n\t\t\tMessageArgs:    []any{queryRet.ErrorCode, queryRet.ErrorMessage},\n\t\t\tIncludeQueryID: true,\n\t\t\tQueryID:        qid,\n\t\t}, sc)\n\t}\n\n\t// returned errorCode is 0. 
Now check what is the returned status of the query.\n\tqStatus := strToQueryStatus(queryRet.Status)\n\tif qStatus.isError() {\n\t\treturn &queryRet, exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber: ErrQueryReportedError,\n\t\t\tMessage: fmt.Sprintf(\"%s: status from server: [%s]\",\n\t\t\t\tqueryRet.ErrorMessage, queryRet.Status),\n\t\t\tIncludeQueryID: true,\n\t\t\tQueryID:        qid,\n\t\t}, sc)\n\t}\n\n\tif qStatus.isRunning() {\n\t\treturn &queryRet, exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber: ErrQueryIsRunning,\n\t\t\tMessage: fmt.Sprintf(\"%s: status from server: [%s]\",\n\t\t\t\tqueryRet.ErrorMessage, queryRet.Status),\n\t\t\tIncludeQueryID: true,\n\t\t\tQueryID:        qid,\n\t\t}, sc)\n\t}\n\t//success\n\treturn &queryRet, nil\n}\n\nfunc (sc *snowflakeConn) getQueryResultResp(\n\tctx context.Context,\n\tresultPath string) (\n\t*execResponse, error) {\n\theaders := getHeaders()\n\tif sn, ok := sc.syncParams.get(serviceName); ok {\n\t\theaders[httpHeaderServiceName] = *sn\n\t}\n\tparam := make(url.Values)\n\tparam.Set(requestIDKey, getOrGenerateRequestIDFromContext(ctx).String())\n\tparam.Set(\"clientStartTime\", strconv.FormatInt(sc.currentTimeProvider.currentTime(), 10))\n\tparam.Set(requestGUIDKey, NewUUID().String())\n\ttoken, _, _ := sc.rest.TokenAccessor.GetTokens()\n\tif token != \"\" {\n\t\theaders[headerAuthorizationKey] = fmt.Sprintf(headerSnowflakeToken, token)\n\t}\n\turl := sc.rest.getFullURL(resultPath, &param)\n\n\trespd, err := getQueryResultWithRetriesForAsyncMode(ctx, sc.rest, url, headers, sc.rest.RequestTimeout)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"error: %v\", err)\n\t\treturn nil, err\n\t}\n\treturn respd, nil\n}\n\n// Fetch query result for a query id from /queries/<qid>/result endpoint.\nfunc (sc *snowflakeConn) rowsForRunningQuery(\n\tctx context.Context, qid string,\n\trows *snowflakeRows) error {\n\tresultPath := fmt.Sprintf(urlQueriesResultFmt, qid)\n\tresp, err := sc.getQueryResultResp(ctx, 
resultPath)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"error: %v\", err)\n\t\treturn err\n\t}\n\n\tif !resp.Success {\n\t\tcode, err := strconv.Atoi(resp.Code)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber:   code,\n\t\t\tSQLState: resp.Data.SQLState,\n\t\t\tMessage:  resp.Message,\n\t\t\tQueryID:  resp.Data.QueryID,\n\t\t}, sc)\n\t}\n\trows.addDownloader(populateChunkDownloader(ctx, sc, resp.Data))\n\treturn nil\n}\n\n// prepare a Rows object to return for query of 'qid'\nfunc (sc *snowflakeConn) buildRowsForRunningQuery(\n\tctx context.Context,\n\tqid string) (\n\tdriver.Rows, error) {\n\trows := new(snowflakeRows)\n\trows.sc = sc\n\trows.queryID = qid\n\trows.ctx = ctx\n\tif err := sc.rowsForRunningQuery(ctx, qid, rows); err != nil {\n\t\treturn nil, err\n\t}\n\terr := rows.ChunkDownloader.start()\n\treturn rows, err\n}\n"
  },
  {
    "path": "multistatement.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql/driver\"\n\t\"fmt\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"strconv\"\n\t\"strings\"\n)\n\ntype childResult struct {\n\tid  string\n\ttyp string\n}\n\nfunc getChildResults(IDs string, types string) []childResult {\n\tif IDs == \"\" {\n\t\treturn nil\n\t}\n\tqueryIDs := strings.Split(IDs, \",\")\n\tresultTypes := strings.Split(types, \",\")\n\tres := make([]childResult, len(queryIDs))\n\tfor i, id := range queryIDs {\n\t\tres[i] = childResult{id, resultTypes[i]}\n\t}\n\treturn res\n}\n\nfunc (sc *snowflakeConn) handleMultiExec(\n\tctx context.Context,\n\tdata execResponseData) (\n\tdriver.Result, error) {\n\tif data.ResultIDs == \"\" {\n\t\treturn nil, exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber:   ErrNoResultIDs,\n\t\t\tSQLState: data.SQLState,\n\t\t\tMessage:  errors.ErrMsgNoResultIDs,\n\t\t\tQueryID:  data.QueryID,\n\t\t}, sc)\n\t}\n\tvar updatedRows int64\n\tchildResults := getChildResults(data.ResultIDs, data.ResultTypes)\n\tfor _, child := range childResults {\n\t\tresultPath := fmt.Sprintf(urlQueriesResultFmt, child.id)\n\t\tchildResultType, err := strconv.ParseInt(child.typ, 10, 64)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif isDml(childResultType) {\n\t\t\tchildData, err := sc.getQueryResultResp(ctx, resultPath)\n\t\t\tif err != nil {\n\t\t\t\tlogger.WithContext(ctx).Errorf(\"error: %v\", err)\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tif childData != nil && !childData.Success {\n\t\t\t\tcode, err := strconv.Atoi(childData.Code)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\treturn nil, exceptionTelemetry(&SnowflakeError{\n\t\t\t\t\tNumber:   code,\n\t\t\t\t\tSQLState: childData.Data.SQLState,\n\t\t\t\t\tMessage:  childData.Message,\n\t\t\t\t\tQueryID:  childData.Data.QueryID,\n\t\t\t\t}, sc)\n\t\t\t}\n\t\t\tcount, err := updateRows(childData.Data)\n\t\t\tif err != nil 
{\n\t\t\t\tlogger.WithContext(ctx).Errorf(\"error: %v\", err)\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tupdatedRows += count\n\t\t}\n\t}\n\tlogger.WithContext(ctx).Infof(\"number of updated rows: %#v\", updatedRows)\n\treturn &snowflakeResult{\n\t\taffectedRows: updatedRows,\n\t\tinsertID:     -1,\n\t\tqueryID:      data.QueryID,\n\t}, nil\n}\n\n// Fill the corresponding rows and add chunk downloader into the rows when\n// iterating across the childResults\nfunc (sc *snowflakeConn) handleMultiQuery(\n\tctx context.Context,\n\tdata execResponseData,\n\trows *snowflakeRows) error {\n\tif data.ResultIDs == \"\" {\n\t\treturn exceptionTelemetry(&SnowflakeError{\n\t\t\tNumber:   ErrNoResultIDs,\n\t\t\tSQLState: data.SQLState,\n\t\t\tMessage:  errors.ErrMsgNoResultIDs,\n\t\t\tQueryID:  data.QueryID,\n\t\t}, sc)\n\t}\n\tchildResults := getChildResults(data.ResultIDs, data.ResultTypes)\n\tfor _, child := range childResults {\n\t\tif err := sc.rowsForRunningQuery(ctx, child.id, rows); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "multistatement_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"reflect\"\n\t\"testing\"\n\n\tia \"github.com/snowflakedb/gosnowflake/v2/internal/arrow\"\n\t\"time\"\n)\n\nfunc TestMultiStatementExecuteNoResultSet(t *testing.T) {\n\tctx := WithMultiStatement(context.Background(), 4)\n\tmultiStmtQuery := \"begin;\\n\" +\n\t\t\"delete from test_multi_statement_txn;\\n\" +\n\t\t\"insert into test_multi_statement_txn values (1, 'a'), (2, 'b');\\n\" +\n\t\t\"commit;\"\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(`create or replace table test_multi_statement_txn(c1 number, c2 string) as select 10, 'z'`)\n\n\t\tres := dbt.mustExecContext(ctx, multiStmtQuery)\n\t\tcount, err := res.RowsAffected()\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"res.RowsAffected() returned error: %v\", err)\n\t\t}\n\t\tif count != 3 {\n\t\t\tt.Fatalf(\"expected 3 affected rows, got %d\", count)\n\t\t}\n\t})\n}\n\nfunc TestMultiStatementQueryResultSet(t *testing.T) {\n\tctx := WithMultiStatement(context.Background(), 4)\n\tmultiStmtQuery := \"select 123;\\n\" +\n\t\t\"select 456;\\n\" +\n\t\t\"select 789;\\n\" +\n\t\t\"select '000';\"\n\n\tvar v1, v2, v3 int64\n\tvar v4 string\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trows := dbt.mustQueryContext(ctx, multiStmtQuery)\n\t\tdefer rows.Close()\n\n\t\t// first statement\n\t\tif rows.Next() {\n\t\t\tif err := rows.Scan(&v1); err != nil {\n\t\t\t\tt.Errorf(\"failed to scan: %#v\", err)\n\t\t\t}\n\t\t\tif v1 != 123 {\n\t\t\t\tt.Fatalf(\"failed to fetch. value: %v\", v1)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Error(\"failed to query\")\n\t\t}\n\n\t\t// second statement\n\t\tif !rows.NextResultSet() {\n\t\t\tt.Error(\"failed to retrieve next result set\")\n\t\t}\n\t\tif rows.Next() {\n\t\t\tif err := rows.Scan(&v2); err != nil {\n\t\t\t\tt.Errorf(\"failed to scan: %#v\", err)\n\t\t\t}\n\t\t\tif v2 != 456 {\n\t\t\t\tt.Fatalf(\"failed to fetch. 
value: %v\", v2)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Error(\"failed to query\")\n\t\t}\n\n\t\t// third statement\n\t\tif !rows.NextResultSet() {\n\t\t\tt.Error(\"failed to retrieve next result set\")\n\t\t}\n\t\tif rows.Next() {\n\t\t\tif err := rows.Scan(&v3); err != nil {\n\t\t\t\tt.Errorf(\"failed to scan: %#v\", err)\n\t\t\t}\n\t\t\tif v3 != 789 {\n\t\t\t\tt.Fatalf(\"failed to fetch. value: %v\", v3)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Error(\"failed to query\")\n\t\t}\n\n\t\t// fourth statement\n\t\tif !rows.NextResultSet() {\n\t\t\tt.Error(\"failed to retrieve next result set\")\n\t\t}\n\t\tif rows.Next() {\n\t\t\tif err := rows.Scan(&v4); err != nil {\n\t\t\t\tt.Errorf(\"failed to scan: %#v\", err)\n\t\t\t}\n\t\t\tif v4 != \"000\" {\n\t\t\t\tt.Fatalf(\"failed to fetch. value: %v\", v4)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Error(\"failed to query\")\n\t\t}\n\t})\n}\n\n// TestMultistatementQueryLargeResultSet validates multi-statement queries with\n// chunked results. The 1,000,000 row count per statement is required to trigger\n// Snowflake's chunked result delivery. A bug in HasNextResultSet/NextResultSet\n// (SNOW-1646792) only manifested with large, multi-chunk result sets. 
Do not\n// reduce the row count — smaller values may fit in a single chunk and miss the\n// bug class this test guards against.\nfunc TestMultistatementQueryLargeResultSet(t *testing.T) {\n\tctx := WithMultiStatement(context.Background(), 2)\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT 'abc' FROM TABLE(GENERATOR(ROWCOUNT => 1000000)); SELECT 'abc' FROM TABLE(GENERATOR(ROWCOUNT => 1000000))\")\n\t\ttotalRows := 0\n\t\tfor hasNextResultSet := true; hasNextResultSet; hasNextResultSet = rows.NextResultSet() {\n\t\t\tfor rows.Next() {\n\t\t\t\tvar s string\n\t\t\t\trows.mustScan(&s)\n\t\t\t\tassertEqualE(t, s, \"abc\")\n\t\t\t\ttotalRows++\n\t\t\t}\n\t\t}\n\t\tassertEqualE(t, totalRows, 2000000)\n\t})\n}\n\nfunc TestMultiStatementExecuteResultSet(t *testing.T) {\n\tctx := WithMultiStatement(context.Background(), 6)\n\tmultiStmtQuery := \"begin;\\n\" +\n\t\t\"delete from test_multi_statement_txn_rb;\\n\" +\n\t\t\"insert into test_multi_statement_txn_rb values (1, 'a'), (2, 'b');\\n\" +\n\t\t\"select 1;\\n\" +\n\t\t\"select 2;\\n\" +\n\t\t\"rollback;\"\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"drop table if exists test_multi_statement_txn_rb\")\n\t\tdbt.mustExec(`create or replace table test_multi_statement_txn_rb(\n\t\t\tc1 number, c2 string) as select 10, 'z'`)\n\t\tdefer dbt.mustExec(\"drop table if exists test_multi_statement_txn_rb\")\n\n\t\tres := dbt.mustExecContext(ctx, multiStmtQuery)\n\t\tcount, err := res.RowsAffected()\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"res.RowsAffected() returned error: %v\", err)\n\t\t}\n\t\tif count != 3 {\n\t\t\tt.Fatalf(\"expected 3 affected rows, got %d\", count)\n\t\t}\n\t})\n}\n\nfunc TestMultiStatementQueryNoResultSet(t *testing.T) {\n\tctx := WithMultiStatement(context.Background(), 4)\n\tmultiStmtQuery := \"begin;\\n\" +\n\t\t\"delete from test_multi_statement_txn;\\n\" +\n\t\t\"insert into test_multi_statement_txn values (1, 'a'), (2, 'b');\\n\" 
+\n\t\t\"commit;\"\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"drop table if exists test_multi_statement_txn\")\n\t\tdbt.mustExec(`create or replace table test_multi_statement_txn(\n\t\t\tc1 number, c2 string) as select 10, 'z'`)\n\t\tdefer dbt.mustExec(\"drop table if exists test_multi_statement_txn\")\n\n\t\trows := dbt.mustQueryContext(ctx, multiStmtQuery)\n\t\tdefer rows.Close()\n\t})\n}\n\nfunc TestMultiStatementExecuteMix(t *testing.T) {\n\tctx := WithMultiStatement(context.Background(), 3)\n\tmultiStmtQuery := \"create or replace temporary table test_multi (cola int);\\n\" +\n\t\t\"insert into test_multi values (1), (2);\\n\" +\n\t\t\"select cola from test_multi order by cola asc;\"\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"drop table if exists test_multi_statement_txn\")\n\t\tdbt.mustExec(`create or replace table test_multi_statement_txn(\n\t\t\tc1 number, c2 string) as select 10, 'z'`)\n\t\tdefer dbt.mustExec(\"drop table if exists test_multi_statement_txn\")\n\n\t\tres := dbt.mustExecContext(ctx, multiStmtQuery)\n\t\tcount, err := res.RowsAffected()\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"res.RowsAffected() returned error: %v\", err)\n\t\t}\n\t\tif count != 2 {\n\t\t\tt.Fatalf(\"expected 2 affected rows, got %d\", count)\n\t\t}\n\t})\n}\n\nfunc TestMultiStatementQueryMix(t *testing.T) {\n\tctx := WithMultiStatement(context.Background(), 3)\n\tmultiStmtQuery := \"create or replace temporary table test_multi (cola int);\\n\" +\n\t\t\"insert into test_multi values (1), (2);\\n\" +\n\t\t\"select cola from test_multi order by cola asc;\"\n\n\tvar count, v int\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"drop table if exists test_multi_statement_txn\")\n\t\tdbt.mustExec(`create or replace table test_multi_statement_txn(\n\t\t\tc1 number, c2 string) as select 10, 'z'`)\n\t\tdefer dbt.mustExec(\"drop table if exists test_multi_statement_txn\")\n\n\t\trows := dbt.mustQueryContext(ctx, multiStmtQuery)\n\t\tdefer 
rows.Close()\n\n\t\t// first statement\n\t\tif !rows.Next() {\n\t\t\tt.Error(\"failed to query\")\n\t\t}\n\n\t\t// second statement\n\t\trows.NextResultSet()\n\t\tif rows.Next() {\n\t\t\tif err := rows.Scan(&count); err != nil {\n\t\t\t\tt.Errorf(\"failed to scan: %#v\", err)\n\t\t\t}\n\t\t\tif count != 2 {\n\t\t\t\tt.Fatalf(\"expected 2 affected rows, got %d\", count)\n\t\t\t}\n\t\t}\n\n\t\texpected := 1\n\t\t// third statement\n\t\trows.NextResultSet()\n\t\tfor rows.Next() {\n\t\t\tif err := rows.Scan(&v); err != nil {\n\t\t\t\tt.Errorf(\"failed to scan: %#v\", err)\n\t\t\t}\n\t\t\tif v != expected {\n\t\t\t\tt.Fatalf(\"failed to fetch. value: %v\", v)\n\t\t\t}\n\t\t\texpected++\n\t\t}\n\t})\n}\n\nfunc TestMultiStatementCountZero(t *testing.T) {\n\tctx := WithMultiStatement(context.Background(), 0)\n\tvar v1 int\n\tvar v2 string\n\tvar v3 float64\n\tvar v4 bool\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\t// first query\n\t\tmultiStmtQuery1 := \"select 123;\\n\" +\n\t\t\t\"select '456';\"\n\t\trows1 := dbt.mustQueryContext(ctx, multiStmtQuery1)\n\t\tdefer rows1.Close()\n\t\t// first statement\n\t\tif rows1.Next() {\n\t\t\tif err := rows1.Scan(&v1); err != nil {\n\t\t\t\tt.Errorf(\"failed to scan: %#v\", err)\n\t\t\t}\n\t\t\tif v1 != 123 {\n\t\t\t\tt.Fatalf(\"failed to fetch. value: %v\", v1)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Error(\"failed to query\")\n\t\t}\n\n\t\t// second statement\n\t\tif !rows1.NextResultSet() {\n\t\t\tt.Error(\"failed to retrieve next result set\")\n\t\t}\n\t\tif rows1.Next() {\n\t\t\tif err := rows1.Scan(&v2); err != nil {\n\t\t\t\tt.Errorf(\"failed to scan: %#v\", err)\n\t\t\t}\n\t\t\tif v2 != \"456\" {\n\t\t\t\tt.Fatalf(\"failed to fetch. 
value: %v\", v2)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Error(\"failed to query\")\n\t\t}\n\n\t\t// second query\n\t\tmultiStmtQuery2 := \"select 789;\\n\" +\n\t\t\t\"select 'foo';\\n\" +\n\t\t\t\"select 0.123;\\n\" +\n\t\t\t\"select true;\"\n\t\trows2 := dbt.mustQueryContext(ctx, multiStmtQuery2)\n\t\tdefer rows2.Close()\n\t\t// first statement\n\t\tif rows2.Next() {\n\t\t\tif err := rows2.Scan(&v1); err != nil {\n\t\t\t\tt.Errorf(\"failed to scan: %#v\", err)\n\t\t\t}\n\t\t\tif v1 != 789 {\n\t\t\t\tt.Fatalf(\"failed to fetch. value: %v\", v1)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Error(\"failed to query\")\n\t\t}\n\n\t\t// second statement\n\t\tif !rows2.NextResultSet() {\n\t\t\tt.Error(\"failed to retrieve next result set\")\n\t\t}\n\t\tif rows2.Next() {\n\t\t\tif err := rows2.Scan(&v2); err != nil {\n\t\t\t\tt.Errorf(\"failed to scan: %#v\", err)\n\t\t\t}\n\t\t\tif v2 != \"foo\" {\n\t\t\t\tt.Fatalf(\"failed to fetch. value: %v\", v2)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Error(\"failed to query\")\n\t\t}\n\n\t\t// third statement\n\t\tif !rows2.NextResultSet() {\n\t\t\tt.Error(\"failed to retrieve next result set\")\n\t\t}\n\t\tif rows2.Next() {\n\t\t\tif err := rows2.Scan(&v3); err != nil {\n\t\t\t\tt.Errorf(\"failed to scan: %#v\", err)\n\t\t\t}\n\t\t\tif v3 != 0.123 {\n\t\t\t\tt.Fatalf(\"failed to fetch. value: %v\", v3)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Error(\"failed to query\")\n\t\t}\n\n\t\t// fourth statement\n\t\tif !rows2.NextResultSet() {\n\t\t\tt.Error(\"failed to retrieve next result set\")\n\t\t}\n\t\tif rows2.Next() {\n\t\t\tif err := rows2.Scan(&v4); err != nil {\n\t\t\t\tt.Errorf(\"failed to scan: %#v\", err)\n\t\t\t}\n\t\t\tif v4 != true {\n\t\t\t\tt.Fatalf(\"failed to fetch. 
value: %v\", v4)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Error(\"failed to query\")\n\t\t}\n\t})\n}\n\nfunc TestMultiStatementCountMismatch(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tmultiStmtQuery := \"select 123;\\n\" +\n\t\t\t\"select 456;\\n\" +\n\t\t\t\"select 789;\\n\" +\n\t\t\t\"select '000';\"\n\n\t\tctx := WithMultiStatement(context.Background(), 3)\n\t\tif _, err := dbt.conn.QueryContext(ctx, multiStmtQuery); err == nil {\n\t\t\tt.Fatal(\"should have failed to query multiple statements\")\n\t\t}\n\t})\n}\n\nfunc TestMultiStatementVaryingColumnCount(t *testing.T) {\n\tmultiStmtQuery := \"select c1 from test_tbl;\\n\" +\n\t\t\"select c1,c2 from test_tbl;\"\n\tctx := WithMultiStatement(context.Background(), 0)\n\n\tvar v1, v2 int\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"create or replace table test_tbl(c1 int, c2 int)\")\n\t\tdbt.mustExec(\"insert into test_tbl values(1, 0)\")\n\t\tdefer dbt.mustExec(\"drop table if exists test_tbl\")\n\n\t\trows := dbt.mustQueryContext(ctx, multiStmtQuery)\n\t\tdefer rows.Close()\n\n\t\tif rows.Next() {\n\t\t\tif err := rows.Scan(&v1); err != nil {\n\t\t\t\tt.Errorf(\"failed to scan: %#v\", err)\n\t\t\t}\n\t\t\tif v1 != 1 {\n\t\t\t\tt.Fatalf(\"failed to fetch. value: %v\", v1)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Error(\"failed to query\")\n\t\t}\n\n\t\tif !rows.NextResultSet() {\n\t\t\tt.Error(\"failed to retrieve next result set\")\n\t\t}\n\n\t\tif rows.Next() {\n\t\t\tif err := rows.Scan(&v1, &v2); err != nil {\n\t\t\t\tt.Errorf(\"failed to scan: %#v\", err)\n\t\t\t}\n\t\t\tif v1 != 1 || v2 != 0 {\n\t\t\t\tt.Fatalf(\"failed to fetch. 
value: %v, %v\", v1, v2)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Error(\"failed to query\")\n\t\t}\n\t})\n}\n\n// The total completion time should be similar to the duration of the query on Snowflake UI.\nfunc TestMultiStatementExecutePerformance(t *testing.T) {\n\tctx := WithMultiStatement(context.Background(), 100)\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tfile, err := os.Open(\"test_data/multistatements.sql\")\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"failed opening file: %s\", err)\n\t\t}\n\t\tdefer file.Close()\n\t\tstatements, err := io.ReadAll(file)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"failed reading file: %s\", err)\n\t\t}\n\n\t\tsql := string(statements)\n\n\t\tstart := time.Now()\n\t\tres := dbt.mustExecContext(ctx, sql)\n\t\tduration := time.Since(start)\n\n\t\tcount, err := res.RowsAffected()\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"res.RowsAffected() returned error: %v\", err)\n\t\t}\n\t\tif count != 0 {\n\t\t\tt.Fatalf(\"expected 0 affected rows, got %d\", count)\n\t\t}\n\t\tt.Logf(\"The total completion time was %v\", duration)\n\n\t\tfile, err = os.Open(\"test_data/multistatements_drop.sql\")\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"failed opening file: %s\", err)\n\t\t}\n\t\tdefer file.Close()\n\t\tstatements, err = io.ReadAll(file)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"failed reading file: %s\", err)\n\t\t}\n\t\tsql = string(statements)\n\t\tdbt.mustExecContext(ctx, sql)\n\t})\n}\n\nfunc TestUnitGetChildResults(t *testing.T) {\n\ttestcases := []struct {\n\t\tids   string\n\t\ttypes string\n\t\tout   []childResult\n\t}{\n\t\t{\"\", \"\", nil},\n\t\t{\"\", \"4096\", nil},\n\t\t{\"01aa3265-0405-ab7c-0000-53b106343aba,02aa3265-0405-ab7c-0000-53b106343aba\", \"12544,12544\", []childResult{\n\t\t\t{\"01aa3265-0405-ab7c-0000-53b106343aba\", \"12544\"},\n\t\t\t{\"02aa3265-0405-ab7c-0000-53b106343aba\", \"12544\"}}},\n\t\t{\"01aa3265-0405-ab7c-0000-53b106343aba,02aa3265-0405-ab7c-0000-53b106343aba,03aa3265-0405-ab7c-0000-53b106343aba\", \"25344,4096,12544\", 
[]childResult{\n\t\t\t{\"01aa3265-0405-ab7c-0000-53b106343aba\", \"25344\"},\n\t\t\t{\"02aa3265-0405-ab7c-0000-53b106343aba\", \"4096\"},\n\t\t\t{\"03aa3265-0405-ab7c-0000-53b106343aba\", \"12544\"}}},\n\t}\n\tfor _, test := range testcases {\n\t\tt.Run(test.ids, func(t *testing.T) {\n\t\t\tres := getChildResults(test.ids, test.types)\n\t\t\tif !reflect.DeepEqual(res, test.out) {\n\t\t\t\tt.Fatalf(\"Child result should be equal, expected %v, actual %v\", test.out, res)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc funcGetQueryRespFail(_ context.Context, _ *snowflakeRestful, _ *url.URL, _ map[string]string, _ time.Duration) (*http.Response, error) {\n\treturn nil, errors.New(\"failed to get query response\")\n}\n\nfunc funcGetQueryRespError(_ context.Context, _ *snowflakeRestful, _ *url.URL, _ map[string]string, _ time.Duration) (*http.Response, error) {\n\tdd := &execResponseData{}\n\ter := &execResponse{\n\t\tData:    *dd,\n\t\tMessage: \"query failed\",\n\t\tCode:    \"261000\",\n\t\tSuccess: false,\n\t}\n\tba, err := json.Marshal(er)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\treturn &http.Response{\n\t\tStatusCode: http.StatusOK,\n\t\tBody:       &fakeResponseBody{body: ba},\n\t}, nil\n}\n\nfunc TestUnitHandleMultiExec(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tdata := execResponseData{\n\t\t\tResultIDs:   \"\",\n\t\t\tResultTypes: \"\",\n\t\t}\n\t\t_, err := sct.sc.handleMultiExec(context.Background(), data)\n\t\tif err == nil {\n\t\t\tt.Fatalf(\"should have failed\")\n\t\t}\n\t\tdriverErr, ok := err.(*SnowflakeError)\n\t\tif !ok {\n\t\t\tt.Fatalf(\"should be snowflake error. err: %v\", err)\n\t\t}\n\t\tif driverErr.Number != ErrNoResultIDs {\n\t\t\tt.Fatalf(\"unexpected error code. 
expected: %v, got: %v\", ErrNoResultIDs, driverErr.Number)\n\t\t}\n\n\t\tdata = execResponseData{\n\t\t\tResultIDs:   \"1eFhmhe23242kmfd540GgGre,1eFhmhe23242kmfd540GgGre\",\n\t\t\tResultTypes: \"12544,12544\",\n\t\t}\n\t\tsct.sc.rest = &snowflakeRestful{\n\t\t\tFuncGet:          funcGetQueryRespFail,\n\t\t\tFuncCloseSession: closeSessionMock,\n\t\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t\t}\n\t\t_, err = sct.sc.handleMultiExec(context.Background(), data)\n\t\tif err == nil {\n\t\t\tt.Fatalf(\"should have failed\")\n\t\t}\n\n\t\tsct.sc.rest.FuncGet = funcGetQueryRespError\n\t\tdata.SQLState = \"01112\"\n\t\t_, err = sct.sc.handleMultiExec(context.Background(), data)\n\t\tif err == nil {\n\t\t\tt.Fatalf(\"should have failed\")\n\t\t}\n\t\tdriverErr, ok = err.(*SnowflakeError)\n\t\tif !ok {\n\t\t\tt.Fatalf(\"should be snowflake error. err: %v\", err)\n\t\t}\n\t\tif driverErr.Number != ErrFailedToPostQuery {\n\t\t\tt.Fatalf(\"unexpected error code. expected: %v, got: %v\", ErrFailedToPostQuery, driverErr.Number)\n\t\t}\n\t})\n}\n\nfunc TestUnitHandleMultiQuery(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tdata := execResponseData{\n\t\t\tResultIDs:   \"\",\n\t\t\tResultTypes: \"\",\n\t\t}\n\t\trows := new(snowflakeRows)\n\t\terr := sct.sc.handleMultiQuery(context.Background(), data, rows)\n\t\tif err == nil {\n\t\t\tt.Fatalf(\"should have failed\")\n\t\t}\n\t\tdriverErr, ok := err.(*SnowflakeError)\n\t\tif !ok {\n\t\t\tt.Fatalf(\"should be snowflake error. err: %v\", err)\n\t\t}\n\t\tif driverErr.Number != ErrNoResultIDs {\n\t\t\tt.Fatalf(\"unexpected error code. 
expected: %v, got: %v\", ErrNoResultIDs, driverErr.Number)\n\t\t}\n\t\tdata = execResponseData{\n\t\t\tResultIDs:   \"1eFhmhe23242kmfd540GgGre,1eFhmhe23242kmfd540GgGre\",\n\t\t\tResultTypes: \"12544,12544\",\n\t\t}\n\t\tsct.sc.rest = &snowflakeRestful{\n\t\t\tFuncGet:          funcGetQueryRespFail,\n\t\t\tFuncCloseSession: closeSessionMock,\n\t\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t\t}\n\t\terr = sct.sc.handleMultiQuery(context.Background(), data, rows)\n\t\tif err == nil {\n\t\t\tt.Fatalf(\"should have failed\")\n\t\t}\n\n\t\tsct.sc.rest.FuncGet = funcGetQueryRespError\n\t\tdata.SQLState = \"01112\"\n\t\terr = sct.sc.handleMultiQuery(context.Background(), data, rows)\n\t\tif err == nil {\n\t\t\tt.Fatalf(\"should have failed\")\n\t\t}\n\t\tdriverErr, ok = err.(*SnowflakeError)\n\t\tif !ok {\n\t\t\tt.Fatalf(\"should be snowflake error. err: %v\", err)\n\t\t}\n\t\tif driverErr.Number != ErrFailedToPostQuery {\n\t\t\tt.Fatalf(\"unexpected error code. expected: %v, got: %v\", ErrFailedToPostQuery, driverErr.Number)\n\t\t}\n\t})\n}\n\nfunc TestMultiStatementArrowFormat(t *testing.T) {\n\tctx := WithMultiStatement(context.Background(), 4)\n\tmultiStmtQuery := \"select 123;\\n\" +\n\t\t\"select 456;\\n\" +\n\t\t\"select 789;\\n\" +\n\t\t\"select '000';\"\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"ALTER SESSION SET ENABLE_FIX_1758055_ADD_ARROW_SUPPORT_FOR_MULTI_STMTS = TRUE\")\n\n\t\ttestCases := []struct {\n\t\t\tname       string\n\t\t\tformatType string\n\t\t\tforceQuery string\n\t\t}{\n\t\t\t{name: \"forceJSON\", formatType: \"json\", forceQuery: forceJSON},\n\t\t\t{name: \"forceArrow\", formatType: \"arrow\", forceQuery: forceARROW},\n\t\t}\n\t\trowTypes := []string{\"123\", \"456\", \"789\", \"'000'\"}\n\n\t\tfor _, testCase := range testCases {\n\t\t\tt.Run(\"with \"+testCase.name, func(t *testing.T) {\n\t\t\t\tdbt.mustExec(testCase.forceQuery)\n\t\t\t\tbuffer, cleanup := setupTestLogger()\n\t\t\t\tdefer cleanup()\n\t\t\t\trows := 
dbt.mustQueryContext(ia.EnableArrowBatches(ctx), multiStmtQuery)\n\t\t\t\tdefer rows.Close()\n\t\t\t\tlogOutput := buffer.String()\n\t\t\t\tfor _, rowType := range rowTypes {\n\t\t\t\t\tassertStringContainsE(t, logOutput, \"[Server Response Validation]: RowType: \"+rowType+\", QueryResultFormat: \"+testCase.formatType)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\n\t})\n}\n"
  },
  {
    "path": "ocsp.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bufio\"\n\t\"context\"\n\t\"crypto\"\n\t\"crypto/fips140\"\n\t\"crypto/x509\"\n\t\"crypto/x509/pkix\"\n\t\"encoding/asn1\"\n\t\"encoding/base64\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n\tsferrors \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"io\"\n\t\"math/big\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"golang.org/x/crypto/ocsp\"\n)\n\nvar (\n\tocspModuleInitialized  = false\n\tocspModuleMu           sync.Mutex\n\tocspCacheClearer       = &ocspCacheClearerType{}\n\tocspCacheServerEnabled = true\n)\n\nvar (\n\t// cacheDir is the location of OCSP response cache file\n\tcacheDir = \"\"\n\t// cacheFileName is the file name of OCSP response cache file\n\tcacheFileName = \"\"\n\t// cacheUpdated is true if the memory cache is updated\n\tcacheUpdated = true\n)\n\n// OCSPFailOpenMode is OCSP fail open mode. 
It is OCSPFailOpenTrue by default and may\n// be set to OCSPFailOpenFalse for fail closed mode.\n// Deprecated: will be moved to Config/DSN in the future releases.\ntype OCSPFailOpenMode = sfconfig.OCSPFailOpenMode\n\nconst (\n\t// OCSPFailOpenTrue represents OCSP fail open mode.\n\tOCSPFailOpenTrue = sfconfig.OCSPFailOpenTrue\n\t// OCSPFailOpenFalse represents OCSP fail closed mode.\n\tOCSPFailOpenFalse = sfconfig.OCSPFailOpenFalse\n)\n\nconst (\n\t// defaultOCSPCacheServerTimeout is the total timeout for OCSP cache server.\n\tdefaultOCSPCacheServerTimeout = 5 * time.Second\n\n\t// defaultOCSPResponderTimeout is the total timeout for OCSP responder.\n\tdefaultOCSPResponderTimeout = 10 * time.Second\n\t// defaultOCSPMaxRetryCount specifies the maximum number of subsequent retries to OCSP (cache and server)\n\tdefaultOCSPMaxRetryCount = 2\n\n\t// defaultOCSPResponseCacheClearingInterval is the default value for clearing OCSP response cache\n\tdefaultOCSPResponseCacheClearingInterval = 15 * time.Minute\n)\n\nvar (\n\t// OcspCacheServerTimeout is a timeout for OCSP cache server.\n\t// Deprecated: will be moved to Config/DSN in the future releases.\n\tOcspCacheServerTimeout = defaultOCSPCacheServerTimeout\n\t// OcspResponderTimeout is a timeout for OCSP responders.\n\t// Deprecated: will be moved to Config/DSN in the future releases.\n\tOcspResponderTimeout = defaultOCSPResponderTimeout\n\t// OcspMaxRetryCount is the number of retries to OCSP (cache server and responders).\n\t// Deprecated: will be moved to Config/DSN in the future releases.\n\tOcspMaxRetryCount = defaultOCSPMaxRetryCount\n)\n\nconst (\n\tcacheFileBaseName = \"ocsp_response_cache.json\"\n\t// cacheExpire specifies cache data expiration time in seconds.\n\tcacheExpire                                   = float64(24 * 60 * 60)\n\tdefaultCacheServerHost                        = \"http://ocsp.snowflakecomputing.com\"\n\tcacheServerEnabledEnv                         = 
\"SF_OCSP_RESPONSE_CACHE_SERVER_ENABLED\"\n\tcacheServerURLEnv                             = \"SF_OCSP_RESPONSE_CACHE_SERVER_URL\"\n\tcacheDirEnv                                   = \"SF_OCSP_RESPONSE_CACHE_DIR\"\n\tocspResponseCacheClearingIntervalInSecondsEnv = \"SF_OCSP_RESPONSE_CACHE_CLEARING_INTERVAL_IN_SECONDS\"\n)\n\nconst (\n\tocspTestResponderURLEnv = \"SF_OCSP_TEST_RESPONDER_URL\"\n\tocspTestNoOCSPURLEnv    = \"SF_OCSP_TEST_NO_OCSP_RESPONDER_URL\"\n)\n\nconst (\n\ttolerableValidityRatio = 100               // buffer for certificate revocation update time\n\tmaxClockSkew           = 900 * time.Second // buffer for clock skew\n)\n\ntype ocspStatusCode int\n\ntype ocspStatus struct {\n\tcode ocspStatusCode\n\terr  error\n}\n\nconst (\n\tocspSuccess                ocspStatusCode = 0\n\tocspStatusGood             ocspStatusCode = -1\n\tocspStatusRevoked          ocspStatusCode = -2\n\tocspStatusUnknown          ocspStatusCode = -3\n\tocspStatusOthers           ocspStatusCode = -4\n\tocspNoServer               ocspStatusCode = -5\n\tocspFailedParseOCSPHost    ocspStatusCode = -6\n\tocspFailedComposeRequest   ocspStatusCode = -7\n\tocspFailedDecomposeRequest ocspStatusCode = -8\n\tocspFailedSubmit           ocspStatusCode = -9\n\tocspFailedResponse         ocspStatusCode = -10\n\tocspFailedExtractResponse  ocspStatusCode = -11\n\tocspFailedParseResponse    ocspStatusCode = -12\n\tocspInvalidValidity        ocspStatusCode = -13\n\tocspMissedCache            ocspStatusCode = -14\n\tocspCacheExpired           ocspStatusCode = -15\n\tocspFailedDecodeResponse   ocspStatusCode = -16\n)\n\n// copied from crypto/ocsp.go\ntype certID struct {\n\tHashAlgorithm pkix.AlgorithmIdentifier\n\tNameHash      []byte\n\tIssuerKeyHash []byte\n\tSerialNumber  *big.Int\n}\n\n// cache key\ntype certIDKey struct {\n\tHashAlgorithm crypto.Hash\n\tNameHash      string\n\tIssuerKeyHash string\n\tSerialNumber  string\n}\n\ntype certCacheValue struct {\n\tts             
float64\n\tocspRespBase64 string\n}\n\ntype parsedOcspRespKey struct {\n\tocspRespBase64 string\n\tcertIDBase64   string\n}\n\nvar (\n\tocspResponseCache       map[certIDKey]*certCacheValue\n\tocspParsedRespCache     map[parsedOcspRespKey]*ocspStatus\n\tocspResponseCacheLock   = &sync.RWMutex{}\n\tocspParsedRespCacheLock = &sync.Mutex{}\n)\n\ntype ocspValidator struct {\n\tmode           OCSPFailOpenMode\n\tcacheServerURL string\n\tisPrivateLink  bool\n\tretryURL       string\n\tcfg            *Config\n}\n\nfunc newOcspValidator(cfg *Config) *ocspValidator {\n\tisPrivateLink := checkIsPrivateLink(cfg.Host)\n\tvar cacheServerURL, retryURL string\n\tvar ok bool\n\n\tlogger.Debug(\"initializing OCSP module\")\n\tif cacheServerURL, ok = os.LookupEnv(cacheServerURLEnv); ok {\n\t\tlogger.Debugf(\"OCSP Cache Server already set by user for %v: %v\", cfg.Host, cacheServerURL)\n\t} else if isPrivateLink {\n\t\tcacheServerURL = fmt.Sprintf(\"http://ocsp.%v/%v\", cfg.Host, cacheFileBaseName)\n\t\tlogger.Debugf(\"Using PrivateLink host (%v), setting up OCSP cache server to %v\", cfg.Host, cacheServerURL)\n\t\tretryURL = fmt.Sprintf(\"http://ocsp.%v/retry/\", cfg.Host) + \"%v/%v\"\n\t\tlogger.Debugf(\"Using PrivateLink retry proxy %v\", retryURL)\n\t} else if !strings.HasSuffix(cfg.Host, sfconfig.DefaultDomain) {\n\t\tcacheServerURL = fmt.Sprintf(\"http://ocsp.%v/%v\", cfg.Host, cacheFileBaseName)\n\t\tlogger.Debugf(\"Using non-global host (%v), setting up OCSP cache server to %v\", cfg.Host, cacheServerURL)\n\t} else {\n\t\tcacheServerURL = fmt.Sprintf(\"%v/%v\", defaultCacheServerHost, cacheFileBaseName)\n\t\tlogger.Debugf(\"OCSP Cache Server not set by user for %v, setting it up to %v\", cfg.Host, cacheServerURL)\n\t}\n\n\treturn &ocspValidator{\n\t\tmode:           cfg.OCSPFailOpen,\n\t\tcacheServerURL: strings.ToLower(cacheServerURL),\n\t\tisPrivateLink:  isPrivateLink,\n\t\tretryURL:       strings.ToLower(retryURL),\n\t\tcfg:            cfg,\n\t}\n}\n\n// copied from 
crypto/ocsp\nvar hashOIDs = map[crypto.Hash]asn1.ObjectIdentifier{\n\tcrypto.SHA1:   asn1.ObjectIdentifier([]int{1, 3, 14, 3, 2, 26}),\n\tcrypto.SHA256: asn1.ObjectIdentifier([]int{2, 16, 840, 1, 101, 3, 4, 2, 1}),\n\tcrypto.SHA384: asn1.ObjectIdentifier([]int{2, 16, 840, 1, 101, 3, 4, 2, 2}),\n\tcrypto.SHA512: asn1.ObjectIdentifier([]int{2, 16, 840, 1, 101, 3, 4, 2, 3}),\n}\n\n// copied from crypto/ocsp\nfunc getOIDFromHashAlgorithm(target crypto.Hash) asn1.ObjectIdentifier {\n\tfor hash, oid := range hashOIDs {\n\t\tif hash == target {\n\t\t\treturn oid\n\t\t}\n\t}\n\tlogger.Errorf(\"no valid OID is found for the hash algorithm. %#v\", target)\n\treturn nil\n}\n\nfunc getHashAlgorithmFromOID(target pkix.AlgorithmIdentifier) crypto.Hash {\n\tfor hash, oid := range hashOIDs {\n\t\tif oid.Equal(target.Algorithm) {\n\t\t\treturn hash\n\t\t}\n\t}\n\tlogger.Errorf(\"no valid hash algorithm is found for the oid. Falling back to SHA1: %#v\", target)\n\treturn crypto.SHA1\n}\n\n// calcTolerableValidity returns the maximum validity buffer\nfunc calcTolerableValidity(thisUpdate, nextUpdate time.Time) time.Duration {\n\treturn durationMax(time.Duration(nextUpdate.Sub(thisUpdate)/tolerableValidityRatio), maxClockSkew)\n}\n\n// isInValidityRange checks the validity\nfunc isInValidityRange(currTime, thisUpdate, nextUpdate time.Time) bool {\n\tif currTime.Sub(thisUpdate.Add(-maxClockSkew)) < 0 {\n\t\treturn false\n\t}\n\tif nextUpdate.Add(calcTolerableValidity(thisUpdate, nextUpdate)).Sub(currTime) < 0 {\n\t\treturn false\n\t}\n\treturn true\n}\n\nfunc extractCertIDKeyFromRequest(ocspReq []byte) (*certIDKey, *ocspStatus) {\n\tr, err := ocsp.ParseRequest(ocspReq)\n\tif err != nil {\n\t\treturn nil, &ocspStatus{\n\t\t\tcode: ocspFailedDecomposeRequest,\n\t\t\terr:  err,\n\t\t}\n\t}\n\n\t// encode CertID, used as a key in the cache\n\tencodedCertID := 
&certIDKey{\n\t\tr.HashAlgorithm,\n\t\tbase64.StdEncoding.EncodeToString(r.IssuerNameHash),\n\t\tbase64.StdEncoding.EncodeToString(r.IssuerKeyHash),\n\t\tr.SerialNumber.String(),\n\t}\n\treturn encodedCertID, &ocspStatus{\n\t\tcode: ocspSuccess,\n\t}\n}\n\nfunc decodeCertIDKey(certIDKeyBase64 string) *certIDKey {\n\tr, err := base64.StdEncoding.DecodeString(certIDKeyBase64)\n\tif err != nil {\n\t\treturn nil\n\t}\n\tvar c certID\n\trest, err := asn1.Unmarshal(r, &c)\n\tif err != nil {\n\t\t// error in parsing\n\t\treturn nil\n\t}\n\tif len(rest) > 0 {\n\t\t// extra bytes to the end\n\t\treturn nil\n\t}\n\treturn &certIDKey{\n\t\tgetHashAlgorithmFromOID(c.HashAlgorithm),\n\t\tbase64.StdEncoding.EncodeToString(c.NameHash),\n\t\tbase64.StdEncoding.EncodeToString(c.IssuerKeyHash),\n\t\tc.SerialNumber.String(),\n\t}\n}\n\nfunc encodeCertIDKey(k *certIDKey) string {\n\tserialNumber := new(big.Int)\n\tserialNumber.SetString(k.SerialNumber, 10)\n\tnameHash, err := base64.StdEncoding.DecodeString(k.NameHash)\n\tif err != nil {\n\t\treturn \"\"\n\t}\n\tissuerKeyHash, err := base64.StdEncoding.DecodeString(k.IssuerKeyHash)\n\tif err != nil {\n\t\treturn \"\"\n\t}\n\tencodedCertID, err := asn1.Marshal(certID{\n\t\tpkix.AlgorithmIdentifier{\n\t\t\tAlgorithm:  getOIDFromHashAlgorithm(k.HashAlgorithm),\n\t\t\tParameters: asn1.RawValue{Tag: 5 /* ASN.1 NULL */},\n\t\t},\n\t\tnameHash,\n\t\tissuerKeyHash,\n\t\tserialNumber,\n\t})\n\tif err != nil {\n\t\treturn \"\"\n\t}\n\treturn base64.StdEncoding.EncodeToString(encodedCertID)\n}\n\nfunc (ov *ocspValidator) checkOCSPResponseCache(certIDKey *certIDKey, subject, issuer *x509.Certificate) *ocspStatus {\n\tif !ocspCacheServerEnabled {\n\t\treturn &ocspStatus{code: ocspNoServer}\n\t}\n\n\tgotValueFromCache, ok := func() (*certCacheValue, bool) {\n\t\tocspResponseCacheLock.RLock()\n\t\tdefer ocspResponseCacheLock.RUnlock()\n\t\tvalueFromCache, ok := ocspResponseCache[*certIDKey]\n\t\treturn valueFromCache, ok\n\t}()\n\tif !ok 
{\n\t\treturn &ocspStatus{\n\t\t\tcode: ocspMissedCache,\n\t\t\terr:  fmt.Errorf(\"miss cache data. subject: %v\", subject),\n\t\t}\n\t}\n\n\tstatus := extractOCSPCacheResponseValue(certIDKey, gotValueFromCache, subject, issuer)\n\tif !isValidOCSPStatus(status.code) {\n\t\tdeleteOCSPCache(certIDKey)\n\t}\n\treturn status\n}\n\nfunc deleteOCSPCache(encodedCertID *certIDKey) {\n\tocspResponseCacheLock.Lock()\n\tdefer ocspResponseCacheLock.Unlock()\n\tdelete(ocspResponseCache, *encodedCertID)\n\tcacheUpdated = true\n}\n\nfunc validateOCSP(ocspRes *ocsp.Response) *ocspStatus {\n\tcurTime := time.Now()\n\n\tif ocspRes == nil {\n\t\treturn &ocspStatus{\n\t\t\tcode: ocspFailedDecomposeRequest,\n\t\t\terr:  errors.New(\"OCSP Response is nil\"),\n\t\t}\n\t}\n\tif !isInValidityRange(curTime, ocspRes.ThisUpdate, ocspRes.NextUpdate) {\n\t\treturn &ocspStatus{\n\t\t\tcode: ocspInvalidValidity,\n\t\t\terr: &SnowflakeError{\n\t\t\t\tNumber:      ErrOCSPInvalidValidity,\n\t\t\t\tMessage:     sferrors.ErrMsgOCSPInvalidValidity,\n\t\t\t\tMessageArgs: []any{ocspRes.ProducedAt, ocspRes.ThisUpdate, ocspRes.NextUpdate},\n\t\t\t},\n\t\t}\n\t}\n\treturn returnOCSPStatus(ocspRes)\n}\n\nfunc returnOCSPStatus(ocspRes *ocsp.Response) *ocspStatus {\n\tswitch ocspRes.Status {\n\tcase ocsp.Good:\n\t\treturn &ocspStatus{\n\t\t\tcode: ocspStatusGood,\n\t\t\terr:  nil,\n\t\t}\n\tcase ocsp.Revoked:\n\t\treturn &ocspStatus{\n\t\t\tcode: ocspStatusRevoked,\n\t\t\terr: &SnowflakeError{\n\t\t\t\tNumber:      ErrOCSPStatusRevoked,\n\t\t\t\tMessage:     sferrors.ErrMsgOCSPStatusRevoked,\n\t\t\t\tMessageArgs: []any{ocspRes.RevocationReason, ocspRes.RevokedAt},\n\t\t\t},\n\t\t}\n\tcase ocsp.Unknown:\n\t\treturn &ocspStatus{\n\t\t\tcode: ocspStatusUnknown,\n\t\t\terr: &SnowflakeError{\n\t\t\t\tNumber:  ErrOCSPStatusUnknown,\n\t\t\t\tMessage: sferrors.ErrMsgOCSPStatusUnknown,\n\t\t\t},\n\t\t}\n\tdefault:\n\t\treturn &ocspStatus{\n\t\t\tcode: ocspStatusOthers,\n\t\t\terr:  fmt.Errorf(\"OCSP others. 
%v\", ocspRes.Status),\n\t\t}\n\t}\n}\n\nfunc checkOCSPCacheServer(\n\tctx context.Context,\n\tclient clientInterface,\n\treq requestFunc,\n\tocspServerHost *url.URL,\n\ttotalTimeout time.Duration) (\n\tcacheContent *map[string]*certCacheValue,\n\tocspS *ocspStatus) {\n\tvar respd map[string][]any\n\theaders := make(map[string]string)\n\tres, err := newRetryHTTP(ctx, client, req, ocspServerHost, headers, totalTimeout, OcspMaxRetryCount, defaultTimeProvider, nil).execute()\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"failed to get OCSP cache from OCSP Cache Server. %v\", err)\n\t\treturn nil, &ocspStatus{\n\t\t\tcode: ocspFailedSubmit,\n\t\t\terr:  err,\n\t\t}\n\t}\n\tdefer func() {\n\t\tif err = res.Body.Close(); err != nil {\n\t\t\tlogger.Warnf(\"failed to close response body: %v\", err)\n\t\t}\n\t}()\n\tlogger.WithContext(ctx).Debugf(\"StatusCode from OCSP Cache Server: %v\", res.StatusCode)\n\tif res.StatusCode != http.StatusOK {\n\t\treturn nil, &ocspStatus{\n\t\t\tcode: ocspFailedResponse,\n\t\t\terr:  fmt.Errorf(\"HTTP code is not OK. %v: %v\", res.StatusCode, res.Status),\n\t\t}\n\t}\n\tlogger.WithContext(ctx).Debugf(\"reading contents\")\n\n\tdec := json.NewDecoder(res.Body)\n\tfor {\n\t\tif err := dec.Decode(&respd); err == io.EOF {\n\t\t\tbreak\n\t\t} else if err != nil {\n\t\t\tlogger.WithContext(ctx).Errorf(\"failed to decode OCSP cache. %v\", err)\n\t\t\treturn nil, &ocspStatus{\n\t\t\t\tcode: ocspFailedExtractResponse,\n\t\t\t\terr:  err,\n\t\t\t}\n\t\t}\n\t}\n\tbuf := make(map[string]*certCacheValue)\n\tfor key, value := range respd {\n\t\tok, ts, ocspRespBase64 := extractTsAndOcspRespBase64(value)\n\t\tif !ok {\n\t\t\tcontinue\n\t\t}\n\t\tbuf[key] = &certCacheValue{ts, ocspRespBase64}\n\t}\n\treturn &buf, &ocspStatus{\n\t\tcode: ocspSuccess,\n\t}\n}\n\n// retryOCSP is the second level of retry method if the returned contents are corrupted. 
It often happens with OCSP\n// servers and retrying helps.\nfunc (ov *ocspValidator) retryOCSP(\n\tctx context.Context,\n\tclient clientInterface,\n\treq requestFunc,\n\tocspHost *url.URL,\n\theaders map[string]string,\n\treqBody []byte,\n\tissuer *x509.Certificate,\n\ttotalTimeout time.Duration) (\n\tocspRes *ocsp.Response,\n\tocspResBytes []byte,\n\tocspS *ocspStatus) {\n\tmultiplier := 1\n\tif ov.mode == OCSPFailOpenFalse {\n\t\tmultiplier = 3\n\t}\n\tres, err := newRetryHTTP(\n\t\tctx, client, req, ocspHost, headers,\n\t\ttotalTimeout*time.Duration(multiplier), OcspMaxRetryCount, defaultTimeProvider, nil).doPost().setBody(reqBody).execute()\n\tif err != nil {\n\t\treturn ocspRes, ocspResBytes, &ocspStatus{\n\t\t\tcode: ocspFailedSubmit,\n\t\t\terr:  err,\n\t\t}\n\t}\n\tdefer func() {\n\t\tif err = res.Body.Close(); err != nil {\n\t\t\tlogger.WithContext(ctx).Warnf(\"failed to close response body: %v\", err)\n\t\t}\n\t}()\n\tlogger.WithContext(ctx).Debugf(\"StatusCode from OCSP Server: %v\\n\", res.StatusCode)\n\tif res.StatusCode != http.StatusOK {\n\t\treturn ocspRes, ocspResBytes, &ocspStatus{\n\t\t\tcode: ocspFailedResponse,\n\t\t\terr:  fmt.Errorf(\"HTTP code is not OK. 
%v: %v\", res.StatusCode, res.Status),\n\t\t}\n\t}\n\tocspResBytes, err = io.ReadAll(res.Body)\n\tif err != nil {\n\t\treturn ocspRes, ocspResBytes, &ocspStatus{\n\t\t\tcode: ocspFailedExtractResponse,\n\t\t\terr:  err,\n\t\t}\n\t}\n\tocspRes, err = ocsp.ParseResponse(ocspResBytes, issuer)\n\tif err != nil {\n\t\t_, ok1 := err.(asn1.StructuralError)\n\t\t_, ok2 := err.(asn1.SyntaxError)\n\t\tif ok1 || ok2 {\n\t\t\tlogger.WithContext(ctx).Warnf(\"error when parsing ocsp response: %v\", err)\n\t\t\tlogger.WithContext(ctx).Warnf(\"performing GET fallback request to OCSP\")\n\t\t\treturn ov.fallbackRetryOCSPToGETRequest(ctx, client, req, ocspHost, headers, issuer, totalTimeout)\n\t\t}\n\t\tlogger.Warnf(\"Unknown response status from OCSP responder: %v\", err)\n\t\treturn nil, nil, &ocspStatus{\n\t\t\tcode: ocspStatusUnknown,\n\t\t\terr:  err,\n\t\t}\n\t}\n\n\tlogger.WithContext(ctx).Debugf(\"OCSP Status from server: %v\", printStatus(ocspRes))\n\treturn ocspRes, ocspResBytes, &ocspStatus{\n\t\tcode: ocspSuccess,\n\t}\n}\n\n// fallbackRetryOCSPToGETRequest is the third level of retry method. Some OCSP responders do not support POST requests\n// and will return with a \"malformed\" request error. 
In that case we also try to perform a GET request\nfunc (ov *ocspValidator) fallbackRetryOCSPToGETRequest(\n\tctx context.Context,\n\tclient clientInterface,\n\treq requestFunc,\n\tocspHost *url.URL,\n\theaders map[string]string,\n\tissuer *x509.Certificate,\n\ttotalTimeout time.Duration) (\n\tocspRes *ocsp.Response,\n\tocspResBytes []byte,\n\tocspS *ocspStatus) {\n\tmultiplier := 1\n\tif ov.mode == OCSPFailOpenFalse {\n\t\tmultiplier = 3\n\t}\n\tres, err := newRetryHTTP(ctx, client, req, ocspHost, headers,\n\t\ttotalTimeout*time.Duration(multiplier), OcspMaxRetryCount, defaultTimeProvider, nil).execute()\n\tif err != nil {\n\t\treturn ocspRes, ocspResBytes, &ocspStatus{\n\t\t\tcode: ocspFailedSubmit,\n\t\t\terr:  err,\n\t\t}\n\t}\n\tdefer func() {\n\t\tif err = res.Body.Close(); err != nil {\n\t\t\tlogger.Warnf(\"failed to close response body: %v\", err)\n\t\t}\n\t}()\n\tlogger.WithContext(ctx).Debugf(\"GET fallback StatusCode from OCSP Server: %v\", res.StatusCode)\n\tif res.StatusCode != http.StatusOK {\n\t\treturn ocspRes, ocspResBytes, &ocspStatus{\n\t\t\tcode: ocspFailedResponse,\n\t\t\terr:  fmt.Errorf(\"HTTP code is not OK. 
%v: %v\", res.StatusCode, res.Status),\n\t\t}\n\t}\n\tocspResBytes, err = io.ReadAll(res.Body)\n\tif err != nil {\n\t\treturn ocspRes, ocspResBytes, &ocspStatus{\n\t\t\tcode: ocspFailedExtractResponse,\n\t\t\terr:  err,\n\t\t}\n\t}\n\tocspRes, err = ocsp.ParseResponse(ocspResBytes, issuer)\n\tif err != nil {\n\t\treturn ocspRes, ocspResBytes, &ocspStatus{\n\t\t\tcode: ocspFailedParseResponse,\n\t\t\terr:  err,\n\t\t}\n\t}\n\n\tlogger.WithContext(ctx).Debugf(\"GET fallback OCSP Status from server: %v\", printStatus(ocspRes))\n\treturn ocspRes, ocspResBytes, &ocspStatus{\n\t\tcode: ocspSuccess,\n\t}\n}\n\nfunc printStatus(response *ocsp.Response) string {\n\tswitch response.Status {\n\tcase ocsp.Good:\n\t\treturn \"Good\"\n\tcase ocsp.Revoked:\n\t\treturn \"Revoked\"\n\tcase ocsp.Unknown:\n\t\treturn \"Unknown\"\n\tdefault:\n\t\treturn fmt.Sprintf(\"%d\", response.Status)\n\t}\n}\n\nfunc fullOCSPURL(url *url.URL) string {\n\tfullURL := url.Hostname()\n\tif url.Path != \"\" {\n\t\tif !strings.HasPrefix(url.Path, \"/\") {\n\t\t\tfullURL += \"/\"\n\t\t}\n\t\tfullURL += url.Path\n\t}\n\treturn fullURL\n}\n\n// getRevocationStatus checks the certificate revocation status for subject using issuer certificate.\nfunc (ov *ocspValidator) getRevocationStatus(ctx context.Context, subject, issuer *x509.Certificate) *ocspStatus {\n\tlogger.WithContext(ctx).Tracef(\"Subject: %v, Issuer: %v\", subject.Subject, issuer.Subject)\n\n\tstatus, ocspReq, encodedCertID := ov.validateWithCache(subject, issuer)\n\tif isValidOCSPStatus(status.code) {\n\t\treturn status\n\t}\n\tif ocspReq == nil || encodedCertID == nil {\n\t\treturn status\n\t}\n\tlogger.WithContext(ctx).Infof(\"cache missed\")\n\tlogger.WithContext(ctx).Infof(\"OCSP Server: %v\", subject.OCSPServer)\n\ttestResponderURL := os.Getenv(ocspTestResponderURLEnv)\n\tif (len(subject.OCSPServer) == 0 || isTestNoOCSPURL()) && testResponderURL == \"\" {\n\t\treturn &ocspStatus{\n\t\t\tcode: ocspNoServer,\n\t\t\terr: 
&SnowflakeError{\n\t\t\t\tNumber:      ErrOCSPNoOCSPResponderURL,\n\t\t\t\tMessage:     sferrors.ErrMsgOCSPNoOCSPResponderURL,\n\t\t\t\tMessageArgs: []any{subject.Subject},\n\t\t\t},\n\t\t}\n\t}\n\tocspHost := testResponderURL\n\tif ocspHost == \"\" && len(subject.OCSPServer) > 0 {\n\t\tocspHost = subject.OCSPServer[0]\n\t}\n\tu, err := url.Parse(ocspHost)\n\tif err != nil {\n\t\treturn &ocspStatus{\n\t\t\tcode: ocspFailedParseOCSPHost,\n\t\t\terr:  fmt.Errorf(\"failed to parse OCSP server host. %v\", ocspHost),\n\t\t}\n\t}\n\tvar hostname string\n\tif retryURL := ov.retryURL; retryURL != \"\" {\n\t\thostname = fmt.Sprintf(retryURL, fullOCSPURL(u), base64.StdEncoding.EncodeToString(ocspReq))\n\t\tu0, err := url.Parse(hostname)\n\t\tif err == nil {\n\t\t\thostname = u0.Hostname()\n\t\t\tu = u0\n\t\t}\n\t} else {\n\t\thostname = fullOCSPURL(u)\n\t}\n\n\tlogger.WithContext(ctx).Debugf(\"Fetching OCSP response from server: %v\", u)\n\tlogger.WithContext(ctx).Debugf(\"Host in headers: %v\", hostname)\n\n\theaders := make(map[string]string)\n\theaders[httpHeaderContentType] = \"application/ocsp-request\"\n\theaders[httpHeaderAccept] = \"application/ocsp-response\"\n\theaders[httpHeaderContentLength] = strconv.Itoa(len(ocspReq))\n\theaders[httpHeaderHost] = hostname\n\ttimeout := OcspResponderTimeout\n\n\tocspClient := &http.Client{\n\t\tTimeout:   timeout,\n\t\tTransport: newTransportFactory(ov.cfg, nil).createNoRevocationTransport(defaultTransportConfigs.forTransportType(transportTypeOCSP)),\n\t}\n\tocspRes, ocspResBytes, ocspS := ov.retryOCSP(\n\t\tctx, ocspClient, http.NewRequest, u, headers, ocspReq, issuer, timeout)\n\tif ocspS.code != ocspSuccess {\n\t\treturn ocspS\n\t}\n\n\tret := validateOCSP(ocspRes)\n\tif !isValidOCSPStatus(ret.code) {\n\t\treturn ret // return invalid\n\t}\n\tv := &certCacheValue{float64(time.Now().UTC().Unix()), base64.StdEncoding.EncodeToString(ocspResBytes)}\n\tocspResponseCacheLock.Lock()\n\tocspResponseCache[*encodedCertID] = 
v\n\tcacheUpdated = true\n\tocspResponseCacheLock.Unlock()\n\treturn ret\n}\n\nfunc isTestNoOCSPURL() bool {\n\treturn strings.EqualFold(os.Getenv(ocspTestNoOCSPURLEnv), \"true\")\n}\n\nfunc isValidOCSPStatus(status ocspStatusCode) bool {\n\treturn status == ocspStatusGood || status == ocspStatusRevoked || status == ocspStatusUnknown\n}\n\n// verifyPeerCertificate verifies the revocation status of all certificates in the verified chains\nfunc (ov *ocspValidator) verifyPeerCertificate(ctx context.Context, verifiedChains [][]*x509.Certificate) (err error) {\n\tfor _, chain := range verifiedChains {\n\t\tresults := ov.getAllRevocationStatus(ctx, chain)\n\t\tif r := ov.canEarlyExitForOCSP(results, chain); r != nil {\n\t\t\treturn r.err\n\t\t}\n\t}\n\n\tocspResponseCacheLock.Lock()\n\tif cacheUpdated {\n\t\tov.writeOCSPCacheFile()\n\t}\n\tcacheUpdated = false\n\tocspResponseCacheLock.Unlock()\n\treturn nil\n}\n\nfunc (ov *ocspValidator) canEarlyExitForOCSP(results []*ocspStatus, verifiedChain []*x509.Certificate) *ocspStatus {\n\tvar msg strings.Builder\n\tif ov.mode == OCSPFailOpenFalse {\n\t\t// Fail closed: any error is returned to stop the connection\n\t\tfor _, r := range results {\n\t\t\tif r.err != nil {\n\t\t\t\treturn r\n\t\t\t}\n\t\t}\n\t} else {\n\t\t// Fail open and all results are valid.\n\t\tallValid := len(results) == len(verifiedChain)-1 // root certificate is not checked\n\t\tfor _, r := range results {\n\t\t\tif !isValidOCSPStatus(r.code) {\n\t\t\t\tallValid = false\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfor _, r := range results {\n\t\t\tif allValid && r.code == ocspStatusRevoked {\n\t\t\t\treturn r\n\t\t\t}\n\t\t\tif r != nil && r.code != ocspStatusGood && r.err != nil {\n\t\t\t\tmsg.WriteString(\"\\n\" + r.err.Error())\n\t\t\t}\n\t\t}\n\t}\n\tif len(msg.String()) > 0 {\n\t\tlogger.Debugf(\"OCSP responder didn't respond correctly. Assuming certificate is not revoked. 
Detail: %v\", msg.String()[1:])\n\t}\n\treturn nil\n}\n\nfunc (ov *ocspValidator) validateWithCacheForAllCertificates(verifiedChains []*x509.Certificate) bool {\n\tn := len(verifiedChains) - 1\n\tfor j := range n {\n\t\tsubject := verifiedChains[j]\n\t\tissuer := verifiedChains[j+1]\n\t\tstatus, _, _ := ov.validateWithCache(subject, issuer)\n\t\tif !isValidOCSPStatus(status.code) {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n\nfunc (ov *ocspValidator) validateWithCache(subject, issuer *x509.Certificate) (*ocspStatus, []byte, *certIDKey) {\n\treqOpts := &ocsp.RequestOptions{}\n\tif fips140.Enabled() {\n\t\tlogger.Debug(\"FIPS 140 mode is enabled. Using SHA256 for OCSP request.\")\n\t\treqOpts.Hash = crypto.SHA256\n\t}\n\tocspReq, err := ocsp.CreateRequest(subject, issuer, reqOpts)\n\tif err != nil {\n\t\tlogger.Errorf(\"failed to create OCSP request from the certificates.\\n\")\n\t\treturn &ocspStatus{\n\t\t\tcode: ocspFailedComposeRequest,\n\t\t\terr:  errors.New(\"failed to create a OCSP request\"),\n\t\t}, nil, nil\n\t}\n\tencodedCertID, ocspS := extractCertIDKeyFromRequest(ocspReq)\n\tif ocspS.code != ocspSuccess {\n\t\tlogger.Errorf(\"failed to extract CertID from OCSP Request.\\n\")\n\t\treturn &ocspStatus{\n\t\t\tcode: ocspFailedComposeRequest,\n\t\t\terr:  errors.New(\"failed to extract cert ID Key\"),\n\t\t}, ocspReq, nil\n\t}\n\tstatus := ov.checkOCSPResponseCache(encodedCertID, subject, issuer)\n\treturn status, ocspReq, encodedCertID\n}\n\nfunc (ov *ocspValidator) downloadOCSPCacheServer() {\n\t// TODO\n\tif !ocspCacheServerEnabled {\n\t\tlogger.Debugf(\"OCSP Cache Server is disabled by user. 
Skipping download.\")\n\t\treturn\n\t}\n\tocspCacheServerURL := ov.cacheServerURL\n\tu, err := url.Parse(ocspCacheServerURL)\n\tif err != nil {\n\t\treturn\n\t}\n\n\tlogger.Infof(\"downloading OCSP Cache from server %v\", ocspCacheServerURL)\n\ttimeout := OcspCacheServerTimeout\n\tocspClient := &http.Client{\n\t\tTimeout:   timeout,\n\t\tTransport: newTransportFactory(ov.cfg, nil).createNoRevocationTransport(defaultTransportConfigs.forTransportType(transportTypeOCSP)),\n\t}\n\tret, ocspStatus := checkOCSPCacheServer(context.Background(), ocspClient, http.NewRequest, u, timeout)\n\tif ocspStatus.code != ocspSuccess {\n\t\treturn\n\t}\n\n\tocspResponseCacheLock.Lock()\n\tfor k, cacheValue := range *ret {\n\t\tcacheKey := decodeCertIDKey(k)\n\t\tstatus := extractOCSPCacheResponseValueWithoutSubject(cacheKey, cacheValue)\n\t\tif !isValidOCSPStatus(status.code) {\n\t\t\tcontinue\n\t\t}\n\t\tocspResponseCache[*cacheKey] = cacheValue\n\t}\n\tcacheUpdated = true\n\tocspResponseCacheLock.Unlock()\n}\n\nfunc (ov *ocspValidator) getAllRevocationStatus(ctx context.Context, verifiedChains []*x509.Certificate) []*ocspStatus {\n\tcached := ov.validateWithCacheForAllCertificates(verifiedChains)\n\tif !cached {\n\t\tov.downloadOCSPCacheServer()\n\t}\n\tn := len(verifiedChains) - 1\n\tresults := make([]*ocspStatus, n)\n\tfor j := range n {\n\t\tresults[j] = ov.getRevocationStatus(ctx, verifiedChains[j], verifiedChains[j+1])\n\t\tif !isValidOCSPStatus(results[j].code) {\n\t\t\treturn results\n\t\t}\n\t}\n\treturn results\n}\n\n// verifyPeerCertificateSerial verifies the certificate revocation status in serial.\nfunc (ov *ocspValidator) verifyPeerCertificateSerial(_ [][]byte, verifiedChains [][]*x509.Certificate) (err error) {\n\tfunc() {\n\t\tocspModuleMu.Lock()\n\t\tdefer ocspModuleMu.Unlock()\n\t\tif !ocspModuleInitialized {\n\t\t\tinitOcspModule()\n\t\t}\n\t}()\n\toverrideCacheDir()\n\treturn ov.verifyPeerCertificate(context.Background(), verifiedChains)\n}\n\nfunc 
overrideCacheDir() {\n\tif os.Getenv(cacheDirEnv) != \"\" {\n\t\tocspResponseCacheLock.Lock()\n\t\tdefer ocspResponseCacheLock.Unlock()\n\t\tcreateOCSPCacheDir()\n\t}\n}\n\n// initOCSPCache initializes OCSP Response cache file.\nfunc initOCSPCache() {\n\tif !ocspCacheServerEnabled {\n\t\treturn\n\t}\n\tfunc() {\n\t\tocspResponseCacheLock.Lock()\n\t\tdefer ocspResponseCacheLock.Unlock()\n\t\tocspResponseCache = make(map[certIDKey]*certCacheValue)\n\t}()\n\tfunc() {\n\t\tocspParsedRespCacheLock.Lock()\n\t\tdefer ocspParsedRespCacheLock.Unlock()\n\t\tocspParsedRespCache = make(map[parsedOcspRespKey]*ocspStatus)\n\t}()\n\n\tlogger.Infof(\"reading OCSP Response cache file. %v\\n\", cacheFileName)\n\tf, err := os.OpenFile(cacheFileName, os.O_CREATE|os.O_RDONLY, readWriteFileMode)\n\tif err != nil {\n\t\tlogger.Debugf(\"failed to open. Ignored. %v\\n\", err)\n\t\treturn\n\t}\n\tdefer func() {\n\t\tif err = f.Close(); err != nil {\n\t\t\tlogger.Warnf(\"failed to close file: %v. ignored.\\n\", err)\n\t\t}\n\t}()\n\n\tbuf := make(map[string][]any)\n\tr := bufio.NewReader(f)\n\tdec := json.NewDecoder(r)\n\tfor {\n\t\tif err = dec.Decode(&buf); err == io.EOF {\n\t\t\tbreak\n\t\t} else if err != nil {\n\t\t\tlogger.Debugf(\"failed to read. Ignored. 
%v\\n\", err)\n\t\t\treturn\n\t\t}\n\t}\n\n\tfor k, cacheValue := range buf {\n\t\tok, ts, ocspRespBase64 := extractTsAndOcspRespBase64(cacheValue)\n\t\tif !ok {\n\t\t\tcontinue\n\t\t}\n\t\tcertValue := &certCacheValue{ts, ocspRespBase64}\n\t\tcacheKey := decodeCertIDKey(k)\n\t\tstatus := extractOCSPCacheResponseValueWithoutSubject(cacheKey, certValue)\n\t\tif !isValidOCSPStatus(status.code) {\n\t\t\tcontinue\n\t\t}\n\t\tocspResponseCache[*cacheKey] = certValue\n\n\t}\n\tcacheUpdated = false\n}\n\nfunc extractTsAndOcspRespBase64(value []any) (bool, float64, string) {\n\tts, ok := value[0].(float64)\n\tif !ok {\n\t\tlogger.Warnf(\"cannot cast %v as float64\", value[0])\n\t\treturn false, -1, \"\"\n\t}\n\tocspRespBase64, ok := value[1].(string)\n\tif !ok {\n\t\tlogger.Warnf(\"cannot cast %v as string\", value[1])\n\t\treturn false, -1, \"\"\n\t}\n\treturn true, ts, ocspRespBase64\n}\n\nfunc extractOCSPCacheResponseValueWithoutSubject(cacheKey *certIDKey, cacheValue *certCacheValue) *ocspStatus {\n\treturn extractOCSPCacheResponseValue(cacheKey, cacheValue, nil, nil)\n}\n\nfunc extractOCSPCacheResponseValue(certIDKey *certIDKey, certCacheValue *certCacheValue, subject, issuer *x509.Certificate) *ocspStatus {\n\tsubjectName := \"Unknown\"\n\tif subject != nil {\n\t\tsubjectName = subject.Subject.CommonName\n\t}\n\n\tcurTime := time.Now()\n\tcurrentTime := float64(curTime.UTC().Unix())\n\tif currentTime-certCacheValue.ts >= cacheExpire {\n\t\treturn &ocspStatus{\n\t\t\tcode: ocspCacheExpired,\n\t\t\terr: fmt.Errorf(\"cache expired. 
current: %v, cache: %v\",\n\t\t\t\ttime.Unix(int64(currentTime), 0).UTC(), time.Unix(int64(certCacheValue.ts), 0).UTC()),\n\t\t}\n\t}\n\n\tocspParsedRespCacheLock.Lock()\n\tdefer ocspParsedRespCacheLock.Unlock()\n\n\tvar cacheKey parsedOcspRespKey\n\tif certIDKey != nil {\n\t\tcacheKey = parsedOcspRespKey{certCacheValue.ocspRespBase64, encodeCertIDKey(certIDKey)}\n\t} else {\n\t\tcacheKey = parsedOcspRespKey{certCacheValue.ocspRespBase64, \"\"}\n\t}\n\tstatus, ok := ocspParsedRespCache[cacheKey]\n\tif !ok {\n\t\tlogger.Tracef(\"OCSP status not found in cache; certIdKey: %v\", certIDKey)\n\t\tvar err error\n\t\tvar b []byte\n\t\tb, err = base64.StdEncoding.DecodeString(certCacheValue.ocspRespBase64)\n\t\tif err != nil {\n\t\t\treturn &ocspStatus{\n\t\t\t\tcode: ocspFailedDecodeResponse,\n\t\t\t\terr:  fmt.Errorf(\"failed to decode OCSP Response value in a cache. subject: %v, err: %v\", subjectName, err),\n\t\t\t}\n\t\t}\n\t\t// check the revocation status here\n\t\tocspResponse, err := ocsp.ParseResponse(b, issuer)\n\n\t\tif err != nil {\n\t\t\tlogger.Warnf(\"the second cache element is not a valid OCSP Response. Ignored. subject: %v\\n\", subjectName)\n\t\t\treturn &ocspStatus{\n\t\t\t\tcode: ocspFailedParseResponse,\n\t\t\t\terr:  fmt.Errorf(\"failed to parse OCSP Response. subject: %v, err: %v\", subjectName, err),\n\t\t\t}\n\t\t}\n\t\tstatus = validateOCSP(ocspResponse)\n\t\tocspParsedRespCache[cacheKey] = status\n\t}\n\tlogger.Tracef(\"OCSP status found in cache: %v; certIdKey: %v\", status, certIDKey)\n\treturn status\n}\n\n// writeOCSPCacheFile writes an OCSP Response cache file. This is called if all revocation statuses are successful.\n// A lock file is used to mitigate race conditions with other processes.\nfunc (ov *ocspValidator) writeOCSPCacheFile() {\n\tif !ocspCacheServerEnabled {\n\t\treturn\n\t}\n\tlogger.Infof(\"writing OCSP Response cache file. 
%v\\n\", cacheFileName)\n\tcacheLockFileName := cacheFileName + \".lck\"\n\terr := os.Mkdir(cacheLockFileName, 0600)\n\tswitch {\n\tcase os.IsExist(err):\n\t\tstatinfo, err := os.Stat(cacheLockFileName)\n\t\tif err != nil {\n\t\t\tlogger.Debugf(\"failed to get file info for cache lock file. file: %v, err: %v. ignored.\\n\", cacheLockFileName, err)\n\t\t\treturn\n\t\t}\n\t\tif time.Since(statinfo.ModTime()) < 15*time.Minute {\n\t\t\tlogger.Debugf(\"other process locks the cache file. %v. ignored.\\n\", cacheLockFileName)\n\t\t\treturn\n\t\t}\n\t\tif err = os.Remove(cacheLockFileName); err != nil {\n\t\t\tlogger.Debugf(\"failed to delete lock file. file: %v, err: %v. ignored.\\n\", cacheLockFileName, err)\n\t\t\treturn\n\t\t}\n\t\tif err = os.Mkdir(cacheLockFileName, 0600); err != nil {\n\t\t\tlogger.Debugf(\"failed to create lock file. file: %v, err: %v. ignored.\\n\", cacheLockFileName, err)\n\t\t\treturn\n\t\t}\n\t}\n\t// if mkdir fails for any other reason: permission denied, operation not permitted, I/O error, too many open files, etc.\n\tif err != nil {\n\t\tlogger.Debugf(\"failed to create lock file. file %v, err: %v. ignored.\\n\", cacheLockFileName, err)\n\t\treturn\n\t}\n\tdefer func() {\n\t\tif err = os.RemoveAll(cacheLockFileName); err != nil {\n\t\t\tlogger.Debugf(\"failed to delete lock file. file: %v, err: %v. ignored.\\n\", cacheLockFileName, err)\n\t\t}\n\t}()\n\n\tbuf := make(map[string][]any)\n\tfor k, v := range ocspResponseCache {\n\t\tcacheKeyInBase64 := encodeCertIDKey(&k)\n\t\tbuf[cacheKeyInBase64] = []any{v.ts, v.ocspRespBase64}\n\t}\n\n\tj, err := json.Marshal(buf)\n\tif err != nil {\n\t\tlogger.Debugf(\"failed to convert OCSP Response cache to JSON. ignored.\")\n\t\treturn\n\t}\n\tif err = os.WriteFile(cacheFileName, j, 0644); err != nil {\n\t\tlogger.Debugf(\"failed to write OCSP Response cache. err: %v. 
ignored.\\n\", err)\n\t}\n}\n\n// createOCSPCacheDir creates OCSP response cache directory and set the cache file name.\nfunc createOCSPCacheDir() {\n\tif !ocspCacheServerEnabled {\n\t\tlogger.Info(`OCSP Cache Server disabled. All further access and use of\n\t\t\tOCSP Cache will be disabled for this OCSP Status Query`)\n\t\treturn\n\t}\n\tcacheDir = os.Getenv(cacheDirEnv)\n\tif cacheDir == \"\" {\n\t\tcacheDir = os.Getenv(\"SNOWFLAKE_TEST_WORKSPACE\")\n\t}\n\tif cacheDir == \"\" {\n\t\tswitch runtime.GOOS {\n\t\tcase \"windows\":\n\t\t\tcacheDir = filepath.Join(os.Getenv(\"USERPROFILE\"), \"AppData\", \"Local\", \"Snowflake\", \"Caches\")\n\t\tcase \"darwin\":\n\t\t\thome := os.Getenv(\"HOME\")\n\t\t\tif home == \"\" {\n\t\t\t\tlogger.Info(\"HOME is blank.\")\n\t\t\t}\n\t\t\tcacheDir = filepath.Join(home, \"Library\", \"Caches\", \"Snowflake\")\n\t\tdefault:\n\t\t\thome := os.Getenv(\"HOME\")\n\t\t\tif home == \"\" {\n\t\t\t\tlogger.Info(\"HOME is blank\")\n\t\t\t}\n\t\t\tcacheDir = filepath.Join(home, \".cache\", \"snowflake\")\n\t\t}\n\t}\n\n\tif _, err := os.Stat(cacheDir); os.IsNotExist(err) {\n\t\tif err = os.MkdirAll(cacheDir, os.ModePerm); err != nil {\n\t\t\tlogger.Debugf(\"failed to create cache directory. %v, err: %v. ignored\\n\", cacheDir, err)\n\t\t}\n\t}\n\tcacheFileName = filepath.Join(cacheDir, cacheFileBaseName)\n\tlogger.Infof(\"reset OCSP cache file. 
%v\", cacheFileName)\n}\n\n// StartOCSPCacheClearer starts the job that clears OCSP caches\nfunc StartOCSPCacheClearer() {\n\tocspCacheClearer.start()\n}\n\n// StopOCSPCacheClearer stops the job that clears OCSP caches.\nfunc StopOCSPCacheClearer() {\n\tocspCacheClearer.stop()\n}\n\nfunc clearOCSPCaches() {\n\tlogger.Debugf(\"clearing OCSP caches\")\n\tfunc() {\n\t\tocspResponseCacheLock.Lock()\n\t\tdefer ocspResponseCacheLock.Unlock()\n\t\tocspResponseCache = make(map[certIDKey]*certCacheValue)\n\t}()\n\n\tfunc() {\n\t\tocspParsedRespCacheLock.Lock()\n\t\tdefer ocspParsedRespCacheLock.Unlock()\n\t\tocspParsedRespCache = make(map[parsedOcspRespKey]*ocspStatus)\n\t}()\n}\n\nfunc initOcspModule() {\n\tcreateOCSPCacheDir()\n\tinitOCSPCache()\n\n\tif cacheServerEnabledStr, ok := os.LookupEnv(cacheServerEnabledEnv); ok {\n\t\tlogger.Debugf(\"OCSP Cache Server enabled by user: %v\", cacheServerEnabledStr)\n\t\tocspCacheServerEnabled = strings.EqualFold(cacheServerEnabledStr, \"true\")\n\t}\n\n\tocspModuleInitialized = true\n}\n\ntype ocspCacheClearerType struct {\n\tmu      sync.Mutex\n\trunning bool\n\tcancel  context.CancelFunc\n}\n\nfunc (occ *ocspCacheClearerType) start() {\n\tocc.mu.Lock()\n\tdefer occ.mu.Unlock()\n\tif occ.running {\n\t\treturn\n\t}\n\tctx, cancel := context.WithCancel(context.Background())\n\tocc.cancel = cancel\n\tinterval := defaultOCSPResponseCacheClearingInterval\n\tif intervalFromEnv := os.Getenv(ocspResponseCacheClearingIntervalInSecondsEnv); intervalFromEnv != \"\" {\n\t\tintervalAsSeconds, err := strconv.Atoi(intervalFromEnv)\n\t\tif err != nil {\n\t\t\tlogger.Warnf(\"unparsable %v value: %v\", ocspResponseCacheClearingIntervalInSecondsEnv, intervalFromEnv)\n\t\t} else {\n\t\t\tinterval = time.Duration(intervalAsSeconds) * time.Second\n\t\t}\n\t}\n\tlogger.Debugf(\"initializing OCSP cache clearer to %v\", interval)\n\tgo GoroutineWrapper(context.Background(), func() {\n\t\tticker := time.NewTicker(interval)\n\t\tfor {\n\t\t\tselect 
{\n\t\t\tcase <-ticker.C:\n\t\t\t\tclearOCSPCaches()\n\t\t\tcase <-ctx.Done():\n\t\t\t\tocc.mu.Lock()\n\t\t\t\tdefer occ.mu.Unlock()\n\t\t\t\tlogger.Debug(\"stopped clearing OCSP cache\")\n\t\t\t\tticker.Stop()\n\t\t\t\tocc.running = false\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t})\n\tocc.running = true\n}\n\nfunc (occ *ocspCacheClearerType) stop() {\n\tocc.mu.Lock()\n\tdefer occ.mu.Unlock()\n\tif occ.running {\n\t\tocc.cancel()\n\t}\n}\n"
  },
  {
    "path": "ocsp_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"crypto\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"encoding/base64\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"testing\"\n\t\"time\"\n\n\t\"golang.org/x/crypto/ocsp\"\n)\n\nfunc TestOCSP(t *testing.T) {\n\tcacheServerEnabled := []string{\n\t\t\"true\",\n\t\t\"false\",\n\t}\n\ttargetURL := []string{\n\t\t\"https://sfctest0.snowflakecomputing.com/\",\n\t\t\"https://s3-us-west-2.amazonaws.com/sfc-snowsql-updates/?prefix=1.1/windows_x86_64\",\n\t\t\"https://sfcdev2.blob.core.windows.net/\",\n\t}\n\n\tocspTransport, err := newTransportFactory(&Config{}, nil).createOCSPTransport(defaultTransportConfigs.forTransportType(transportTypeSnowflake))\n\tassertNilF(t, err)\n\n\ttransports := []http.RoundTripper{\n\t\tcreateTestNoRevocationTransport(),\n\t\tocspTransport,\n\t}\n\n\tfor _, enabled := range cacheServerEnabled {\n\t\tfor _, tgt := range targetURL {\n\t\t\t_ = os.Setenv(cacheServerEnabledEnv, enabled)\n\t\t\t_ = os.Remove(cacheFileName) // clear cache file\n\t\t\tsyncUpdateOcspResponseCache(func() {\n\t\t\t\tocspResponseCache = make(map[certIDKey]*certCacheValue)\n\t\t\t})\n\t\t\tfor _, tr := range transports {\n\t\t\t\tt.Run(fmt.Sprintf(\"%v_%v\", tgt, enabled), func(t *testing.T) {\n\t\t\t\t\tc := &http.Client{\n\t\t\t\t\t\tTransport: tr,\n\t\t\t\t\t\tTimeout:   30 * time.Second,\n\t\t\t\t\t}\n\t\t\t\t\treq, err := http.NewRequest(\"GET\", tgt, bytes.NewReader(nil))\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tt.Fatalf(\"fail to create a request. err: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t\tres, err := c.Do(req)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tt.Fatalf(\"failed to GET contents. 
err: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t\tdefer res.Body.Close()\n\t\t\t\t\t_, err = io.ReadAll(res.Body)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tt.Fatalf(\"failed to read content body for %v\", tgt)\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t}\n\t\t}\n\t}\n\t_ = os.Unsetenv(cacheServerEnabledEnv)\n}\n\ntype tcValidityRange struct {\n\tthisTime time.Time\n\tnextTime time.Time\n\tret      bool\n}\n\nfunc TestUnitIsInValidityRange(t *testing.T) {\n\tcurrentTime := time.Now()\n\ttestcases := []tcValidityRange{\n\t\t{\n\t\t\t// basic tests\n\t\t\tthisTime: currentTime.Add(-100 * time.Second),\n\t\t\tnextTime: currentTime.Add(maxClockSkew),\n\t\t\tret:      true,\n\t\t},\n\t\t{\n\t\t\t// on the border\n\t\t\tthisTime: currentTime.Add(maxClockSkew),\n\t\t\tnextTime: currentTime.Add(maxClockSkew),\n\t\t\tret:      true,\n\t\t},\n\t\t{\n\t\t\t// 1 earlier late\n\t\t\tthisTime: currentTime.Add(maxClockSkew + 1*time.Second),\n\t\t\tnextTime: currentTime.Add(maxClockSkew),\n\t\t\tret:      false,\n\t\t},\n\t\t{\n\t\t\t// on the border\n\t\t\tthisTime: currentTime.Add(-maxClockSkew),\n\t\t\tnextTime: currentTime.Add(-maxClockSkew),\n\t\t\tret:      true,\n\t\t},\n\t\t{\n\t\t\t// around the border\n\t\t\tthisTime: currentTime.Add(-24*time.Hour - 40*time.Second),\n\t\t\tnextTime: currentTime.Add(-24*time.Hour/time.Duration(100) - 40*time.Second),\n\t\t\tret:      false,\n\t\t},\n\t\t{\n\t\t\t// on the border\n\t\t\tthisTime: currentTime.Add(-48*time.Hour - 29*time.Minute),\n\t\t\tnextTime: currentTime.Add(-48 * time.Hour / time.Duration(100)),\n\t\t\tret:      true,\n\t\t},\n\t}\n\tfor _, tc := range testcases {\n\t\tt.Run(fmt.Sprintf(\"%v_%v\", tc.thisTime, tc.nextTime), func(t *testing.T) {\n\t\t\tif tc.ret != isInValidityRange(currentTime, tc.thisTime, tc.nextTime) {\n\t\t\t\tt.Fatalf(\"failed to check validity. 
should be: %v, currentTime: %v, thisTime: %v, nextTime: %v\", tc.ret, currentTime, tc.thisTime, tc.nextTime)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestUnitEncodeCertIDGood(t *testing.T) {\n\ttargetURLs := []string{\n\t\t\"faketestaccount.snowflakecomputing.com:443\",\n\t\t\"s3-us-west-2.amazonaws.com:443\",\n\t\t\"sfcdev2.blob.core.windows.net:443\",\n\t}\n\tfor _, tt := range targetURLs {\n\t\tt.Run(tt, func(t *testing.T) {\n\t\t\tchainedCerts := getCert(tt)\n\t\t\tfor i := 0; i < len(chainedCerts)-1; i++ {\n\t\t\t\tsubject := chainedCerts[i]\n\t\t\t\tissuer := chainedCerts[i+1]\n\t\t\t\tocspServers := subject.OCSPServer\n\t\t\t\tif len(ocspServers) == 0 {\n\t\t\t\t\tt.Fatalf(\"no OCSP server is found. cert: %v\", subject.Subject)\n\t\t\t\t}\n\t\t\t\tocspReq, err := ocsp.CreateRequest(subject, issuer, &ocsp.RequestOptions{})\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Fatalf(\"failed to create OCSP request. err: %v\", err)\n\t\t\t\t}\n\t\t\t\tvar ost *ocspStatus\n\t\t\t\t_, ost = extractCertIDKeyFromRequest(ocspReq)\n\t\t\t\tif ost.err != nil {\n\t\t\t\t\tt.Fatalf(\"failed to extract cert ID from the OCSP request. err: %v\", ost.err)\n\t\t\t\t}\n\t\t\t\t// better hash. Not sure if the actual OCSP server accepts this, though.\n\t\t\t\tocspReq, err = ocsp.CreateRequest(subject, issuer, &ocsp.RequestOptions{Hash: crypto.SHA512})\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Fatalf(\"failed to create OCSP request. err: %v\", err)\n\t\t\t\t}\n\t\t\t\t_, ost = extractCertIDKeyFromRequest(ocspReq)\n\t\t\t\tif ost.err != nil {\n\t\t\t\t\tt.Fatalf(\"failed to extract cert ID from the OCSP request. err: %v\", ost.err)\n\t\t\t\t}\n\t\t\t\t// tweaked request binary\n\t\t\t\tocspReq, err = ocsp.CreateRequest(subject, issuer, &ocsp.RequestOptions{Hash: crypto.SHA512})\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Fatalf(\"failed to create OCSP request. 
err: %v\", err)\n\t\t\t\t}\n\t\t\t\tocspReq[10] = 0 // random change\n\t\t\t\t_, ost = extractCertIDKeyFromRequest(ocspReq)\n\t\t\t\tif ost.err == nil {\n\t\t\t\t\tt.Fatal(\"should have failed\")\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestUnitCheckOCSPResponseCache(t *testing.T) {\n\tocspCacheServerEnabled = true\n\tov := newOcspValidator(&Config{OCSPFailOpen: OCSPFailOpenTrue})\n\tdummyKey0 := certIDKey{\n\t\tHashAlgorithm: crypto.SHA1,\n\t\tNameHash:      \"dummy0\",\n\t\tIssuerKeyHash: \"dummy0\",\n\t\tSerialNumber:  \"dummy0\",\n\t}\n\tdummyKey := certIDKey{\n\t\tHashAlgorithm: crypto.SHA1,\n\t\tNameHash:      \"dummy1\",\n\t\tIssuerKeyHash: \"dummy1\",\n\t\tSerialNumber:  \"dummy1\",\n\t}\n\tb64Key := base64.StdEncoding.EncodeToString([]byte(\"DUMMY_VALUE\"))\n\tcurrentTime := float64(time.Now().UTC().Unix())\n\tsyncUpdateOcspResponseCache(func() {\n\t\tocspResponseCache[dummyKey0] = &certCacheValue{currentTime, b64Key}\n\t})\n\tsubject := &x509.Certificate{}\n\tissuer := &x509.Certificate{}\n\tost := ov.checkOCSPResponseCache(&dummyKey, subject, issuer)\n\tif ost.code != ocspMissedCache {\n\t\tt.Fatalf(\"should have failed. expected: %v, got: %v\", ocspMissedCache, ost.code)\n\t}\n\t// old timestamp\n\tsyncUpdateOcspResponseCache(func() {\n\t\tocspResponseCache[dummyKey] = &certCacheValue{float64(1395054952), b64Key}\n\t})\n\tost = ov.checkOCSPResponseCache(&dummyKey, subject, issuer)\n\tif ost.code != ocspCacheExpired {\n\t\tt.Fatalf(\"should have failed. expected: %v, got: %v\", ocspCacheExpired, ost.code)\n\t}\n\t// future timestamp\n\tsyncUpdateOcspResponseCache(func() {\n\t\tocspResponseCache[dummyKey] = &certCacheValue{float64(1805054952), b64Key}\n\t})\n\tost = ov.checkOCSPResponseCache(&dummyKey, subject, issuer)\n\tif ost.code != ocspFailedParseResponse {\n\t\tt.Fatalf(\"should have failed. 
expected: %v, got: %v\", ocspFailedDecodeResponse, ost.code)\n\t}\n\t// actual OCSP but it fails to parse, because an invalid issuer certificate is given.\n\tactualOcspResponse := \"MIIB0woBAKCCAcwwggHIBgkrBgEFBQcwAQEEggG5MIIBtTCBnqIWBBSxPsNpA/i/RwHUmCYaCALvY2QrwxgPMjAxNz\" + // pragma: allowlist secret\n\t\t\"A1MTYyMjAwMDBaMHMwcTBJMAkGBSsOAwIaBQAEFN+qEuMosQlBk+KfQoLOR0BClVijBBSxPsNpA/i/RwHUmCYaCALvY2QrwwIQBOHnp\" + // pragma: allowlist secret\n\t\t\"Nxc8vNtwCtCuF0Vn4AAGA8yMDE3MDUxNjIyMDAwMFqgERgPMjAxNzA1MjMyMjAwMDBaMA0GCSqGSIb3DQEBCwUAA4IBAQCuRGwqQsKy\" + // pragma: allowlist secret\n\t\t\"IAAGHgezTfG0PzMYgGD/XRDhU+2i08WTJ4Zs40Lu88cBeRXWF3iiJSpiX3/OLgfI7iXmHX9/sm2SmeNWc0Kb39bk5Lw1jwezf8hcI9+\" + // pragma: allowlist secret\n\t\t\"mZHt60vhUgtgZk21SsRlTZ+S4VXwtDqB1Nhv6cnSnfrL2A9qJDZS2ltPNOwebWJnznDAs2dg+KxmT2yBXpHM1kb0EOolWvNgORbgIgB\" + // pragma: allowlist secret\n\t\t\"koRzw/UU7zKsqiTB0ZN/rgJp+MocTdqQSGKvbZyR8d4u8eNQqi1x4Pk3yO/pftANFaJKGB+JPgKS3PQAqJaXcipNcEfqtl7y4PO6kqA\" + // pragma: allowlist secret\n\t\t\"Jb4xI/OTXIrRA5TsT4cCioE\"\n\t// issuer is not a true issuer certificate\n\tsyncUpdateOcspResponseCache(func() {\n\t\tocspResponseCache[dummyKey] = &certCacheValue{float64(currentTime - 1000), actualOcspResponse}\n\t})\n\tost = ov.checkOCSPResponseCache(&dummyKey, subject, issuer)\n\tif ost.code != ocspFailedParseResponse {\n\t\tt.Fatalf(\"should have failed. expected: %v, got: %v\", ocspFailedParseResponse, ost.code)\n\t}\n\t// invalid validity\n\tsyncUpdateOcspResponseCache(func() {\n\t\tocspResponseCache[dummyKey] = &certCacheValue{float64(currentTime - 1000), actualOcspResponse}\n\t})\n\tost = ov.checkOCSPResponseCache(&dummyKey, subject, nil)\n\tif ost.code != ocspInvalidValidity {\n\t\tt.Fatalf(\"should have failed. 
expected: %v, got: %v\", ocspInvalidValidity, ost.code)\n\t}\n}\n\nfunc TestOcspCacheClearer(t *testing.T) {\n\tinitOCSPCache()\n\torigValue := os.Getenv(ocspResponseCacheClearingIntervalInSecondsEnv)\n\tdefer func() {\n\t\tStopOCSPCacheClearer()\n\t\tos.Setenv(ocspResponseCacheClearingIntervalInSecondsEnv, origValue)\n\t\tinitOCSPCache()\n\t\tStartOCSPCacheClearer()\n\t}()\n\tsyncUpdateOcspResponseCache(func() {\n\t\tocspResponseCache[certIDKey{}] = nil\n\t})\n\tfunc() {\n\t\tocspParsedRespCacheLock.Lock()\n\t\tdefer ocspParsedRespCacheLock.Unlock()\n\t\tocspParsedRespCache[parsedOcspRespKey{}] = nil\n\t}()\n\tStopOCSPCacheClearer()\n\tos.Setenv(ocspResponseCacheClearingIntervalInSecondsEnv, \"1\")\n\tStartOCSPCacheClearer()\n\ttime.Sleep(2 * time.Second)\n\tsyncUpdateOcspResponseCache(func() {\n\t\tassertEqualE(t, len(ocspResponseCache), 0)\n\t})\n\tfunc() {\n\t\tocspParsedRespCacheLock.Lock()\n\t\tdefer ocspParsedRespCacheLock.Unlock()\n\t\tassertEqualE(t, len(ocspParsedRespCache), 0)\n\t}()\n}\n\nfunc TestUnitValidateOCSP(t *testing.T) {\n\tocspRes := &ocsp.Response{\n\t\tThisUpdate: time.Date(2020, 1, 1, 0, 0, 0, 0, time.UTC),\n\t\tNextUpdate: time.Date(2020, 1, 5, 0, 0, 0, 0, time.UTC),\n\t}\n\tost := validateOCSP(ocspRes)\n\tif ost.code != ocspInvalidValidity {\n\t\tt.Fatalf(\"should have failed. expected: %v, got: %v\", ocspInvalidValidity, ost.code)\n\t}\n\tcurrentTime := time.Now()\n\tocspRes.ThisUpdate = currentTime.Add(-2 * time.Hour)\n\tocspRes.NextUpdate = currentTime.Add(2 * time.Hour)\n\tocspRes.Status = ocsp.Revoked\n\tost = validateOCSP(ocspRes)\n\tif ost.code != ocspStatusRevoked {\n\t\tt.Fatalf(\"should have failed. expected: %v, got: %v\", ocspStatusRevoked, ost.code)\n\t}\n\tocspRes.Status = ocsp.Good\n\tost = validateOCSP(ocspRes)\n\tif ost.code != ocspStatusGood {\n\t\tt.Fatalf(\"should have success. 
expected: %v, got: %v\", ocspStatusGood, ost.code)\n\t}\n\tocspRes.Status = ocsp.Unknown\n\tost = validateOCSP(ocspRes)\n\tif ost.code != ocspStatusUnknown {\n\t\tt.Fatalf(\"should have failed. expected: %v, got: %v\", ocspStatusUnknown, ost.code)\n\t}\n\tocspRes.Status = ocsp.ServerFailed\n\tost = validateOCSP(ocspRes)\n\tif ost.code != ocspStatusOthers {\n\t\tt.Fatalf(\"should have failed. expected: %v, got: %v\", ocspStatusOthers, ost.code)\n\t}\n}\n\nfunc TestUnitEncodeCertID(t *testing.T) {\n\tvar st *ocspStatus\n\t_, st = extractCertIDKeyFromRequest([]byte{0x1, 0x2})\n\tif st.code != ocspFailedDecomposeRequest {\n\t\tt.Fatalf(\"failed to get OCSP status. expected: %v, got: %v\", ocspFailedDecomposeRequest, st.code)\n\t}\n}\n\nfunc getCert(addr string) []*x509.Certificate {\n\ttcpConn, err := net.DialTimeout(\"tcp\", addr, 40*time.Second)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tdefer tcpConn.Close()\n\n\terr = tcpConn.SetDeadline(time.Now().Add(10 * time.Second))\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tconfig := tls.Config{InsecureSkipVerify: true, ServerName: addr}\n\n\tconn := tls.Client(tcpConn, &config)\n\tdefer conn.Close()\n\n\terr = conn.Handshake()\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\tstate := conn.ConnectionState()\n\n\treturn state.PeerCertificates\n}\n\nfunc TestOCSPRetry(t *testing.T) {\n\tov := newOcspValidator(&Config{OCSPFailOpen: OCSPFailOpenTrue})\n\tcerts := getCert(\"s3-us-west-2.amazonaws.com:443\")\n\tdummyOCSPHost := &url.URL{\n\t\tScheme: \"https\",\n\t\tHost:   \"dummyOCSPHost\",\n\t}\n\tclient := &fakeHTTPClient{\n\t\tcnt:     3,\n\t\tsuccess: true,\n\t\tbody:    []byte{1, 2, 3},\n\t\tt:       t,\n\t}\n\tres, b, st := ov.retryOCSP(\n\t\tcontext.Background(),\n\t\tclient, emptyRequest,\n\t\tdummyOCSPHost,\n\t\tmake(map[string]string), []byte{0}, certs[len(certs)-1], 10*time.Second)\n\tif st.err == nil {\n\t\tfmt.Printf(\"should fail: %v, %v, %v\\n\", res, b, st)\n\t}\n\tclient = &fakeHTTPClient{\n\t\tcnt:     
30,\n\t\tsuccess: true,\n\t\tbody:    []byte{1, 2, 3},\n\t\tt:       t,\n\t}\n\tres, b, st = ov.retryOCSP(\n\t\tcontext.Background(),\n\t\tclient, fakeRequestFunc,\n\t\tdummyOCSPHost,\n\t\tmake(map[string]string), []byte{0}, certs[len(certs)-1], 5*time.Second)\n\tif st.err == nil {\n\t\tfmt.Printf(\"should fail: %v, %v, %v\\n\", res, b, st)\n\t}\n}\n\nfunc TestFullOCSPURL(t *testing.T) {\n\ttestcases := []tcFullOCSPURL{\n\t\t{\n\t\t\turl:               &url.URL{Host: \"some-ocsp-url.com\"},\n\t\t\texpectedURLString: \"some-ocsp-url.com\",\n\t\t},\n\t\t{\n\t\t\turl: &url.URL{\n\t\t\t\tHost: \"some-ocsp-url.com\",\n\t\t\t\tPath: \"/some-path\",\n\t\t\t},\n\t\t\texpectedURLString: \"some-ocsp-url.com/some-path\",\n\t\t},\n\t\t{\n\t\t\turl: &url.URL{\n\t\t\t\tHost: \"some-ocsp-url.com\",\n\t\t\t\tPath: \"some-path\",\n\t\t\t},\n\t\t\texpectedURLString: \"some-ocsp-url.com/some-path\",\n\t\t},\n\t}\n\n\tfor _, testcase := range testcases {\n\t\tt.Run(\"\", func(t *testing.T) {\n\t\t\treturnedStringURL := fullOCSPURL(testcase.url)\n\t\t\tif returnedStringURL != testcase.expectedURLString {\n\t\t\t\tt.Fatalf(\"failed to match returned OCSP url string; expected: %v, got: %v\",\n\t\t\t\t\ttestcase.expectedURLString, returnedStringURL)\n\t\t\t}\n\t\t})\n\t}\n}\n\ntype tcFullOCSPURL struct {\n\turl               *url.URL\n\texpectedURLString string\n}\n\nfunc TestOCSPCacheServerRetry(t *testing.T) {\n\tdummyOCSPHost := &url.URL{\n\t\tScheme: \"https\",\n\t\tHost:   \"dummyOCSPHost\",\n\t}\n\tclient := &fakeHTTPClient{\n\t\tcnt:     3,\n\t\tsuccess: true,\n\t\tbody:    []byte{1, 2, 3},\n\t\tt:       t,\n\t}\n\tres, st := checkOCSPCacheServer(\n\t\tcontext.Background(), client, fakeRequestFunc, dummyOCSPHost, 20*time.Second)\n\tif st.err == nil {\n\t\tt.Errorf(\"should fail: %v\", res)\n\t}\n\tclient = &fakeHTTPClient{\n\t\tcnt:     30,\n\t\tsuccess: true,\n\t\tbody:    []byte{1, 2, 3},\n\t\tt:       t,\n\t}\n\tres, st = checkOCSPCacheServer(\n\t\tcontext.Background(), client, 
fakeRequestFunc, dummyOCSPHost, 10*time.Second)\n\tif st.err == nil {\n\t\tt.Errorf(\"should fail: %v\", res)\n\t}\n}\n\ntype tcCanEarlyExit struct {\n\tresults       []*ocspStatus\n\tresultLen     int\n\tretFailOpen   *ocspStatus\n\tretFailClosed *ocspStatus\n}\n\nfunc TestCanEarlyExitForOCSP(t *testing.T) {\n\ttestcases := []tcCanEarlyExit{\n\t\t{ // 0\n\t\t\tresults: []*ocspStatus{\n\t\t\t\t{\n\t\t\t\t\tcode: ocspStatusGood,\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tcode: ocspStatusGood,\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tcode: ocspStatusGood,\n\t\t\t\t},\n\t\t\t},\n\t\t\tretFailOpen:   nil,\n\t\t\tretFailClosed: nil,\n\t\t},\n\t\t{ // 1\n\t\t\tresults: []*ocspStatus{\n\t\t\t\t{\n\t\t\t\t\tcode: ocspStatusRevoked,\n\t\t\t\t\terr:  errors.New(\"revoked\"),\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tcode: ocspStatusGood,\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tcode: ocspStatusGood,\n\t\t\t\t},\n\t\t\t},\n\t\t\tretFailOpen:   &ocspStatus{ocspStatusRevoked, errors.New(\"revoked\")},\n\t\t\tretFailClosed: &ocspStatus{ocspStatusRevoked, errors.New(\"revoked\")},\n\t\t},\n\t\t{ // 2\n\t\t\tresults: []*ocspStatus{\n\t\t\t\t{\n\t\t\t\t\tcode: ocspStatusUnknown,\n\t\t\t\t\terr:  errors.New(\"unknown\"),\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tcode: ocspStatusGood,\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tcode: ocspStatusGood,\n\t\t\t\t},\n\t\t\t},\n\t\t\tretFailOpen:   nil,\n\t\t\tretFailClosed: &ocspStatus{ocspStatusUnknown, errors.New(\"unknown\")},\n\t\t},\n\t\t{ // 3: not taken as revoked if any invalid OCSP response (ocspInvalidValidity) is included.\n\t\t\tresults: []*ocspStatus{\n\t\t\t\t{\n\t\t\t\t\tcode: ocspStatusRevoked,\n\t\t\t\t\terr:  errors.New(\"revoked\"),\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tcode: ocspInvalidValidity,\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tcode: ocspStatusGood,\n\t\t\t\t},\n\t\t\t},\n\t\t\tretFailOpen:   nil,\n\t\t\tretFailClosed: &ocspStatus{ocspStatusRevoked, errors.New(\"revoked\")},\n\t\t},\n\t\t{ // 4: not taken as revoked if the number of results don't match the expected 
results.\n\t\t\tresults: []*ocspStatus{\n\t\t\t\t{\n\t\t\t\t\tcode: ocspStatusRevoked,\n\t\t\t\t\terr:  errors.New(\"revoked\"),\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tcode: ocspStatusGood,\n\t\t\t\t},\n\t\t\t},\n\t\t\tresultLen:     3,\n\t\t\tretFailOpen:   nil,\n\t\t\tretFailClosed: &ocspStatus{ocspStatusRevoked, errors.New(\"revoked\")},\n\t\t},\n\t}\n\n\tfor idx, tt := range testcases {\n\t\tt.Run(\"\", func(t *testing.T) {\n\t\t\tovOpen := newOcspValidator(&Config{OCSPFailOpen: OCSPFailOpenTrue})\n\t\t\texpectedLen := len(tt.results)\n\t\t\tif tt.resultLen > 0 {\n\t\t\t\texpectedLen = tt.resultLen\n\t\t\t}\n\t\t\texpectedLen++ // add one because normally there is a root certificate that is not included in the results.\n\t\t\tmockVerifiedChain := make([]*x509.Certificate, expectedLen)\n\t\t\tr := ovOpen.canEarlyExitForOCSP(tt.results, mockVerifiedChain)\n\t\t\tif !(tt.retFailOpen == nil && r == nil) && !(tt.retFailOpen != nil && r != nil && tt.retFailOpen.code == r.code) {\n\t\t\t\tt.Fatalf(\"%d: failed to match return. expected: %v, got: %v\", idx, tt.retFailOpen, r)\n\t\t\t}\n\t\t\tovClosed := newOcspValidator(&Config{OCSPFailOpen: OCSPFailOpenFalse})\n\t\t\tr = ovClosed.canEarlyExitForOCSP(tt.results, mockVerifiedChain)\n\t\t\tif !(tt.retFailClosed == nil && r == nil) && !(tt.retFailClosed != nil && r != nil && tt.retFailClosed.code == r.code) {\n\t\t\t\tt.Fatalf(\"%d: failed to match return. 
expected: %v, got: %v\", idx, tt.retFailClosed, r)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestInitOCSPCacheFileCreation(t *testing.T) {\n\tif runningOnGithubAction() {\n\t\tt.Skip(\"cannot write to github file system\")\n\t}\n\tdirName, err := os.UserHomeDir()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tsrcFileName := dirName + \"/.cache/snowflake/ocsp_response_cache.json\"\n\ttmpFileName := srcFileName + \"_tmp\"\n\tdst, err := os.Create(tmpFileName)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tdefer dst.Close()\n\n\tvar src *os.File\n\tif _, err = os.Stat(srcFileName); errors.Is(err, os.ErrNotExist) {\n\t\t// file does not exist\n\t\tif err = os.MkdirAll(dirName+\"/.cache/snowflake/\", os.ModePerm); err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t\tif _, err = os.Create(srcFileName); err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t} else if err != nil {\n\t\tt.Error(err)\n\t} else {\n\t\t// file exists\n\t\tsrc, err = os.Open(srcFileName)\n\t\tif err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t\tdefer src.Close()\n\t\t// copy original contents to temporary file\n\t\tif _, err = io.Copy(dst, src); err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t\tif err = os.Remove(srcFileName); err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t}\n\n\t// cleanup\n\tdefer func() {\n\t\tsrc, _ = os.Open(tmpFileName)\n\t\tdefer src.Close()\n\t\tdst, _ = os.OpenFile(srcFileName, os.O_WRONLY, readWriteFileMode)\n\t\tdefer dst.Close()\n\t\t// copy temporary file contents back to original file\n\t\tif _, err = io.Copy(dst, src); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\tif err = os.Remove(tmpFileName); err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t}()\n\n\tinitOCSPCache()\n\tif _, err = os.Stat(srcFileName); errors.Is(err, os.ErrNotExist) {\n\t\tt.Error(err)\n\t} else if err != nil {\n\t\tt.Error(err)\n\t}\n}\n\nfunc syncUpdateOcspResponseCache(f func()) {\n\tocspResponseCacheLock.Lock()\n\tdefer ocspResponseCacheLock.Unlock()\n\tf()\n}\n"
  },
  {
    "path": "old_driver_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"reflect\"\n\t\"testing\"\n)\n\nconst (\n\tforceARROW = \"ALTER SESSION SET GO_QUERY_RESULT_FORMAT = ARROW\"\n\tforceJSON  = \"ALTER SESSION SET GO_QUERY_RESULT_FORMAT = JSON\"\n)\n\nfunc TestJSONInt(t *testing.T) {\n\ttestInt(t, true)\n}\n\nfunc TestJSONFloat32(t *testing.T) {\n\ttestFloat32(t, true)\n}\n\nfunc TestJSONFloat64(t *testing.T) {\n\ttestFloat64(t, true)\n}\n\nfunc TestJSONVariousTypes(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(forceJSON)\n\t\trows := dbt.mustQuery(selectVariousTypes)\n\t\tdefer rows.Close()\n\t\tif !rows.Next() {\n\t\t\tdbt.Error(\"failed to query\")\n\t\t}\n\t\tcc, err := rows.Columns()\n\t\tif err != nil {\n\t\t\tdbt.Errorf(\"columns: %v\", cc)\n\t\t}\n\t\tct, err := rows.ColumnTypes()\n\t\tif err != nil {\n\t\t\tdbt.Errorf(\"column types: %v\", ct)\n\t\t}\n\t\tvar v1 float32\n\t\tvar v2, v2a int\n\t\tvar v3 string\n\t\tvar v4 float64\n\t\tvar v5 []byte\n\t\tvar v6 bool\n\t\terr = rows.Scan(&v1, &v2, &v2a, &v3, &v4, &v5, &v6)\n\t\tif err != nil {\n\t\t\tdbt.Errorf(\"failed to scan: %#v\", err)\n\t\t}\n\t\tif v1 != 1.0 {\n\t\t\tdbt.Errorf(\"failed to scan. %#v\", v1)\n\t\t}\n\t\tif ct[0].Name() != \"C1\" || ct[1].Name() != \"C2\" || ct[2].Name() != \"C2A\" || ct[3].Name() != \"C3\" || ct[4].Name() != \"C4\" || ct[5].Name() != \"C5\" || ct[6].Name() != \"C6\" {\n\t\t\tdbt.Errorf(\"failed to get column names: %#v\", ct)\n\t\t}\n\t\tif ct[0].ScanType() != reflect.TypeFor[float64]() {\n\t\t\tdbt.Errorf(\"failed to get scan type. expected: %v, got: %v\", reflect.TypeFor[float64](), ct[0].ScanType())\n\t\t}\n\t\tif ct[1].ScanType() != reflect.TypeFor[int64]() {\n\t\t\tdbt.Errorf(\"failed to get scan type. 
expected: %v, got: %v\", reflect.TypeFor[int64](), ct[1].ScanType())\n\t\t}\n\t\tassertEqualE(t, ct[2].ScanType(), reflect.TypeFor[string]())\n\t\tvar pr, sc int64\n\t\tvar cLen int64\n\t\tvar canNull bool\n\t\tpr, sc = dbt.mustDecimalSize(ct[0])\n\t\tif pr != 30 || sc != 2 {\n\t\t\tdbt.Errorf(\"failed to get precision and scale. %#v\", ct[0])\n\t\t}\n\t\tdbt.mustFailLength(ct[0])\n\t\tcanNull = dbt.mustNullable(ct[0])\n\t\tif canNull {\n\t\t\tdbt.Errorf(\"failed to get nullable. %#v\", ct[0])\n\t\t}\n\t\tif cLen != 0 {\n\t\t\tdbt.Errorf(\"failed to get length. %#v\", ct[0])\n\t\t}\n\t\tif v2 != 2 {\n\t\t\tdbt.Errorf(\"failed to scan. %#v\", v2)\n\t\t}\n\t\tpr, sc = dbt.mustDecimalSize(ct[1])\n\t\tif pr != 18 || sc != 0 {\n\t\t\tdbt.Errorf(\"failed to get precision and scale. %#v\", ct[1])\n\t\t}\n\t\tdbt.mustFailLength(ct[1])\n\t\tcanNull = dbt.mustNullable(ct[1])\n\t\tif canNull {\n\t\t\tdbt.Errorf(\"failed to get nullable. %#v\", ct[1])\n\t\t}\n\t\tif v2a != 22 {\n\t\t\tdbt.Errorf(\"failed to scan. %#v\", v2a)\n\t\t}\n\t\tpr, sc = dbt.mustDecimalSize(ct[2])\n\t\tif pr != 38 || sc != 0 {\n\t\t\tdbt.Errorf(\"failed to get precision and scale. %#v\", ct[2])\n\t\t}\n\t\tif v3 != \"t3\" {\n\t\t\tdbt.Errorf(\"failed to scan. %#v\", v3)\n\t\t}\n\t\tdbt.mustFailDecimalSize(ct[3])\n\t\tcLen = dbt.mustLength(ct[3])\n\t\tif cLen != 2 {\n\t\t\tdbt.Errorf(\"failed to get length. %#v\", ct[3])\n\t\t}\n\t\tcanNull = dbt.mustNullable(ct[3])\n\t\tif canNull {\n\t\t\tdbt.Errorf(\"failed to get nullable. %#v\", ct[3])\n\t\t}\n\t\tif v4 != 4.2 {\n\t\t\tdbt.Errorf(\"failed to scan. %#v\", v4)\n\t\t}\n\t\tdbt.mustFailDecimalSize(ct[4])\n\t\tdbt.mustFailLength(ct[4])\n\t\tcanNull = dbt.mustNullable(ct[4])\n\t\tif canNull {\n\t\t\tdbt.Errorf(\"failed to get nullable. %#v\", ct[4])\n\t\t}\n\t\tif !bytes.Equal(v5, []byte{0xab, 0xcd}) {\n\t\t\tdbt.Errorf(\"failed to scan. 
%#v\", v5)\n\t\t}\n\t\tdbt.mustFailDecimalSize(ct[5])\n\t\tcLen = dbt.mustLength(ct[5]) // BINARY\n\t\tif cLen != 8388608 {\n\t\t\tdbt.Errorf(\"failed to get length. %#v\", ct[5])\n\t\t}\n\t\tcanNull = dbt.mustNullable(ct[5])\n\t\tif canNull {\n\t\t\tdbt.Errorf(\"failed to get nullable. %#v\", ct[5])\n\t\t}\n\t\tif !v6 {\n\t\t\tdbt.Errorf(\"failed to scan. %#v\", v6)\n\t\t}\n\t\tdbt.mustFailDecimalSize(ct[6])\n\t\tdbt.mustFailLength(ct[6])\n\t})\n}\n\nfunc TestJSONString(t *testing.T) {\n\ttestString(t, true)\n}\n\nfunc TestJSONSimpleDateTimeTimestampFetch(t *testing.T) {\n\ttestSimpleDateTimeTimestampFetch(t, true)\n}\n\nfunc TestJSONDateTime(t *testing.T) {\n\ttestDateTime(t, true)\n}\n\nfunc TestJSONTimestampLTZ(t *testing.T) {\n\ttestTimestampLTZ(t, true)\n}\n\nfunc TestJSONTimestampTZ(t *testing.T) {\n\ttestTimestampTZ(t, true)\n}\n\nfunc TestJSONNULL(t *testing.T) {\n\ttestNULL(t, true)\n}\n\nfunc TestJSONVariant(t *testing.T) {\n\ttestVariant(t, true)\n}\n\nfunc TestJSONArray(t *testing.T) {\n\ttestArray(t, true)\n}\n\n// TestLargeSetJSONResultWithDecoder and TestLargeSetResultWithCustomJSONDecoder\n// validate JSON result decoding with row counts large enough to trigger chunked\n// result delivery from Snowflake. The row counts (10,000 and 20,000) are\n// calibrated to exercise the chunk download pipeline while staying within CI\n// timeout limits.\nfunc TestLargeSetJSONResultWithDecoder(t *testing.T) {\n\ttestLargeSetResult(t, 10000, true)\n}\n\n// TestLargeSetResultWithCustomJSONDecoder validates chunked JSON decoding using\n// the custom decoder. 
Same row count constraints as TestLargeSetJSONResultWithDecoder\n// apply here — the count must be large enough to trigger chunked delivery.\nfunc TestLargeSetResultWithCustomJSONDecoder(t *testing.T) {\n\tcustomJSONDecoderEnabled = true\n\t// less number of rows to avoid CI timeout\n\ttestLargeSetResult(t, 20000, true)\n}\n\nfunc TestBindingJSONInterface(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(forceJSON)\n\t\trows := dbt.mustQuery(selectVariousTypes)\n\t\tdefer rows.Close()\n\t\tif !rows.Next() {\n\t\t\tdbt.Error(\"failed to query\")\n\t\t}\n\t\tvar v1, v2, v2a, v3, v4, v5, v6 any\n\t\tif err := rows.Scan(&v1, &v2, &v2a, &v3, &v4, &v5, &v6); err != nil {\n\t\t\tdbt.Errorf(\"failed to scan: %#v\", err)\n\t\t}\n\t\tif s, ok := v1.(string); !ok || s != \"1.00\" {\n\t\t\tdbt.Fatalf(\"failed to fetch. ok: %v, value: %v\", ok, v1)\n\t\t}\n\t\tif s, ok := v2.(string); !ok || s != \"2\" {\n\t\t\tdbt.Fatalf(\"failed to fetch. ok: %v, value: %v\", ok, v2)\n\t\t}\n\t\tif s, ok := v3.(string); !ok || s != \"t3\" {\n\t\t\tdbt.Fatalf(\"failed to fetch. ok: %v, value: %v\", ok, v3)\n\t\t}\n\t\tif s, ok := v4.(string); !ok || s != \"4.2\" {\n\t\t\tdbt.Fatalf(\"failed to fetch. ok: %v, value: %v\", ok, v4)\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "os_specific_posix.go",
    "content": "//go:build !windows\n\npackage gosnowflake\n\nimport (\n\t\"fmt\"\n\t\"golang.org/x/sys/unix\"\n\t\"io\"\n\t\"os\"\n\t\"syscall\"\n)\n\nvar osVersion = getOSVersion()\n\nfunc getOSVersion() string {\n\tvar uts unix.Utsname\n\tif err := unix.Uname(&uts); err != nil {\n\t\tpanic(err)\n\t}\n\n\tsysname := unix.ByteSliceToString(uts.Sysname[:])\n\trelease := unix.ByteSliceToString(uts.Release[:])\n\n\treturn sysname + \"-\" + release\n}\n\nfunc provideFileOwner(file *os.File) (uint32, error) {\n\tinfo, err := file.Stat()\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\treturn provideOwnerFromStat(info, file.Name())\n}\n\nfunc provideOwnerFromStat(info os.FileInfo, filepath string) (uint32, error) {\n\tnativeStat, ok := info.Sys().(*syscall.Stat_t)\n\tif !ok {\n\t\treturn 0, fmt.Errorf(\"cannot cast file info for %v to *syscall.Stat_t\", filepath)\n\t}\n\treturn nativeStat.Uid, nil\n}\n\nfunc getFileContents(filePath string, expectedPerm os.FileMode) ([]byte, error) {\n\t// open the file with read only and no symlink flags\n\tfile, err := os.OpenFile(filePath, syscall.O_RDONLY|syscall.O_NOFOLLOW, 0)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer func() {\n\t\tif err = file.Close(); err != nil {\n\t\t\tlogger.Warnf(\"failed to close the file: %v\", err)\n\t\t}\n\t}()\n\n\t// validate file permissions and owner\n\tif err = validateFilePermissionBits(file, expectedPerm); err != nil {\n\t\treturn nil, err\n\t}\n\tif err = ensureFileOwner(file); err != nil {\n\t\treturn nil, err\n\t}\n\n\t// read the file\n\tfileContents, err := io.ReadAll(file)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn fileContents, nil\n}\n\nfunc validateFilePermissionBits(f *os.File, expectedPerm os.FileMode) error {\n\tfileInfo, err := f.Stat()\n\tif err != nil {\n\t\treturn err\n\t}\n\tfilePerm := fileInfo.Mode()\n\tif filePerm&expectedPerm != 0 {\n\t\treturn fmt.Errorf(\"incorrect permissions of %s\", f.Name())\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "os_specific_windows.go",
    "content": "package gosnowflake\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\n\t\"golang.org/x/sys/windows/registry\"\n)\n\nvar osVersion = getWindowsOSVersion()\n\nfunc getWindowsOSVersion() string {\n\tk, err := registry.OpenKey(registry.LOCAL_MACHINE, `SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion`, registry.QUERY_VALUE)\n\tif err != nil {\n\t\terrString := fmt.Sprintf(\"cannot open Windows registry key: %v\", err)\n\t\tlogger.Debug(errString)\n\t\treturn errString\n\t}\n\tdefer k.Close()\n\n\tcv, _, err := k.GetStringValue(\"CurrentVersion\")\n\tif err != nil {\n\t\tlogger.Debugf(\"cannot find Windows current version: %v\", err)\n\t\tcv = \"unknown\"\n\t}\n\n\tpn, _, err := k.GetStringValue(\"ProductName\")\n\tif err != nil {\n\t\tlogger.Debugf(\"cannot find Windows product name: %v\", err)\n\t\tpn = \"unknown\"\n\t}\n\n\tmaj, _, err := k.GetIntegerValue(\"CurrentMajorVersionNumber\")\n\tif err != nil {\n\t\tlogger.Debugf(\"cannot find Windows major version number: %v\", err)\n\t}\n\n\tmin, _, err := k.GetIntegerValue(\"CurrentMinorVersionNumber\")\n\tif err != nil {\n\t\tlogger.Debugf(\"cannot find Windows minor version number: %v\", err)\n\t}\n\n\tcb, _, err := k.GetStringValue(\"CurrentBuild\")\n\tif err != nil {\n\t\tlogger.Debugf(\"cannot find Windows current build: %v\", err)\n\t\tcb = \"unknown\"\n\t}\n\treturn fmt.Sprintf(\"CurrentVersion=%s; ProductName=%s; MajorVersion=%d; MinorVersion=%d; CurrentBuild=%s\", cv, pn, maj, min, cb)\n}\n\nfunc provideFileOwner(file *os.File) (uint32, error) {\n\treturn 0, errors.New(\"provideFileOwner is unsupported on windows\")\n}\n\nfunc getFileContents(filePath string, expectedPerm os.FileMode) ([]byte, error) {\n\tfileContents, err := os.ReadFile(filePath)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn fileContents, nil\n}\n"
  },
  {
    "path": "parameters.json.local",
    "content": "{\n    \"testconnection\": {\n        \"SNOWFLAKE_TEST_HOST\":      \"snowflake.reg.local\",\n        \"SNOWFLAKE_TEST_PROTOCOL\":  \"http\",\n        \"SNOWFLAKE_TEST_PORT\":      \"8082\",\n        \"SNOWFLAKE_TEST_USER\":      \"snowman\",\n        \"SNOWFLAKE_TEST_PASSWORD\":  \"test\",\n        \"SNOWFLAKE_TEST_ACCOUNT\":   \"s3testaccount\",\n        \"SNOWFLAKE_TEST_WAREHOUSE\": \"regress\",\n        \"SNOWFLAKE_TEST_DATABASE\":  \"testdb\",\n        \"SNOWFLAKE_TEST_SCHEMA\":    \"testschema\",\n        \"SNOWFLAKE_TEST_ROLE\":      \"sysadmin\",\n        \"SNOWFLAKE_TEST_DEBUG\":     \"false\"\n    }\n}\n"
  },
  {
    "path": "parameters.json.tmpl",
    "content": "{\n    \"testconnection\": {\n        \"SNOWFLAKE_TEST_USER\":      \"testuser\",\n        \"SNOWFLAKE_TEST_PASSWORD\":  \"testpass\",\n        \"SNOWFLAKE_TEST_ACCOUNT\":   \"testaccount\",\n        \"SNOWFLAKE_TEST_WAREHOUSE\": \"testwarehouse\",\n        \"SNOWFLAKE_TEST_DATABASE\":  \"testdatabase\",\n        \"SNOWFLAKE_TEST_SCHEMA\":    \"testschema\",\n        \"SNOWFLAKE_TEST_ROLE\":      \"testrole\",\n        \"SNOWFLAKE_TEST_DEBUG\":     \"false\"\n    }\n}\n"
  },
  {
    "path": "permissions_test.go",
    "content": "//go:build !windows\n\npackage gosnowflake\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"path\"\n\t\"testing\"\n\n\t\"golang.org/x/sys/unix\"\n)\n\nfunc TestConfigPermissions(t *testing.T) {\n\ttestCases := []struct {\n\t\tfilePerm int\n\t\tisValid  bool\n\t}{\n\t\t{filePerm: 0700, isValid: true},\n\t\t{filePerm: 0600, isValid: true},\n\t\t{filePerm: 0500, isValid: true},\n\t\t{filePerm: 0400, isValid: true},\n\t\t{filePerm: 0707, isValid: false},\n\t\t{filePerm: 0706, isValid: false},\n\t\t{filePerm: 0705, isValid: true},\n\t\t{filePerm: 0704, isValid: true},\n\t\t{filePerm: 0703, isValid: false},\n\t\t{filePerm: 0702, isValid: false},\n\t\t{filePerm: 0701, isValid: true},\n\t\t{filePerm: 0770, isValid: false},\n\t\t{filePerm: 0760, isValid: false},\n\t\t{filePerm: 0750, isValid: true},\n\t\t{filePerm: 0740, isValid: true},\n\t\t{filePerm: 0730, isValid: false},\n\t\t{filePerm: 0720, isValid: false},\n\t\t{filePerm: 0710, isValid: true},\n\t}\n\n\toldMask := unix.Umask(0000)\n\tdefer unix.Umask(oldMask)\n\n\tfor _, tc := range testCases {\n\t\tt.Run(fmt.Sprintf(\"0%o\", tc.filePerm), func(t *testing.T) {\n\t\t\ttempFile := path.Join(t.TempDir(), fmt.Sprintf(\"filePerm_%o\", tc.filePerm))\n\t\t\terr := os.WriteFile(tempFile, nil, os.FileMode(tc.filePerm))\n\t\t\tassertNilE(t, err)\n\t\t\tdefer os.Remove(tempFile)\n\t\t\tf, err := os.Open(tempFile)\n\t\t\tassertNilE(t, err)\n\t\t\tdefer f.Close()\n\t\t\texpectedPerm := os.FileMode(1<<4 | 1<<1)\n\t\t\terr = validateFilePermissionBits(f, expectedPerm)\n\t\t\tif err != nil && tc.isValid {\n\t\t\t\tt.Error(err)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestLogDirectoryPermissions(t *testing.T) {\n\ttestCases := []struct {\n\t\tdirPerm       int\n\t\tlimitedToUser bool\n\t}{\n\t\t{dirPerm: 0700, limitedToUser: true},\n\t\t{dirPerm: 0600, limitedToUser: false},\n\t\t{dirPerm: 0500, limitedToUser: false},\n\t\t{dirPerm: 0400, limitedToUser: false},\n\t\t{dirPerm: 0300, limitedToUser: false},\n\t\t{dirPerm: 0200, 
limitedToUser: false},\n\t\t{dirPerm: 0100, limitedToUser: false},\n\t\t{dirPerm: 0707, limitedToUser: false},\n\t\t{dirPerm: 0706, limitedToUser: false},\n\t\t{dirPerm: 0705, limitedToUser: false},\n\t\t{dirPerm: 0704, limitedToUser: false},\n\t\t{dirPerm: 0703, limitedToUser: false},\n\t\t{dirPerm: 0702, limitedToUser: false},\n\t\t{dirPerm: 0701, limitedToUser: false},\n\t\t{dirPerm: 0770, limitedToUser: false},\n\t\t{dirPerm: 0760, limitedToUser: false},\n\t\t{dirPerm: 0750, limitedToUser: false},\n\t\t{dirPerm: 0740, limitedToUser: false},\n\t\t{dirPerm: 0730, limitedToUser: false},\n\t\t{dirPerm: 0720, limitedToUser: false},\n\t\t{dirPerm: 0710, limitedToUser: false},\n\t}\n\n\toldMask := unix.Umask(0000)\n\tdefer unix.Umask(oldMask)\n\n\tfor _, tc := range testCases {\n\t\tt.Run(fmt.Sprintf(\"0%o\", tc.dirPerm), func(t *testing.T) {\n\t\t\ttempDir := path.Join(t.TempDir(), fmt.Sprintf(\"filePerm_%o\", tc.dirPerm))\n\t\t\terr := os.Mkdir(tempDir, os.FileMode(tc.dirPerm))\n\t\t\tassertNilE(t, err)\n\t\t\tdefer os.Remove(tempDir)\n\t\t\tresult, _, err := isDirAccessCorrect(tempDir)\n\t\t\tif err != nil && tc.limitedToUser {\n\t\t\t\tt.Error(err)\n\t\t\t}\n\t\t\tassertEqualE(t, result, tc.limitedToUser)\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "platform_detection.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"regexp\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/aws/aws-sdk-go-v2/config\"\n\t\"github.com/aws/aws-sdk-go-v2/feature/ec2/imds\"\n\t\"github.com/aws/aws-sdk-go-v2/service/sts\"\n\t\"github.com/aws/smithy-go/logging\"\n)\n\ntype platformDetectionState string\n\nconst (\n\tplatformDetected         platformDetectionState = \"detected\"\n\tplatformNotDetected      platformDetectionState = \"not_detected\"\n\tplatformDetectionTimeout platformDetectionState = \"timeout\"\n)\n\nconst disablePlatformDetectionEnv = \"SNOWFLAKE_DISABLE_PLATFORM_DETECTION\"\n\nvar (\n\tazureMetadataBaseURL = \"http://169.254.169.254\"\n\tgceMetadataRootURL   = \"http://metadata.google.internal\"\n\tgcpMetadataBaseURL   = \"http://metadata.google.internal/computeMetadata/v1\"\n)\n\nvar (\n\tdetectedPlatformsCache    []string\n\tinitPlatformDetectionOnce sync.Once\n\tplatformDetectionDone     = make(chan struct{})\n)\n\nfunc initPlatformDetection() {\n\tinitPlatformDetectionOnce.Do(func() {\n\t\tgo func() {\n\t\t\tdetectedPlatformsCache = detectPlatforms(context.Background(), 200*time.Millisecond)\n\t\t\tdefer close(platformDetectionDone)\n\t\t}()\n\t})\n}\n\nfunc getDetectedPlatforms() []string {\n\tlogger.Debugf(\"getDetectedPlatforms: waiting for platform detection to complete\")\n\t<-platformDetectionDone\n\tlogger.Debugf(\"getDetectedPlatforms: returning cached detected platforms: %v\", detectedPlatformsCache)\n\treturn detectedPlatformsCache\n}\n\nfunc metadataServerHTTPClient(timeout time.Duration) *http.Client {\n\treturn &http.Client{\n\t\tTimeout: timeout,\n\t\tTransport: &http.Transport{\n\t\t\tProxy:             nil,\n\t\t\tDisableKeepAlives: true,\n\t\t},\n\t}\n}\n\ntype detectorFunc struct {\n\tname string\n\tfn   func(ctx context.Context, timeout time.Duration) platformDetectionState\n}\n\nfunc detectPlatforms(ctx context.Context, 
timeout time.Duration) []string {\n\tif strings.EqualFold(os.Getenv(disablePlatformDetectionEnv), \"true\") {\n\t\treturn []string{\"disabled\"}\n\t}\n\n\tdetectors := []detectorFunc{\n\t\t{name: \"is_aws_lambda\", fn: detectAwsLambdaEnv},\n\t\t{name: \"is_azure_function\", fn: detectAzureFunctionEnv},\n\t\t{name: \"is_gce_cloud_run_service\", fn: detectGceCloudRunServiceEnv},\n\t\t{name: \"is_gce_cloud_run_job\", fn: detectGceCloudRunJobEnv},\n\t\t{name: \"is_github_action\", fn: detectGithubActionsEnv},\n\t\t{name: \"is_ec2_instance\", fn: detectEc2Instance},\n\t\t{name: \"has_aws_identity\", fn: detectAwsIdentity},\n\t\t{name: \"is_azure_vm\", fn: detectAzureVM},\n\t\t{name: \"has_azure_managed_identity\", fn: detectAzureManagedIdentity},\n\t\t{name: \"is_gce_vm\", fn: detectGceVM},\n\t\t{name: \"has_gcp_identity\", fn: detectGcpIdentity},\n\t}\n\n\tdetectionStates := make(map[string]platformDetectionState, len(detectors))\n\tvar waitGroup sync.WaitGroup\n\tvar mutex sync.Mutex\n\twaitGroup.Add(len(detectors))\n\n\tfor _, detector := range detectors {\n\t\tgo func(detector detectorFunc) {\n\t\t\tdefer waitGroup.Done()\n\t\t\tdetectionState := detector.fn(ctx, timeout)\n\t\t\tmutex.Lock()\n\t\t\tdetectionStates[detector.name] = detectionState\n\t\t\tmutex.Unlock()\n\t\t}(detector)\n\t}\n\twaitGroup.Wait()\n\n\tdetectedPlatformNames := []string{}\n\tfor _, detector := range detectors {\n\t\tif detectionStates[detector.name] == platformDetected {\n\t\t\tdetectedPlatformNames = append(detectedPlatformNames, detector.name)\n\t\t}\n\t}\n\n\tlogger.Debugf(\"detectPlatforms: completed. 
Detection states: %v\", detectionStates)\n\treturn detectedPlatformNames\n}\n\nfunc detectAwsLambdaEnv(_ context.Context, _ time.Duration) platformDetectionState {\n\tif os.Getenv(\"LAMBDA_TASK_ROOT\") != \"\" {\n\t\treturn platformDetected\n\t}\n\treturn platformNotDetected\n}\n\nfunc detectGithubActionsEnv(_ context.Context, _ time.Duration) platformDetectionState {\n\tif os.Getenv(\"GITHUB_ACTIONS\") != \"\" {\n\t\treturn platformDetected\n\t}\n\treturn platformNotDetected\n}\n\nfunc detectAzureFunctionEnv(_ context.Context, _ time.Duration) platformDetectionState {\n\tif os.Getenv(\"FUNCTIONS_WORKER_RUNTIME\") != \"\" &&\n\t\tos.Getenv(\"FUNCTIONS_EXTENSION_VERSION\") != \"\" &&\n\t\tos.Getenv(\"AzureWebJobsStorage\") != \"\" {\n\t\treturn platformDetected\n\t}\n\treturn platformNotDetected\n}\n\nfunc detectGceCloudRunServiceEnv(_ context.Context, _ time.Duration) platformDetectionState {\n\tif os.Getenv(\"K_SERVICE\") != \"\" && os.Getenv(\"K_REVISION\") != \"\" && os.Getenv(\"K_CONFIGURATION\") != \"\" {\n\t\treturn platformDetected\n\t}\n\treturn platformNotDetected\n}\n\nfunc detectGceCloudRunJobEnv(_ context.Context, _ time.Duration) platformDetectionState {\n\tif os.Getenv(\"CLOUD_RUN_JOB\") != \"\" && os.Getenv(\"CLOUD_RUN_EXECUTION\") != \"\" {\n\t\treturn platformDetected\n\t}\n\treturn platformNotDetected\n}\n\nfunc detectEc2Instance(ctx context.Context, timeout time.Duration) platformDetectionState {\n\ttimeoutCtx, cancel := context.WithTimeout(ctx, timeout)\n\tdefer cancel()\n\n\tcfg, err := config.LoadDefaultConfig(timeoutCtx, config.WithLogger(logging.NewStandardLogger(io.Discard)))\n\tif err != nil {\n\t\treturn platformNotDetected\n\t}\n\n\tclient := imds.NewFromConfig(cfg)\n\tresult, err := client.GetInstanceIdentityDocument(timeoutCtx, &imds.GetInstanceIdentityDocumentInput{})\n\tif err != nil {\n\t\tif errors.Is(err, context.DeadlineExceeded) {\n\t\t\treturn platformDetectionTimeout\n\t\t}\n\t\treturn platformNotDetected\n\t}\n\tif result != 
nil && result.InstanceID != \"\" {\n\t\treturn platformDetected\n\t}\n\n\treturn platformNotDetected\n}\n\nfunc detectAwsIdentity(ctx context.Context, timeout time.Duration) platformDetectionState {\n\ttimeoutCtx, cancel := context.WithTimeout(ctx, timeout)\n\tdefer cancel()\n\n\tcfg, err := config.LoadDefaultConfig(timeoutCtx, config.WithLogger(logging.NewStandardLogger(io.Discard)))\n\tif err != nil {\n\t\tif errors.Is(err, context.DeadlineExceeded) {\n\t\t\treturn platformDetectionTimeout\n\t\t}\n\t\treturn platformNotDetected\n\t}\n\n\tclient := sts.NewFromConfig(cfg)\n\tout, err := client.GetCallerIdentity(timeoutCtx, &sts.GetCallerIdentityInput{})\n\tif err != nil {\n\t\tif errors.Is(err, context.DeadlineExceeded) {\n\t\t\treturn platformDetectionTimeout\n\t\t}\n\t\treturn platformNotDetected\n\t}\n\tif out == nil || out.Arn == nil || *out.Arn == \"\" {\n\t\treturn platformNotDetected\n\t}\n\tif isValidArnForWif(*out.Arn) {\n\t\treturn platformDetected\n\t}\n\treturn platformNotDetected\n}\n\nfunc detectAzureVM(ctx context.Context, timeout time.Duration) platformDetectionState {\n\tclient := metadataServerHTTPClient(timeout)\n\treq, err := http.NewRequestWithContext(ctx, http.MethodGet, azureMetadataBaseURL+\"/metadata/instance?api-version=2019-03-11\", nil)\n\tif err != nil {\n\t\treturn platformNotDetected\n\t}\n\treq.Header.Set(\"Metadata\", \"true\")\n\tresp, err := client.Do(req)\n\tif err != nil {\n\t\tif errors.Is(err, context.DeadlineExceeded) {\n\t\t\treturn platformDetectionTimeout\n\t\t}\n\t\treturn platformNotDetected\n\t}\n\tdefer func() { _ = resp.Body.Close() }()\n\tif resp.StatusCode == http.StatusOK {\n\t\treturn platformDetected\n\t}\n\treturn platformNotDetected\n}\n\nfunc detectAzureManagedIdentity(ctx context.Context, timeout time.Duration) platformDetectionState {\n\tif detectAzureFunctionEnv(ctx, timeout) == platformDetected && os.Getenv(\"IDENTITY_HEADER\") != \"\" {\n\t\treturn platformDetected\n\t}\n\tclient := 
metadataServerHTTPClient(timeout)\n\tvalues := url.Values{}\n\tvalues.Set(\"api-version\", \"2018-02-01\")\n\tvalues.Set(\"resource\", \"https://management.azure.com\")\n\treq, err := http.NewRequestWithContext(ctx, http.MethodGet, azureMetadataBaseURL+\"/metadata/identity/oauth2/token?\"+values.Encode(), nil)\n\tif err != nil {\n\t\treturn platformNotDetected\n\t}\n\treq.Header.Set(\"Metadata\", \"true\")\n\tresp, err := client.Do(req)\n\tif err != nil {\n\t\tif errors.Is(err, context.DeadlineExceeded) {\n\t\t\treturn platformDetectionTimeout\n\t\t}\n\t\treturn platformNotDetected\n\t}\n\tdefer func() { _ = resp.Body.Close() }()\n\tif resp.StatusCode == http.StatusOK {\n\t\treturn platformDetected\n\t}\n\treturn platformNotDetected\n}\n\nfunc detectGceVM(ctx context.Context, timeout time.Duration) platformDetectionState {\n\tclient := metadataServerHTTPClient(timeout)\n\treq, err := http.NewRequestWithContext(ctx, http.MethodGet, gceMetadataRootURL, nil)\n\tif err != nil {\n\t\treturn platformNotDetected\n\t}\n\tresp, err := client.Do(req)\n\tif err != nil {\n\t\tif errors.Is(err, context.DeadlineExceeded) {\n\t\t\treturn platformDetectionTimeout\n\t\t}\n\t\treturn platformNotDetected\n\t}\n\tdefer func() { _ = resp.Body.Close() }()\n\tif resp.Header.Get(gcpMetadataFlavorHeaderName) == gcpMetadataFlavor {\n\t\treturn platformDetected\n\t}\n\treturn platformNotDetected\n}\n\nfunc detectGcpIdentity(ctx context.Context, timeout time.Duration) platformDetectionState {\n\tclient := metadataServerHTTPClient(timeout)\n\turl := gcpMetadataBaseURL + \"/instance/service-accounts/default/email\"\n\treq, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)\n\tif err != nil {\n\t\treturn platformNotDetected\n\t}\n\treq.Header.Set(gcpMetadataFlavorHeaderName, gcpMetadataFlavor)\n\tresp, err := client.Do(req)\n\tif err != nil {\n\t\tif errors.Is(err, context.DeadlineExceeded) {\n\t\t\treturn platformDetectionTimeout\n\t\t}\n\t\treturn 
platformNotDetected\n\t}\n\tdefer func() { _ = resp.Body.Close() }()\n\tif resp.StatusCode == http.StatusOK {\n\t\treturn platformDetected\n\t}\n\treturn platformNotDetected\n}\n\nfunc isValidArnForWif(arn string) bool {\n\tpatterns := []string{\n\t\t`^arn:[^:]+:iam::[^:]+:user/.+$`,\n\t\t`^arn:[^:]+:sts::[^:]+:assumed-role/.+$`,\n\t}\n\tfor _, pattern := range patterns {\n\t\tmatched, err := regexp.MatchString(pattern, arn)\n\t\tif err == nil && matched {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n"
  },
  {
    "path": "platform_detection_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"slices\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n)\n\ntype platformDetectionTestCase struct {\n\tname             string\n\tenvVars          map[string]string\n\twiremockMappings []wiremockMapping\n\texpectedResult   []string\n}\n\ntype envSnapshot map[string]string\n\nfunc setupCleanPlatformEnv() func() {\n\tplatformEnvVars := []string{\n\t\t\"LAMBDA_TASK_ROOT\",\n\t\t\"GITHUB_ACTIONS\",\n\t\t\"FUNCTIONS_WORKER_RUNTIME\",\n\t\t\"FUNCTIONS_EXTENSION_VERSION\",\n\t\t\"AzureWebJobsStorage\",\n\t\t\"K_SERVICE\",\n\t\t\"K_REVISION\",\n\t\t\"K_CONFIGURATION\",\n\t\t\"CLOUD_RUN_JOB\",\n\t\t\"CLOUD_RUN_EXECUTION\",\n\t\t\"IDENTITY_HEADER\",\n\t\tdisablePlatformDetectionEnv,\n\t}\n\n\tsnapshot := make(envSnapshot)\n\tfor _, env := range platformEnvVars {\n\t\tsnapshot[env] = os.Getenv(env)\n\t}\n\n\tfor _, env := range platformEnvVars {\n\t\tos.Unsetenv(env)\n\t}\n\n\treturn func() {\n\t\tfor env, value := range snapshot {\n\t\t\tos.Setenv(env, value)\n\t\t}\n\t}\n}\n\nfunc setupWiremockMetadataEndpoints() func() {\n\toriginalAzureURL := azureMetadataBaseURL\n\toriginalGceRootURL := gceMetadataRootURL\n\toriginalGcpBaseURL := gcpMetadataBaseURL\n\n\twiremockURL := wiremock.baseURL()\n\tazureMetadataBaseURL = wiremockURL\n\tgceMetadataRootURL = wiremockURL\n\tgcpMetadataBaseURL = wiremockURL + \"/computeMetadata/v1\"\n\tos.Setenv(\"AWS_EC2_METADATA_SERVICE_ENDPOINT\", wiremockURL)\n\tos.Setenv(\"AWS_ENDPOINT_URL_STS\", wiremockURL)\n\n\treturn func() {\n\t\tazureMetadataBaseURL = originalAzureURL\n\t\tgceMetadataRootURL = originalGceRootURL\n\t\tgcpMetadataBaseURL = originalGcpBaseURL\n\t\tos.Unsetenv(\"AWS_EC2_METADATA_SERVICE_ENDPOINT\")\n\t\tos.Unsetenv(\"AWS_ENDPOINT_URL_STS\")\n\t}\n}\n\nfunc TestPlatformDetectionCachingAndSyncOnce(t *testing.T) {\n\tcleanup := setupCleanPlatformEnv()\n\tdefer cleanup()\n\n\toriginalDone, originalCache := platformDetectionDone, 
detectedPlatformsCache\n\tinitPlatformDetectionOnce, platformDetectionDone, detectedPlatformsCache = sync.Once{}, make(chan struct{}), nil\n\tdefer func() { platformDetectionDone, detectedPlatformsCache = originalDone, originalCache }()\n\n\tos.Setenv(\"LAMBDA_TASK_ROOT\", \"/var/task\")\n\tinitPlatformDetection()\n\tplatforms1 := getDetectedPlatforms()\n\n\t// Verify caching works and AWS Lambda detected\n\tassertDeepEqualE(t, platforms1, detectedPlatformsCache)\n\tassertTrueE(t, slices.Contains(platforms1, \"is_aws_lambda\"), \"Should detect AWS Lambda\")\n\n\t// Change environment and test sync.Once behavior\n\tcleanup()\n\tos.Setenv(\"GITHUB_ACTIONS\", \"true\")\n\tinitPlatformDetection()\n\tplatforms2 := getDetectedPlatforms()\n\n\tassertDeepEqualE(t, platforms1, platforms2)\n\tassertTrueE(t, slices.Contains(platforms2, \"is_aws_lambda\"), \"Should still show cached AWS Lambda result\")\n\tassertFalseE(t, slices.Contains(platforms2, \"is_github_action\"), \"Should NOT detect GitHub Actions due to caching\")\n}\n\nfunc TestDetectPlatforms(t *testing.T) {\n\ttestCases := []platformDetectionTestCase{\n\t\t{\n\t\t\tname: \"returns disabled when SNOWFLAKE_DISABLE_PLATFORM_DETECTION is set\",\n\t\t\tenvVars: map[string]string{\n\t\t\t\t\"SNOWFLAKE_DISABLE_PLATFORM_DETECTION\": \"true\",\n\t\t\t},\n\t\t\texpectedResult: []string{\"disabled\"},\n\t\t},\n\t\t{\n\t\t\tname:           \"returns empty when no platforms detected\",\n\t\t\texpectedResult: []string{},\n\t\t},\n\t\t{\n\t\t\tname: \"detects AWS Lambda\",\n\t\t\tenvVars: map[string]string{\n\t\t\t\t\"LAMBDA_TASK_ROOT\": \"/var/task\",\n\t\t\t},\n\t\t\texpectedResult: []string{\"is_aws_lambda\"},\n\t\t},\n\t\t{\n\t\t\tname: \"detects GitHub Actions\",\n\t\t\tenvVars: map[string]string{\n\t\t\t\t\"GITHUB_ACTIONS\": \"true\",\n\t\t\t},\n\t\t\texpectedResult: []string{\"is_github_action\"},\n\t\t},\n\t\t{\n\t\t\tname: \"detects Azure Function\",\n\t\t\tenvVars: 
map[string]string{\n\t\t\t\t\"FUNCTIONS_WORKER_RUNTIME\":    \"node\",\n\t\t\t\t\"FUNCTIONS_EXTENSION_VERSION\": \"~4\",\n\t\t\t\t\"AzureWebJobsStorage\":         \"DefaultEndpointsProtocol=https;AccountName=test\",\n\t\t\t},\n\t\t\texpectedResult: []string{\"is_azure_function\"},\n\t\t},\n\t\t{\n\t\t\tname: \"detects GCE Cloud Run Service\",\n\t\t\tenvVars: map[string]string{\n\t\t\t\t\"K_SERVICE\":       \"my-service\",\n\t\t\t\t\"K_REVISION\":      \"my-service-00001\",\n\t\t\t\t\"K_CONFIGURATION\": \"my-service\",\n\t\t\t},\n\t\t\texpectedResult: []string{\"is_gce_cloud_run_service\"},\n\t\t},\n\t\t{\n\t\t\tname: \"detects GCE Cloud Run Job\",\n\t\t\tenvVars: map[string]string{\n\t\t\t\t\"CLOUD_RUN_JOB\":       \"my-job\",\n\t\t\t\t\"CLOUD_RUN_EXECUTION\": \"my-job-execution-1\",\n\t\t\t},\n\t\t\texpectedResult: []string{\"is_gce_cloud_run_job\"},\n\t\t},\n\t\t{\n\t\t\tname: \"detects EC2 instance\",\n\t\t\twiremockMappings: []wiremockMapping{\n\t\t\t\tnewWiremockMapping(\"platform_detection/aws_ec2_instance_success.json\"),\n\t\t\t},\n\t\t\texpectedResult: []string{\"is_ec2_instance\"},\n\t\t},\n\t\t{\n\t\t\tname: \"detects AWS identity\",\n\t\t\twiremockMappings: []wiremockMapping{\n\t\t\t\tnewWiremockMapping(\"platform_detection/aws_identity_success.json\"),\n\t\t\t},\n\t\t\texpectedResult: []string{\"has_aws_identity\"},\n\t\t},\n\t\t{\n\t\t\tname: \"detects Azure VM\",\n\t\t\twiremockMappings: []wiremockMapping{\n\t\t\t\tnewWiremockMapping(\"platform_detection/azure_vm_success.json\"),\n\t\t\t},\n\t\t\texpectedResult: []string{\"is_azure_vm\"},\n\t\t},\n\t\t{\n\t\t\tname: \"detects Azure Managed Identity using IDENTITY_HEADER\",\n\t\t\tenvVars: map[string]string{\n\t\t\t\t\"FUNCTIONS_WORKER_RUNTIME\":    \"node\",\n\t\t\t\t\"FUNCTIONS_EXTENSION_VERSION\": \"~4\",\n\t\t\t\t\"AzureWebJobsStorage\":         \"DefaultEndpointsProtocol=https;AccountName=test\",\n\t\t\t\t\"IDENTITY_HEADER\":             \"test-header\",\n\t\t\t},\n\t\t\texpectedResult: 
[]string{\"is_azure_function\", \"has_azure_managed_identity\"},\n\t\t},\n\t\t{\n\t\t\tname: \"detects Azure Manage Identity using metadata service\",\n\t\t\twiremockMappings: []wiremockMapping{\n\t\t\t\tnewWiremockMapping(\"platform_detection/azure_managed_identity_success.json\"),\n\t\t\t},\n\t\t\texpectedResult: []string{\"has_azure_managed_identity\"},\n\t\t},\n\t\t{\n\t\t\tname: \"detects GCE VM\",\n\t\t\twiremockMappings: []wiremockMapping{\n\t\t\t\tnewWiremockMapping(\"platform_detection/gce_vm_success.json\"),\n\t\t\t},\n\t\t\texpectedResult: []string{\"is_gce_vm\"},\n\t\t},\n\t\t{\n\t\t\tname: \"detects GCP identity\",\n\t\t\twiremockMappings: []wiremockMapping{\n\t\t\t\tnewWiremockMapping(\"platform_detection/gce_identity_success.json\"),\n\t\t\t},\n\t\t\texpectedResult: []string{\"has_gcp_identity\"},\n\t\t},\n\t}\n\n\tfor _, tc := range testCases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tcleanup := setupCleanPlatformEnv()\n\t\t\tdefer cleanup()\n\n\t\t\tfor key, value := range tc.envVars {\n\t\t\t\tos.Setenv(key, value)\n\t\t\t}\n\n\t\t\twiremock.registerMappings(t, tc.wiremockMappings)\n\t\t\twiremockCleanup := setupWiremockMetadataEndpoints()\n\t\t\tdefer wiremockCleanup()\n\n\t\t\tplatforms := detectPlatforms(context.Background(), 200*time.Millisecond)\n\n\t\t\tassertDeepEqualE(t, platforms, tc.expectedResult)\n\t\t})\n\t}\n}\n\nfunc TestDetectPlatformsTimeout(t *testing.T) {\n\tcleanup := setupCleanPlatformEnv()\n\tdefer cleanup()\n\n\twiremock.registerMappings(t, newWiremockMapping(\"platform_detection/timeout_response.json\"))\n\twiremockCleanup := setupWiremockMetadataEndpoints()\n\tdefer wiremockCleanup()\n\n\tstart := time.Now()\n\tplatforms := detectPlatforms(context.Background(), 200*time.Millisecond)\n\texecutionTime := time.Since(start)\n\n\tassertEqualE(t, len(platforms), 0, fmt.Sprintf(\"Expected empty platforms, got: %v\", platforms))\n\tassertTrueE(t, executionTime >= 200*time.Millisecond && executionTime < 
250*time.Millisecond,\n\t\tfmt.Sprintf(\"Expected execution time around 200ms, got: %v\", executionTime))\n}\n\nfunc TestIsValidArnForWif(t *testing.T) {\n\ttestCases := []struct {\n\t\tarn      string\n\t\texpected bool\n\t}{\n\t\t{\"arn:aws:iam::123456789012:user/JohnDoe\", true},\n\t\t{\"arn:aws:sts::123456789012:assumed-role/RoleName/SessionName\", true},\n\t\t{\"invalid-arn-format\", false},\n\t\t{\"arn:aws:iam::account:root\", false},\n\t\t{\"arn:aws:iam::123456789012:group/Developers\", false},\n\t\t{\"arn:aws:iam::123456789012:role/S3Access\", false},\n\t\t{\"arn:aws:iam::123456789012:policy/UsersManageOwnCredentials\", false},\n\t\t{\"arn:aws:iam::123456789012:instance-profile/Webserver\", false},\n\t\t{\"arn:aws:sts::123456789012:federated-user/John\", false},\n\t\t{\"arn:aws:sts::account:self\", false},\n\t\t{\"arn:aws:iam::123456789012:mfa/JaneMFA\", false},\n\t\t{\"arn:aws:iam::123456789012:u2f/user/John/default\", false},\n\t\t{\"arn:aws:iam::123456789012:server-certificate/ProdServerCert\", false},\n\t\t{\"arn:aws:iam::123456789012:saml-provider/ADFSProvider\", false},\n\t\t{\"arn:aws:iam::123456789012:oidc-provider/GoogleProvider\", false},\n\t\t{\"arn:aws:iam::aws:contextProvider/IdentityCenter\", false},\n\t}\n\n\tfor _, tc := range testCases {\n\t\tt.Run(tc.arn, func(t *testing.T) {\n\t\t\tresult := isValidArnForWif(tc.arn)\n\t\t\tassertEqualE(t, result, tc.expected, fmt.Sprintf(\"ARN validation failed for: %s\", tc.arn))\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "prepared_statement_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"testing\"\n)\n\n// TestPreparedStatement creates a basic prepared statement, inserting values\n// after the statement has been prepared\nfunc TestPreparedStatement(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"create or replace table test_prep_statement(c1 INTEGER, c2 FLOAT, c3 BOOLEAN, c4 STRING)\")\n\t\tdefer dbt.mustExec(deleteTableSQL)\n\n\t\tintArray := []int{1, 2, 3}\n\t\tfltArray := []float64{0.1, 2.34, 5.678}\n\t\tboolArray := []bool{true, false, true}\n\t\tstrArray := []string{\"test1\", \"test2\", \"test3\"}\n\t\tstmt := dbt.mustPrepare(\"insert into TEST_PREP_STATEMENT values(?, ?, ?, ?)\")\n\t\tif _, err := stmt.Exec(mustArray(&intArray), mustArray(&fltArray), mustArray(&boolArray), mustArray(&strArray)); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\trows := dbt.mustQuery(selectAllSQL)\n\t\tdefer rows.Close()\n\n\t\tvar v1 int\n\t\tvar v2 float64\n\t\tvar v3 bool\n\t\tvar v4 string\n\t\tif rows.Next() {\n\t\t\terr := rows.Scan(&v1, &v2, &v3, &v4)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tif v1 != 1 && v2 != 0.1 && v3 != true && v4 != \"test1\" {\n\t\t\t\tt.Fatalf(\"failed to fetch. expected: 1, 0.1, true, test1. got: %v, %v, %v, %v\", v1, v2, v3, v4)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Error(\"failed to query\")\n\t\t}\n\n\t\tif rows.Next() {\n\t\t\terr := rows.Scan(&v1, &v2, &v3, &v4)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tif v1 != 2 && v2 != 2.34 && v3 != false && v4 != \"test2\" {\n\t\t\t\tt.Fatalf(\"failed to fetch. expected: 2, 2.34, false, test2. got: %v, %v, %v, %v\", v1, v2, v3, v4)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Error(\"failed to query\")\n\t\t}\n\n\t\tif rows.Next() {\n\t\t\terr := rows.Scan(&v1, &v2, &v3, &v4)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tif v1 != 3 && v2 != 5.678 && v3 != true && v4 != \"test3\" {\n\t\t\t\tt.Fatalf(\"failed to fetch. expected: 3, test3. 
got: %v, %v, %v, %v\", v1, v2, v3, v4)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Error(\"failed to query\")\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "priv_key_test.go",
    "content": "package gosnowflake\n\n// For compile concern, should any newly added variables or functions here must also be added with same\n// name or signature but with default or empty content in the priv_key_test.go(See addParseDSNTest)\n\nimport (\n\t\"context\"\n\t\"crypto/rand\"\n\t\"crypto/rsa\"\n\t\"crypto/x509\"\n\t\"database/sql\"\n\t\"encoding/pem\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"os\"\n\t\"testing\"\n\t\"time\"\n)\n\n// helper function to set up private key for testing\nfunc setupPrivateKey() {\n\tenv := func(key, defaultValue string) string {\n\t\tif value := os.Getenv(key); value != \"\" {\n\t\t\treturn value\n\t\t}\n\t\treturn defaultValue\n\t}\n\tprivKeyPath := env(\"SNOWFLAKE_TEST_PRIVATE_KEY\", \"\")\n\tif privKeyPath == \"\" {\n\t\tcustomPrivateKey = false\n\t\ttestPrivKey, _ = rsa.GenerateKey(rand.Reader, 2048)\n\t} else {\n\t\t// path to the DER file\n\t\tcustomPrivateKey = true\n\t\tdata, _ := os.ReadFile(privKeyPath)\n\t\tblock, _ := pem.Decode(data)\n\t\tif block == nil || block.Type != \"PRIVATE KEY\" {\n\t\t\tpanic(fmt.Sprintf(\"%v is not a public key in PEM format.\", privKeyPath))\n\t\t}\n\t\tprivKey, _ := x509.ParsePKCS8PrivateKey(block.Bytes)\n\t\ttestPrivKey = privKey.(*rsa.PrivateKey)\n\t}\n}\n\nfunc TestJWTTokenTimeout(t *testing.T) {\n\tbrt := newBlockingRoundTripper(http.DefaultTransport, 2000*time.Millisecond)\n\tlocalTestKey, err := rsa.GenerateKey(rand.Reader, 2048)\n\tassertNilF(t, err, \"Failed to generate test private key\")\n\tcfg := &Config{\n\t\tUser:             \"user\",\n\t\tHost:             \"localhost\",\n\t\tPort:             wiremock.port,\n\t\tAccount:          \"jwtAuthTokenTimeout\",\n\t\tJWTClientTimeout: 10 * time.Millisecond,\n\t\tPrivateKey:       localTestKey,\n\t\tAuthenticator:    AuthTypeJwt,\n\t\tMaxRetryCount:    1,\n\t\tTransporter:      brt,\n\t}\n\n\tdb := sql.OpenDB(NewConnector(SnowflakeDriver{}, *cfg))\n\tdefer db.Close()\n\tctx := context.Background()\n\t_, err = 
db.Conn(ctx)\n\tassertNotNilF(t, err)\n\tassertErrIsE(t, err, context.DeadlineExceeded)\n}\n"
  },
  {
    "path": "put_get_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"compress/gzip\"\n\t\"context\"\n\t\"crypto/sha256\"\n\t\"database/sql\"\n\t\"fmt\"\n\t\"io\"\n\t\"math/rand\"\n\t\"os\"\n\t\"os/user\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strconv\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n)\n\nconst createStageStmt = \"CREATE OR REPLACE STAGE %v URL = '%v' CREDENTIALS = (%v)\"\n\nfunc TestPutError(t *testing.T) {\n\tif isWindows {\n\t\tt.Skip(\"permission model is different\")\n\t}\n\ttmpDir := t.TempDir()\n\tfile1 := filepath.Join(tmpDir, \"file1\")\n\tremoteLocation := filepath.Join(tmpDir, \"remote_loc\")\n\tf, err := os.Create(file1)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tdefer func() {\n\t\tassertNilF(t, f.Close())\n\t}()\n\t_, err = f.WriteString(\"test1\")\n\tassertNilF(t, err)\n\tassertNilF(t, os.Chmod(file1, 0000))\n\tdefer func() {\n\t\tassertNilF(t, os.Chmod(file1, 0644))\n\t}()\n\n\tdata := &execResponseData{\n\t\tCommand:           string(uploadCommand),\n\t\tAutoCompress:      false,\n\t\tSrcLocations:      []string{file1},\n\t\tSourceCompression: \"none\",\n\t\tStageInfo: execResponseStageInfo{\n\t\t\tLocation:     remoteLocation,\n\t\t\tLocationType: string(local),\n\t\t\tPath:         \"remote_loc\",\n\t\t},\n\t}\n\n\tfta := &snowflakeFileTransferAgent{\n\t\tctx:  context.Background(),\n\t\tdata: data,\n\t\tsc: &snowflakeConn{\n\t\t\tcfg: &Config{},\n\t\t},\n\t}\n\tif err = fta.execute(); err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif _, err = fta.result(); err == nil {\n\t\tt.Fatalf(\"should raise permission error\")\n\t}\n}\n\nfunc TestPercentage(t *testing.T) {\n\ttestcases := []struct {\n\t\tseen     int64\n\t\tsize     float64\n\t\texpected float64\n\t}{\n\t\t{0, 0, 1.0},\n\t\t{20, 0, 1.0},\n\t\t{40, 20, 1.0},\n\t\t{14, 28, 0.5},\n\t}\n\tfor _, test := range testcases {\n\t\tt.Run(fmt.Sprintf(\"%v_%v_%v\", test.seen, test.size, test.expected), func(t *testing.T) {\n\t\t\tspp := snowflakeProgressPercentage{}\n\t\t\tif 
spp.percent(test.seen, test.size) != test.expected {\n\t\t\t\tt.Fatalf(\"percentage conversion failed. %v/%v, expected: %v, got: %v\",\n\t\t\t\t\ttest.seen, test.size, test.expected, spp.percent(test.seen, test.size))\n\t\t\t}\n\t\t})\n\t}\n}\n\ntype tcPutGetData struct {\n\tdir                string\n\tawsAccessKeyID     string\n\tawsSecretAccessKey string\n\tstage              string\n\twarehouse          string\n\tdatabase           string\n\tuserBucket         string\n}\n\nfunc cleanupPut(dbt *DBTest, td *tcPutGetData) {\n\tdbt.mustExec(\"drop database \" + td.database)\n\tdbt.mustExec(\"drop warehouse \" + td.warehouse)\n}\n\nfunc getAWSCredentials() (string, string, string, error) {\n\tkeyID, ok := os.LookupEnv(\"AWS_ACCESS_KEY_ID\")\n\tif !ok {\n\t\treturn \"\", \"\", \"\", fmt.Errorf(\"key id invalid\")\n\t}\n\tsecretKey, ok := os.LookupEnv(\"AWS_SECRET_ACCESS_KEY\")\n\tif !ok {\n\t\treturn keyID, \"\", \"\", fmt.Errorf(\"secret key invalid\")\n\t}\n\tbucket, present := os.LookupEnv(\"SF_AWS_USER_BUCKET\")\n\tif !present {\n\t\tuser, err := user.Current()\n\t\tif err != nil {\n\t\t\treturn keyID, secretKey, \"\", err\n\t\t}\n\t\tbucket = fmt.Sprintf(\"sfc-eng-regression/%v/reg\", user.Username)\n\t}\n\treturn keyID, secretKey, bucket, nil\n}\n\nfunc createTestData(dbt *DBTest) (*tcPutGetData, error) {\n\tkeyID, secretKey, bucket, err := getAWSCredentials()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tuniqueName := randomString(10)\n\tdatabase := fmt.Sprintf(\"%v_db\", uniqueName)\n\twh := fmt.Sprintf(\"%v_wh\", uniqueName)\n\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tret := tcPutGetData{\n\t\tdir,\n\t\tkeyID,\n\t\tsecretKey,\n\t\tfmt.Sprintf(\"%v_stage\", uniqueName),\n\t\twh,\n\t\tdatabase,\n\t\tbucket,\n\t}\n\n\tif _, err = dbt.exec(\"use role sysadmin\"); err != nil {\n\t\treturn nil, err\n\t}\n\tdbt.mustExec(fmt.Sprintf(\n\t\t\"create or replace warehouse %v warehouse_size='small' \"+\n\t\t\t\"warehouse_type='standard' 
auto_suspend=1800\", wh))\n\tdbt.mustExec(\"create or replace database \" + database)\n\tdbt.mustExec(\"create or replace schema gotesting_schema\")\n\tdbt.mustExec(\"create or replace file format VSV type = 'CSV' \" +\n\t\t\"field_delimiter='|' error_on_column_count_mismatch=false\")\n\treturn &ret, nil\n}\n\nfunc TestPutLocalFile(t *testing.T) {\n\tif runningOnGithubAction() && !runningOnAWS() {\n\t\tt.Skip(\"skipping non aws environment\")\n\t}\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdata, err := createTestData(dbt)\n\t\tif err != nil {\n\t\t\tt.Skip(\"snowflake admin account not accessible\")\n\t\t}\n\t\tdefer cleanupPut(dbt, data)\n\t\tdbt.mustExec(\"use warehouse \" + data.warehouse)\n\t\tdbt.mustExec(\"alter session set DISABLE_PUT_AND_GET_ON_EXTERNAL_STAGE=false\")\n\t\tdbt.mustExec(\"use schema \" + data.database + \".gotesting_schema\")\n\t\texecQuery := fmt.Sprintf(\n\t\t\t`create or replace table gotest_putget_t1 (c1 STRING, c2 STRING,\n\t\t\tc3 STRING, c4 STRING, c5 STRING, c6 STRING, c7 STRING, c8 STRING,\n\t\t\tc9 STRING) stage_file_format = ( field_delimiter = '|'\n\t\t\terror_on_column_count_mismatch=false) stage_copy_options =\n\t\t\t(purge=false) stage_location = (url = 's3://%v/%v' credentials =\n\t\t\t(AWS_KEY_ID='%v' AWS_SECRET_KEY='%v'))`,\n\t\t\tdata.userBucket,\n\t\t\tdata.stage,\n\t\t\tdata.awsAccessKeyID,\n\t\t\tdata.awsSecretAccessKey)\n\t\tdbt.mustExec(execQuery)\n\t\tdefer dbt.mustExec(\"drop table if exists gotest_putget_t1\")\n\n\t\texecQuery = fmt.Sprintf(`put file://%v/test_data/orders_10*.csv\n\t\t\t@%%gotest_putget_t1`, data.dir)\n\t\tdbt.mustExec(execQuery)\n\t\tdbt.mustQueryAssertCount(\"ls @%gotest_putget_t1\", 2)\n\n\t\tvar s0, s1, s2, s3, s4, s5, s6, s7, s8, s9 sql.NullString\n\t\trows := dbt.mustQuery(\"copy into gotest_putget_t1\")\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\t\tfor rows.Next() {\n\t\t\tassertNilF(t, rows.Scan(&s0, &s1, &s2, &s3, &s4, &s5, &s6, &s7, &s8, &s9))\n\t\t\tif !s1.Valid || 
s1.String != \"LOADED\" {\n\t\t\t\tt.Fatal(\"not loaded\")\n\t\t\t}\n\t\t}\n\n\t\trows2 := dbt.mustQuery(\"select count(*) from gotest_putget_t1\")\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows2.Close())\n\t\t}()\n\t\tvar i int\n\t\tif rows2.Next() {\n\t\t\tassertNilF(t, rows2.Scan(&i))\n\t\t\tassertEqualF(t, i, 75, \"expected 75 rows\")\n\t\t}\n\n\t\trows3 := dbt.mustQuery(`select STATUS from information_schema.load_history where table_name='gotest_putget_t1'`)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows3.Close())\n\t\t}()\n\t\tif rows3.Next() {\n\t\t\tassertNilF(t, rows3.Scan(&s0, &s1, &s2, &s3, &s4, &s5, &s6, &s7, &s8, &s9))\n\t\t\tassertTrueF(t, s1.Valid && s1.String == \"LOADED\", \"not loaded\")\n\t\t}\n\t})\n}\n\nfunc TestPutGetWithAutoCompressFalse(t *testing.T) {\n\ttmpDir := t.TempDir()\n\ttestData := filepath.Join(tmpDir, \"data.txt\")\n\tf, err := os.Create(testData)\n\tassertNilF(t, err)\n\toriginalContents := \"test1,test2\\ntest3,test4\"\n\t_, err = f.WriteString(originalContents)\n\tassertNilF(t, err)\n\tassertNilF(t, f.Sync())\n\tdefer func() {\n\t\tassertNilF(t, f.Close())\n\t}()\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tstageDir := \"test_put_uncompress_file_\" + randomString(10)\n\t\tdbt.mustExec(\"rm @~/\" + stageDir)\n\n\t\t// PUT test\n\t\tsqlText := fmt.Sprintf(\"put 'file://%v' @~/%v auto_compress=FALSE\", testData, stageDir)\n\t\tsqlText = strings.ReplaceAll(sqlText, \"\\\\\", \"\\\\\\\\\")\n\t\tdbt.mustExec(sqlText)\n\t\tdefer dbt.mustExec(\"rm @~/\" + stageDir)\n\t\trows := dbt.mustQuery(\"ls @~/\" + stageDir)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\t\tvar file, s1, s2, s3 string\n\t\tif rows.Next() {\n\t\t\terr = rows.Scan(&file, &s1, &s2, &s3)\n\t\t\tassertNilE(t, err)\n\t\t}\n\t\tassertTrueF(t, strings.Contains(file, stageDir+\"/data.txt\"), fmt.Sprintf(\"should contain file. 
got: %v\", file))\n\t\tassertFalseF(t, strings.Contains(file, \"data.txt.gz\"), fmt.Sprintf(\"should not contain file. got: %v\", file))\n\n\t\t// GET test\n\t\tvar streamBuf bytes.Buffer\n\t\tctx := WithFileGetStream(context.Background(), &streamBuf)\n\t\tsql := fmt.Sprintf(\"get @~/%v/data.txt 'file://%v'\", stageDir, tmpDir)\n\t\tsqlText = strings.ReplaceAll(sql, \"\\\\\", \"\\\\\\\\\")\n\t\trows2 := dbt.mustQueryContext(ctx, sqlText)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows2.Close())\n\t\t}()\n\t\tfor rows2.Next() {\n\t\t\terr = rows2.Scan(&file, &s1, &s2, &s3)\n\t\t\tassertNilE(t, err)\n\t\t\tassertTrueE(t, strings.HasPrefix(file, \"data.txt\"), \"a file was not downloaded by GET\")\n\t\t\tv, err := strconv.Atoi(s1)\n\t\t\tassertNilE(t, err)\n\t\t\tassertEqualE(t, v, 23, \"did not return the right file size\")\n\t\t\tassertEqualE(t, s2, \"DOWNLOADED\", \"did not return DOWNLOADED status\")\n\t\t\tassertEqualE(t, s3, \"\")\n\t\t}\n\t\t// the GET stream is not compressed, so the buffer holds the contents directly\n\t\tassertEqualE(t, streamBuf.String(), originalContents)\n\t})\n}\n\nfunc TestPutOverwrite(t *testing.T) {\n\ttmpDir := t.TempDir()\n\ttestData := filepath.Join(tmpDir, \"data.txt\")\n\tf, err := os.Create(testData)\n\tassertNilF(t, err)\n\t_, err = f.WriteString(\"test1,test2\\ntest3,test4\\n\")\n\tassertNilF(t, err)\n\tassertNilF(t, f.Close())\n\n\tstageName := \"test_put_overwrite_stage_\" + randomString(10)\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"CREATE OR REPLACE STAGE \" + stageName)\n\t\tdefer dbt.mustExec(\"DROP STAGE \" + stageName)\n\n\t\tf, err = os.Open(testData)\n\t\tassertNilF(t, err)\n\t\trows := 
dbt.mustQueryContext(\n\t\t\tWithFilePutStream(context.Background(), f),\n\t\t\tfmt.Sprintf(\"put 'file://%v' @\"+stageName+\"/test_put_overwrite\",\n\t\t\t\tstrings.ReplaceAll(testData, \"\\\\\", \"/\")))\n\t\tdefer rows.Close()\n\t\tassertNilF(t, f.Close())\n\t\tvar s0, s1, s2, s3, s4, s5, s6, s7 string\n\t\tassertTrueF(t, rows.Next(), \"expected new rows\")\n\t\tassertNilF(t, rows.Scan(&s0, &s1, &s2, &s3, &s4, &s5, &s6, &s7))\n\t\tassertEqualF(t, s6, uploaded.String(), \"expected UPLOADED\")\n\n\t\trows = dbt.mustQuery(\"ls @\" + stageName + \"/test_put_overwrite\")\n\t\tdefer rows.Close()\n\t\tassertTrueF(t, rows.Next(), \"expected new rows\")\n\t\tassertNilF(t, rows.Scan(&s0, &s1, &s2, &s3))\n\t\tmd5Column := s2\n\n\t\tf, err = os.Open(testData)\n\t\tassertNilF(t, err)\n\t\trows = dbt.mustQueryContext(\n\t\t\tWithFilePutStream(context.Background(), f),\n\t\t\tfmt.Sprintf(\"put 'file://%v' @\"+stageName+\"/test_put_overwrite\",\n\t\t\t\tstrings.ReplaceAll(testData, \"\\\\\", \"/\")))\n\t\tdefer rows.Close()\n\t\tassertNilF(t, f.Close())\n\t\tassertTrueF(t, rows.Next(), \"expected new rows\")\n\t\tassertNilF(t, rows.Scan(&s0, &s1, &s2, &s3, &s4, &s5, &s6, &s7))\n\t\tassertEqualF(t, s6, skipped.String(), \"expected SKIPPED\")\n\n\t\trows = dbt.mustQuery(\"ls @\" + stageName + \"/test_put_overwrite\")\n\t\tdefer rows.Close()\n\t\tassertTrueF(t, rows.Next(), \"expected new rows\")\n\t\tassertNilF(t, rows.Scan(&s0, &s1, &s2, &s3))\n\t\tassertEqualF(t, s2, md5Column, \"the MD5 column should have stayed the same\")\n\n\t\tf, err = os.Open(testData)\n\t\tassertNilF(t, err)\n\t\trows = dbt.mustQueryContext(\n\t\t\tWithFilePutStream(context.Background(), f),\n\t\t\tfmt.Sprintf(\"put 'file://%v' @\"+stageName+\"/test_put_overwrite overwrite=true\",\n\t\t\t\tstrings.ReplaceAll(testData, \"\\\\\", \"/\")))\n\t\tdefer rows.Close()\n\t\tassertNilF(t, f.Close())\n\t\tassertTrueF(t, 
rows.Next(), \"expected new rows\")\n\t\tassertNilF(t, rows.Scan(&s0, &s1, &s2, &s3, &s4, &s5, &s6, &s7))\n\t\tassertEqualF(t, s6, uploaded.String(), \"expected UPLOADED\")\n\n\t\trows = dbt.mustQuery(\"ls @\" + stageName + \"/test_put_overwrite\")\n\t\tdefer rows.Close()\n\t\tassertTrueF(t, rows.Next(), \"expected new rows\")\n\t\tassertNilF(t, rows.Scan(&s0, &s1, &s2, &s3))\n\t\tassertEqualE(t, s0, stageName+\"/test_put_overwrite/\"+baseName(testData)+\".gz\")\n\t\tassertNotEqualE(t, s2, md5Column)\n\t})\n}\n\nfunc TestPutGetFile(t *testing.T) {\n\ttestPutGet(t, false)\n}\n\nfunc TestPutGetStream(t *testing.T) {\n\ttestPutGet(t, true)\n}\n\nfunc testPutGet(t *testing.T, isStream bool) {\n\ttmpDir := t.TempDir()\n\tfname := filepath.Join(tmpDir, \"test_put_get.txt.gz\")\n\toriginalContents := \"123,test1\\n456,test2\\n\"\n\ttableName := randomString(5)\n\n\tvar b bytes.Buffer\n\tgzw := gzip.NewWriter(&b)\n\t_, err := gzw.Write([]byte(originalContents))\n\tassertNilF(t, err)\n\tassertNilF(t, gzw.Close())\n\tassertNilF(t, os.WriteFile(fname, b.Bytes(), readWriteFileMode), \"could not write to gzip file\")\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"create or replace table \" + tableName +\n\t\t\t\" (a int, b string)\")\n\t\tdefer dbt.mustExec(\"drop table \" + tableName)\n\t\tfileStream, err := os.Open(fname)\n\t\tassertNilF(t, err)\n\t\tdefer func() {\n\t\t\tassertNilF(t, fileStream.Close())\n\t\t}()\n\n\t\tvar sqlText string\n\t\tvar rows *RowsExtended\n\t\tsql := \"put 'file://%v' @%%%v auto_compress=true parallel=30\"\n\t\tctx := context.Background()\n\t\tif isStream {\n\t\t\tsqlText = fmt.Sprintf(\n\t\t\t\tsql, strings.ReplaceAll(fname, \"\\\\\", \"\\\\\\\\\"), tableName)\n\t\t\trows = dbt.mustQueryContextT(WithFilePutStream(ctx, fileStream), t, sqlText)\n\t\t} else {\n\t\t\tsqlText = fmt.Sprintf(\n\t\t\t\tsql, 
strings.ReplaceAll(fname, \"\\\\\", \"\\\\\\\\\"), tableName)\n\t\t\trows = dbt.mustQueryT(t, sqlText)\n\t\t}\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\n\t\tvar s0, s1, s2, s3, s4, s5, s6, s7 string\n\t\tassertTrueF(t, rows.Next(), \"expected new rows\")\n\t\trows.mustScan(&s0, &s1, &s2, &s3, &s4, &s5, &s6, &s7)\n\t\tassertEqualF(t, s6, uploaded.String())\n\t\t// check file is PUT\n\t\tdbt.mustQueryAssertCount(\"ls @%\"+tableName, 1)\n\n\t\tdbt.mustExecT(t, \"copy into \"+tableName)\n\t\tdbt.mustExecT(t, \"rm @%\"+tableName)\n\t\tdbt.mustQueryAssertCount(\"ls @%\"+tableName, 0)\n\n\t\tdbt.mustExecT(t, fmt.Sprintf(`copy into @%%%v from %v file_format=(type=csv\n\t\t\tcompression='gzip')`, tableName, tableName))\n\n\t\tvar streamBuf bytes.Buffer\n\t\tif isStream {\n\t\t\tctx = WithFileGetStream(ctx, &streamBuf)\n\t\t}\n\t\tsql = fmt.Sprintf(\"get @%%%v 'file://%v' parallel=10\", tableName, tmpDir)\n\t\tsqlText = strings.ReplaceAll(sql, \"\\\\\", \"\\\\\\\\\")\n\t\trows2 := dbt.mustQueryContextT(ctx, t, sqlText)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows2.Close())\n\t\t}()\n\t\tfor rows2.Next() {\n\t\t\trows2.mustScan(&s0, &s1, &s2, &s3)\n\t\t\tassertHasPrefixF(t, s0, \"data_\")\n\t\t\tv, err := strconv.Atoi(s1)\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, v, 36)\n\t\t\tassertEqualE(t, s2, \"DOWNLOADED\")\n\t\t\tassertEqualE(t, s3, \"\")\n\t\t}\n\n\t\tvar contents string\n\t\tif isStream {\n\t\t\tgz, err := gzip.NewReader(&streamBuf)\n\t\t\tassertNilF(t, err)\n\t\t\tdefer func() {\n\t\t\t\tassertNilF(t, gz.Close())\n\t\t\t}()\n\t\t\tfor {\n\t\t\t\tc := make([]byte, defaultChunkBufferSize)\n\t\t\t\tif n, err := gz.Read(c); err != nil {\n\t\t\t\t\tif err == io.EOF {\n\t\t\t\t\t\tcontents = contents + string(c[:n])\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t\tt.Fatal(err)\n\t\t\t\t} else {\n\t\t\t\t\tcontents = contents + string(c[:n])\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tfiles, err := filepath.Glob(filepath.Join(tmpDir, 
\"data_*\"))\n\t\t\tassertNilF(t, err)\n\t\t\tfileName := files[0]\n\t\t\tf, err := os.Open(fileName)\n\t\t\tassertNilF(t, err)\n\t\t\tdefer func() {\n\t\t\t\tassertNilF(t, f.Close())\n\t\t\t}()\n\n\t\t\tgz, err := gzip.NewReader(f)\n\t\t\tassertNilF(t, err)\n\t\t\tdefer func() {\n\t\t\t\tassertNilF(t, gz.Close())\n\t\t\t}()\n\n\t\t\tfor {\n\t\t\t\tc := make([]byte, defaultChunkBufferSize)\n\t\t\t\tif n, err := gz.Read(c); err != nil {\n\t\t\t\t\tif err == io.EOF {\n\t\t\t\t\t\tcontents = contents + string(c[:n])\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t\tt.Fatal(err)\n\t\t\t\t} else {\n\t\t\t\t\tcontents = contents + string(c[:n])\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tassertEqualE(t, contents, originalContents, \"output is different from the original contents\")\n\t})\n}\n\nfunc TestPutGetWithSnowflakeSSE(t *testing.T) {\n\ttmpDir := t.TempDir()\n\tcwd, err := os.Getwd()\n\tassertNilF(t, err)\n\tsourceFilePath := filepath.Join(cwd, \"test_data\", \"orders_100.csv\")\n\n\toriginalContents, err := os.ReadFile(sourceFilePath)\n\tassertNilF(t, err)\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tfor _, useStream := range []bool{true, false} {\n\t\t\tt.Run(fmt.Sprintf(\"useStream=%v\", useStream), func(t *testing.T) {\n\t\t\t\tstageName := \"test_stage_sse_\" + randomString(10)\n\t\t\t\tdbt.mustExec(fmt.Sprintf(\"CREATE STAGE %s ENCRYPTION = (TYPE = 'SNOWFLAKE_SSE')\", stageName))\n\t\t\t\tdefer dbt.mustExec(\"DROP STAGE \" + stageName)\n\n\t\t\t\tuploadCtx := context.Background()\n\t\t\t\tif useStream {\n\t\t\t\t\tfileStream, err := os.Open(sourceFilePath)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tdefer fileStream.Close()\n\t\t\t\t\tuploadCtx = WithFilePutStream(uploadCtx, fileStream)\n\t\t\t\t}\n\t\t\t\trows := dbt.mustQueryContextT(uploadCtx, t, fmt.Sprintf(\"PUT 'file://%s' @%s\", strings.ReplaceAll(sourceFilePath, \"\\\\\", \"\\\\\\\\\"), stageName))\n\t\t\t\tdefer rows.Close()\n\n\t\t\t\tvar s0, s1, s2, s3, s4, s5, s6, s7 string\n\t\t\t\tassertTrueF(t, rows.Next(), 
\"expected new rows\")\n\t\t\t\trows.mustScan(&s0, &s1, &s2, &s3, &s4, &s5, &s6, &s7)\n\t\t\t\tassertEqualF(t, s6, uploaded.String())\n\n\t\t\t\tdownloadCtx := context.Background()\n\t\t\t\tvar downloadBuf bytes.Buffer\n\t\t\t\tif useStream {\n\t\t\t\t\tdownloadCtx = WithFileGetStream(downloadCtx, &downloadBuf)\n\t\t\t\t}\n\t\t\t\trows2 := dbt.mustQueryContextT(downloadCtx, t, fmt.Sprintf(\"GET @%s 'file://%s'\", stageName, strings.ReplaceAll(tmpDir, \"\\\\\", \"\\\\\\\\\")))\n\t\t\t\tdefer rows2.Close()\n\n\t\t\t\tassertTrueF(t, rows2.Next(), \"expected new rows\")\n\t\t\t\trows2.mustScan(&s0, &s1, &s2, &s3)\n\t\t\t\tassertEqualF(t, s2, \"DOWNLOADED\")\n\n\t\t\t\tvar compressedData []byte\n\t\t\t\tif useStream {\n\t\t\t\t\tcompressedData, err = io.ReadAll(&downloadBuf)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t} else {\n\t\t\t\t\tdownloadedFilePath := filepath.Join(tmpDir, \"orders_100.csv.gz\")\n\t\t\t\t\tcompressedData, err = os.ReadFile(downloadedFilePath)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t}\n\n\t\t\t\tgzReader, err := gzip.NewReader(bytes.NewReader(compressedData))\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tdefer gzReader.Close()\n\n\t\t\t\tdecompressedData, err := io.ReadAll(gzReader)\n\t\t\t\tassertNilF(t, err)\n\n\t\t\t\tassertEqualE(t, string(decompressedData), string(originalContents), \"downloaded file content does not match original\")\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestPutGetWithSpacesInDirectoryName(t *testing.T) {\n\ttmpDir := t.TempDir()\n\tcwd, err := os.Getwd()\n\tassertNilF(t, err)\n\tsourceFilePath := filepath.Join(cwd, \"test_data\", \"orders_100.csv\")\n\n\toriginalContents, err := os.ReadFile(sourceFilePath)\n\tassertNilF(t, err)\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tfor _, useStream := range []bool{true, false} {\n\t\t\tt.Run(fmt.Sprintf(\"useStream=%v\", useStream), func(t *testing.T) {\n\t\t\t\tstageName := \"test_stage_spaces_\" + randomString(10)\n\t\t\t\tdbt.mustExec(fmt.Sprintf(\"CREATE STAGE %s\", stageName))\n\t\t\t\tdefer 
dbt.mustExec(\"DROP STAGE \" + stageName)\n\n\t\t\t\tuploadCtx := context.Background()\n\t\t\t\tif useStream {\n\t\t\t\t\tfileStream, err := os.Open(sourceFilePath)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tdefer fileStream.Close()\n\t\t\t\t\tuploadCtx = WithFilePutStream(uploadCtx, fileStream)\n\t\t\t\t}\n\t\t\t\trows := dbt.mustQueryContextT(uploadCtx, t, fmt.Sprintf(\"PUT 'file://%s' '@%s/dir with spaces'\", strings.ReplaceAll(sourceFilePath, \"\\\\\", \"\\\\\\\\\"), stageName))\n\t\t\t\tdefer rows.Close()\n\n\t\t\t\tvar s0, s1, s2, s3, s4, s5, s6, s7 string\n\t\t\t\tassertTrueF(t, rows.Next(), \"expected new rows\")\n\t\t\t\trows.mustScan(&s0, &s1, &s2, &s3, &s4, &s5, &s6, &s7)\n\t\t\t\tassertEqualF(t, s6, uploaded.String())\n\n\t\t\t\tdownloadCtx := context.Background()\n\t\t\t\tvar downloadBuf bytes.Buffer\n\t\t\t\tif useStream {\n\t\t\t\t\tdownloadCtx = WithFileGetStream(downloadCtx, &downloadBuf)\n\t\t\t\t}\n\t\t\t\trows2 := dbt.mustQueryContextT(downloadCtx, t, fmt.Sprintf(\"GET '@%s/dir with spaces' 'file://%s'\", stageName, strings.ReplaceAll(tmpDir, \"\\\\\", \"\\\\\\\\\")))\n\t\t\t\tdefer rows2.Close()\n\n\t\t\t\tassertTrueF(t, rows2.Next(), \"expected new rows\")\n\t\t\t\trows2.mustScan(&s0, &s1, &s2, &s3)\n\t\t\t\tassertEqualF(t, s2, \"DOWNLOADED\")\n\n\t\t\t\tvar compressedData []byte\n\t\t\t\tif useStream {\n\t\t\t\t\tcompressedData, err = io.ReadAll(&downloadBuf)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t} else {\n\t\t\t\t\tdownloadedFilePath := filepath.Join(tmpDir, \"orders_100.csv.gz\")\n\t\t\t\t\tcompressedData, err = os.ReadFile(downloadedFilePath)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t}\n\n\t\t\t\tgzReader, err := gzip.NewReader(bytes.NewReader(compressedData))\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tdefer gzReader.Close()\n\n\t\t\t\tdecompressedData, err := io.ReadAll(gzReader)\n\t\t\t\tassertNilF(t, err)\n\n\t\t\t\tassertEqualE(t, string(decompressedData), string(originalContents), \"downloaded file content does not match 
original\")\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestPutWithNonWritableTemp(t *testing.T) {\n\tif isWindows {\n\t\tt.Skip(\"permission system is different\")\n\t}\n\ttempDir := t.TempDir()\n\tassertNilF(t, os.Chmod(tempDir, 0000))\n\tcustomDsn := dsn + \"&tmpDirPath=\" + strings.ReplaceAll(tempDir, \"/\", \"%2F\")\n\trunDBTestWithConfig(t, &testConfig{dsn: customDsn}, func(dbt *DBTest) {\n\t\tfor _, isStream := range []bool{false, true} {\n\t\t\tt.Run(fmt.Sprintf(\"isStream=%v\", isStream), func(t *testing.T) {\n\t\t\t\tstageName := \"test_stage_\" + randomString(10)\n\t\t\t\tcwd, err := os.Getwd()\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tfilePath := fmt.Sprintf(\"%v/test_data/orders_100.csv\", cwd)\n\t\t\t\tdbt.mustExecT(t, \"CREATE STAGE \"+stageName)\n\t\t\t\tdefer dbt.mustExecT(t, \"DROP STAGE \"+stageName)\n\n\t\t\t\tctx := context.Background()\n\t\t\t\tif isStream {\n\t\t\t\t\tfd, err := os.Open(filePath)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tctx = WithFilePutStream(ctx, fd)\n\t\t\t\t}\n\t\t\t\t_, err = dbt.conn.ExecContext(ctx, fmt.Sprintf(\"PUT 'file://%v' @%v\", filePath, stageName))\n\t\t\t\tif !isStream {\n\t\t\t\t\tassertNotNilF(t, err)\n\t\t\t\t\tassertStringContainsE(t, err.Error(), \"mkdir\")\n\t\t\t\t\tassertStringContainsE(t, err.Error(), \"permission denied\")\n\t\t\t\t} else {\n\t\t\t\t\tassertNilF(t, os.Chmod(tempDir, 0755))\n\t\t\t\t\t_ = dbt.mustExecContextT(ctx, t, fmt.Sprintf(\"GET @%v 'file://%v'\", stageName, tempDir))\n\t\t\t\t\tresultBytesCompressed, err := os.ReadFile(filepath.Join(tempDir, \"orders_100.csv.gz\"))\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tresultBytesReader, err := gzip.NewReader(bytes.NewReader(resultBytesCompressed))\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tresultBytes, err := io.ReadAll(resultBytesReader)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tinputBytes, err := os.ReadFile(filePath)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tassertEqualE(t, string(resultBytes), 
string(inputBytes))\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestGetWithNonWritableTemp(t *testing.T) {\n\tif isWindows {\n\t\tt.Skip(\"permission system is different\")\n\t}\n\ttempDir := t.TempDir()\n\tcustomDsn := dsn + \"&tmpDirPath=\" + strings.ReplaceAll(tempDir, \"/\", \"%2F\")\n\trunDBTestWithConfig(t, &testConfig{dsn: customDsn}, func(dbt *DBTest) {\n\t\tstageName := \"test_stage_\" + randomString(10)\n\t\tcwd, err := os.Getwd()\n\t\tassertNilF(t, err)\n\t\tfilePath := fmt.Sprintf(\"%v/test_data/orders_100.csv\", cwd)\n\t\tdbt.mustExecT(t, \"CREATE STAGE \"+stageName)\n\t\tdefer dbt.mustExecT(t, \"DROP STAGE \"+stageName)\n\n\t\tdbt.mustExecT(t, fmt.Sprintf(\"PUT 'file://%v' @%v\", filePath, stageName))\n\t\tassertNilF(t, os.Chmod(tempDir, 0000))\n\n\t\tfor _, isStream := range []bool{false, true} {\n\t\t\tt.Run(fmt.Sprintf(\"isStream=%v\", isStream), func(t *testing.T) {\n\t\t\t\tctx := context.Background()\n\t\t\t\tvar resultBuf bytes.Buffer\n\t\t\t\tif isStream {\n\t\t\t\t\tctx = WithFileGetStream(ctx, &resultBuf)\n\t\t\t\t}\n\t\t\t\t_, err = dbt.conn.ExecContext(ctx, fmt.Sprintf(\"GET @%v 'file://%v'\", stageName, tempDir))\n\t\t\t\tif !isStream {\n\t\t\t\t\tassertNotNilF(t, err)\n\t\t\t\t\tassertStringContainsE(t, err.Error(), \"mkdir\")\n\t\t\t\t\tassertStringContainsE(t, err.Error(), \"permission denied\")\n\t\t\t\t} else {\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tresultBytesReader, err := gzip.NewReader(&resultBuf)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tresultBytes, err := io.ReadAll(resultBytesReader)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tinputBytes, err := os.ReadFile(filePath)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tassertEqualE(t, string(resultBytes), string(inputBytes))\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestPutGetGcsDownscopedCredential(t *testing.T) {\n\tif runningOnGithubAction() && !runningOnGCP() {\n\t\tt.Skip(\"skipping non GCP environment\")\n\t}\n\n\ttmpDir, err := os.MkdirTemp(\"\", \"put_get\")\n\tif err != nil 
{\n\t\tt.Fatal(err)\n\t}\n\tdefer func() {\n\t\tassertNilF(t, os.RemoveAll(tmpDir))\n\t}()\n\tfname := filepath.Join(tmpDir, \"test_put_get.txt.gz\")\n\toriginalContents := \"123,test1\\n456,test2\\n\"\n\ttableName := randomString(5)\n\n\tvar b bytes.Buffer\n\tgzw := gzip.NewWriter(&b)\n\t_, err = gzw.Write([]byte(originalContents))\n\tassertNilF(t, err)\n\tassertNilF(t, gzw.Close())\n\tassertNilF(t, os.WriteFile(fname, b.Bytes(), readWriteFileMode), \"could not write to gzip file\")\n\n\tcustomDsn := dsn + \"&GCS_USE_DOWNSCOPED_CREDENTIAL=true\"\n\trunDBTestWithConfig(t, &testConfig{dsn: customDsn}, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"create or replace table \" + tableName +\n\t\t\t\" (a int, b string)\")\n\t\tfileStream, err := os.Open(fname)\n\t\tassertNilF(t, err)\n\t\tdefer func() {\n\t\t\tif fileStream != nil {\n\t\t\t\tassertNilF(t, fileStream.Close())\n\t\t\t}\n\t\t\tdbt.mustExec(\"drop table \" + tableName)\n\t\t}()\n\n\t\tvar sqlText string\n\t\tvar rows *RowsExtended\n\t\tsql := \"put 'file://%v' @%%%v auto_compress=true parallel=30\"\n\t\tsqlText = fmt.Sprintf(\n\t\t\tsql, strings.ReplaceAll(fname, \"\\\\\", \"\\\\\\\\\"), tableName)\n\t\trows = dbt.mustQuery(sqlText)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\n\t\tvar s0, s1, s2, s3, s4, s5, s6, s7 string\n\t\tassertTrueF(t, rows.Next(), \"expected new rows\")\n\t\tassertNilF(t, rows.Scan(&s0, &s1, &s2, &s3, &s4, &s5, &s6, &s7))\n\t\tassertEqualF(t, s6, uploaded.String())\n\t\t// check file is PUT\n\t\tdbt.mustQueryAssertCount(\"ls @%\"+tableName, 1)\n\n\t\tdbt.mustExec(\"copy into \" + tableName)\n\t\tdbt.mustExec(\"rm @%\" + tableName)\n\t\tdbt.mustQueryAssertCount(\"ls @%\"+tableName, 0)\n\n\t\tdbt.mustExec(fmt.Sprintf(`copy into @%%%v from %v file_format=(type=csv\n            compression='gzip')`, tableName, tableName))\n\n\t\tsql = fmt.Sprintf(\"get 
@%%%v 'file://%v'  parallel=10\", tableName, tmpDir)\n\t\tsqlText = strings.ReplaceAll(sql, \"\\\\\", \"\\\\\\\\\")\n\t\trows2 := dbt.mustQuery(sqlText)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows2.Close())\n\t\t}()\n\t\tfor rows2.Next() {\n\t\t\tassertNilE(t, rows2.Scan(&s0, &s1, &s2, &s3))\n\t\t\tassertHasPrefixF(t, s0, \"data_\")\n\t\t\tv, err := strconv.Atoi(s1)\n\t\t\tassertNilE(t, err)\n\t\t\tassertEqualE(t, v, 36, \"did not return the right file size\")\n\t\t\tassertEqualE(t, s2, \"DOWNLOADED\", \"did not return DOWNLOADED status\")\n\t\t\tassertEqualE(t, s3, \"\")\n\t\t}\n\n\t\tfiles, err := filepath.Glob(filepath.Join(tmpDir, \"data_*\"))\n\t\tassertNilF(t, err)\n\t\tassertTrueF(t, len(files) > 0, \"no file was downloaded by GET\")\n\t\tfileName := files[0]\n\t\tf, err := os.Open(fileName)\n\t\tassertNilF(t, err)\n\t\tdefer func() {\n\t\t\tassertNilF(t, f.Close())\n\t\t}()\n\t\tgz, err := gzip.NewReader(f)\n\t\tassertNilF(t, err)\n\t\tcontents, err := io.ReadAll(gz)\n\t\tassertNilF(t, err)\n\n\t\tassertEqualE(t, string(contents), originalContents, \"output is different from the original file\")\n\t})\n}\n\nfunc TestPutGetLargeFile(t *testing.T) {\n\ttestData := createTempLargeFile(t, 5*1024*1024)\n\tbaseName := filepath.Base(testData)\n\tfnameStage := baseName + \".gz\"\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tstageDir := \"test_put_largefile_\" + randomString(10)\n\t\tdbt.mustExec(\"rm @~/\" + stageDir)\n\n\t\t// PUT test\n\t\tputQuery := fmt.Sprintf(\"put 'file://%v' @~/%v\", strings.ReplaceAll(testData, \"\\\\\", \"/\"), stageDir)\n\t\tsqlText := 
strings.ReplaceAll(putQuery, \"\\\\\", \"\\\\\\\\\")\n\t\tdbt.mustExec(sqlText)\n\t\tdefer dbt.mustExec(\"rm @~/\" + stageDir)\n\t\trows := dbt.mustQuery(\"ls @~/\" + stageDir)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\t\tvar file, s1, s2, s3 string\n\t\tif rows.Next() {\n\t\t\terr := rows.Scan(&file, &s1, &s2, &s3)\n\t\t\tassertNilF(t, err)\n\t\t}\n\n\t\tassertTrueF(t, strings.Contains(file, fnameStage), fmt.Sprintf(\"should contain file. got: %v\", file))\n\n\t\t// GET test with stream\n\t\tvar streamBuf bytes.Buffer\n\t\tctx := WithFileGetStream(context.Background(), &streamBuf)\n\t\tsql := fmt.Sprintf(\"get @~/%v/%v 'file://%v'\", stageDir, fnameStage, t.TempDir())\n\t\tsqlText = strings.ReplaceAll(sql, \"\\\\\", \"\\\\\\\\\")\n\t\trows2 := dbt.mustQueryContext(ctx, sqlText)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows2.Close())\n\t\t}()\n\t\tfor rows2.Next() {\n\t\t\terr := rows2.Scan(&file, &s1, &s2, &s3)\n\t\t\tassertNilE(t, err)\n\t\t\tassertTrueE(t, strings.HasPrefix(file, fnameStage), \"a file was not downloaded by GET\")\n\t\t\tassertEqualE(t, s2, \"DOWNLOADED\", \"did not return DOWNLOADED status\")\n\t\t\tassertEqualE(t, s3, \"\")\n\t\t}\n\n\t\t// convert the compressed stream to string\n\t\tgz, err := gzip.NewReader(&streamBuf)\n\t\tassertNilE(t, err)\n\t\tdefer func() {\n\t\t\tassertNilF(t, gz.Close())\n\t\t}()\n\t\tcontents, err := io.ReadAll(gz)\n\t\tassertNilF(t, err)\n\n\t\t// verify the downloaded stream matches the original file\n\t\toriginalContents, err := os.ReadFile(testData)\n\t\tassertNilE(t, err)\n\t\tassertEqualF(t, string(contents), string(originalContents), \"data did not match content\")\n\t})\n}\n\nfunc TestPutGetMaxLOBSize(t *testing.T) 
{\n\tt.Skip(\"fails on CI because of backend testing in progress\")\n\n\ttestCases := [2]int{smallSize, largeSize}\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"alter session set ALLOW_LARGE_LOBS_IN_EXTERNAL_SCAN = false\")\n\t\tdefer dbt.mustExec(\"alter session unset ALLOW_LARGE_LOBS_IN_EXTERNAL_SCAN\")\n\t\tfor _, tc := range testCases {\n\t\t\tt.Run(strconv.Itoa(tc), func(t *testing.T) {\n\n\t\t\t\t// create the data file\n\t\t\t\ttmpDir := t.TempDir()\n\t\t\t\tfname := filepath.Join(tmpDir, \"test_put_get.txt.gz\")\n\t\t\t\ttableName := randomString(5)\n\t\t\t\toriginalContents := fmt.Sprintf(\"%v,%s,%v\\n\", randomString(tc), randomString(tc), rand.Intn(100000))\n\n\t\t\t\tvar b bytes.Buffer\n\t\t\t\tgzw := gzip.NewWriter(&b)\n\t\t\t\t_, err := gzw.Write([]byte(originalContents))\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertNilF(t, gzw.Close())\n\t\t\t\terr = os.WriteFile(fname, b.Bytes(), readWriteFileMode)\n\t\t\t\tassertNilF(t, err, \"could not write to gzip file\")\n\n\t\t\t\tdbt.mustExec(fmt.Sprintf(\"create or replace table %s (c1 varchar, c2 varchar(%v), c3 int)\", tableName, tc))\n\t\t\t\tdefer dbt.mustExec(\"drop table \" + tableName)\n\t\t\t\tfileStream, err := os.Open(fname)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tdefer func() {\n\t\t\t\t\tassertNilF(t, fileStream.Close())\n\t\t\t\t}()\n\n\t\t\t\t// test PUT command\n\t\t\t\tvar sqlText string\n\t\t\t\tvar rows *RowsExtended\n\t\t\t\tsql := \"put 'file://%v' @%%%v auto_compress=true parallel=30\"\n\t\t\t\tsqlText = fmt.Sprintf(\n\t\t\t\t\tsql, strings.ReplaceAll(fname, \"\\\\\", \"\\\\\\\\\"), tableName)\n\t\t\t\trows = dbt.mustQuery(sqlText)\n\t\t\t\tdefer func() {\n\t\t\t\t\tassertNilF(t, rows.Close())\n\t\t\t\t}()\n\n\t\t\t\tvar s0, s1, s2, s3, s4, s5, s6, s7 string\n\t\t\t\tassertTrueF(t, rows.Next(), \"expected new rows\")\n\t\t\t\terr = rows.Scan(&s0, &s1, &s2, &s3, &s4, &s5, &s6, &s7)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertEqualF(t, s6, uploaded.String(), fmt.Sprintf(\"expected 
%v, got: %v\", uploaded, s6))\n\n\t\t\t\t// check file is PUT\n\t\t\t\tdbt.mustQueryAssertCount(\"ls @%\"+tableName, 1)\n\n\t\t\t\tdbt.mustExec(\"copy into \" + tableName)\n\t\t\t\tdbt.mustExec(\"rm @%\" + tableName)\n\t\t\t\tdbt.mustQueryAssertCount(\"ls @%\"+tableName, 0)\n\n\t\t\t\tdbt.mustExec(fmt.Sprintf(`copy into @%%%v from %v file_format=(type=csv\n\t\t\tcompression='gzip')`, tableName, tableName))\n\n\t\t\t\t// test GET command\n\t\t\t\tsql = fmt.Sprintf(\"get @%%%v 'file://%v'  parallel=10\", tableName, tmpDir)\n\t\t\t\tsqlText = strings.ReplaceAll(sql, \"\\\\\", \"\\\\\\\\\")\n\t\t\t\trows2 := dbt.mustQuery(sqlText)\n\t\t\t\tdefer func() {\n\t\t\t\t\tassertNilF(t, rows2.Close())\n\t\t\t\t}()\n\t\t\t\tfor rows2.Next() {\n\t\t\t\t\terr = rows2.Scan(&s0, &s1, &s2, &s3)\n\t\t\t\t\tassertNilE(t, err)\n\t\t\t\t\tassertTrueF(t, strings.HasPrefix(s0, \"data_\"), \"a file was not downloaded by GET\")\n\t\t\t\t\tassertEqualE(t, s2, \"DOWNLOADED\", \"did not return DOWNLOADED status\")\n\t\t\t\t\tassertEqualE(t, s3, \"\", fmt.Sprintf(\"returned %v\", s3))\n\t\t\t\t}\n\n\t\t\t\t// verify the content in the file\n\t\t\t\tfiles, err := filepath.Glob(filepath.Join(tmpDir, \"data_*\"))\n\t\t\t\tassertNilF(t, err)\n\n\t\t\t\tfileName := files[0]\n\t\t\t\tf, err := os.Open(fileName)\n\t\t\t\tassertNilF(t, err)\n\n\t\t\t\tdefer func() {\n\t\t\t\t\tassertNilF(t, f.Close())\n\t\t\t\t}()\n\t\t\t\tgz, err := gzip.NewReader(f)\n\t\t\t\tassertNilF(t, err)\n\n\t\t\t\tdefer func() {\n\t\t\t\t\tassertNilF(t, gz.Close())\n\t\t\t\t}()\n\t\t\t\tcontentBytes, err := io.ReadAll(gz)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tcontents := string(contentBytes)\n\t\t\t\tassertEqualE(t, contents, 
originalContents, \"output is different from the original file\")\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestPutCancel(t *testing.T) {\n\ttestData := createTempLargeFile(t, 128*1024*1024)\n\tstageDir := \"test_put_cancel_\" + randomString(10)\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tc := make(chan error, 1)\n\t\tctx, cancel := context.WithCancel(context.Background())\n\t\tgo func() {\n\t\t\t// Use a larger, non-compressed single-part upload so cancellation\n\t\t\t// wins reliably even on faster runners.\n\t\t\t_, err := dbt.conn.ExecContext(\n\t\t\t\tctx,\n\t\t\t\tfmt.Sprintf(\"put 'file://%v' @~/%v overwrite=true auto_compress=false parallel=1\",\n\t\t\t\t\tstrings.ReplaceAll(testData, \"\\\\\", \"/\"), stageDir))\n\t\t\tc <- err\n\t\t\tclose(c)\n\t\t}()\n\t\ttime.Sleep(200 * time.Millisecond)\n\t\tcancel()\n\t\tret := <-c\n\t\tassertNotNilF(t, ret)\n\t\tassertErrIsE(t, ret, context.Canceled)\n\t})\n}\n\nfunc TestPutGetLargeFileNonStream(t *testing.T) {\n\ttestPutGetLargeFile(t, false, true)\n}\n\nfunc TestPutGetLargeFileNonStreamAutoCompressFalse(t *testing.T) {\n\ttestPutGetLargeFile(t, false, false)\n}\n\nfunc TestPutGetLargeFileStream(t *testing.T) {\n\ttestPutGetLargeFile(t, true, true)\n}\n\nfunc TestPutGetLargeFileStreamAutoCompressFalse(t *testing.T) {\n\ttestPutGetLargeFile(t, true, false)\n}\n\nfunc testPutGetLargeFile(t *testing.T, isStream bool, autoCompress bool) {\n\tvar err error\n\n\tfname := createTempLargeFile(t, 5*1024*1024)\n\n\tbaseName := filepath.Base(fname)\n\tfnameGet := baseName + \".gz\"\n\tif !autoCompress {\n\t\tfnameGet = baseName\n\t}\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tstageDir := \"test_put_largefile_\" + randomString(10)\n\t\tdbt.mustExec(\"rm @~/\" + stageDir)\n\n\t\tctx := context.Background()\n\t\tif isStream {\n\t\t\tf, err := os.Open(fname)\n\t\t\tassertNilF(t, err)\n\t\t\tdefer func() {\n\t\t\t\tassertNilF(t, f.Close())\n\t\t\t}()\n\t\t\tctx = WithFilePutStream(ctx, f)\n\t\t}\n\n\t\t// PUT test\n\t\tescapedFname := 
strings.ReplaceAll(fname, \"\\\\\", \"\\\\\\\\\")\n\t\tputQuery := fmt.Sprintf(\"put 'file://%v' @~/%v auto_compress=%v overwrite=true\", escapedFname, stageDir, autoCompress)\n\n\t\t// Record initial memory stats before PUT\n\t\tvar startMemStats, endMemStats runtime.MemStats\n\t\truntime.ReadMemStats(&startMemStats)\n\n\t\t// Execute PUT command\n\t\t_ = dbt.mustExecContext(ctx, putQuery)\n\n\t\t// Record memory stats after PUT\n\t\truntime.ReadMemStats(&endMemStats)\n\t\tt.Logf(\"Memory used for PUT command: %d MB\", (endMemStats.Alloc-startMemStats.Alloc)/1024/1024)\n\n\t\tdefer dbt.mustExec(\"rm @~/\" + stageDir)\n\t\trows := dbt.mustQuery(\"ls @~/\" + stageDir)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\t\tvar file, s1, s2, s3 sql.NullString\n\t\tif rows.Next() {\n\t\t\terr = rows.Scan(&file, &s1, &s2, &s3)\n\t\t\tassertNilF(t, err)\n\t\t}\n\n\t\tif !strings.Contains(file.String, fnameGet) {\n\t\t\tt.Fatalf(\"should contain file. 
got: %v\", file.String)\n\t\t}\n\n\t\t// GET test\n\t\tvar streamBuf bytes.Buffer\n\t\tctx = context.Background()\n\t\tif isStream {\n\t\t\tctx = WithFileGetStream(ctx, &streamBuf)\n\t\t}\n\n\t\ttmpDir := t.TempDir()\n\t\ttmpDirURL := strings.ReplaceAll(tmpDir, \"\\\\\", \"/\")\n\t\tsql := fmt.Sprintf(\"get @~/%v/%v 'file://%v'\", stageDir, fnameGet, tmpDirURL)\n\t\tsqlText := strings.ReplaceAll(sql, \"\\\\\", \"\\\\\\\\\")\n\t\trows2 := dbt.mustQueryContext(ctx, sqlText)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows2.Close())\n\t\t}()\n\t\tfor rows2.Next() {\n\t\t\terr = rows2.Scan(&file, &s1, &s2, &s3)\n\t\t\tassertNilE(t, err)\n\t\t\tassertTrueE(t, strings.HasPrefix(file.String, fnameGet), \"a file was not downloaded by GET\")\n\t\t\tassertEqualE(t, s2.String, \"DOWNLOADED\", \"did not return DOWNLOADED status\")\n\t\t\tassertEqualE(t, s3.String, \"\")\n\t\t}\n\n\t\tvar r io.Reader\n\t\tif autoCompress {\n\t\t\t// convert the compressed contents to string\n\t\t\tif isStream {\n\t\t\t\tr, err = gzip.NewReader(&streamBuf)\n\t\t\t\tassertNilE(t, err)\n\t\t\t} else {\n\t\t\t\tdownloadedFile := filepath.Join(tmpDir, fnameGet)\n\t\t\t\tf, err := os.Open(downloadedFile)\n\t\t\t\tassertNilE(t, err)\n\t\t\t\tdefer func() {\n\t\t\t\t\tassertNilF(t, f.Close())\n\t\t\t\t}()\n\t\t\t\tr, err = gzip.NewReader(f)\n\t\t\t\tassertNilE(t, err)\n\t\t\t}\n\t\t} else {\n\t\t\tif isStream {\n\t\t\t\tr = bytes.NewReader(streamBuf.Bytes())\n\t\t\t} else {\n\t\t\t\tdownloadedFile := filepath.Join(tmpDir, fnameGet)\n\t\t\t\tf, err := os.Open(downloadedFile)\n\t\t\t\tassertNilE(t, err)\n\t\t\t\tdefer func() {\n\t\t\t\t\tassertNilF(t, f.Close())\n\t\t\t\t}()\n\t\t\t\tr = bufio.NewReader(f)\n\t\t\t}\n\t\t}\n\n\t\thash := sha256.New()\n\t\t_, err = io.Copy(hash, r)\n\t\tassertNilE(t, err)\n\t\tdownloadedChecksum := fmt.Sprintf(\"%x\", hash.Sum(nil))\n\n\t\toriginalFile, err := os.Open(fname)\n\t\tassertNilF(t, err)\n\t\tdefer func() {\n\t\t\tassertNilF(t, 
originalFile.Close())\n\t\t}()\n\n\t\toriginalHash := sha256.New()\n\t\t_, err = io.Copy(originalHash, originalFile)\n\t\tassertNilE(t, err)\n\t\toriginalChecksum := fmt.Sprintf(\"%x\", originalHash.Sum(nil))\n\n\t\tassertEqualF(t, downloadedChecksum, originalChecksum, \"file integrity check failed - checksums don't match\")\n\t})\n}\n\n// createTempLargeFile creates a sparse file of sizeBytes in t.TempDir().\n// The file is grown with Truncate, so no I/O is needed; sparse-file-capable\n// filesystems allocate no real disk space. The extended region reads back as\n// zero bytes, which is sufficient for PUT/GET round-trip tests.\nfunc createTempLargeFile(t *testing.T, sizeBytes int64) string {\n\tt.Helper()\n\ttmpFile, err := os.CreateTemp(t.TempDir(), \"large_test_*.bin\")\n\tassertNilF(t, err, \"creating temp large file\")\n\tassertNilF(t, tmpFile.Truncate(sizeBytes), fmt.Sprintf(\"truncating temp file to %d bytes\", sizeBytes))\n\tassertNilF(t, tmpFile.Close(), \"closing temp large file\")\n\treturn tmpFile.Name()\n}\n"
  },
  {
    "path": "put_get_user_stage_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestPutGetFileSmallDataViaUserStage(t *testing.T) {\n\tif os.Getenv(\"AWS_ACCESS_KEY_ID\") == \"\" {\n\t\tt.Skip(\"this test requires to change the internal parameter\")\n\t}\n\tputGetUserStage(t, 5, 1, false)\n}\n\nfunc TestPutGetStreamSmallDataViaUserStage(t *testing.T) {\n\tif os.Getenv(\"AWS_ACCESS_KEY_ID\") == \"\" {\n\t\tt.Skip(\"this test requires to change the internal parameter\")\n\t}\n\tputGetUserStage(t, 1, 1, true)\n}\n\nfunc putGetUserStage(t *testing.T, numberOfFiles int, numberOfLines int, isStream bool) {\n\tif os.Getenv(\"AWS_SECRET_ACCESS_KEY\") == \"\" {\n\t\tt.Fatal(\"no aws secret access key found\")\n\t}\n\ttmpDir, err := generateKLinesOfNFiles(numberOfLines, numberOfFiles, false, t.TempDir())\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tvar files string\n\tif isStream {\n\t\tlist, err := os.ReadDir(tmpDir)\n\t\tif err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t\tfile := list[0].Name()\n\t\tfiles = filepath.Join(tmpDir, file)\n\t} else {\n\t\tfiles = filepath.Join(tmpDir, \"file*\")\n\t}\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tstageName := fmt.Sprintf(\"%v_stage_%v_%v\", dbname, numberOfFiles, numberOfLines)\n\t\tsqlText := `create or replace table %v (aa int, dt date, ts timestamp,\n\t\t\ttsltz timestamp_ltz, tsntz timestamp_ntz, tstz timestamp_tz,\n\t\t\tpct float, ratio number(6,2))`\n\t\tdbt.mustExec(fmt.Sprintf(sqlText, dbname))\n\t\tuserBucket := os.Getenv(\"SF_AWS_USER_BUCKET\")\n\t\tif userBucket == \"\" {\n\t\t\tuserBucket = fmt.Sprintf(\"sfc-eng-regression/%v/reg\", username)\n\t\t}\n\t\tsqlText = `create or replace stage %v url='s3://%v}/%v-%v-%v'\n\t\t\tcredentials = (AWS_KEY_ID='%v' AWS_SECRET_KEY='%v')`\n\t\tdbt.mustExec(fmt.Sprintf(sqlText, stageName, userBucket, stageName,\n\t\t\tnumberOfFiles, numberOfLines, 
os.Getenv(\"AWS_ACCESS_KEY_ID\"),\n\t\t\tos.Getenv(\"AWS_SECRET_ACCESS_KEY\")))\n\n\t\tdbt.mustExec(\"alter session set disable_put_and_get_on_external_stage = false\")\n\t\tdbt.mustExec(\"rm @\" + stageName)\n\t\tvar fs *os.File\n\t\tif isStream {\n\t\t\tfs, _ = os.Open(files)\n\t\t\tdbt.mustExecContext(WithFilePutStream(context.Background(), fs),\n\t\t\t\tfmt.Sprintf(\"put 'file://%v' @%v\", strings.ReplaceAll(\n\t\t\t\t\tfiles, \"\\\\\", \"\\\\\\\\\"), stageName))\n\t\t} else {\n\t\t\tdbt.mustExec(fmt.Sprintf(\"put 'file://%v' @%v \", strings.ReplaceAll(files, \"\\\\\", \"\\\\\\\\\"), stageName))\n\t\t}\n\t\tdefer func() {\n\t\t\tif isStream {\n\t\t\t\tfs.Close()\n\t\t\t}\n\t\t\tdbt.mustExec(\"rm @\" + stageName)\n\t\t\tdbt.mustExec(\"drop stage if exists \" + stageName)\n\t\t\tdbt.mustExec(\"drop table if exists \" + dbname)\n\t\t}()\n\t\tdbt.mustExec(fmt.Sprintf(\"copy into %v from @%v\", dbname, stageName))\n\n\t\trows := dbt.mustQuery(\"select count(*) from \" + dbname)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\t\tvar cnt string\n\t\tif rows.Next() {\n\t\t\tassertNilF(t, rows.Scan(&cnt))\n\t\t}\n\t\tcount, err := strconv.Atoi(cnt)\n\t\tif err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t\tif count != numberOfFiles*numberOfLines {\n\t\t\tt.Errorf(\"count did not match expected number. 
count: %v, expected: %v\", count, numberOfFiles*numberOfLines)\n\t\t}\n\t})\n}\n\nfunc TestPutLoadFromUserStage(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdata, err := createTestData(dbt)\n\t\tif err != nil {\n\t\t\tt.Skip(\"snowflake admin account not accessible\")\n\t\t}\n\t\tdefer cleanupPut(dbt, data)\n\t\tdbt.mustExec(\"alter session set DISABLE_PUT_AND_GET_ON_EXTERNAL_STAGE=false\")\n\t\tdbt.mustExec(\"use warehouse \" + data.warehouse)\n\t\tdbt.mustExec(\"use schema \" + data.database + \".gotesting_schema\")\n\n\t\texecQuery := fmt.Sprintf(\n\t\t\t`create or replace stage %v url = 's3://%v/%v' credentials = (\n\t\t\tAWS_KEY_ID='%v' AWS_SECRET_KEY='%v')`,\n\t\t\tdata.stage, data.userBucket, data.stage,\n\t\t\tdata.awsAccessKeyID, data.awsSecretAccessKey)\n\t\tdbt.mustExec(execQuery)\n\n\t\texecQuery = `create or replace table gotest_putget_t2 (c1 STRING,\n\t\t\tc2 STRING, c3 STRING,c4 STRING, c5 STRING, c6 STRING, c7 STRING,\n\t\t\tc8 STRING, c9 STRING)`\n\t\tdbt.mustExec(execQuery)\n\t\tdefer dbt.mustExec(\"drop table if exists gotest_putget_t2\")\n\t\tdefer dbt.mustExec(\"drop stage if exists \" + data.stage)\n\n\t\texecQuery = fmt.Sprintf(\"put file://%v/test_data/orders_10*.csv @%v\",\n\t\t\tdata.dir, data.stage)\n\t\tdbt.mustExec(execQuery)\n\t\tdbt.mustQueryAssertCount(\"ls @%gotest_putget_t2\", 0)\n\n\t\trows := dbt.mustQuery(fmt.Sprintf(`copy into gotest_putget_t2 from @%v\n\t\t\tfile_format = (field_delimiter = '|' error_on_column_count_mismatch\n\t\t\t=false) purge=true`, data.stage))\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\t\tvar s0, s1, s2, s3, s4, s5 string\n\t\tvar s6, s7, s8, s9 any\n\t\torders100 := fmt.Sprintf(\"s3://%v/%v/orders_100.csv.gz\",\n\t\t\tdata.userBucket, data.stage)\n\t\torders101 := fmt.Sprintf(\"s3://%v/%v/orders_101.csv.gz\",\n\t\t\tdata.userBucket, data.stage)\n\t\tfor rows.Next() {\n\t\t\tassertNilF(t, rows.Scan(&s0, &s1, &s2, &s3, &s4, &s5, &s6, &s7, &s8, &s9))\n\t\t\tif s0 != 
orders100 && s0 != orders101 {\n\t\t\t\tt.Fatalf(\"copy did not load orders files. got: %v\", s0)\n\t\t\t}\n\t\t}\n\t\tdbt.mustQueryAssertCount(fmt.Sprintf(\"ls @%v\", data.stage), 0)\n\t})\n}\n"
  },
  {
    "path": "put_get_with_aws_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"compress/gzip\"\n\t\"context\"\n\t\"database/sql\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/url\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/aws/aws-sdk-go-v2/feature/s3/manager\"\n\t\"github.com/aws/aws-sdk-go-v2/service/s3\"\n)\n\nfunc TestLoadS3(t *testing.T) {\n\tif runningOnGithubAction() && !runningOnAWS() {\n\t\tt.Skip(\"skipping non aws environment\")\n\t}\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdata, err := createTestData(dbt)\n\t\tif err != nil {\n\t\t\tt.Skip(\"snowflake admin account not accessible\")\n\t\t}\n\t\tdefer cleanupPut(dbt, data)\n\t\tdbt.mustExec(\"use warehouse \" + data.warehouse)\n\t\tdbt.mustExec(\"use schema \" + data.database + \".gotesting_schema\")\n\t\texecQuery := `create or replace table tweets(created_at timestamp,\n\t\t\tid number, id_str string, text string, source string,\n\t\t\tin_reply_to_status_id number, in_reply_to_status_id_str string,\n\t\t\tin_reply_to_user_id number, in_reply_to_user_id_str string,\n\t\t\tin_reply_to_screen_name string, user__id number, user__id_str string,\n\t\t\tuser__name string, user__screen_name string, user__location string,\n\t\t\tuser__description string, user__url string,\n\t\t\tuser__entities__description__urls string, user__protected string,\n\t\t\tuser__followers_count number, user__friends_count number,\n\t\t\tuser__listed_count number, user__created_at timestamp,\n\t\t\tuser__favourites_count number, user__utc_offset number,\n\t\t\tuser__time_zone string, user__geo_enabled string,\n\t\t\tuser__verified string, user__statuses_count number, user__lang string,\n\t\t\tuser__contributors_enabled string, user__is_translator string,\n\t\t\tuser__profile_background_color string,\n\t\t\tuser__profile_background_image_url string,\n\t\t\tuser__profile_background_image_url_https string,\n\t\t\tuser__profile_background_tile string, user__profile_image_url 
string,\n\t\t\tuser__profile_image_url_https string, user__profile_link_color string,\n\t\t\tuser__profile_sidebar_border_color string,\n\t\t\tuser__profile_sidebar_fill_color string, user__profile_text_color string,\n\t\t\tuser__profile_use_background_image string, user__default_profile string,\n\t\t\tuser__default_profile_image string, user__following string,\n\t\t\tuser__follow_request_sent string, user__notifications string,\n\t\t\tgeo string, coordinates string, place string, contributors string,\n\t\t\tretweet_count number, favorite_count number, entities__hashtags string,\n\t\t\tentities__symbols string, entities__urls string,\n\t\t\tentities__user_mentions string, favorited string, retweeted string,\n\t\t\tlang string)`\n\t\tdbt.mustExec(execQuery)\n\t\tdefer dbt.mustExec(\"drop table if exists tweets\")\n\t\tdbt.mustQueryAssertCount(\"ls @%tweets\", 0)\n\n\t\trows := dbt.mustQuery(fmt.Sprintf(`copy into tweets from\n\t\t\ts3://sfc-eng-data/twitter/O1k/tweets/ credentials=(AWS_KEY_ID='%v'\n\t\t\tAWS_SECRET_KEY='%v') file_format=(skip_header=1 null_if=('')\n\t\t\tfield_optionally_enclosed_by='\\\"')`,\n\t\t\tdata.awsAccessKeyID, data.awsSecretAccessKey))\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows.Close())\n\t\t}()\n\t\tvar s0, s1, s2, s3, s4, s5, s6, s7, s8, s9 sql.NullString\n\t\tcnt := 0\n\t\tfor rows.Next() {\n\t\t\tassertNilF(t, rows.Scan(&s0, &s1, &s2, &s3, &s4, &s5, &s6, &s7, &s8, &s9))\n\t\t\tcnt++\n\t\t}\n\t\tassertEqualF(t, cnt, 1, \"copy into tweets did not set row count to 1\")\n\t\tassertTrueF(t, s0.Valid && s0.String == \"s3://sfc-eng-data/twitter/O1k/tweets/1.csv.gz\", fmt.Sprintf(\"got %v as file\", s0))\n\t})\n}\n\nfunc TestPutWithInvalidToken(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tif !runningOnAWS() {\n\t\t\tt.Skip(\"skipping non aws environment\")\n\t\t}\n\t\ttmpDir := t.TempDir()\n\t\tfname := filepath.Join(tmpDir, \"test_put_get_with_aws.txt.gz\")\n\t\toriginalContents := 
\"123,test1\\n456,test2\\n\"\n\n\t\tvar b bytes.Buffer\n\t\tgzw := gzip.NewWriter(&b)\n\t\t_, err := gzw.Write([]byte(originalContents))\n\t\tassertNilF(t, err)\n\t\tassertNilF(t, gzw.Close())\n\t\tif err := os.WriteFile(fname, b.Bytes(), readWriteFileMode); err != nil {\n\t\t\tt.Fatal(\"could not write to gzip file\")\n\t\t}\n\n\t\ttableName := randomString(5)\n\t\tsct.mustExec(\"create or replace table \"+tableName+\" (a int, b string)\", nil)\n\t\tdefer sct.mustExec(\"drop table \"+tableName, nil)\n\n\t\tjsonBody, err := json.Marshal(execRequest{\n\t\t\tSQLText: fmt.Sprintf(\"put 'file://%v' @%%%v\", fname, tableName),\n\t\t})\n\t\tif err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t\theaders := getHeaders()\n\t\theaders[httpHeaderAccept] = headerContentTypeApplicationJSON\n\t\tdata, err := sct.sc.rest.FuncPostQuery(\n\t\t\tsct.sc.ctx, sct.sc.rest, &url.Values{}, headers, jsonBody,\n\t\t\tsct.sc.rest.RequestTimeout, getOrGenerateRequestIDFromContext(sct.sc.ctx), sct.sc.cfg)\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\ts3Util := new(snowflakeS3Client)\n\t\ts3Cli, err := s3Util.createClient(&data.Data.StageInfo, false, &snowflakeTelemetry{})\n\t\tif err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t\tclient := s3Cli.(*s3.Client)\n\n\t\ts3Loc, err := s3Util.extractBucketNameAndPath(data.Data.StageInfo.Location)\n\t\tif err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t\ts3Path := s3Loc.s3Path + baseName(fname) + \".gz\"\n\n\t\tf, err := os.Open(fname)\n\t\tif err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t\tdefer func() {\n\t\t\tassertNilF(t, f.Close())\n\t\t}()\n\t\tuploader := manager.NewUploader(client)\n\t\tif _, err = uploader.Upload(context.Background(), &s3.PutObjectInput{\n\t\t\tBucket: &s3Loc.bucketName,\n\t\t\tKey:    &s3Path,\n\t\t\tBody:   f,\n\t\t}); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\tparentPath := filepath.Dir(filepath.Dir(s3Path)) + \"/\"\n\t\tif _, err = uploader.Upload(context.Background(), &s3.PutObjectInput{\n\t\t\tBucket: 
&s3Loc.bucketName,\n\t\t\tKey:    &parentPath,\n\t\t\tBody:   f,\n\t\t}); err == nil {\n\t\t\tt.Fatal(\"should have failed attempting to put file in parent path\")\n\t\t}\n\n\t\tinfo := execResponseStageInfo{\n\t\t\tCreds: execResponseCredentials{\n\t\t\t\tAwsID:        data.Data.StageInfo.Creds.AwsID,\n\t\t\t\tAwsSecretKey: data.Data.StageInfo.Creds.AwsSecretKey,\n\t\t\t},\n\t\t}\n\t\ts3Cli, err = s3Util.createClient(&info, false, &snowflakeTelemetry{})\n\t\tif err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t\tclient = s3Cli.(*s3.Client)\n\n\t\tuploader = manager.NewUploader(client)\n\t\tif _, err = uploader.Upload(context.Background(), &s3.PutObjectInput{\n\t\t\tBucket: &s3Loc.bucketName,\n\t\t\tKey:    &s3Path,\n\t\t\tBody:   f,\n\t\t}); err == nil {\n\t\t\tt.Fatal(\"should have failed attempting to put with missing aws token\")\n\t\t}\n\t})\n}\n\nfunc TestPretendToPutButList(t *testing.T) {\n\tif runningOnGithubAction() && !runningOnAWS() {\n\t\tt.Skip(\"skipping non aws environment\")\n\t}\n\ttmpDir := t.TempDir()\n\tfname := filepath.Join(tmpDir, \"test_put_get_with_aws.txt.gz\")\n\toriginalContents := \"123,test1\\n456,test2\\n\"\n\n\tvar b bytes.Buffer\n\tgzw := gzip.NewWriter(&b)\n\t_, err := gzw.Write([]byte(originalContents))\n\tassertNilF(t, err)\n\tassertNilF(t, gzw.Close())\n\tif err := os.WriteFile(fname, b.Bytes(), readWriteFileMode); err != nil {\n\t\tt.Fatal(\"could not write to gzip file\")\n\t}\n\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\ttableName := randomString(5)\n\t\tsct.mustExec(\"create or replace table \"+tableName+\n\t\t\t\" (a int, b string)\", nil)\n\t\tdefer sct.mustExec(\"drop table \"+tableName, nil)\n\n\t\tjsonBody, err := json.Marshal(execRequest{\n\t\t\tSQLText: fmt.Sprintf(\"put 'file://%v' @%%%v\", fname, tableName),\n\t\t})\n\t\tif err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t\theaders := getHeaders()\n\t\theaders[httpHeaderAccept] = headerContentTypeApplicationJSON\n\t\tdata, err := 
sct.sc.rest.FuncPostQuery(\n\t\t\tsct.sc.ctx, sct.sc.rest, &url.Values{}, headers, jsonBody,\n\t\t\tsct.sc.rest.RequestTimeout, getOrGenerateRequestIDFromContext(sct.sc.ctx), sct.sc.cfg)\n\t\tassertNilF(t, err)\n\n\t\ts3Util := new(snowflakeS3Client)\n\t\ts3Cli, err := s3Util.createClient(&data.Data.StageInfo, false, &snowflakeTelemetry{})\n\t\tassertNilF(t, err)\n\t\tclient := s3Cli.(*s3.Client)\n\t\t_, err = client.ListBuckets(context.Background(), &s3.ListBucketsInput{})\n\t\tassertNotNilF(t, err, \"list buckets should fail\")\n\t})\n}\n\nfunc TestPutGetAWSStage(t *testing.T) {\n\tif runningOnGithubAction() || !runningOnAWS() {\n\t\tt.Skip(\"skipping non aws environment\")\n\t}\n\n\ttmpDir := t.TempDir()\n\tname := \"test_put_get.txt.gz\"\n\tfname := filepath.Join(tmpDir, name)\n\toriginalContents := \"123,test1\\n456,test2\\n\"\n\tstageName := \"test_put_get_stage_\" + randomString(5)\n\n\tvar b bytes.Buffer\n\tgzw := gzip.NewWriter(&b)\n\t_, err := gzw.Write([]byte(originalContents))\n\tassertNilF(t, err)\n\tassertNilF(t, gzw.Close())\n\tassertNilF(t, os.WriteFile(fname, b.Bytes(), readWriteFileMode), \"could not write to gzip file\")\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tvar createStageQuery string\n\t\tkeyID, secretKey, _, err := getAWSCredentials()\n\t\tif err != nil {\n\t\t\tt.Skip(\"snowflake admin account not accessible\")\n\t\t}\n\t\tcreateStageQuery = fmt.Sprintf(createStageStmt,\n\t\t\tstageName,\n\t\t\t\"s3://\"+stageName,\n\t\t\tfmt.Sprintf(\"AWS_KEY_ID='%v' AWS_SECRET_KEY='%v'\", keyID, secretKey))\n\t\tdbt.mustExec(createStageQuery)\n\n\t\tdefer dbt.mustExec(\"DROP STAGE IF EXISTS \" + stageName)\n\n\t\tsql := \"put 'file://%v' @~/%v auto_compress=false\"\n\t\tsqlText := fmt.Sprintf(sql, strings.ReplaceAll(fname, \"\\\\\", \"\\\\\\\\\"), stageName)\n\t\trows := dbt.mustQuery(sqlText)\n\t\tdefer func() {\n\t\t\tassertNilF(t, 
rows.Close())\n\t\t}()\n\n\t\tvar s0, s1, s2, s3, s4, s5, s6, s7 string\n\t\tif rows.Next() {\n\t\t\tassertNilF(t, rows.Scan(&s0, &s1, &s2, &s3, &s4, &s5, &s6, &s7))\n\t\t}\n\t\tassertEqualF(t, s6, uploaded.String())\n\n\t\tsql = fmt.Sprintf(\"get @~/%v 'file://%v'\", stageName, tmpDir)\n\t\tsqlText = strings.ReplaceAll(sql, \"\\\\\", \"\\\\\\\\\")\n\t\trows2 := dbt.mustQuery(sqlText)\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows2.Close())\n\t\t}()\n\t\tfor rows2.Next() {\n\t\t\tassertNilE(t, rows2.Scan(&s0, &s1, &s2, &s3))\n\t\t\tassertEqualE(t, s0, name, \"a file was not downloaded by GET\")\n\t\t\tv, err := strconv.Atoi(s1)\n\t\t\tassertNilE(t, err)\n\t\t\tassertEqualE(t, v, 41, \"did not return the right file size\")\n\t\t\tassertEqualE(t, s2, \"DOWNLOADED\", \"did not return DOWNLOADED status\")\n\t\t\tassertEqualE(t, s3, \"\")\n\t\t}\n\n\t\tfiles, err := filepath.Glob(filepath.Join(tmpDir, \"*\"))\n\t\tassertNilF(t, err)\n\t\tfileName := files[0]\n\t\tf, err := os.Open(fileName)\n\t\tassertNilF(t, err)\n\t\tdefer func() {\n\t\t\tassertNilF(t, f.Close())\n\t\t}()\n\t\tgz, err := gzip.NewReader(f)\n\t\tassertNilF(t, err)\n\t\tcontentsBytes, err := io.ReadAll(gz)\n\t\tassertNilE(t, err)\n\t\tcontents := string(contentsBytes)\n\n\t\tassertEqualE(t, contents, originalContents, \"output is different from the original file\")\n\t})\n}\n"
  },
  {
    "path": "query.go",
    "content": "package gosnowflake\n\nimport (\n\t\"encoding/json\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/query\"\n\t\"time\"\n)\n\ntype resultFormat string\n\nconst (\n\tjsonFormat  resultFormat = \"json\"\n\tarrowFormat resultFormat = \"arrow\"\n)\n\ntype execBindParameter struct {\n\tType   string         `json:\"type\"`\n\tValue  any            `json:\"value\"`\n\tFormat string         `json:\"fmt,omitempty\"`\n\tSchema *bindingSchema `json:\"schema,omitempty\"`\n}\n\ntype execRequest struct {\n\tSQLText      string                       `json:\"sqlText\"`\n\tAsyncExec    bool                         `json:\"asyncExec\"`\n\tSequenceID   uint64                       `json:\"sequenceId\"`\n\tIsInternal   bool                         `json:\"isInternal\"`\n\tDescribeOnly bool                         `json:\"describeOnly,omitempty\"`\n\tParameters   map[string]any               `json:\"parameters,omitempty\"`\n\tBindings     map[string]execBindParameter `json:\"bindings,omitempty\"`\n\tBindStage    string                       `json:\"bindStage,omitempty\"`\n\tQueryContext requestQueryContext          `json:\"queryContextDTO\"`\n}\n\ntype requestQueryContext struct {\n\tEntries []requestQueryContextEntry `json:\"entries,omitempty\"`\n}\n\ntype requestQueryContextEntry struct {\n\tContext   contextData `json:\"context\"`\n\tID        int         `json:\"id\"`\n\tPriority  int         `json:\"priority\"`\n\tTimestamp int64       `json:\"timestamp,omitempty\"`\n}\n\ntype contextData struct {\n\tBase64Data string `json:\"base64Data,omitempty\"`\n}\n\ntype execResponseCredentials struct {\n\tAwsKeyID       string `json:\"AWS_KEY_ID,omitempty\"`\n\tAwsSecretKey   string `json:\"AWS_SECRET_KEY,omitempty\"`\n\tAwsToken       string `json:\"AWS_TOKEN,omitempty\"`\n\tAwsID          string `json:\"AWS_ID,omitempty\"`\n\tAwsKey         string `json:\"AWS_KEY,omitempty\"`\n\tAzureSasToken  string `json:\"AZURE_SAS_TOKEN,omitempty\"`\n\tGcsAccessToken string 
`json:\"GCS_ACCESS_TOKEN,omitempty\"`\n}\n\ntype execResponseStageInfo struct {\n\tLocationType          string                  `json:\"locationType,omitempty\"`\n\tLocation              string                  `json:\"location,omitempty\"`\n\tPath                  string                  `json:\"path,omitempty\"`\n\tRegion                string                  `json:\"region,omitempty\"`\n\tStorageAccount        string                  `json:\"storageAccount,omitempty\"`\n\tIsClientSideEncrypted bool                    `json:\"isClientSideEncrypted,omitempty\"`\n\tCreds                 execResponseCredentials `json:\"creds\"`\n\tPresignedURL          string                  `json:\"presignedUrl,omitempty\"`\n\tEndPoint              string                  `json:\"endPoint,omitempty\"`\n\tUseS3RegionalURL      bool                    `json:\"useS3RegionalUrl,omitempty\"`\n\tUseRegionalURL        bool                    `json:\"useRegionalUrl,omitempty\"`\n\tUseVirtualURL         bool                    `json:\"useVirtualUrl,omitempty\"`\n}\n\n// make all data field optional\ntype execResponseData struct {\n\t// succeed query response data\n\tParameters         []nameValueParameter        `json:\"parameters,omitempty\"`\n\tRowType            []query.ExecResponseRowType `json:\"rowtype,omitempty\"`\n\tRowSet             [][]*string                 `json:\"rowset,omitempty\"`\n\tRowSetBase64       string                      `json:\"rowsetbase64,omitempty\"`\n\tTotal              int64                       `json:\"total,omitempty\"`    // java:long\n\tReturned           int64                       `json:\"returned,omitempty\"` // java:long\n\tQueryID            string                      `json:\"queryId,omitempty\"`\n\tSQLState           string                      `json:\"sqlState,omitempty\"`\n\tDatabaseProvider   string                      `json:\"databaseProvider,omitempty\"`\n\tFinalDatabaseName  string                      
`json:\"finalDatabaseName,omitempty\"`\n\tFinalSchemaName    string                      `json:\"finalSchemaName,omitempty\"`\n\tFinalWarehouseName string                      `json:\"finalWarehouseName,omitempty\"`\n\tFinalRoleName      string                      `json:\"finalRoleName,omitempty\"`\n\tNumberOfBinds      int                         `json:\"numberOfBinds,omitempty\"`   // java:int\n\tStatementTypeID    int64                       `json:\"statementTypeId,omitempty\"` // java:long\n\tVersion            int64                       `json:\"version,omitempty\"`         // java:long\n\tChunks             []query.ExecResponseChunk   `json:\"chunks,omitempty\"`\n\tQrmk               string                      `json:\"qrmk,omitempty\"`\n\tChunkHeaders       map[string]string           `json:\"chunkHeaders,omitempty\"`\n\n\t// ping pong response data\n\tGetResultURL      string        `json:\"getResultUrl,omitempty\"`\n\tProgressDesc      string        `json:\"progressDesc,omitempty\"`\n\tQueryAbortTimeout time.Duration `json:\"queryAbortsAfterSecs,omitempty\"`\n\tResultIDs         string        `json:\"resultIds,omitempty\"`\n\tResultTypes       string        `json:\"resultTypes,omitempty\"`\n\tQueryResultFormat string        `json:\"queryResultFormat,omitempty\"`\n\n\t// async response placeholders\n\tAsyncResult *snowflakeResult `json:\"asyncResult,omitempty\"`\n\tAsyncRows   *snowflakeRows   `json:\"asyncRows,omitempty\"`\n\n\t// file transfer response data\n\tUploadInfo              execResponseStageInfo `json:\"uploadInfo\"`\n\tLocalLocation           string                `json:\"localLocation,omitempty\"`\n\tSrcLocations            []string              `json:\"src_locations,omitempty\"`\n\tParallel                int64                 `json:\"parallel,omitempty\"`\n\tThreshold               int64                 `json:\"threshold,omitempty\"`\n\tAutoCompress            bool                  `json:\"autoCompress,omitempty\"`\n\tOverwrite              
 bool                  `json:\"overwrite,omitempty\"`\n\tSourceCompression       string                `json:\"sourceCompression,omitempty\"`\n\tShowEncryptionParameter bool                  `json:\"clientShowEncryptionParameter,omitempty\"`\n\tEncryptionMaterial      encryptionWrapper     `json:\"encryptionMaterial\"`\n\tPresignedURLs           []string              `json:\"presignedUrls,omitempty\"`\n\tStageInfo               execResponseStageInfo `json:\"stageInfo\"`\n\tCommand                 string                `json:\"command,omitempty\"`\n\tKind                    string                `json:\"kind,omitempty\"`\n\tOperation               string                `json:\"operation,omitempty\"`\n\n\t// HTAP\n\tQueryContext json.RawMessage `json:\"queryContext,omitempty\"`\n}\n\ntype execResponse struct {\n\tData    execResponseData `json:\"Data\"`\n\tMessage string           `json:\"message\"`\n\tCode    string           `json:\"code\"`\n\tSuccess bool             `json:\"success\"`\n}\n"
  },
  {
    "path": "restful.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\terrors2 \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"strconv\"\n\t\"time\"\n)\n\n// HTTP headers\nconst (\n\theaderSnowflakeToken   = \"Snowflake Token=\\\"%v\\\"\"\n\theaderAuthorizationKey = \"Authorization\"\n\n\theaderContentTypeApplicationJSON     = \"application/json\"\n\theaderAcceptTypeApplicationSnowflake = \"application/snowflake\"\n)\n\n// Snowflake Server Endpoints\nconst (\n\tloginRequestPath         = \"/session/v1/login-request\"\n\tqueryRequestPath         = \"/queries/v1/query-request\"\n\ttokenRequestPath         = \"/session/token-request\"\n\tabortRequestPath         = \"/queries/v1/abort-request\"\n\tauthenticatorRequestPath = \"/session/authenticator-request\"\n\tmonitoringQueriesPath    = \"/monitoring/queries\"\n\tsessionRequestPath       = \"/session\"\n\theartBeatPath            = \"/session/heartbeat\"\n\tconsoleLoginRequestPath  = \"/console/login\"\n)\n\ntype (\n\tfuncGetType      func(context.Context, *snowflakeRestful, *url.URL, map[string]string, time.Duration) (*http.Response, error)\n\tfuncPostType     func(context.Context, *snowflakeRestful, *url.URL, map[string]string, []byte, time.Duration, currentTimeProvider, *Config) (*http.Response, error)\n\tfuncAuthPostType func(context.Context, *http.Client, *url.URL, map[string]string, bodyCreatorType, time.Duration, int) (*http.Response, error)\n\tbodyCreatorType  func() ([]byte, error)\n)\n\nvar emptyBodyCreator = func() ([]byte, error) {\n\treturn []byte{}, nil\n}\n\ntype snowflakeRestful struct {\n\tHost           string\n\tPort           int\n\tProtocol       string\n\tLoginTimeout   time.Duration // Login timeout\n\tRequestTimeout time.Duration // request timeout\n\tMaxRetryCount  int\n\n\tClient        *http.Client\n\tJWTClient     *http.Client\n\tTokenAccessor TokenAccessor\n\tHeartBeat     
*heartbeat\n\n\tConnection *snowflakeConn\n\n\tFuncPostQuery       func(context.Context, *snowflakeRestful, *url.Values, map[string]string, []byte, time.Duration, UUID, *Config) (*execResponse, error)\n\tFuncPostQueryHelper func(context.Context, *snowflakeRestful, *url.Values, map[string]string, []byte, time.Duration, UUID, *Config) (*execResponse, error)\n\tFuncPost            funcPostType\n\tFuncGet             funcGetType\n\tFuncAuthPost        funcAuthPostType\n\tFuncRenewSession    func(context.Context, *snowflakeRestful, time.Duration) error\n\tFuncCloseSession    func(context.Context, *snowflakeRestful, time.Duration) error\n\tFuncCancelQuery     func(context.Context, *snowflakeRestful, UUID, time.Duration) error\n\n\tFuncPostAuth     func(context.Context, *snowflakeRestful, *http.Client, *url.Values, map[string]string, bodyCreatorType, time.Duration) (*authResponse, error)\n\tFuncPostAuthSAML func(context.Context, *snowflakeRestful, map[string]string, []byte, time.Duration) (*authResponse, error)\n\tFuncPostAuthOKTA func(context.Context, *snowflakeRestful, map[string]string, []byte, string, time.Duration) (*authOKTAResponse, error)\n\tFuncGetSSO       func(context.Context, *snowflakeRestful, *url.Values, map[string]string, string, time.Duration) ([]byte, error)\n}\n\nfunc (sr *snowflakeRestful) getURL() *url.URL {\n\treturn &url.URL{\n\t\tScheme: sr.Protocol,\n\t\tHost:   sr.Host + \":\" + strconv.Itoa(sr.Port),\n\t}\n}\n\nfunc (sr *snowflakeRestful) getFullURL(path string, params *url.Values) *url.URL {\n\tret := &url.URL{\n\t\tScheme: sr.Protocol,\n\t\tHost:   sr.Host + \":\" + strconv.Itoa(sr.Port),\n\t\tPath:   path,\n\t}\n\tif params != nil {\n\t\tret.RawQuery = params.Encode()\n\t}\n\treturn ret\n}\n\n// We need separate client for JWT, because if token processing takes too long, token may be already expired.\nfunc (sr *snowflakeRestful) getClientFor(authType AuthType) *http.Client {\n\tswitch authType {\n\tcase AuthTypeJwt:\n\t\treturn 
sr.JWTClient\n\tdefault:\n\t\treturn sr.Client\n\t}\n}\n\n// Renew the snowflake session if the current token is still the stale token specified\nfunc (sr *snowflakeRestful) renewExpiredSessionToken(ctx context.Context, timeout time.Duration, expiredToken string) error {\n\terr := sr.TokenAccessor.Lock()\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer sr.TokenAccessor.Unlock()\n\tcurrentToken, _, _ := sr.TokenAccessor.GetTokens()\n\tif expiredToken == currentToken || currentToken == \"\" {\n\t\t// Only renew the session if the current token is still the expired token or current token is empty\n\t\treturn sr.FuncRenewSession(ctx, sr, timeout)\n\t}\n\treturn nil\n}\n\ntype renewSessionResponse struct {\n\tData    renewSessionResponseMain `json:\"data\"`\n\tMessage string                   `json:\"message\"`\n\tCode    string                   `json:\"code\"`\n\tSuccess bool                     `json:\"success\"`\n}\n\ntype renewSessionResponseMain struct {\n\tSessionToken        string        `json:\"sessionToken\"`\n\tValidityInSecondsST time.Duration `json:\"validityInSecondsST\"`\n\tMasterToken         string        `json:\"masterToken\"`\n\tValidityInSecondsMT time.Duration `json:\"validityInSecondsMT\"`\n\tSessionID           int64         `json:\"sessionId\"`\n}\n\ntype cancelQueryResponse struct {\n\tData    any    `json:\"data\"`\n\tMessage string `json:\"message\"`\n\tCode    string `json:\"code\"`\n\tSuccess bool   `json:\"success\"`\n}\n\ntype telemetryResponse struct {\n\tData    any               `json:\"data,omitempty\"`\n\tMessage string            `json:\"message\"`\n\tCode    string            `json:\"code\"`\n\tSuccess bool              `json:\"success\"`\n\tHeaders map[string]string `json:\"headers,omitempty\"`\n}\n\nfunc postRestful(\n\tctx context.Context,\n\tsr *snowflakeRestful,\n\tfullURL *url.URL,\n\theaders map[string]string,\n\tbody []byte,\n\ttimeout time.Duration,\n\tcurrentTimeProvider currentTimeProvider,\n\tcfg *Config) 
(\n\t*http.Response, error) {\n\treturn newRetryHTTP(ctx, sr.Client, http.NewRequest, fullURL, headers, timeout, sr.MaxRetryCount, currentTimeProvider, cfg).\n\t\tdoPost().\n\t\tsetBody(body).\n\t\texecute()\n}\n\nfunc getRestful(\n\tctx context.Context,\n\tsr *snowflakeRestful,\n\tfullURL *url.URL,\n\theaders map[string]string,\n\ttimeout time.Duration) (\n\t*http.Response, error) {\n\treturn newRetryHTTP(ctx, sr.Client, http.NewRequest, fullURL, headers, timeout, sr.MaxRetryCount, defaultTimeProvider, nil).execute()\n}\n\nfunc postAuthRestful(\n\tctx context.Context,\n\tclient *http.Client,\n\tfullURL *url.URL,\n\theaders map[string]string,\n\tbodyCreator bodyCreatorType,\n\ttimeout time.Duration,\n\tmaxRetryCount int) (\n\t*http.Response, error) {\n\treturn newRetryHTTP(ctx, client, http.NewRequest, fullURL, headers, timeout, maxRetryCount, defaultTimeProvider, nil).\n\t\tdoPost().\n\t\tsetBodyCreator(bodyCreator).\n\t\texecute()\n}\n\nfunc postRestfulQuery(\n\tctx context.Context,\n\tsr *snowflakeRestful,\n\tparams *url.Values,\n\theaders map[string]string,\n\tbody []byte,\n\ttimeout time.Duration,\n\trequestID UUID,\n\tcfg *Config) (\n\tdata *execResponse, err error) {\n\n\tdata, err = sr.FuncPostQueryHelper(ctx, sr, params, headers, body, timeout, requestID, cfg)\n\n\tif errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {\n\t\t// For context cancel/timeout cases, a special cancel request needs to be sent.\n\t\tif cancelErr := sr.FuncCancelQuery(context.Background(), sr, requestID, timeout); cancelErr != nil {\n\t\t\t// Wrap the original error with the cancel error.\n\t\t\terr = fmt.Errorf(\"failed to cancel query. 
cancelErr: %w, queryErr: %w\", cancelErr, err)\n\t\t}\n\t}\n\n\treturn data, err\n}\n\nfunc postRestfulQueryHelper(\n\tctx context.Context,\n\tsr *snowflakeRestful,\n\tparams *url.Values,\n\theaders map[string]string,\n\tbody []byte,\n\ttimeout time.Duration,\n\trequestID UUID,\n\tcfg *Config) (\n\tdata *execResponse, err error) {\n\tlogger.WithContext(ctx).Infof(\"params: %v\", params)\n\tparams.Set(requestIDKey, requestID.String())\n\tparams.Set(requestGUIDKey, NewUUID().String())\n\ttoken, _, _ := sr.TokenAccessor.GetTokens()\n\tif token != \"\" {\n\t\theaders[headerAuthorizationKey] = fmt.Sprintf(headerSnowflakeToken, token)\n\t}\n\n\tvar resp *http.Response\n\tfullURL := sr.getFullURL(queryRequestPath, params)\n\n\tlogger.WithContext(ctx).Infof(\"postQuery: make a request to Host: %v, Path: %v\", fullURL.Host, fullURL.Path)\n\tresp, err = sr.FuncPost(ctx, sr, fullURL, headers, body, timeout, defaultTimeProvider, cfg)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer func(resp *http.Response, url string) {\n\t\tif closeErr := resp.Body.Close(); closeErr != nil {\n\t\t\tlogger.WithContext(ctx).Warnf(\"failed to close response body for %v. err: %v\", url, closeErr)\n\t\t}\n\t}(resp, fullURL.String())\n\n\tif resp.StatusCode == http.StatusOK {\n\t\trespd := &execResponse{}\n\t\tif err = json.NewDecoder(resp.Body).Decode(respd); err != nil {\n\t\t\tlogger.WithContext(ctx).Errorf(\"failed to decode JSON. 
err: %v\", err)\n\t\t\treturn nil, err\n\t\t}\n\t\tif respd.Code == sessionExpiredCode {\n\t\t\tif err = sr.renewExpiredSessionToken(ctx, timeout, token); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\treturn sr.FuncPostQuery(ctx, sr, params, headers, body, timeout, requestID, cfg)\n\t\t}\n\n\t\tif queryIDChan := getQueryIDChan(ctx); queryIDChan != nil {\n\t\t\tqueryIDChan <- respd.Data.QueryID\n\t\t\tclose(queryIDChan)\n\t\t\tctx = WithQueryIDChan(ctx, nil)\n\t\t}\n\n\t\tisSessionRenewed := false\n\n\t\t// if asynchronous query in progress, kick off retrieval but return object\n\t\tif respd.Code == queryInProgressAsyncCode && isAsyncMode(ctx) {\n\t\t\treturn sr.processAsync(ctx, respd, headers, timeout, cfg)\n\t\t}\n\t\tfor isSessionRenewed || respd.Code == queryInProgressCode ||\n\t\t\trespd.Code == queryInProgressAsyncCode {\n\t\t\tif !isSessionRenewed {\n\t\t\t\tfullURL = sr.getFullURL(respd.Data.GetResultURL, nil)\n\t\t\t}\n\n\t\t\tlogger.WithContext(ctx).Info(\"ping pong\")\n\t\t\ttoken, _, _ = sr.TokenAccessor.GetTokens()\n\t\t\theaders[headerAuthorizationKey] = fmt.Sprintf(headerSnowflakeToken, token)\n\n\t\t\trespd, err = getExecResponse(ctx, sr, fullURL, headers, timeout)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tif respd.Code == sessionExpiredCode {\n\t\t\t\tif err = sr.renewExpiredSessionToken(ctx, timeout, token); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\tisSessionRenewed = true\n\t\t\t} else {\n\t\t\t\tisSessionRenewed = false\n\t\t\t}\n\t\t}\n\t\treturn respd, nil\n\t}\n\tb, err := io.ReadAll(resp.Body)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"failed to extract HTTP response body. 
err: %v\", err)\n\t\treturn nil, err\n\t}\n\tlogger.WithContext(ctx).Infof(\"HTTP: %v, URL: %v, Body: %v\", resp.StatusCode, fullURL, b)\n\tlogger.WithContext(ctx).Infof(\"Header: %v\", resp.Header)\n\treturn nil, &SnowflakeError{\n\t\tNumber:      ErrFailedToPostQuery,\n\t\tSQLState:    SQLStateConnectionFailure,\n\t\tMessage:     errors2.ErrMsgFailedToPostQuery,\n\t\tMessageArgs: []any{resp.StatusCode, fullURL},\n\t}\n}\n\nfunc closeSession(ctx context.Context, sr *snowflakeRestful, timeout time.Duration) error {\n\tlogger.WithContext(ctx).Info(\"close session\")\n\tparams := &url.Values{}\n\tparams.Set(\"delete\", \"true\")\n\tparams.Set(requestIDKey, getOrGenerateRequestIDFromContext(ctx).String())\n\tparams.Set(requestGUIDKey, NewUUID().String())\n\tfullURL := sr.getFullURL(sessionRequestPath, params)\n\n\theaders := getHeaders()\n\ttoken, _, _ := sr.TokenAccessor.GetTokens()\n\theaders[headerAuthorizationKey] = fmt.Sprintf(headerSnowflakeToken, token)\n\n\tresp, err := sr.FuncPost(ctx, sr, fullURL, headers, nil, 5*time.Second, defaultTimeProvider, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer func() {\n\t\tif err = resp.Body.Close(); err != nil {\n\t\t\tlogger.WithContext(ctx).Warnf(\"failed to close response body for %v. err: %v\", fullURL, err)\n\t\t}\n\t}()\n\tif resp.StatusCode == http.StatusOK {\n\t\tvar respd renewSessionResponse\n\t\tif err = json.NewDecoder(resp.Body).Decode(&respd); err != nil {\n\t\t\tlogger.WithContext(ctx).Errorf(\"failed to decode JSON. err: %v\", err)\n\t\t\treturn err\n\t\t}\n\t\tif !respd.Success && respd.Code != sessionExpiredCode {\n\t\t\tc, err := strconv.Atoi(respd.Code)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\treturn &SnowflakeError{\n\t\t\t\tNumber:  c,\n\t\t\t\tMessage: respd.Message,\n\t\t\t}\n\t\t}\n\t\treturn nil\n\t}\n\tb, err := io.ReadAll(resp.Body)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"failed to extract HTTP response body. 
err: %v\", err)\n\t\treturn err\n\t}\n\tlogger.WithContext(ctx).Infof(\"HTTP: %v, URL: %v, Body: %v\", resp.StatusCode, fullURL, b)\n\tlogger.WithContext(ctx).Infof(\"Header: %v\", resp.Header)\n\treturn &SnowflakeError{\n\t\tNumber:      ErrFailedToCloseSession,\n\t\tSQLState:    SQLStateConnectionFailure,\n\t\tMessage:     errors2.ErrMsgFailedToCloseSession,\n\t\tMessageArgs: []any{resp.StatusCode, fullURL},\n\t}\n}\n\nfunc renewRestfulSession(ctx context.Context, sr *snowflakeRestful, timeout time.Duration) error {\n\tparams := &url.Values{}\n\tparams.Set(requestIDKey, getOrGenerateRequestIDFromContext(ctx).String())\n\tparams.Set(requestGUIDKey, NewUUID().String())\n\tfullURL := sr.getFullURL(tokenRequestPath, params)\n\n\ttoken, masterToken, sessionID := sr.TokenAccessor.GetTokens()\n\theaders := getHeaders()\n\theaders[headerAuthorizationKey] = fmt.Sprintf(headerSnowflakeToken, masterToken)\n\n\tbody := make(map[string]string)\n\tbody[\"oldSessionToken\"] = token\n\tbody[\"requestType\"] = \"RENEW\"\n\n\tctx = context.WithValue(ctx, SFSessionIDKey, sessionID)\n\tlogger.WithContext(ctx).Info(\"start renew session\")\n\tvar reqBody []byte\n\treqBody, err := json.Marshal(body)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tresp, err := sr.FuncPost(ctx, sr, fullURL, headers, reqBody, timeout, defaultTimeProvider, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer func() {\n\t\tif err = resp.Body.Close(); err != nil {\n\t\t\tlogger.WithContext(ctx).Warnf(\"failed to close response body for %v. err: %v\", fullURL, err)\n\t\t}\n\t}()\n\tif resp.StatusCode == http.StatusOK {\n\t\tvar respd renewSessionResponse\n\t\terr = json.NewDecoder(resp.Body).Decode(&respd)\n\t\tif err != nil {\n\t\t\tlogger.WithContext(ctx).Errorf(\"failed to decode JSON. 
err: %v\", err)\n\t\t\treturn err\n\t\t}\n\t\tif !respd.Success {\n\t\t\tc, err := strconv.Atoi(respd.Code)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\treturn &SnowflakeError{\n\t\t\t\tNumber:  c,\n\t\t\t\tMessage: respd.Message,\n\t\t\t}\n\t\t}\n\t\tsr.TokenAccessor.SetTokens(respd.Data.SessionToken, respd.Data.MasterToken, respd.Data.SessionID)\n\t\tlogger.WithContext(ctx).Info(\"successfully renewed session\")\n\t\treturn nil\n\t}\n\tb, err := io.ReadAll(resp.Body)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"failed to extract HTTP response body. err: %v\", err)\n\t\treturn err\n\t}\n\tlogger.WithContext(ctx).Infof(\"HTTP: %v, URL: %v, Body: %v\", resp.StatusCode, fullURL, b)\n\tlogger.WithContext(ctx).Infof(\"Header: %v\", resp.Header)\n\treturn &SnowflakeError{\n\t\tNumber:      ErrFailedToRenewSession,\n\t\tSQLState:    SQLStateConnectionFailure,\n\t\tMessage:     errors2.ErrMsgFailedToRenew,\n\t\tMessageArgs: []any{resp.StatusCode, fullURL},\n\t}\n}\n\nfunc getCancelRetry(ctx context.Context) int {\n\tval := ctx.Value(cancelRetry)\n\tif val == nil {\n\t\treturn 5\n\t}\n\tcnt, ok := val.(int)\n\tif !ok {\n\t\treturn -1\n\t}\n\treturn cnt\n}\n\nfunc cancelQuery(ctx context.Context, sr *snowflakeRestful, requestID UUID, timeout time.Duration) error {\n\tlogger.WithContext(ctx).Info(\"cancel query\")\n\tparams := &url.Values{}\n\tparams.Set(requestIDKey, getOrGenerateRequestIDFromContext(ctx).String())\n\tparams.Set(requestGUIDKey, NewUUID().String())\n\n\tfullURL := sr.getFullURL(abortRequestPath, params)\n\n\theaders := getHeaders()\n\ttoken, _, _ := sr.TokenAccessor.GetTokens()\n\theaders[headerAuthorizationKey] = fmt.Sprintf(headerSnowflakeToken, token)\n\n\treq := make(map[string]string)\n\treq[requestIDKey] = requestID.String()\n\n\treqByte, err := json.Marshal(req)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tresp, err := sr.FuncPost(ctx, sr, fullURL, headers, reqByte, timeout, defaultTimeProvider, nil)\n\tif err != nil 
{\n\t\treturn err\n\t}\n\tdefer func() {\n\t\tif err = resp.Body.Close(); err != nil {\n\t\t\tlogger.WithContext(ctx).Warnf(\"failed to close response body for %v. err: %v\", fullURL, err)\n\t\t}\n\t}()\n\tif resp.StatusCode == http.StatusOK {\n\t\tvar respd cancelQueryResponse\n\t\tif err = json.NewDecoder(resp.Body).Decode(&respd); err != nil {\n\t\t\tlogger.WithContext(ctx).Errorf(\"failed to decode JSON. err: %v\", err)\n\t\t\treturn err\n\t\t}\n\t\tctxRetry := getCancelRetry(ctx)\n\t\tif !respd.Success && respd.Code == sessionExpiredCode {\n\t\t\tif err = sr.FuncRenewSession(ctx, sr, timeout); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\treturn sr.FuncCancelQuery(ctx, sr, requestID, timeout)\n\t\t} else if !respd.Success && respd.Code == queryNotExecutingCode {\n\t\t\tif ctxRetry != 0 {\n\t\t\t\treturn sr.FuncCancelQuery(context.WithValue(ctx, cancelRetry, ctxRetry-1), sr, requestID, timeout)\n\t\t\t}\n\t\t\t// After exhausting retries, we can safely treat queryNotExecutingCode as success\n\t\t\t// since it indicates the query has already completed and there's nothing left to cancel\n\t\t\tlogger.WithContext(ctx).Info(\"query has already completed, no cancellation needed\")\n\t\t\treturn nil\n\t\t} else if respd.Success {\n\t\t\treturn nil\n\t\t} else {\n\t\t\tc, err := strconv.Atoi(respd.Code)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\treturn &SnowflakeError{\n\t\t\t\tNumber:  c,\n\t\t\t\tMessage: respd.Message,\n\t\t\t}\n\t\t}\n\t}\n\tb, err := io.ReadAll(resp.Body)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"failed to extract HTTP response body. 
err: %v\", err)\n\t\treturn err\n\t}\n\tlogger.WithContext(ctx).Infof(\"HTTP: %v, URL: %v, Body: %v\", resp.StatusCode, fullURL, b)\n\tlogger.WithContext(ctx).Infof(\"Header: %v\", resp.Header)\n\treturn &SnowflakeError{\n\t\tNumber:      ErrFailedToCancelQuery,\n\t\tSQLState:    SQLStateConnectionFailure,\n\t\tMessage:     errors2.ErrMsgFailedToCancelQuery,\n\t\tMessageArgs: []any{resp.StatusCode, fullURL},\n\t}\n}\n\nfunc getQueryIDChan(ctx context.Context) chan<- string {\n\tv := ctx.Value(queryIDChannel)\n\tif v == nil {\n\t\treturn nil\n\t}\n\tc, ok := v.(chan<- string)\n\tif !ok {\n\t\treturn nil\n\t}\n\treturn c\n}\n\n// getExecResponse fetches a response using FuncGet and decodes it and returns it.\nfunc getExecResponse(\n\tctx context.Context,\n\tsr *snowflakeRestful,\n\tfullURL *url.URL,\n\theaders map[string]string,\n\ttimeout time.Duration) (*execResponse, error) {\n\tresp, err := sr.FuncGet(ctx, sr, fullURL, headers, timeout)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"failed to get response. err: %v\", err)\n\t\treturn nil, err\n\t}\n\tdefer func() {\n\t\tif closeErr := resp.Body.Close(); closeErr != nil {\n\t\t\tlogger.WithContext(ctx).Errorf(\"failed to close response body for %v. err: %v\", fullURL, closeErr)\n\t\t}\n\t}()\n\n\t// decode response and fill into an empty execResponse\n\trespd := &execResponse{}\n\terr = json.NewDecoder(resp.Body).Decode(respd)\n\tif err != nil {\n\t\tlogger.WithContext(ctx).Errorf(\"failed to decode JSON. err: %v\", err)\n\t\treturn nil, err\n\t}\n\treturn respd, nil\n}\n"
  },
  {
    "path": "restful_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"sync\"\n\t\"sync/atomic\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc postTestError(_ context.Context, _ *snowflakeRestful, _ *url.URL, _ map[string]string, _ []byte, _ time.Duration, _ currentTimeProvider, _ *Config) (*http.Response, error) {\n\treturn &http.Response{\n\t\tStatusCode: http.StatusOK,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}},\n\t}, errors.New(\"failed to run post method\")\n}\n\nfunc postAuthTestError(_ context.Context, _ *http.Client, _ *url.URL, _ map[string]string, _ bodyCreatorType, _ time.Duration, _ int) (*http.Response, error) {\n\treturn &http.Response{\n\t\tStatusCode: http.StatusOK,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}},\n\t}, errors.New(\"failed to run post method\")\n}\n\nfunc postTestSuccessButInvalidJSON(_ context.Context, _ *snowflakeRestful, _ *url.URL, _ map[string]string, _ []byte, _ time.Duration, _ currentTimeProvider, _ *Config) (*http.Response, error) {\n\treturn &http.Response{\n\t\tStatusCode: http.StatusOK,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}},\n\t}, nil\n}\n\nfunc postTestAppBadGatewayError(_ context.Context, _ *snowflakeRestful, _ *url.URL, _ map[string]string, _ []byte, _ time.Duration, _ currentTimeProvider, _ *Config) (*http.Response, error) {\n\treturn &http.Response{\n\t\tStatusCode: http.StatusBadGateway,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}},\n\t}, nil\n}\n\nfunc postAuthTestAppBadGatewayError(_ context.Context, _ *http.Client, _ *url.URL, _ map[string]string, _ bodyCreatorType, _ time.Duration, _ int) (*http.Response, error) {\n\treturn &http.Response{\n\t\tStatusCode: http.StatusBadGateway,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}},\n\t}, nil\n}\n\nfunc postTestAppForbiddenError(_ context.Context, _ *snowflakeRestful, _ *url.URL, _ map[string]string, _ []byte, _ 
time.Duration, _ currentTimeProvider, _ *Config) (*http.Response, error) {\n\treturn &http.Response{\n\t\tStatusCode: http.StatusForbidden,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}},\n\t}, nil\n}\n\nfunc postAuthTestAppForbiddenError(_ context.Context, _ *http.Client, _ *url.URL, _ map[string]string, _ bodyCreatorType, _ time.Duration, _ int) (*http.Response, error) {\n\treturn &http.Response{\n\t\tStatusCode: http.StatusForbidden,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}},\n\t}, nil\n}\n\nfunc postAuthTestAppUnexpectedError(_ context.Context, _ *http.Client, _ *url.URL, _ map[string]string, _ bodyCreatorType, _ time.Duration, _ int) (*http.Response, error) {\n\treturn &http.Response{\n\t\tStatusCode: http.StatusInsufficientStorage,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}},\n\t}, nil\n}\n\nfunc postTestQueryNotExecuting(_ context.Context, _ *snowflakeRestful, _ *url.URL, _ map[string]string, _ []byte, _ time.Duration, _ currentTimeProvider, _ *Config) (*http.Response, error) {\n\tdd := &execResponseData{}\n\ter := &execResponse{\n\t\tData:    *dd,\n\t\tMessage: \"\",\n\t\tCode:    queryNotExecutingCode,\n\t\tSuccess: false,\n\t}\n\tba, err := json.Marshal(er)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\treturn &http.Response{\n\t\tStatusCode: http.StatusOK,\n\t\tBody:       &fakeResponseBody{body: ba},\n\t}, nil\n}\n\nfunc postTestRenew(_ context.Context, _ *snowflakeRestful, _ *url.URL, _ map[string]string, _ []byte, _ time.Duration, _ currentTimeProvider, _ *Config) (*http.Response, error) {\n\tdd := &execResponseData{}\n\ter := &execResponse{\n\t\tData:    *dd,\n\t\tMessage: \"\",\n\t\tCode:    sessionExpiredCode,\n\t\tSuccess: true,\n\t}\n\n\tba, err := json.Marshal(er)\n\tlogger.Infof(\"encoded JSON: %v\", ba)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\treturn &http.Response{\n\t\tStatusCode: http.StatusOK,\n\t\tBody:       &fakeResponseBody{body: ba},\n\t}, nil\n}\n\nfunc postAuthTestAfterRenew(_ 
context.Context, _ *http.Client, _ *url.URL, _ map[string]string, _ bodyCreatorType, _ time.Duration, _ int) (*http.Response, error) {\n\tdd := &execResponseData{}\n\ter := &execResponse{\n\t\tData:    *dd,\n\t\tMessage: \"\",\n\t\tCode:    \"\",\n\t\tSuccess: true,\n\t}\n\n\tba, err := json.Marshal(er)\n\tlogger.Infof(\"encoded JSON: %v\", ba)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\treturn &http.Response{\n\t\tStatusCode: http.StatusOK,\n\t\tBody:       &fakeResponseBody{body: ba},\n\t}, nil\n}\n\nfunc postTestAfterRenew(_ context.Context, _ *snowflakeRestful, _ *url.URL, _ map[string]string, _ []byte, _ time.Duration, _ currentTimeProvider, _ *Config) (*http.Response, error) {\n\tdd := &execResponseData{}\n\ter := &execResponse{\n\t\tData:    *dd,\n\t\tMessage: \"\",\n\t\tCode:    \"\",\n\t\tSuccess: true,\n\t}\n\n\tba, err := json.Marshal(er)\n\tlogger.Infof(\"encoded JSON: %v\", ba)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\treturn &http.Response{\n\t\tStatusCode: http.StatusOK,\n\t\tBody:       &fakeResponseBody{body: ba},\n\t}, nil\n}\n\nfunc TestUnitPostQueryHelperError(t *testing.T) {\n\tsr := &snowflakeRestful{\n\t\tFuncPost:      postTestError,\n\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t}\n\tvar err error\n\trequestID := NewUUID()\n\t_, err = postRestfulQueryHelper(context.Background(), sr, &url.Values{}, make(map[string]string), []byte{0x12, 0x34}, 0, requestID, &Config{})\n\tassertNotNilF(t, err, \"should have failed to post\")\n\tsr.FuncPost = postTestAppBadGatewayError\n\trequestID = NewUUID()\n\t_, err = postRestfulQueryHelper(context.Background(), sr, &url.Values{}, make(map[string]string), []byte{0x12, 0x34}, 0, requestID, &Config{})\n\tassertNotNilF(t, err, \"should have failed to post\")\n\tsr.FuncPost = postTestSuccessButInvalidJSON\n\trequestID = NewUUID()\n\t_, err = postRestfulQueryHelper(context.Background(), sr, &url.Values{}, make(map[string]string), []byte{0x12, 0x34}, 0, requestID, &Config{})\n\tassertNotNilF(t, err, \"should have failed to post\")\n}\n\nfunc TestUnitPostQueryHelperOnRenewSessionKeepsRequestIdButGeneratesNewRequestGuid(t *testing.T) {\n\tpostCount := 0\n\trequestID := NewUUID()\n\n\tsr := &snowflakeRestful{\n\t\tFuncPost: func(ctx context.Context, restful *snowflakeRestful, url *url.URL, headers map[string]string, bytes []byte, duration time.Duration, provider currentTimeProvider, config *Config) (*http.Response, error) {\n\t\t\tassertEqualF(t, len((url.Query())[requestIDKey]), 1)\n\t\t\tassertEqualF(t, len((url.Query())[requestGUIDKey]), 1)\n\t\t\treturn &http.Response{\n\t\t\t\tStatusCode: 200,\n\t\t\t\tBody:       &fakeResponseBody{body: []byte(`{\"data\":null,\"code\":\"390112\",\"message\":\"token expired for testing\",\"success\":false,\"headers\":null}`)},\n\t\t\t}, nil\n\t\t},\n\t\tFuncPostQuery: func(ctx context.Context, restful *snowflakeRestful, values *url.Values, headers map[string]string, bytes []byte, timeout time.Duration, uuid UUID, config *Config) (*execResponse, error) {\n\t\t\tassertEqualF(t, requestID.String(), uuid.String())\n\t\t\tassertEqualF(t, len((*values)[requestIDKey]), 1)\n\t\t\tassertEqualF(t, len((*values)[requestGUIDKey]), 1)\n\t\t\tif postCount == 0 {\n\t\t\t\tpostCount++\n\t\t\t\treturn postRestfulQueryHelper(ctx, restful, values, headers, bytes, timeout, uuid, config)\n\t\t\t}\n\t\t\treturn nil, nil\n\t\t},\n\t\tFuncRenewSession: renewSessionTest,\n\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t}\n\t_, err := postRestfulQueryHelper(context.Background(), sr, &url.Values{}, make(map[string]string), make([]byte, 0), time.Second, requestID, nil)\n\tassertNilE(t, err)\n}\n\nfunc renewSessionTest(_ context.Context, _ *snowflakeRestful, _ time.Duration) error {\n\treturn nil\n}\n\nfunc renewSessionTestError(_ context.Context, _ *snowflakeRestful, _ time.Duration) error {\n\treturn errors.New(\"failed to renew session in tests\")\n}\n\nfunc TestUnitTokenAccessorDoesNotRenewStaleToken(t *testing.T) 
{\n\taccessor := getSimpleTokenAccessor()\n\toldToken := \"test\"\n\taccessor.SetTokens(oldToken, \"master\", 123)\n\n\trenewSessionCalled := false\n\trenewSessionDummy := func(_ context.Context, sr *snowflakeRestful, _ time.Duration) error {\n\t\t// should not have gotten to actual renewal\n\t\trenewSessionCalled = true\n\t\treturn nil\n\t}\n\n\tsr := &snowflakeRestful{\n\t\tFuncRenewSession: renewSessionDummy,\n\t\tTokenAccessor:    accessor,\n\t}\n\n\t// try to intentionally renew with stale token\n\tassertNilE(t, sr.renewExpiredSessionToken(context.Background(), time.Hour, \"stale-token\"))\n\n\tassertFalseF(t, renewSessionCalled, \"FuncRenewSession should not have been called\")\n\n\t// set the current token to empty, should still call renew even if stale token is passed in\n\taccessor.SetTokens(\"\", \"master\", 123)\n\tassertNilE(t, sr.renewExpiredSessionToken(context.Background(), time.Hour, \"stale-token\"))\n\n\tassertTrueF(t, renewSessionCalled, \"FuncRenewSession should have been called because current token is empty\")\n}\n\ntype wrappedAccessor struct {\n\tta              TokenAccessor\n\tlockCallCount   int32\n\tunlockCallCount int32\n}\n\nfunc (wa *wrappedAccessor) Lock() error {\n\tatomic.AddInt32(&wa.lockCallCount, 1)\n\treturn wa.ta.Lock()\n}\n\nfunc (wa *wrappedAccessor) Unlock() {\n\tatomic.AddInt32(&wa.unlockCallCount, 1)\n\twa.ta.Unlock()\n}\n\nfunc (wa *wrappedAccessor) GetTokens() (token string, masterToken string, sessionID int64) {\n\treturn wa.ta.GetTokens()\n}\n\nfunc (wa *wrappedAccessor) SetTokens(token string, masterToken string, sessionID int64) {\n\twa.ta.SetTokens(token, masterToken, sessionID)\n}\n\nfunc TestUnitTokenAccessorRenewBlocked(t *testing.T) {\n\taccessor := wrappedAccessor{\n\t\tta: getSimpleTokenAccessor(),\n\t}\n\toldToken := \"test\"\n\taccessor.SetTokens(oldToken, \"master\", 123)\n\n\trenewSessionCalled := false\n\trenewSessionDummy := func(_ context.Context, sr *snowflakeRestful, _ 
time.Duration) error {\n\t\trenewSessionCalled = true\n\t\treturn nil\n\t}\n\n\tsr := &snowflakeRestful{\n\t\tFuncRenewSession: renewSessionDummy,\n\t\tTokenAccessor:    &accessor,\n\t}\n\n\t// intentionally lock the accessor first\n\tassertNilE(t, accessor.Lock())\n\n\t// try to intentionally renew with stale token\n\tvar renewalStart sync.WaitGroup\n\tvar renewalDone sync.WaitGroup\n\trenewalStart.Add(1)\n\trenewalDone.Add(1)\n\tgo func() {\n\t\trenewalStart.Done()\n\t\tassertNilE(t, sr.renewExpiredSessionToken(context.Background(), time.Hour, oldToken))\n\t\trenewalDone.Done()\n\t}()\n\n\t// wait for renewal to start and get blocked on lock\n\trenewalStart.Wait()\n\t// should be blocked and not be able to call renew session\n\tassertFalseF(t, renewSessionCalled)\n\n\t// rotate the token again so that the session token is considered stale\n\taccessor.SetTokens(\"new-token\", \"m\", 321)\n\n\t// unlock so that renew can happen\n\taccessor.Unlock()\n\trenewalDone.Wait()\n\n\t// renewal should be done, but FuncRenewSession should still not have been\n\t// called since we intentionally swapped the token while locked\n\tassertFalseF(t, renewSessionCalled)\n\n\t// wait for accessor defer unlock\n\tassertNilE(t, accessor.Lock())\n\tassertEqualF(t, accessor.lockCallCount, int32(3), \"Expected Lock() to be called thrice\")\n\tassertEqualF(t, accessor.unlockCallCount, int32(2), \"Expected Unlock() to be called twice\")\n}\n\nfunc TestUnitTokenAccessorRenewSessionContention(t *testing.T) {\n\taccessor := getSimpleTokenAccessor()\n\toldToken := \"test\"\n\taccessor.SetTokens(oldToken, \"master\", 123)\n\tvar counter int32 = 0\n\n\texpectedToken := \"new token\"\n\texpectedMaster := \"new master\"\n\texpectedSession := int64(321)\n\n\trenewSessionDummy := func(_ context.Context, sr *snowflakeRestful, _ time.Duration) error {\n\t\taccessor.SetTokens(expectedToken, expectedMaster, 
expectedSession)\n\t\tatomic.AddInt32(&counter, 1)\n\t\treturn nil\n\t}\n\n\tsr := &snowflakeRestful{\n\t\tFuncRenewSession: renewSessionDummy,\n\t\tTokenAccessor:    accessor,\n\t}\n\n\tvar renewalsStart sync.WaitGroup\n\tvar renewalsDone sync.WaitGroup\n\tvar renewalError error\n\tnumRoutines := 50\n\tfor range numRoutines {\n\t\trenewalsDone.Add(1)\n\t\trenewalsStart.Add(1)\n\t\tgo func() {\n\t\t\t// wait for all goroutines to have been created before proceeding to race against each other\n\t\t\trenewalsStart.Wait()\n\t\t\terr := sr.renewExpiredSessionToken(context.Background(), time.Hour, oldToken)\n\t\t\tif err != nil {\n\t\t\t\trenewalError = err\n\t\t\t}\n\t\t\trenewalsDone.Done()\n\t\t}()\n\t}\n\n\t// unlock all of the waiting goroutines simultaneously\n\trenewalsStart.Add(-numRoutines)\n\n\t// wait for all competing goroutines to finish calling renew expired session token\n\trenewalsDone.Wait()\n\n\tassertNilF(t, renewalError, \"failed to renew session\")\n\tnewToken, newMaster, newSession := accessor.GetTokens()\n\tassertEqualF(t, newToken, expectedToken, \"token does not match expected\")\n\tassertEqualF(t, newMaster, expectedMaster, \"master token does not match expected\")\n\tassertEqualF(t, newSession, expectedSession, \"session id does not match expected\")\n\t// only the first renewal will go through and FuncRenewSession should be called exactly once\n\tassertEqualF(t, counter, int32(1), \"renew expired session should be called exactly once\")\n}\n\nfunc TestUnitPostQueryHelperUsesToken(t *testing.T) {\n\taccessor := getSimpleTokenAccessor()\n\ttoken := \"token123\"\n\taccessor.SetTokens(token, \"\", 0)\n\n\tvar err error\n\tpostQueryTest := func(_ context.Context, _ *snowflakeRestful, _ *url.Values, headers map[string]string, _ []byte, _ time.Duration, _ UUID, _ *Config) 
(*execResponse, error) {\n\t\tassertEqualF(t, headers[headerAuthorizationKey], fmt.Sprintf(headerSnowflakeToken, token), \"authorization key doesn't match\")\n\t\tdd := &execResponseData{}\n\t\treturn &execResponse{\n\t\t\tData:    *dd,\n\t\t\tMessage: \"\",\n\t\t\tCode:    \"0\",\n\t\t\tSuccess: true,\n\t\t}, nil\n\t}\n\tsr := &snowflakeRestful{\n\t\tFuncPost:         postTestRenew,\n\t\tFuncPostQuery:    postQueryTest,\n\t\tFuncRenewSession: renewSessionTest,\n\t\tTokenAccessor:    accessor,\n\t}\n\t_, err = postRestfulQueryHelper(context.Background(), sr, &url.Values{}, make(map[string]string), []byte{0x12, 0x34}, 0, NewUUID(), &Config{})\n\tassertNilF(t, err)\n}\n\nfunc TestUnitPostQueryHelperRenewSession(t *testing.T) {\n\tvar err error\n\torigRequestID := NewUUID()\n\tpostQueryTest := func(_ context.Context, _ *snowflakeRestful, _ *url.Values, _ map[string]string, _ []byte, _ time.Duration, requestID UUID, _ *Config) (*execResponse, error) {\n\t\t// ensure the same requestID is used after the session token is renewed.\n\t\tassertEqualF(t, requestID, origRequestID, \"requestID doesn't match\")\n\t\tdd := &execResponseData{}\n\t\treturn &execResponse{\n\t\t\tData:    *dd,\n\t\t\tMessage: \"\",\n\t\t\tCode:    \"0\",\n\t\t\tSuccess: true,\n\t\t}, nil\n\t}\n\tsr := &snowflakeRestful{\n\t\tFuncPost:         postTestRenew,\n\t\tFuncPostQuery:    postQueryTest,\n\t\tFuncRenewSession: renewSessionTest,\n\t\tTokenAccessor:    getSimpleTokenAccessor(),\n\t}\n\n\t_, err = postRestfulQueryHelper(context.Background(), sr, &url.Values{}, make(map[string]string), []byte{0x12, 0x34}, 0, origRequestID, &Config{})\n\tassertNilF(t, err)\n\tsr.FuncRenewSession = renewSessionTestError\n\t_, err = postRestfulQueryHelper(context.Background(), sr, &url.Values{}, make(map[string]string), []byte{0x12, 0x34}, 0, origRequestID, &Config{})\n\tassertNotNilF(t, err, \"should have failed to renew session\")\n}\n\nfunc TestUnitRenewRestfulSession(t *testing.T) {\n\taccessor := getSimpleTokenAccessor()\n\toldToken, oldMasterToken, oldSessionID := \"oldtoken\", \"oldmaster\", int64(100)\n\tnewToken, newMasterToken, newSessionID := \"newtoken\", \"newmaster\", int64(200)\n\tpostTestSuccessWithNewTokens := func(_ context.Context, _ *snowflakeRestful, _ *url.URL, headers map[string]string, _ []byte, _ time.Duration, _ currentTimeProvider, _ *Config) (*http.Response, error) {\n\t\tassertEqualF(t, headers[headerAuthorizationKey], fmt.Sprintf(headerSnowflakeToken, oldMasterToken), \"authorization key doesn't match\")\n\t\ttr := &renewSessionResponse{\n\t\t\tData: renewSessionResponseMain{\n\t\t\t\tSessionToken: newToken,\n\t\t\t\tMasterToken:  newMasterToken,\n\t\t\t\tSessionID:    newSessionID,\n\t\t\t},\n\t\t\tMessage: \"\",\n\t\t\tSuccess: true,\n\t\t}\n\t\tba, err := json.Marshal(tr)\n\t\tassertNilF(t, err, \"failed to serialize token response\")\n\t\treturn &http.Response{\n\t\t\tStatusCode: http.StatusOK,\n\t\t\tBody:       &fakeResponseBody{body: ba},\n\t\t}, nil\n\t}\n\n\tsr := &snowflakeRestful{\n\t\tFuncPost:      postTestAfterRenew,\n\t\tTokenAccessor: accessor,\n\t}\n\terr := renewRestfulSession(context.Background(), sr, time.Second)\n\tassertNilF(t, err)\n\tsr.FuncPost = postTestError\n\terr = renewRestfulSession(context.Background(), sr, time.Second)\n\tassertNotNilF(t, err, \"should have failed to run post request after the renewal\")\n\tsr.FuncPost = postTestAppBadGatewayError\n\terr = renewRestfulSession(context.Background(), sr, time.Second)\n\tassertNotNilF(t, err, \"should have failed to run post request after the renewal\")\n\tsr.FuncPost = 
postTestSuccessButInvalidJSON\n\terr = renewRestfulSession(context.Background(), sr, time.Second)\n\tassertNotNilF(t, err, \"should have failed to run post request after the renewal\")\n\taccessor.SetTokens(oldToken, oldMasterToken, oldSessionID)\n\tsr.FuncPost = postTestSuccessWithNewTokens\n\terr = renewRestfulSession(context.Background(), sr, time.Second)\n\tassertNilF(t, err, \"should not have failed to run post request after the renewal\")\n\ttoken, masterToken, sessionID := accessor.GetTokens()\n\tassertEqualF(t, token, newToken, \"unexpected new token\")\n\tassertEqualF(t, masterToken, newMasterToken, \"unexpected new master token\")\n\tassertEqualF(t, sessionID, newSessionID, \"unexpected new session id\")\n}\n\nfunc TestUnitCloseSession(t *testing.T) {\n\tsr := &snowflakeRestful{\n\t\tFuncPost:      postTestAfterRenew,\n\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t}\n\terr := closeSession(context.Background(), sr, time.Second)\n\tassertNilF(t, err)\n\tsr.FuncPost = postTestError\n\terr = closeSession(context.Background(), sr, time.Second)\n\tassertNotNilF(t, err, \"should have failed to close session\")\n\tsr.FuncPost = postTestAppBadGatewayError\n\terr = closeSession(context.Background(), sr, time.Second)\n\tassertNotNilF(t, err, \"should have failed to close session\")\n\tsr.FuncPost = postTestSuccessButInvalidJSON\n\terr = closeSession(context.Background(), sr, time.Second)\n\tassertNotNilF(t, err, \"should have failed to close session\")\n}\n\nfunc TestUnitCancelQuery(t *testing.T) {\n\tsr := &snowflakeRestful{\n\t\tFuncPost:      postTestAfterRenew,\n\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t}\n\tctx := context.Background()\n\terr := cancelQuery(ctx, sr, getOrGenerateRequestIDFromContext(ctx), time.Second)\n\tassertNilF(t, err)\n\tsr.FuncPost = 
postTestError\n\terr = cancelQuery(ctx, sr, getOrGenerateRequestIDFromContext(ctx), time.Second)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed to cancel query\")\n\t}\n\tsr.FuncPost = postTestAppBadGatewayError\n\terr = cancelQuery(context.Background(), sr, getOrGenerateRequestIDFromContext(ctx), time.Second)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed to cancel query\")\n\t}\n\tsr.FuncPost = postTestSuccessButInvalidJSON\n\terr = cancelQuery(context.Background(), sr, getOrGenerateRequestIDFromContext(ctx), time.Second)\n\tif err == nil {\n\t\tt.Fatal(\"should have failed to cancel query\")\n\t}\n}\n\nfunc TestCancelRetry(t *testing.T) {\n\tsr := &snowflakeRestful{\n\t\tTokenAccessor:   getSimpleTokenAccessor(),\n\t\tFuncPost:        postTestQueryNotExecuting,\n\t\tFuncCancelQuery: cancelQuery,\n\t}\n\tctx := context.Background()\n\terr := cancelQuery(ctx, sr, getOrGenerateRequestIDFromContext(ctx), time.Second)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n}\n\nfunc TestPostRestfulQueryContextErrors(t *testing.T) {\n\tvar cancelCalled bool\n\tnewRestfulWithError := func(queryErr error) *snowflakeRestful {\n\t\tcancelCalled = false\n\t\treturn &snowflakeRestful{\n\t\t\tFuncPostQueryHelper: func(context.Context, *snowflakeRestful, *url.Values, map[string]string, []byte, time.Duration, UUID, *Config) (*execResponse, error) {\n\t\t\t\treturn nil, queryErr\n\t\t\t},\n\t\t\tFuncCancelQuery: func(context.Context, *snowflakeRestful, UUID, time.Duration) error {\n\t\t\t\tcancelCalled = true\n\t\t\t\treturn nil\n\t\t\t},\n\t\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t\t}\n\t}\n\trunPostRestfulQuery := func(sr *snowflakeRestful) (data *execResponse, err error) {\n\t\treturn postRestfulQuery(context.Background(), sr, &url.Values{}, nil, nil, 0, NewUUID(), nil)\n\t}\n\n\tt.Run(\"postRestfulQuery error does not trigger cancel\", func(t *testing.T) {\n\t\texpectedErr := fmt.Errorf(\"query error\")\n\t\tsr := newRestfulWithError(expectedErr)\n\t\t_, err := 
runPostRestfulQuery(sr)\n\t\tassertFalseE(t, cancelCalled)\n\t\tassertErrIsE(t, err, expectedErr)\n\t})\n\n\tt.Run(\"context.Canceled triggers cancel\", func(t *testing.T) {\n\t\tsr := newRestfulWithError(context.Canceled)\n\t\t_, err := runPostRestfulQuery(sr)\n\t\tassertTrueE(t, cancelCalled)\n\t\tassertErrIsE(t, err, context.Canceled)\n\t})\n\n\tt.Run(\"context.DeadlineExceeded triggers cancel\", func(t *testing.T) {\n\t\tsr := newRestfulWithError(context.DeadlineExceeded)\n\t\t_, err := runPostRestfulQuery(sr)\n\t\tassertTrueE(t, cancelCalled)\n\t\tassertErrIsE(t, err, context.DeadlineExceeded)\n\t})\n\n\tt.Run(\"cancel failure returns wrapped error\", func(t *testing.T) {\n\t\tfatalCancelErr := fmt.Errorf(\"fatal failure\")\n\t\tsr := newRestfulWithError(context.Canceled)\n\t\tsr.FuncCancelQuery = func(context.Context, *snowflakeRestful, UUID, time.Duration) error {\n\t\t\tcancelCalled = true\n\t\t\treturn fatalCancelErr\n\t\t}\n\t\t_, err := runPostRestfulQuery(sr)\n\t\tassertTrueE(t, cancelCalled)\n\t\tassertErrIsE(t, err, context.Canceled)\n\t\tassertErrIsE(t, err, fatalCancelErr)\n\t\tassertEqualE(t, \"failed to cancel query. 
cancelErr: fatal failure, queryErr: context canceled\", err.Error())\n\t})\n}\n\nfunc TestErrorReturnedFromLongRunningQuery(t *testing.T) {\n\tt.Run(\"e2e test\", func(t *testing.T) {\n\t\tt.Skip(\"long running test, uncomment to run manually, otherwise the test on mocks should be sufficient\")\n\t\tdb := openDB(t)\n\t\tctx, cancel := context.WithTimeout(context.Background(), 50*time.Second)\n\t\tdefer cancel()\n\t\t_, err := db.ExecContext(ctx, \"CALL SYSTEM$WAIT(55, 'SECONDS')\")\n\t\tassertNotNilF(t, err)\n\t\tassertErrIsE(t, err, context.DeadlineExceeded)\n\t})\n\n\tt.Run(\"mock test\", func(t *testing.T) {\n\t\twiremock.registerMappings(t,\n\t\t\tnewWiremockMapping(\"auth/password/successful_flow.json\"),\n\t\t\tnewWiremockMapping(\"query/long_running_query.json\"),\n\t\t\tnewWiremockMapping(\"query/query_by_id_timeout.json\"),\n\t\t)\n\t\tctx, cancel := context.WithTimeout(context.Background(), time.Second)\n\t\tdefer cancel()\n\t\tdb := wiremock.openDb(t)\n\t\t_, err := db.QueryContext(ctx, \"SELECT 1\")\n\t\tassertNotNilF(t, err)\n\t\tassertErrIsE(t, err, context.DeadlineExceeded)\n\t})\n}\n"
  },
  {
    "path": "result.go",
    "content": "package gosnowflake\n\nimport \"errors\"\n\n// QueryStatus denotes the status of a query.\ntype QueryStatus string\n\nconst (\n\t// QueryStatusInProgress denotes a query execution in progress\n\tQueryStatusInProgress QueryStatus = \"queryStatusInProgress\"\n\t// QueryStatusComplete denotes a completed query execution\n\tQueryStatusComplete QueryStatus = \"queryStatusComplete\"\n\t// QueryFailed denotes a failed query\n\tQueryFailed QueryStatus = \"queryFailed\"\n)\n\n// SnowflakeResult provides an API for methods exposed to the clients\ntype SnowflakeResult interface {\n\tGetQueryID() string\n\tGetStatus() QueryStatus\n}\n\ntype snowflakeResult struct {\n\taffectedRows int64\n\tinsertID     int64 // Snowflake doesn't support last insert id\n\tqueryID      string\n\tstatus       QueryStatus\n\terr          error\n\terrChannel   chan error\n}\n\nfunc (res *snowflakeResult) LastInsertId() (int64, error) {\n\tif err := res.waitForAsyncExecStatus(); err != nil {\n\t\treturn -1, err\n\t}\n\treturn res.insertID, nil\n}\n\nfunc (res *snowflakeResult) RowsAffected() (int64, error) {\n\tif err := res.waitForAsyncExecStatus(); err != nil {\n\t\treturn -1, err\n\t}\n\treturn res.affectedRows, nil\n}\n\nfunc (res *snowflakeResult) GetQueryID() string {\n\treturn res.queryID\n}\n\nfunc (res *snowflakeResult) GetStatus() QueryStatus {\n\treturn res.status\n}\n\nfunc (res *snowflakeResult) waitForAsyncExecStatus() error {\n\t// if async exec, block until execution is finished\n\tswitch res.status {\n\tcase QueryStatusInProgress:\n\t\terr := <-res.errChannel\n\t\tres.status = QueryStatusComplete\n\t\tif err != nil {\n\t\t\tres.status = QueryFailed\n\t\t\tres.err = err\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\tcase QueryFailed:\n\t\treturn res.err\n\tdefault:\n\t\treturn nil\n\t}\n}\n\ntype snowflakeResultNoRows struct {\n\tqueryID string\n}\n\nfunc (*snowflakeResultNoRows) LastInsertId() (int64, error) {\n\treturn 0, errors.New(\"no LastInsertId 
available\")\n}\n\nfunc (*snowflakeResultNoRows) RowsAffected() (int64, error) {\n\treturn 0, errors.New(\"no RowsAffected available\")\n}\n\nfunc (rnr *snowflakeResultNoRows) GetQueryID() string {\n\treturn rnr.queryID\n}\n"
  },
  {
    "path": "retry.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"math\"\n\t\"math/rand\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"slices\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n)\n\ntype waitAlgo struct {\n\tmutex  *sync.Mutex // required for *rand.Rand usage\n\trandom *rand.Rand\n\tbase   time.Duration // base wait time\n\tcap    time.Duration // maximum wait time\n}\n\nvar random *rand.Rand\nvar defaultWaitAlgo *waitAlgo\n\nvar authEndpoints = []string{\n\tloginRequestPath,\n\ttokenRequestPath,\n\tauthenticatorRequestPath,\n}\n\nvar clientErrorsStatusCodesEligibleForRetry = []int{\n\thttp.StatusTooManyRequests,\n\thttp.StatusRequestTimeout,\n}\n\nfunc init() {\n\trandom = rand.New(rand.NewSource(time.Now().UnixNano()))\n\t// sleep time before retrying starts from 1s and the max sleep time is 16s\n\tdefaultWaitAlgo = &waitAlgo{mutex: &sync.Mutex{}, random: random, base: 1 * time.Second, cap: 16 * time.Second}\n}\n\nconst (\n\t// requestGUIDKey is attached to every request against Snowflake\n\trequestGUIDKey string = \"request_guid\"\n\t// retryCountKey is attached to query-request from the second time\n\tretryCountKey string = \"retryCount\"\n\t// retryReasonKey contains last HTTP status or 0 if timeout\n\tretryReasonKey string = \"retryReason\"\n\t// clientStartTime contains a time when client started request (first request, not retries)\n\tclientStartTimeKey string = \"clientStartTime\"\n\t// requestIDKey is attached to all requests to Snowflake\n\trequestIDKey string = \"requestId\"\n)\n\n// This class takes in an url during construction and replaces the value of\n// request_guid every time replace() is called. 
If the URL does not contain\n// request_guid, it returns the original URL\ntype requestGUIDReplacer interface {\n\t// replace returns the URL with a new request GUID\n\treplace() *url.URL\n}\n\n// newRequestGUIDReplace makes a requestGUIDReplacer for the given URL\nfunc newRequestGUIDReplace(urlPtr *url.URL) requestGUIDReplacer {\n\tvalues, err := url.ParseQuery(urlPtr.RawQuery)\n\tif err != nil {\n\t\t// nop if invalid query parameters\n\t\treturn &transientReplace{urlPtr}\n\t}\n\tif len(values.Get(requestGUIDKey)) == 0 {\n\t\t// nop if no request_guid is included.\n\t\treturn &transientReplace{urlPtr}\n\t}\n\n\treturn &requestGUIDReplace{urlPtr, values}\n}\n\n// this replacer is a no-op and returns the URL unchanged\ntype transientReplace struct {\n\turlPtr *url.URL\n}\n\nfunc (replacer *transientReplace) replace() *url.URL {\n\treturn replacer.urlPtr\n}\n\n/*\nrequestGUIDReplace is a one-shot object that is created outside the retry loop and\ncalled with replace to change the request_guid value upon every retry\n*/\ntype requestGUIDReplace struct {\n\turlPtr    *url.URL\n\turlValues url.Values\n}\n\n/*\nreplace substitutes the value of requestGUIDKey in the URL with a newly\ngenerated UUID\n*/\nfunc (replacer *requestGUIDReplace) replace() *url.URL {\n\treplacer.urlValues.Del(requestGUIDKey)\n\treplacer.urlValues.Add(requestGUIDKey, NewUUID().String())\n\treplacer.urlPtr.RawQuery = replacer.urlValues.Encode()\n\treturn replacer.urlPtr\n}\n\ntype retryCountUpdater interface {\n\treplaceOrAdd(retry int) *url.URL\n}\n\ntype retryCountUpdate struct {\n\turlPtr    *url.URL\n\turlValues url.Values\n}\n\n// this updater is a no-op and returns the URL unchanged\ntype transientRetryCountUpdater struct {\n\turlPtr *url.URL\n}\n\nfunc (replaceOrAdder *transientRetryCountUpdater) replaceOrAdd(retry int) *url.URL {\n\treturn replaceOrAdder.urlPtr\n}\n\nfunc (replacer *retryCountUpdate) replaceOrAdd(retry int) *url.URL {\n\treplacer.urlValues.Del(retryCountKey)\n\treplacer.urlValues.Add(retryCountKey, 
strconv.Itoa(retry))\n\treplacer.urlPtr.RawQuery = replacer.urlValues.Encode()\n\treturn replacer.urlPtr\n}\n\nfunc newRetryCountUpdater(urlPtr *url.URL) retryCountUpdater {\n\tif !isQueryRequest(urlPtr) {\n\t\t// nop if not query-request\n\t\treturn &transientRetryCountUpdater{urlPtr}\n\t}\n\tvalues, err := url.ParseQuery(urlPtr.RawQuery)\n\tif err != nil {\n\t\t// nop if the URL is not valid\n\t\treturn &transientRetryCountUpdater{urlPtr}\n\t}\n\treturn &retryCountUpdate{urlPtr, values}\n}\n\ntype retryReasonUpdater interface {\n\treplaceOrAdd(reason int) *url.URL\n}\n\ntype retryReasonUpdate struct {\n\turl *url.URL\n}\n\nfunc (retryReasonUpdater *retryReasonUpdate) replaceOrAdd(reason int) *url.URL {\n\tquery := retryReasonUpdater.url.Query()\n\tquery.Del(retryReasonKey)\n\tquery.Add(retryReasonKey, strconv.Itoa(reason))\n\tretryReasonUpdater.url.RawQuery = query.Encode()\n\treturn retryReasonUpdater.url\n}\n\ntype transientRetryReasonUpdater struct {\n\turl *url.URL\n}\n\nfunc (retryReasonUpdater *transientRetryReasonUpdater) replaceOrAdd(_ int) *url.URL {\n\treturn retryReasonUpdater.url\n}\n\nfunc newRetryReasonUpdater(url *url.URL, cfg *Config) retryReasonUpdater {\n\t// not a query request\n\tif !isQueryRequest(url) {\n\t\treturn &transientRetryReasonUpdater{url}\n\t}\n\t// implicitly disabled retry reason\n\tif cfg != nil && cfg.IncludeRetryReason == ConfigBoolFalse {\n\t\treturn &transientRetryReasonUpdater{url}\n\t}\n\treturn &retryReasonUpdate{url}\n}\n\nfunc ensureClientStartTimeIsSet(url *url.URL, clientStartTime string) *url.URL {\n\tif !isQueryRequest(url) {\n\t\t// nop if not query-request\n\t\treturn url\n\t}\n\tquery := url.Query()\n\tif query.Has(clientStartTimeKey) {\n\t\treturn url\n\t}\n\tquery.Add(clientStartTimeKey, clientStartTime)\n\turl.RawQuery = query.Encode()\n\treturn url\n}\n\nfunc isQueryRequest(url *url.URL) bool {\n\treturn strings.HasPrefix(url.Path, queryRequestPath)\n}\n\n// jitter backoff in seconds\nfunc (w *waitAlgo) 
calculateWaitBeforeRetryForAuthRequest(attempt int, currWaitTimeDuration time.Duration) time.Duration {\n\tw.mutex.Lock()\n\tdefer w.mutex.Unlock()\n\tcurrWaitTimeInSeconds := currWaitTimeDuration.Seconds()\n\tjitterAmount := w.getJitter(currWaitTimeInSeconds)\n\tjitteredSleepTime := chooseRandomFromRange(currWaitTimeInSeconds+jitterAmount, math.Pow(2, float64(attempt))+jitterAmount)\n\treturn time.Duration(jitteredSleepTime * float64(time.Second))\n}\n\nfunc (w *waitAlgo) calculateWaitBeforeRetry(sleep time.Duration) time.Duration {\n\tw.mutex.Lock()\n\tdefer w.mutex.Unlock()\n\t// use decorrelated jitter in retry time\n\trandDuration := randMilliSecondDuration(w.base, sleep*3)\n\treturn durationMin(w.cap, randDuration)\n}\n\nfunc randMilliSecondDuration(base time.Duration, bound time.Duration) time.Duration {\n\tbaseNumber := int64(base / time.Millisecond)\n\tboundNumber := int64(bound / time.Millisecond)\n\trandomDuration := random.Int63n(boundNumber-baseNumber) + baseNumber\n\treturn time.Duration(randomDuration) * time.Millisecond\n}\n\nfunc (w *waitAlgo) getJitter(currWaitTime float64) float64 {\n\tmultiplicationFactor := chooseRandomFromRange(-1, 1)\n\tjitterAmount := 0.5 * currWaitTime * multiplicationFactor\n\treturn jitterAmount\n}\n\ntype requestFunc func(method, urlStr string, body io.Reader) (*http.Request, error)\n\ntype clientInterface interface {\n\tDo(req *http.Request) (*http.Response, error)\n}\n\ntype retryHTTP struct {\n\tctx                 context.Context\n\tclient              clientInterface\n\treq                 requestFunc\n\tmethod              string\n\tfullURL             *url.URL\n\theaders             map[string]string\n\tbodyCreator         bodyCreatorType\n\ttimeout             time.Duration\n\tmaxRetryCount       int\n\tcurrentTimeProvider currentTimeProvider\n\tcfg                 *Config\n}\n\nfunc newRetryHTTP(ctx context.Context,\n\tclient clientInterface,\n\treq requestFunc,\n\tfullURL *url.URL,\n\theaders 
map[string]string,\n\ttimeout time.Duration,\n\tmaxRetryCount int,\n\tcurrentTimeProvider currentTimeProvider,\n\tcfg *Config) *retryHTTP {\n\tinstance := retryHTTP{}\n\tinstance.ctx = ctx\n\tinstance.client = client\n\tinstance.req = req\n\tinstance.method = \"GET\"\n\tinstance.fullURL = fullURL\n\tinstance.headers = headers\n\tinstance.timeout = timeout\n\tinstance.maxRetryCount = maxRetryCount\n\tinstance.bodyCreator = emptyBodyCreator\n\tinstance.currentTimeProvider = currentTimeProvider\n\tinstance.cfg = cfg\n\treturn &instance\n}\n\nfunc (r *retryHTTP) doPost() *retryHTTP {\n\tr.method = \"POST\"\n\treturn r\n}\n\nfunc (r *retryHTTP) setBody(body []byte) *retryHTTP {\n\tr.bodyCreator = func() ([]byte, error) {\n\t\treturn body, nil\n\t}\n\treturn r\n}\n\nfunc (r *retryHTTP) setBodyCreator(bodyCreator bodyCreatorType) *retryHTTP {\n\tr.bodyCreator = bodyCreator\n\treturn r\n}\n\nfunc (r *retryHTTP) execute() (res *http.Response, err error) {\n\ttotalTimeout := r.timeout\n\tlogger.WithContext(r.ctx).Debugf(\"retryHTTP.totalTimeout: %v\", totalTimeout)\n\tretryCounter := 0\n\tsleepTime := time.Duration(time.Second)\n\tclientStartTime := strconv.FormatInt(r.currentTimeProvider.currentTime(), 10)\n\n\tvar requestGUIDReplacer requestGUIDReplacer\n\tvar retryCountUpdater retryCountUpdater\n\tvar retryReasonUpdater retryReasonUpdater\n\n\tfor {\n\t\ttimer := time.Now()\n\t\tlogger.WithContext(r.ctx).Debugf(\"retry count: %v\", retryCounter)\n\t\tbody, err := r.bodyCreator()\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treq, err := r.req(r.method, r.fullURL.String(), bytes.NewReader(body))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif req != nil {\n\t\t\t// req can be nil in tests\n\t\t\treq = req.WithContext(r.ctx)\n\t\t}\n\t\tfor k, v := range r.headers {\n\t\t\treq.Header.Set(k, v)\n\t\t}\n\t\tres, err = r.client.Do(req)\n\n\t\t// check if it can retry.\n\t\tretryable, err := isRetryableError(r.ctx, req, res, err)\n\t\tif !retryable 
{\n\t\t\treturn res, err\n\t\t}\n\t\tlogger.WithContext(r.ctx).Debugf(\"Request to %v - response received after %v.\", r.fullURL.Host, time.Since(timer).String())\n\n\t\tif err != nil {\n\t\t\tlogger.WithContext(r.ctx).Warnf(\n\t\t\t\t\"failed http connection. err: %v. retrying...\\n\", err)\n\t\t} else {\n\t\t\tlogger.WithContext(r.ctx).Tracef(\n\t\t\t\t\"failed http connection. HTTP Status: %v. retrying...\\n\", res.StatusCode)\n\t\t\tif closeErr := res.Body.Close(); closeErr != nil {\n\t\t\t\tlogger.Warnf(\"failed to close response body. err: %v\", closeErr)\n\t\t\t}\n\t\t}\n\t\t// uses exponential jitter backoff\n\t\tretryCounter++\n\t\tif isLoginRequest(req) {\n\t\t\tsleepTime = defaultWaitAlgo.calculateWaitBeforeRetryForAuthRequest(retryCounter, sleepTime)\n\t\t} else {\n\t\t\tsleepTime = defaultWaitAlgo.calculateWaitBeforeRetry(sleepTime)\n\t\t}\n\t\tif totalTimeout > 0 { // if any timeout is set\n\t\t\ttotalTimeout -= sleepTime\n\t\t}\n\t\tif (r.timeout > 0 && totalTimeout <= 0) || retryCounter > r.maxRetryCount {\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tif res != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"timeout after %s and %v attempts. HTTP Status: %v. Hanging?\", r.timeout, retryCounter, res.StatusCode)\n\t\t\t}\n\t\t\treturn nil, fmt.Errorf(\"timeout after %s and %v attempts. 
Hanging?\", r.timeout, retryCounter)\n\t\t}\n\t\tif requestGUIDReplacer == nil {\n\t\t\trequestGUIDReplacer = newRequestGUIDReplace(r.fullURL)\n\t\t}\n\t\tr.fullURL = requestGUIDReplacer.replace()\n\t\tif retryCountUpdater == nil {\n\t\t\tretryCountUpdater = newRetryCountUpdater(r.fullURL)\n\t\t}\n\t\tr.fullURL = retryCountUpdater.replaceOrAdd(retryCounter)\n\t\tif retryReasonUpdater == nil {\n\t\t\tretryReasonUpdater = newRetryReasonUpdater(r.fullURL, r.cfg)\n\t\t}\n\t\tretryReason := 0\n\t\tif res != nil {\n\t\t\tretryReason = res.StatusCode\n\t\t}\n\t\tr.fullURL = retryReasonUpdater.replaceOrAdd(retryReason)\n\t\tr.fullURL = ensureClientStartTimeIsSet(r.fullURL, clientStartTime)\n\t\tlogger.WithContext(r.ctx).Debugf(\"sleeping %v. to timeout: %v. retrying\", sleepTime, totalTimeout)\n\t\tlogger.WithContext(r.ctx).Debugf(\"retry count: %v, retry reason: %v\", retryCounter, retryReason)\n\n\t\tawait := time.NewTimer(sleepTime)\n\t\tselect {\n\t\tcase <-await.C:\n\t\t\t// retry the request\n\t\tcase <-r.ctx.Done():\n\t\t\tawait.Stop()\n\t\t\treturn res, r.ctx.Err()\n\t\t}\n\t}\n}\n\nfunc isRetryableError(ctx context.Context, req *http.Request, res *http.Response, err error) (bool, error) {\n\tif ctx.Err() != nil {\n\t\treturn false, ctx.Err()\n\t}\n\tif err != nil && res == nil { // Failed http connection. Most probably client timeout.\n\t\treturn true, err\n\t}\n\tif res == nil || req == nil {\n\t\treturn false, err\n\t}\n\treturn isRetryableStatus(res.StatusCode), err\n}\n\nfunc isRetryableStatus(statusCode int) bool {\n\treturn (statusCode >= 500 && statusCode < 600) || slices.Contains(clientErrorsStatusCodesEligibleForRetry, statusCode)\n}\n\nfunc isLoginRequest(req *http.Request) bool {\n\treturn slices.Contains(authEndpoints, req.URL.Path)\n}\n"
  },
  {
    "path": "retry_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"database/sql\"\n\t\"fmt\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"strconv\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc fakeRequestFunc(_, _ string, _ io.Reader) (*http.Request, error) {\n\treturn nil, nil\n}\n\nfunc emptyRequest(method string, urlStr string, body io.Reader) (*http.Request, error) {\n\treturn http.NewRequest(method, urlStr, body)\n}\n\ntype fakeHTTPError struct {\n\terr     string\n\ttimeout bool\n}\n\nfunc (e *fakeHTTPError) Error() string   { return e.err }\nfunc (e *fakeHTTPError) Timeout() bool   { return e.timeout }\nfunc (e *fakeHTTPError) Temporary() bool { return true }\n\ntype fakeResponseBody struct {\n\tbody []byte\n\tcnt  int\n}\n\nfunc (b *fakeResponseBody) Read(p []byte) (n int, err error) {\n\tif b.cnt == 0 {\n\t\tcopy(p, b.body)\n\t\tb.cnt = 1\n\t\treturn len(b.body), nil\n\t}\n\tb.cnt = 0\n\treturn 0, io.EOF\n}\n\nfunc (b *fakeResponseBody) Close() error {\n\treturn nil\n}\n\ntype fakeHTTPClient struct {\n\tt                   *testing.T                // for assertions\n\tcnt                 int                       // number of retry\n\tsuccess             bool                      // return success after retry in cnt times\n\ttimeout             bool                      // timeout\n\tbody                []byte                    // return body\n\treqBody             []byte                    // last request body\n\tstatusCode          int                       // status code\n\tretryNumber         int                       // consecutive number of  retries\n\texpectedQueryParams map[int]map[string]string // expected query params per each retry (0-based)\n}\n\nfunc (c *fakeHTTPClient) Do(req *http.Request) (*http.Response, error) {\n\tdefer func() {\n\t\tc.retryNumber++\n\t}()\n\tif req != nil {\n\t\tbuf := new(bytes.Buffer)\n\t\t_, err := 
buf.ReadFrom(req.Body)\n\t\tassertNilF(c.t, err)\n\t\tc.reqBody = buf.Bytes()\n\t}\n\n\tif len(c.expectedQueryParams) > 0 {\n\t\texpectedQueryParams, ok := c.expectedQueryParams[c.retryNumber]\n\t\tif ok {\n\t\t\tfor queryParamName, expectedValue := range expectedQueryParams {\n\t\t\t\tactualValue := req.URL.Query().Get(queryParamName)\n\t\t\t\tif actualValue != expectedValue {\n\t\t\t\t\tc.t.Fatalf(\"expected query param %v to be %v, got %v\", queryParamName, expectedValue, actualValue)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tc.cnt--\n\tif c.cnt < 0 {\n\t\tc.cnt = 0\n\t}\n\tlogger.Infof(\"fakeHTTPClient.cnt: %v\", c.cnt)\n\n\tvar retcode int\n\tif c.success && c.cnt == 0 {\n\t\tretcode = 200\n\t} else {\n\t\tif c.timeout {\n\t\t\t// simulate timeout\n\t\t\ttime.Sleep(time.Second * 1)\n\t\t\treturn nil, &fakeHTTPError{\n\t\t\t\terr:     \"Whatever reason (Client.Timeout exceeded while awaiting headers)\",\n\t\t\t\ttimeout: true,\n\t\t\t}\n\t\t}\n\t\tif c.statusCode != 0 {\n\t\t\tretcode = c.statusCode\n\t\t} else {\n\t\t\tretcode = 0\n\t\t}\n\t}\n\n\tret := &http.Response{\n\t\tStatusCode: retcode,\n\t\tBody:       &fakeResponseBody{body: c.body},\n\t}\n\treturn ret, nil\n}\n\nfunc TestRequestGUID(t *testing.T) {\n\tvar ridReplacer requestGUIDReplacer\n\tvar testURL *url.URL\n\tvar actualURL *url.URL\n\tretryTime := 4\n\n\t// empty url\n\ttestURL = &url.URL{}\n\tridReplacer = newRequestGUIDReplace(testURL)\n\tfor range retryTime {\n\t\tactualURL = ridReplacer.replace()\n\t\tif actualURL.String() != \"\" {\n\t\t\tt.Fatalf(\"empty url not replaced by an empty one, got %s\", actualURL)\n\t\t}\n\t}\n\n\t// url with no retry id\n\ttestURL = &url.URL{\n\t\tPath: \"/\" + requestIDKey + \"=123-1923-9?param2=value\",\n\t}\n\tridReplacer = newRequestGUIDReplace(testURL)\n\tfor range retryTime {\n\t\tactualURL = ridReplacer.replace()\n\n\t\tif actualURL != testURL {\n\t\t\tt.Fatalf(\"url without retry id not replaced by the original one, got %s\", actualURL)\n\t\t}\n\t}\n\n\t// url 
with retry id\n\t// With both prefix and suffix\n\tprefix := \"/\" + requestIDKey + \"=123-1923-9?\" + requestGUIDKey + \"=\"\n\tsuffix := \"?param2=value\"\n\ttestURL = &url.URL{\n\t\tPath: prefix + \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\" + suffix,\n\t}\n\tridReplacer = newRequestGUIDReplace(testURL)\n\tfor range retryTime {\n\t\tactualURL = ridReplacer.replace()\n\t\tif (!strings.HasPrefix(actualURL.Path, prefix)) ||\n\t\t\t(!strings.HasSuffix(actualURL.Path, suffix)) ||\n\t\t\tlen(testURL.Path) != len(actualURL.Path) {\n\t\t\tt.Fatalf(\"Retry url not replaced correctly: \\n origin: %s \\n result: %s\", testURL, actualURL)\n\t\t}\n\t}\n\n\t// With no suffix\n\tprefix = \"/\" + requestIDKey + \"=123-1923-9?\" + requestGUIDKey + \"=\"\n\tsuffix = \"\"\n\ttestURL = &url.URL{\n\t\tPath: prefix + \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\" + suffix,\n\t}\n\tridReplacer = newRequestGUIDReplace(testURL)\n\tfor range retryTime {\n\t\tactualURL = ridReplacer.replace()\n\t\tif (!strings.HasPrefix(actualURL.Path, prefix)) ||\n\t\t\t(!strings.HasSuffix(actualURL.Path, suffix)) ||\n\t\t\tlen(testURL.Path) != len(actualURL.Path) {\n\t\t\tt.Fatalf(\"Retry url not replaced correctly: \\n origin: %s \\n result: %s\", testURL, actualURL)\n\t\t}\n\t}\n\n\t// With no prefix\n\tprefix = requestGUIDKey + \"=\"\n\tsuffix = \"?param2=value\"\n\ttestURL = &url.URL{\n\t\tPath: prefix + \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\" + suffix,\n\t}\n\tridReplacer = newRequestGUIDReplace(testURL)\n\tfor range retryTime {\n\t\tactualURL = ridReplacer.replace()\n\t\tif (!strings.HasPrefix(actualURL.Path, prefix)) ||\n\t\t\t(!strings.HasSuffix(actualURL.Path, suffix)) ||\n\t\t\tlen(testURL.Path) != len(actualURL.Path) {\n\t\t\tt.Fatalf(\"Retry url not replaced correctly: \\n origin: %s \\n result: %s\", testURL, actualURL)\n\t\t}\n\t}\n}\n\nfunc TestRetryQuerySuccess(t *testing.T) {\n\tlogger.Info(\"Retry N times and Success\")\n\tclient := &fakeHTTPClient{\n\t\tcnt:        3,\n\t\tsuccess:    
true,\n\t\tstatusCode: 429,\n\t\tt:          t,\n\t\texpectedQueryParams: map[int]map[string]string{\n\t\t\t0: {\n\t\t\t\t\"retryCount\":      \"\",\n\t\t\t\t\"retryReason\":     \"\",\n\t\t\t\t\"clientStartTime\": \"\",\n\t\t\t},\n\t\t\t1: {\n\t\t\t\t\"retryCount\":      \"1\",\n\t\t\t\t\"retryReason\":     \"429\",\n\t\t\t\t\"clientStartTime\": \"123456\",\n\t\t\t},\n\t\t\t2: {\n\t\t\t\t\"retryCount\":      \"2\",\n\t\t\t\t\"retryReason\":     \"429\",\n\t\t\t\t\"clientStartTime\": \"123456\",\n\t\t\t},\n\t\t},\n\t}\n\turlPtr, err := url.Parse(\"https://fakeaccountretrysuccess.snowflakecomputing.com:443/queries/v1/query-request?\" + requestIDKey + \"=testid\")\n\tassertNilF(t, err, \"failed to parse the test URL\")\n\t_, err = newRetryHTTP(context.Background(),\n\t\tclient,\n\t\temptyRequest, urlPtr, make(map[string]string), 60*time.Second, 3, constTimeProvider(123456), &Config{IncludeRetryReason: ConfigBoolTrue}).doPost().setBody([]byte{0}).execute()\n\tassertNilF(t, err, \"failed to run retry\")\n\tvar values url.Values\n\tvalues, err = url.ParseQuery(urlPtr.RawQuery)\n\tassertNilF(t, err, \"failed to parse the test URL\")\n\tretry, err := strconv.Atoi(values.Get(retryCountKey))\n\tif err != nil {\n\t\tt.Fatalf(\"failed to get retry counter: %v\", err)\n\t}\n\tif retry < 2 {\n\t\tt.Fatalf(\"not enough retry counter: %v\", retry)\n\t}\n}\n\nfunc TestRetryQuerySuccessWithRetryReasonDisabled(t *testing.T) {\n\tlogger.Info(\"Retry N times and Success\")\n\tclient := &fakeHTTPClient{\n\t\tcnt:        3,\n\t\tsuccess:    true,\n\t\tstatusCode: 429,\n\t\tt:          t,\n\t\texpectedQueryParams: map[int]map[string]string{\n\t\t\t0: {\n\t\t\t\t\"retryCount\":      \"\",\n\t\t\t\t\"retryReason\":     \"\",\n\t\t\t\t\"clientStartTime\": \"\",\n\t\t\t},\n\t\t\t1: {\n\t\t\t\t\"retryCount\":      \"1\",\n\t\t\t\t\"retryReason\":     \"\",\n\t\t\t\t\"clientStartTime\": \"123456\",\n\t\t\t},\n\t\t\t2: {\n\t\t\t\t\"retryCount\":      \"2\",\n\t\t\t\t\"retryReason\":     
\"\",\n\t\t\t\t\"clientStartTime\": \"123456\",\n\t\t\t},\n\t\t},\n\t}\n\turlPtr, err := url.Parse(\"https://fakeaccountretrysuccess.snowflakecomputing.com:443/queries/v1/query-request?\" + requestIDKey + \"=testid\")\n\tassertNilF(t, err, \"failed to parse the test URL\")\n\t_, err = newRetryHTTP(context.Background(),\n\t\tclient,\n\t\temptyRequest, urlPtr, make(map[string]string), 60*time.Second, 3, constTimeProvider(123456), &Config{IncludeRetryReason: ConfigBoolFalse}).doPost().setBody([]byte{0}).execute()\n\tassertNilF(t, err, \"failed to run retry\")\n\tvar values url.Values\n\tvalues, err = url.ParseQuery(urlPtr.RawQuery)\n\tassertNilF(t, err, \"failed to parse the test URL\")\n\tretry, err := strconv.Atoi(values.Get(retryCountKey))\n\tif err != nil {\n\t\tt.Fatalf(\"failed to get retry counter: %v\", err)\n\t}\n\tif retry < 2 {\n\t\tt.Fatalf(\"not enough retry counter: %v\", retry)\n\t}\n}\n\nfunc TestRetryQuerySuccessWithTimeout(t *testing.T) {\n\tlogger.Info(\"Retry N times and Success\")\n\tclient := &fakeHTTPClient{\n\t\tcnt:     3,\n\t\tsuccess: true,\n\t\ttimeout: true,\n\t\tt:       t,\n\t\texpectedQueryParams: map[int]map[string]string{\n\t\t\t0: {\n\t\t\t\t\"retryCount\":  \"\",\n\t\t\t\t\"retryReason\": \"\",\n\t\t\t},\n\t\t\t1: {\n\t\t\t\t\"retryCount\":  \"1\",\n\t\t\t\t\"retryReason\": \"0\",\n\t\t\t},\n\t\t\t2: {\n\t\t\t\t\"retryCount\":  \"2\",\n\t\t\t\t\"retryReason\": \"0\",\n\t\t\t},\n\t\t},\n\t}\n\turlPtr, err := url.Parse(\"https://fakeaccountretrysuccess.snowflakecomputing.com:443/queries/v1/query-request?\" + requestIDKey + \"=testid\")\n\tassertNilF(t, err, \"failed to parse the test URL\")\n\t_, err = newRetryHTTP(context.Background(),\n\t\tclient,\n\t\temptyRequest, urlPtr, make(map[string]string), 60*time.Second, 3, constTimeProvider(123456), nil).doPost().setBody([]byte{0}).execute()\n\tassertNilF(t, err, \"failed to run retry\")\n\tvar values url.Values\n\tvalues, err = url.ParseQuery(urlPtr.RawQuery)\n\tassertNilF(t, err, 
\"failed to parse the test URL\")\n\tretry, err := strconv.Atoi(values.Get(retryCountKey))\n\tif err != nil {\n\t\tt.Fatalf(\"failed to get retry counter: %v\", err)\n\t}\n\tif retry < 2 {\n\t\tt.Fatalf(\"not enough retry counter: %v\", retry)\n\t}\n}\n\nfunc TestRetryQueryFailWithTimeout(t *testing.T) {\n\tlogger.Info(\"Retry N times until there is a timeout and Fail\")\n\tclient := &fakeHTTPClient{\n\t\tstatusCode: http.StatusTooManyRequests,\n\t\tsuccess:    false,\n\t\tt:          t,\n\t}\n\turlPtr, err := url.Parse(\"https://fakeaccountretryfail.snowflakecomputing.com:443/queries/v1/query-request?\" + requestIDKey)\n\tassertNilF(t, err, \"failed to parse the test URL\")\n\t_, err = newRetryHTTP(context.Background(),\n\t\tclient,\n\t\temptyRequest, urlPtr, make(map[string]string), 20*time.Second, 100, defaultTimeProvider, nil).doPost().setBody([]byte{0}).execute()\n\tassertNotNilF(t, err, \"should fail to run retry\")\n\tvar values url.Values\n\tvalues, err = url.ParseQuery(urlPtr.RawQuery)\n\tassertNilF(t, err, fmt.Sprintf(\"failed to parse the URL: %v\", err))\n\tretry, err := strconv.Atoi(values.Get(retryCountKey))\n\tassertNilF(t, err, fmt.Sprintf(\"failed to get retry counter: %v\", err))\n\tif retry < 2 {\n\t\tt.Fatalf(\"not enough retries: %v\", retry)\n\t}\n}\n\nfunc TestRetryQueryFailWithMaxRetryCount(t *testing.T) {\n\ttcs := []struct {\n\t\tname    string\n\t\ttimeout time.Duration\n\t}{\n\t\t{\n\t\t\tname:    \"with timeout\",\n\t\t\ttimeout: 15 * time.Hour,\n\t\t},\n\t\t{\n\t\t\tname:    \"without timeout\",\n\t\t\ttimeout: 0,\n\t\t},\n\t}\n\n\tfor _, tc := range tcs {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tmaxRetryCount := 3\n\t\t\tlogger.Info(\"Retry 3 times until retry reaches MaxRetryCount and Fail\")\n\t\t\tclient := &fakeHTTPClient{\n\t\t\t\tstatusCode: http.StatusTooManyRequests,\n\t\t\t\tsuccess:    false,\n\t\t\t\tt:          t,\n\t\t\t}\n\t\t\turlPtr, err := 
url.Parse(\"https://fakeaccountretryfail.snowflakecomputing.com:443/queries/v1/query-request?\" + requestIDKey)\n\t\t\tassertNilF(t, err, \"failed to parse the test URL\")\n\t\t\t_, err = newRetryHTTP(context.Background(),\n\t\t\t\tclient,\n\t\t\t\temptyRequest, urlPtr, make(map[string]string), tc.timeout, maxRetryCount, defaultTimeProvider, nil).doPost().setBody([]byte{0}).execute()\n\t\t\tassertNotNilF(t, err, \"should fail to run retry\")\n\t\t\tvar values url.Values\n\t\t\tvalues, err = url.ParseQuery(urlPtr.RawQuery)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatalf(\"failed to parse the URL: %v\", err)\n\t\t\t}\n\t\t\tretryCount, err := strconv.Atoi(values.Get(retryCountKey))\n\t\t\tif err != nil {\n\t\t\t\tt.Fatalf(\"failed to get retry counter: %v\", err)\n\t\t\t}\n\t\t\tif retryCount < 3 {\n\t\t\t\tt.Fatalf(\"not enough retries: %v; expected %v\", retryCount, maxRetryCount)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestRetryLoginRequest(t *testing.T) {\n\tlogger.Info(\"Retry N times for timeouts and Success\")\n\tclient := &fakeHTTPClient{\n\t\tcnt:     3,\n\t\tsuccess: true,\n\t\ttimeout: true,\n\t\tt:       t,\n\t\texpectedQueryParams: map[int]map[string]string{\n\t\t\t0: {\n\t\t\t\t\"retryCount\":  \"\",\n\t\t\t\t\"retryReason\": \"\",\n\t\t\t},\n\t\t\t1: {\n\t\t\t\t\"retryCount\":  \"\",\n\t\t\t\t\"retryReason\": \"\",\n\t\t\t},\n\t\t\t2: {\n\t\t\t\t\"retryCount\":  \"\",\n\t\t\t\t\"retryReason\": \"\",\n\t\t\t},\n\t\t},\n\t}\n\turlPtr, err := url.Parse(\"https://fakeaccountretrylogin.snowflakecomputing.com:443/login-request?request_id=testid\")\n\tassertNilF(t, err, \"failed to parse the test URL\")\n\t_, err = newRetryHTTP(context.Background(),\n\t\tclient,\n\t\temptyRequest, urlPtr, make(map[string]string), 60*time.Second, 3, defaultTimeProvider, nil).doPost().setBody([]byte{0}).execute()\n\tassertNilF(t, err, \"failed to run retry\")\n\tvar values url.Values\n\tvalues, err = url.ParseQuery(urlPtr.RawQuery)\n\tassertNilF(t, err, \"failed to parse the test 
URL\")\n\tassertEqualF(t, values.Get(retryCountKey), \"\", \"no retry counter should be attached\")\n\tlogger.Info(\"Retry N times for timeouts and Fail\")\n\tclient = &fakeHTTPClient{\n\t\tsuccess: false,\n\t\ttimeout: true,\n\t\tt:       t,\n\t}\n\t_, err = newRetryHTTP(context.Background(),\n\t\tclient,\n\t\temptyRequest, urlPtr, make(map[string]string), 5*time.Second, 3, defaultTimeProvider, nil).doPost().setBody([]byte{0}).execute()\n\tassertNotNilF(t, err, \"should fail to run retry\")\n\tvalues, err = url.ParseQuery(urlPtr.RawQuery)\n\tassertNilF(t, err, \"failed to parse the URL\")\n\tassertEqualF(t, values.Get(retryCountKey), \"\", \"no retry counter should be attached\")\n}\n\nfunc TestRetryAuthLoginRequest(t *testing.T) {\n\tlogger.Info(\"Retry N times always with newer body\")\n\tclient := &fakeHTTPClient{\n\t\tcnt:     3,\n\t\tsuccess: true,\n\t\ttimeout: true,\n\t\tt:       t,\n\t}\n\turlPtr, err := url.Parse(\"https://fakeaccountretrylogin.snowflakecomputing.com:443/login-request?request_id=testid\")\n\tassertNilF(t, err, \"failed to parse the test URL\")\n\texecID := 0\n\tbodyCreator := func() ([]byte, error) {\n\t\texecID++\n\t\treturn fmt.Appendf(nil, \"execID: %d\", execID), nil\n\t}\n\t_, err = newRetryHTTP(context.Background(),\n\t\tclient,\n\t\thttp.NewRequest, urlPtr, make(map[string]string), 60*time.Second, 3, defaultTimeProvider, nil).doPost().setBodyCreator(bodyCreator).execute()\n\tassertNilF(t, err, \"failed to run retry\")\n\tassertEqualF(t, string(client.reqBody), \"execID: 3\", \"body should be updated on each request\")\n}\n\nfunc TestLoginRetry429(t *testing.T) {\n\tclient := &fakeHTTPClient{\n\t\tcnt:        3,\n\t\tsuccess:    true,\n\t\tstatusCode: http.StatusTooManyRequests,\n\t\tt:          t,\n\t}\n\turlPtr, err := 
url.Parse(\"https://fakeaccountretrylogin.snowflakecomputing.com:443/login-request?request_id=testid\")\n\tassertNilF(t, err, \"failed to parse the test URL\")\n\n\t_, err = newRetryHTTP(context.Background(),\n\t\tclient,\n\t\temptyRequest, urlPtr, make(map[string]string), 60*time.Second, 3, defaultTimeProvider, nil).doPost().setBody([]byte{0}).execute() // enable doRaise4XXX\n\tassertNilF(t, err, \"failed to run retry\")\n\n\tvar values url.Values\n\tvalues, err = url.ParseQuery(urlPtr.RawQuery)\n\tassertNilF(t, err, \"failed to parse the URL\")\n\tassertEqualF(t, values.Get(retryCountKey), \"\", \"no retry counter should be attached\")\n}\n\nfunc TestIsRetryable(t *testing.T) {\n\tdeadLineCtx, cancel := context.WithTimeout(context.Background(), 1*time.Nanosecond)\n\tdefer cancel()\n\ttime.Sleep(2 * time.Nanosecond)\n\n\ttcs := []struct {\n\t\tctx      context.Context\n\t\treq      *http.Request\n\t\tres      *http.Response\n\t\terr      error\n\t\texpected bool\n\t}{\n\t\t{\n\t\t\tctx:      context.Background(),\n\t\t\treq:      nil,\n\t\t\tres:      nil,\n\t\t\terr:      nil,\n\t\t\texpected: false,\n\t\t},\n\t\t{\n\t\t\tctx:      context.Background(),\n\t\t\treq:      nil,\n\t\t\tres:      &http.Response{StatusCode: http.StatusBadRequest},\n\t\t\terr:      nil,\n\t\t\texpected: false,\n\t\t},\n\t\t{\n\t\t\tctx:      context.Background(),\n\t\t\treq:      &http.Request{URL: &url.URL{Path: loginRequestPath}},\n\t\t\tres:      nil,\n\t\t\terr:      nil,\n\t\t\texpected: false,\n\t\t},\n\t\t{\n\t\t\tctx:      context.Background(),\n\t\t\treq:      &http.Request{URL: &url.URL{Path: loginRequestPath}},\n\t\t\tres:      &http.Response{StatusCode: http.StatusNotFound},\n\t\t\texpected: false,\n\t\t},\n\t\t{\n\t\t\tctx:      context.Background(),\n\t\t\treq:      &http.Request{URL: &url.URL{Path: loginRequestPath}},\n\t\t\tres:      nil,\n\t\t\terr:      &url.Error{Err: context.DeadlineExceeded},\n\t\t\texpected: 
true,\n\t\t},\n\t\t{\n\t\t\tctx:      context.Background(),\n\t\t\treq:      &http.Request{URL: &url.URL{Path: loginRequestPath}},\n\t\t\tres:      nil,\n\t\t\terr:      errors.ErrUnknownError(),\n\t\t\texpected: true,\n\t\t},\n\t\t{\n\t\t\tctx:      context.Background(),\n\t\t\treq:      &http.Request{URL: &url.URL{Path: loginRequestPath}},\n\t\t\tres:      &http.Response{StatusCode: http.StatusTooManyRequests},\n\t\t\terr:      nil,\n\t\t\texpected: true,\n\t\t},\n\t\t{\n\t\t\tctx:      deadLineCtx,\n\t\t\treq:      &http.Request{URL: &url.URL{Path: loginRequestPath}},\n\t\t\tres:      nil,\n\t\t\terr:      &url.Error{Err: context.DeadlineExceeded},\n\t\t\texpected: false,\n\t\t},\n\t\t{\n\t\t\tctx:      deadLineCtx,\n\t\t\treq:      &http.Request{URL: &url.URL{Path: queryRequestPath}},\n\t\t\tres:      nil,\n\t\t\terr:      &url.Error{Err: context.DeadlineExceeded},\n\t\t\texpected: false,\n\t\t},\n\t}\n\n\tfor _, tc := range tcs {\n\t\tt.Run(fmt.Sprintf(\"req %v, resp %v\", tc.req, tc.res), func(t *testing.T) {\n\t\t\tresult, _ := isRetryableError(tc.ctx, tc.req, tc.res, tc.err)\n\t\t\tassertEqualF(t, result, tc.expected, fmt.Sprintf(\"request: %v, response: %v\", tc.req, tc.res))\n\t\t})\n\t}\n}\n\nfunc TestCalculateRetryWait(t *testing.T) {\n\t// test for randomly selected attempt and currWaitTime values\n\t// minSleepTime, maxSleepTime are limit values\n\ttcs := []struct {\n\t\tattempt      int\n\t\tcurrWaitTime float64\n\t\tminSleepTime float64\n\t\tmaxSleepTime float64\n\t}{\n\t\t{\n\t\t\tattempt:      1,\n\t\t\tcurrWaitTime: 3.346609,\n\t\t\tminSleepTime: 0.326695,\n\t\t\tmaxSleepTime: 5.019914,\n\t\t},\n\t\t{\n\t\t\tattempt:      2,\n\t\t\tcurrWaitTime: 4.260357,\n\t\t\tminSleepTime: 1.869821,\n\t\t\tmaxSleepTime: 6.390536,\n\t\t},\n\t\t{\n\t\t\tattempt:      3,\n\t\t\tcurrWaitTime: 7.857728,\n\t\t\tminSleepTime: 3.928864,\n\t\t\tmaxSleepTime: 11.928864,\n\t\t},\n\t\t{\n\t\t\tattempt:      
4,\n\t\t\tcurrWaitTime: 7.249255,\n\t\t\tminSleepTime: 3.624628,\n\t\t\tmaxSleepTime: 19.624628,\n\t\t},\n\t\t{\n\t\t\tattempt:      5,\n\t\t\tcurrWaitTime: 23.598257,\n\t\t\tminSleepTime: 11.799129,\n\t\t\tmaxSleepTime: 43.799129,\n\t\t},\n\t\t{\n\t\t\tattempt:      8,\n\t\t\tcurrWaitTime: 27.088613,\n\t\t\tminSleepTime: 13.544306,\n\t\t\tmaxSleepTime: 269.544306,\n\t\t},\n\t\t{\n\t\t\tattempt:      10,\n\t\t\tcurrWaitTime: 30.879329,\n\t\t\tminSleepTime: 15.439664,\n\t\t\tmaxSleepTime: 1039.439664,\n\t\t},\n\t\t{\n\t\t\tattempt:      12,\n\t\t\tcurrWaitTime: 39.919798,\n\t\t\tminSleepTime: 19.959899,\n\t\t\tmaxSleepTime: 4115.959899,\n\t\t},\n\t\t{\n\t\t\tattempt:      15,\n\t\t\tcurrWaitTime: 33.750758,\n\t\t\tminSleepTime: 16.875379,\n\t\t\tmaxSleepTime: 32784.875379,\n\t\t},\n\t\t{\n\t\t\tattempt:      20,\n\t\t\tcurrWaitTime: 32.357793,\n\t\t\tminSleepTime: 16.178897,\n\t\t\tmaxSleepTime: 1048592.178897,\n\t\t},\n\t}\n\n\tfor _, tc := range tcs {\n\t\tt.Run(fmt.Sprintf(\"attempt: %v\", tc.attempt), func(t *testing.T) {\n\t\t\tresult := defaultWaitAlgo.calculateWaitBeforeRetryForAuthRequest(tc.attempt, time.Duration(tc.currWaitTime*float64(time.Second)))\n\t\t\tassertBetweenE(t, result.Seconds(), tc.minSleepTime, tc.maxSleepTime)\n\t\t})\n\t}\n}\n\nfunc TestCalculateRetryWaitForNonAuthRequests(t *testing.T) {\n\t// test for randomly selected currWaitTime values\n\t// maxSleepTime is the limit value\n\ttcs := []struct {\n\t\tcurrWaitTime float64\n\t\tmaxSleepTime float64\n\t}{\n\t\t{\n\t\t\tcurrWaitTime: 3.346609,\n\t\t\tmaxSleepTime: 10.039827,\n\t\t},\n\t\t{\n\t\t\tcurrWaitTime: 4.260357,\n\t\t\tmaxSleepTime: 12.781071,\n\t\t},\n\t\t{\n\t\t\tcurrWaitTime: 5.154231,\n\t\t\tmaxSleepTime: 15.462693,\n\t\t},\n\t\t{\n\t\t\tcurrWaitTime: 7.249255,\n\t\t\tmaxSleepTime: 16,\n\t\t},\n\t\t{\n\t\t\tcurrWaitTime: 23.598257,\n\t\t\tmaxSleepTime: 16,\n\t\t},\n\t}\n\n\tfor _, tc := range tcs {\n\t\tdefaultMinSleepTime := 1\n\t\tt.Run(fmt.Sprintf(\"currWaitTime: %v\", 
tc.currWaitTime), func(t *testing.T) {\n\t\t\tresult := defaultWaitAlgo.calculateWaitBeforeRetry(time.Duration(tc.currWaitTime * float64(time.Second)))\n\t\t\tassertBetweenInclusiveE(t, result.Seconds(), float64(defaultMinSleepTime), tc.maxSleepTime)\n\t\t})\n\t}\n}\n\nfunc TestRedirectRetry(t *testing.T) {\n\twiremock.registerMappings(t, newWiremockMapping(\"retry/redirection_retry_workflow.json\"))\n\tcfg := wiremock.connectionConfig()\n\tcfg.ClientTimeout = 3 * time.Second\n\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\tdb := sql.OpenDB(connector)\n\trunSmokeQuery(t, db)\n}\n"
  },
  {
    "path": "rows.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql/driver\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"io\"\n\t\"reflect\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/apache/arrow-go/v18/arrow\"\n\tia \"github.com/snowflakedb/gosnowflake/v2/internal/arrow\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/query\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/types\"\n)\n\nconst (\n\theaderSseCAlgorithm = \"x-amz-server-side-encryption-customer-algorithm\"\n\theaderSseCKey       = \"x-amz-server-side-encryption-customer-key\"\n\theaderSseCAes       = \"AES256\"\n)\n\nvar (\n\t// customJSONDecoderEnabled has the chunk downloader use the custom JSON decoder to reduce memory footprint.\n\tcustomJSONDecoderEnabled = false\n\n\tmaxChunkDownloaderErrorCounter = 5\n)\n\nconst defaultMaxChunkDownloadWorkers = 10\nconst clientPrefetchThreadsKey = \"client_prefetch_threads\"\n\n// SnowflakeRows provides an API for methods exposed to the clients\ntype SnowflakeRows interface {\n\tGetQueryID() string\n\tGetStatus() QueryStatus\n\t// NextResultSet switches Arrow Batches to the next result set.\n\t// Returns io.EOF if there are no more result sets.\n\tNextResultSet() error\n}\n\ntype snowflakeRows struct {\n\tsc                  *snowflakeConn\n\tChunkDownloader     chunkDownloader\n\ttailChunkDownloader chunkDownloader\n\tqueryID             string\n\tstatus              QueryStatus\n\terr                 error\n\terrChannel          chan error\n\tlocation            *time.Location\n\tctx                 context.Context\n}\n\nfunc (rows *snowflakeRows) getLocation() *time.Location {\n\tif rows.location == nil && rows.sc != nil && rows.sc.cfg != nil {\n\t\trows.location = getCurrentLocation(&rows.sc.syncParams)\n\t}\n\treturn rows.location\n}\n\ntype snowflakeValue any\n\ntype chunkRowType struct {\n\tRowSet   []*string\n\tArrowRow []snowflakeValue\n}\n\ntype rowSetType struct {\n\tRowType      
[]query.ExecResponseRowType\n\tJSON         [][]*string\n\tRowSetBase64 string\n}\n\ntype chunkError struct {\n\tIndex int\n\tError error\n}\n\nfunc (rows *snowflakeRows) Close() (err error) {\n\tif err := rows.waitForAsyncQueryStatus(); err != nil {\n\t\treturn err\n\t}\n\tlogger.WithContext(rows.sc.ctx).Debug(\"Rows.Close\")\n\tif scd, ok := rows.ChunkDownloader.(*snowflakeChunkDownloader); ok {\n\t\tscd.releaseRawArrowBatches()\n\t}\n\treturn nil\n}\n\n// ColumnTypeDatabaseTypeName returns the database type name of the column.\nfunc (rows *snowflakeRows) ColumnTypeDatabaseTypeName(index int) string {\n\tif err := rows.waitForAsyncQueryStatus(); err != nil {\n\t\treturn err.Error()\n\t}\n\treturn strings.ToUpper(rows.ChunkDownloader.getRowType()[index].Type)\n}\n\n// ColumnTypeLength returns the length of the column.\nfunc (rows *snowflakeRows) ColumnTypeLength(index int) (length int64, ok bool) {\n\tif err := rows.waitForAsyncQueryStatus(); err != nil {\n\t\treturn 0, false\n\t}\n\tif index < 0 || index >= len(rows.ChunkDownloader.getRowType()) {\n\t\treturn 0, false\n\t}\n\tswitch rows.ChunkDownloader.getRowType()[index].Type {\n\tcase \"text\", \"variant\", \"object\", \"array\", \"binary\":\n\t\treturn rows.ChunkDownloader.getRowType()[index].Length, true\n\t}\n\treturn 0, false\n}\n\nfunc (rows *snowflakeRows) ColumnTypeNullable(index int) (nullable, ok bool) {\n\tif err := rows.waitForAsyncQueryStatus(); err != nil {\n\t\treturn false, false\n\t}\n\tif index < 0 || index >= len(rows.ChunkDownloader.getRowType()) {\n\t\treturn false, false\n\t}\n\treturn rows.ChunkDownloader.getRowType()[index].Nullable, true\n}\n\nfunc (rows *snowflakeRows) ColumnTypePrecisionScale(index int) (precision, scale int64, ok bool) {\n\tif err := rows.waitForAsyncQueryStatus(); err != nil {\n\t\treturn 0, 0, false\n\t}\n\trowType := rows.ChunkDownloader.getRowType()\n\tif index < 0 || index >= len(rowType) {\n\t\treturn 0, 0, false\n\t}\n\tswitch rowType[index].Type {\n\tcase 
\"fixed\":\n\t\treturn rowType[index].Precision, rowType[index].Scale, true\n\tcase \"time\":\n\t\treturn rowType[index].Scale, 0, true\n\tcase \"timestamp\":\n\t\treturn rowType[index].Scale, 0, true\n\t}\n\treturn 0, 0, false\n}\n\nfunc (rows *snowflakeRows) Columns() []string {\n\tif err := rows.waitForAsyncQueryStatus(); err != nil {\n\t\treturn make([]string, 0)\n\t}\n\tlogger.WithContext(rows.ctx).Debug(\"Rows.Columns\")\n\tret := make([]string, len(rows.ChunkDownloader.getRowType()))\n\tfor i, n := 0, len(rows.ChunkDownloader.getRowType()); i < n; i++ {\n\t\tret[i] = rows.ChunkDownloader.getRowType()[i].Name\n\t}\n\treturn ret\n}\n\nfunc (rows *snowflakeRows) ColumnTypeScanType(index int) reflect.Type {\n\tif err := rows.waitForAsyncQueryStatus(); err != nil {\n\t\treturn nil\n\t}\n\treturn snowflakeTypeToGo(rows.ctx, types.GetSnowflakeType(rows.ChunkDownloader.getRowType()[index].Type), rows.ChunkDownloader.getRowType()[index].Precision, rows.ChunkDownloader.getRowType()[index].Scale, rows.ChunkDownloader.getRowType()[index].Fields)\n}\n\nfunc (rows *snowflakeRows) GetQueryID() string {\n\treturn rows.queryID\n}\n\nfunc (rows *snowflakeRows) GetStatus() QueryStatus {\n\treturn rows.status\n}\n\n// GetArrowBatches returns raw arrow batch data for use by the arrowbatches sub-package.\n// Implements ia.BatchDataProvider.\nfunc (rows *snowflakeRows) GetArrowBatches() (*ia.BatchDataInfo, error) {\n\tif err := rows.waitForAsyncQueryStatus(); err != nil {\n\t\treturn nil, err\n\t}\n\n\tif rows.ChunkDownloader.getQueryResultFormat() != arrowFormat {\n\t\treturn nil, exceptionTelemetry(errors.ErrNonArrowResponseForArrowBatches(rows.queryID), rows.sc)\n\t}\n\n\tscd, ok := rows.ChunkDownloader.(*snowflakeChunkDownloader)\n\tif !ok {\n\t\treturn nil, &SnowflakeError{\n\t\t\tNumber:  ErrNotImplemented,\n\t\t\tMessage: \"chunk downloader does not support arrow batch data\",\n\t\t}\n\t}\n\n\trawBatches := scd.getRawArrowBatches()\n\tbatches := make([]ia.BatchRaw, 
len(rawBatches))\n\tfor i, raw := range rawBatches {\n\t\tbatch := ia.BatchRaw{\n\t\t\tRecords:  raw.records,\n\t\t\tIndex:    i,\n\t\t\tRowCount: raw.rowCount,\n\t\t\tLocation: raw.loc,\n\t\t}\n\t\traw.records = nil\n\t\tif batch.Records == nil {\n\t\t\tcapturedIdx := i\n\t\t\tif scd.firstBatchRaw != nil {\n\t\t\t\tcapturedIdx = i - 1\n\t\t\t}\n\t\t\tbatch.Download = func(ctx context.Context) (*[]arrow.Record, int, error) {\n\t\t\t\tif err := scd.FuncDownloadHelper(ctx, scd, capturedIdx); err != nil {\n\t\t\t\t\treturn nil, 0, err\n\t\t\t\t}\n\t\t\t\tactualRaw := scd.rawBatches[capturedIdx]\n\t\t\t\treturn actualRaw.records, actualRaw.rowCount, nil\n\t\t\t}\n\t\t}\n\t\tbatches[i] = batch\n\t}\n\n\treturn &ia.BatchDataInfo{\n\t\tBatches:   batches,\n\t\tRowTypes:  scd.RowSet.RowType,\n\t\tAllocator: scd.pool,\n\t\tCtx:       scd.ctx,\n\t\tQueryID:   rows.queryID,\n\t}, nil\n}\n\nfunc (rows *snowflakeRows) Next(dest []driver.Value) (err error) {\n\tif err = rows.waitForAsyncQueryStatus(); err != nil {\n\t\treturn err\n\t}\n\trow, err := rows.ChunkDownloader.next()\n\tif err != nil {\n\t\t// includes io.EOF\n\t\tif err == io.EOF {\n\t\t\trows.ChunkDownloader.reset()\n\t\t}\n\t\treturn err\n\t}\n\n\tif rows.ChunkDownloader.getQueryResultFormat() == arrowFormat {\n\t\tfor i, n := 0, len(row.ArrowRow); i < n; i++ {\n\t\t\tdest[i] = row.ArrowRow[i]\n\t\t}\n\t} else {\n\t\tfor i, n := 0, len(row.RowSet); i < n; i++ {\n\t\t\t// could move to chunk downloader so that each go routine\n\t\t\t// can convert data\n\t\t\terr = stringToValue(rows.ctx, &dest[i], rows.ChunkDownloader.getRowType()[i], row.RowSet[i], rows.getLocation(), &rows.sc.syncParams)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\treturn err\n}\n\nfunc (rows *snowflakeRows) HasNextResultSet() bool {\n\tif err := rows.waitForAsyncQueryStatus(); err != nil {\n\t\treturn false\n\t}\n\thasNextResultSet := rows.ChunkDownloader.getNextChunkDownloader() != 
nil\n\tlogger.WithContext(rows.ctx).Debugf(\"[queryId: %v] Rows.HasNextResultSet: %v\", rows.queryID, hasNextResultSet)\n\treturn hasNextResultSet\n}\n\nfunc (rows *snowflakeRows) NextResultSet() error {\n\tlogger.WithContext(rows.ctx).Debugf(\"[queryId: %v] Rows.NextResultSet\", rows.queryID)\n\tif err := rows.waitForAsyncQueryStatus(); err != nil {\n\t\treturn err\n\t}\n\tif rows.ChunkDownloader.getNextChunkDownloader() == nil {\n\t\treturn io.EOF\n\t}\n\trows.ChunkDownloader = rows.ChunkDownloader.getNextChunkDownloader()\n\tif err := rows.ChunkDownloader.start(); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc (rows *snowflakeRows) waitForAsyncQueryStatus() error {\n\t// if async query, block until query is finished\n\tswitch rows.status {\n\tcase QueryStatusInProgress:\n\t\terr := <-rows.errChannel\n\t\trows.status = QueryStatusComplete\n\t\tif err != nil {\n\t\t\trows.status = QueryFailed\n\t\t\trows.err = err\n\t\t\treturn rows.err\n\t\t}\n\tcase QueryFailed:\n\t\treturn rows.err\n\tdefault:\n\t\treturn nil\n\t}\n\treturn nil\n}\n\nfunc (rows *snowflakeRows) addDownloader(newDL chunkDownloader) {\n\tif rows.ChunkDownloader == nil {\n\t\trows.ChunkDownloader = newDL\n\t\trows.tailChunkDownloader = newDL\n\t\treturn\n\t}\n\trows.tailChunkDownloader.setNextChunkDownloader(newDL)\n\trows.tailChunkDownloader = newDL\n}\n"
  },
  {
    "path": "rows_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"fmt\"\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/query\"\n\t\"io\"\n\t\"net/http\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n)\n\ntype RowsExtended struct {\n\trows      *sql.Rows\n\tcloseChan *chan bool\n\tt         *testing.T\n}\n\nfunc (rs *RowsExtended) Close() error {\n\t*rs.closeChan <- true\n\tclose(*rs.closeChan)\n\treturn rs.rows.Close()\n}\n\nfunc (rs *RowsExtended) ColumnTypes() ([]*sql.ColumnType, error) {\n\treturn rs.rows.ColumnTypes()\n}\n\nfunc (rs *RowsExtended) Columns() ([]string, error) {\n\treturn rs.rows.Columns()\n}\n\nfunc (rs *RowsExtended) Err() error {\n\treturn rs.rows.Err()\n}\n\nfunc (rs *RowsExtended) Next() bool {\n\treturn rs.rows.Next()\n}\n\nfunc (rs *RowsExtended) mustNext() {\n\tassertTrueF(rs.t, rs.rows.Next())\n}\n\nfunc (rs *RowsExtended) NextResultSet() bool {\n\treturn rs.rows.NextResultSet()\n}\n\nfunc (rs *RowsExtended) Scan(dest ...any) error {\n\treturn rs.rows.Scan(dest...)\n}\n\nfunc (rs *RowsExtended) mustScan(dest ...any) {\n\terr := rs.rows.Scan(dest...)\n\tassertNilF(rs.t, err)\n}\n\n// test variables\nvar (\n\trowsInChunk = 123\n)\n\n// Special cases where rows are already closed\nfunc TestRowsClose(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trows, err := dbt.query(\"SELECT 1\")\n\t\tassertNilF(t, err)\n\t\tassertNilF(t, rows.Close())\n\n\t\tassertTrueF(t, !rows.Next(), \"unexpected row after rows.Close()\")\n\t\tassertNilF(t, rows.Err())\n\t})\n}\n\nfunc TestResultNoRows(t *testing.T) {\n\t// DDL\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trow, err := dbt.exec(\"CREATE OR REPLACE TABLE test(c1 int)\")\n\t\tassertNilF(t, err, \"failed to execute DDL\")\n\t\t_, err = row.RowsAffected()\n\t\tassertNotNilF(t, err, \"should have failed to get RowsAffected\")\n\t\t_, err = row.LastInsertId()\n\t\tassertNotNilF(t, err, \"should have failed to get LastInsertID\")\n\t})\n}\n\nfunc TestRowsWithoutChunkDownloader(t *testing.T) {\n\tsts1 := \"1\"\n\tsts2 := \"Test1\"\n\tvar i int\n\tcc := make([][]*string, 0)\n\tfor i = 0; i < 10; i++ {\n\t\tcc = append(cc, []*string{&sts1, &sts2})\n\t}\n\trt := []query.ExecResponseRowType{\n\t\t{Name: \"c1\", ByteLength: 10, Length: 10, Type: \"FIXED\", Scale: 0, Nullable: true},\n\t\t{Name: \"c2\", ByteLength: 100000, Length: 100000, Type: \"TEXT\", Scale: 0, Nullable: false},\n\t}\n\tcm := []query.ExecResponseChunk{}\n\trows := new(snowflakeRows)\n\tsc := &snowflakeConn{\n\t\tcfg: &Config{},\n\t}\n\trows.sc = sc\n\trows.ctx = context.Background()\n\trows.ChunkDownloader = &snowflakeChunkDownloader{\n\t\tsc:                 sc,\n\t\tctx:                context.Background(),\n\t\tTotal:              int64(len(cc)),\n\t\tChunkMetas:         cm,\n\t\tTotalRowIndex:      int64(-1),\n\t\tQrmk:               \"\",\n\t\tFuncDownload:       nil,\n\t\tFuncDownloadHelper: nil,\n\t\tRowSet:             rowSetType{RowType: rt, JSON: cc},\n\t\tQueryResultFormat:  \"json\",\n\t}\n\terr := rows.ChunkDownloader.start()\n\tassertNilF(t, err)\n\tdest := make([]driver.Value, 2)\n\tfor i = 0; i < len(cc); i++ {\n\t\tassertNilF(t, rows.Next(dest), \"failed to get value\")\n\t\tassertEqualF(t, dest[0], sts1, \"failed to get value\")\n\t\tassertEqualF(t, dest[1], sts2, \"failed to get value\")\n\t}\n\tif err := rows.Next(dest); err != io.EOF {\n\t\tt.Fatalf(\"failed to finish getting data. 
err: %v\", err)\n\t}\n\tlogger.Infof(\"dest: %v\", dest)\n\n}\n\nfunc downloadChunkTest(ctx context.Context, scd *snowflakeChunkDownloader, idx int) {\n\td := make([][]*string, 0)\n\tfor i := range rowsInChunk {\n\t\tv1 := fmt.Sprintf(\"%v\", idx*1000+i)\n\t\tv2 := fmt.Sprintf(\"testchunk%v\", idx*1000+i)\n\t\td = append(d, []*string{&v1, &v2})\n\t}\n\tscd.ChunksMutex.Lock()\n\tscd.Chunks[idx] = make([]chunkRowType, len(d))\n\tpopulateJSONRowSet(scd.Chunks[idx], d)\n\tscd.DoneDownloadCond.Broadcast()\n\tscd.ChunksMutex.Unlock()\n}\n\nfunc TestRowsWithChunkDownloader(t *testing.T) {\n\tnumChunks := 12\n\tvar i int\n\tcc := make([][]*string, 0)\n\tfor i = 0; i < 100; i++ {\n\t\tv1 := fmt.Sprintf(\"%v\", i)\n\t\tv2 := fmt.Sprintf(\"Test%v\", i)\n\t\tcc = append(cc, []*string{&v1, &v2})\n\t}\n\trt := []query.ExecResponseRowType{\n\t\t{Name: \"c1\", ByteLength: 10, Length: 10, Type: \"FIXED\", Scale: 0, Nullable: true},\n\t\t{Name: \"c2\", ByteLength: 100000, Length: 100000, Type: \"TEXT\", Scale: 0, Nullable: false},\n\t}\n\tcm := make([]query.ExecResponseChunk, 0)\n\tfor i = range numChunks {\n\t\tcm = append(cm, query.ExecResponseChunk{URL: fmt.Sprintf(\"dummyURL%v\", i+1), RowCount: rowsInChunk})\n\t}\n\trows := new(snowflakeRows)\n\ttwo := \"2\"\n\tparams := map[string]*string{\n\t\tclientPrefetchThreadsKey: &two,\n\t}\n\tsc := &snowflakeConn{\n\t\tcfg:        &Config{},\n\t\tsyncParams: syncParams{params: params},\n\t}\n\trows.sc = sc\n\trows.ctx = context.Background()\n\trows.ChunkDownloader = &snowflakeChunkDownloader{\n\t\tsc:            sc,\n\t\tctx:           context.Background(),\n\t\tTotal:         int64(len(cc) + numChunks*rowsInChunk),\n\t\tChunkMetas:    cm,\n\t\tTotalRowIndex: int64(-1),\n\t\tQrmk:          \"HAHAHA\",\n\t\tFuncDownload:  downloadChunkTest,\n\t\tRowSet:        rowSetType{RowType: rt, JSON: cc},\n\t}\n\tassertNilF(t, rows.ChunkDownloader.start())\n\tcnt := 0\n\tdest := make([]driver.Value, 2)\n\tvar err error\n\tfor err != io.EOF 
{\n\t\terr := rows.Next(dest)\n\t\tif err == io.EOF {\n\t\t\tbreak\n\t\t}\n\t\tassertNilF(t, err, \"failed to get value\")\n\t\tcnt++\n\t}\n\tassertEqualF(t, cnt, len(cc)+numChunks*rowsInChunk, \"failed to get all results\")\n\tlogger.Infof(\"dest: %v\", dest)\n}\n\nfunc downloadChunkTestError(ctx context.Context, scd *snowflakeChunkDownloader, idx int) {\n\t// fail to download 6th and 10th chunk, and retry up to N times and success\n\t// NOTE: zero based index\n\tscd.ChunksMutex.Lock()\n\tdefer scd.ChunksMutex.Unlock()\n\tif (idx == 6 || idx == 10) && scd.ChunksErrorCounter < maxChunkDownloaderErrorCounter {\n\t\tscd.ChunksError <- &chunkError{\n\t\t\tIndex: idx,\n\t\t\tError: fmt.Errorf(\n\t\t\t\t\"dummy error. idx: %v, errCnt: %v\", idx+1, scd.ChunksErrorCounter)}\n\t\tscd.DoneDownloadCond.Broadcast()\n\t\treturn\n\t}\n\td := make([][]*string, 0)\n\tfor i := range rowsInChunk {\n\t\tv1 := fmt.Sprintf(\"%v\", idx*1000+i)\n\t\tv2 := fmt.Sprintf(\"testchunk%v\", idx*1000+i)\n\t\td = append(d, []*string{&v1, &v2})\n\t}\n\tscd.Chunks[idx] = make([]chunkRowType, len(d))\n\tpopulateJSONRowSet(scd.Chunks[idx], d)\n\tscd.DoneDownloadCond.Broadcast()\n}\n\nfunc TestRowsWithChunkDownloaderError(t *testing.T) {\n\tnumChunks := 12\n\tvar i int\n\tcc := make([][]*string, 0)\n\tfor i = 0; i < 100; i++ {\n\t\tv1 := fmt.Sprintf(\"%v\", i)\n\t\tv2 := fmt.Sprintf(\"Test%v\", i)\n\t\tcc = append(cc, []*string{&v1, &v2})\n\t}\n\trt := []query.ExecResponseRowType{\n\t\t{Name: \"c1\", ByteLength: 10, Length: 10, Type: \"FIXED\", Scale: 0, Nullable: true},\n\t\t{Name: \"c2\", ByteLength: 100000, Length: 100000, Type: \"TEXT\", Scale: 0, Nullable: false},\n\t}\n\tcm := make([]query.ExecResponseChunk, 0)\n\tfor i = range numChunks {\n\t\tcm = append(cm, query.ExecResponseChunk{URL: fmt.Sprintf(\"dummyURL%v\", i+1), RowCount: rowsInChunk})\n\t}\n\trows := new(snowflakeRows)\n\tthree := 
\"3\"\n\tparams := map[string]*string{\n\t\tclientPrefetchThreadsKey: &three,\n\t}\n\tsc := &snowflakeConn{\n\t\tcfg:        &Config{},\n\t\tsyncParams: syncParams{params: params},\n\t}\n\trows.sc = sc\n\trows.ctx = context.Background()\n\trows.ChunkDownloader = &snowflakeChunkDownloader{\n\t\tsc:            sc,\n\t\tctx:           context.Background(),\n\t\tTotal:         int64(len(cc) + numChunks*rowsInChunk),\n\t\tChunkMetas:    cm,\n\t\tTotalRowIndex: int64(-1),\n\t\tQrmk:          \"HOHOHO\",\n\t\tFuncDownload:  downloadChunkTestError,\n\t\tRowSet:        rowSetType{RowType: rt, JSON: cc},\n\t}\n\tassertNilF(t, rows.ChunkDownloader.start())\n\tcnt := 0\n\tdest := make([]driver.Value, 2)\n\tvar err error\n\tfor err != io.EOF {\n\t\terr := rows.Next(dest)\n\t\tif err == io.EOF {\n\t\t\tbreak\n\t\t}\n\t\tassertNilF(t, err, \"failed to get value\")\n\t\tcnt++\n\t}\n\tassertEqualF(t, cnt, len(cc)+numChunks*rowsInChunk, \"failed to get all results\")\n\tlogger.Infof(\"dest: %v\", dest)\n}\n\nfunc downloadChunkTestErrorFail(ctx context.Context, scd *snowflakeChunkDownloader, idx int) {\n\t// fail to download 6th chunk, and retry up to N times and fail\n\t// NOTE: zero based index\n\tscd.ChunksMutex.Lock()\n\tdefer scd.ChunksMutex.Unlock()\n\tif idx == 6 && scd.ChunksErrorCounter <= maxChunkDownloaderErrorCounter {\n\t\tscd.ChunksError <- &chunkError{\n\t\t\tIndex: idx,\n\t\t\tError: fmt.Errorf(\n\t\t\t\t\"dummy error. 
idx: %v, errCnt: %v\", idx+1, scd.ChunksErrorCounter)}\n\t\tscd.DoneDownloadCond.Broadcast()\n\t\treturn\n\t}\n\td := make([][]*string, 0)\n\tfor i := range rowsInChunk {\n\t\tv1 := fmt.Sprintf(\"%v\", idx*1000+i)\n\t\tv2 := fmt.Sprintf(\"testchunk%v\", idx*1000+i)\n\t\td = append(d, []*string{&v1, &v2})\n\t}\n\tscd.Chunks[idx] = make([]chunkRowType, len(d))\n\tpopulateJSONRowSet(scd.Chunks[idx], d)\n\tscd.DoneDownloadCond.Broadcast()\n}\n\nfunc TestRowsWithChunkDownloaderErrorFail(t *testing.T) {\n\tnumChunks := 12\n\tlogger.Info(\"START TESTS\")\n\tvar i int\n\tcc := make([][]*string, 0)\n\tfor i = 0; i < 100; i++ {\n\t\tv1 := fmt.Sprintf(\"%v\", i)\n\t\tv2 := fmt.Sprintf(\"Test%v\", i)\n\t\tcc = append(cc, []*string{&v1, &v2})\n\t}\n\trt := []query.ExecResponseRowType{\n\t\t{Name: \"c1\", ByteLength: 10, Length: 10, Type: \"FIXED\", Scale: 0, Nullable: true},\n\t\t{Name: \"c2\", ByteLength: 100000, Length: 100000, Type: \"TEXT\", Scale: 0, Nullable: false},\n\t}\n\tcm := make([]query.ExecResponseChunk, 0)\n\tfor i = range numChunks {\n\t\tcm = append(cm, query.ExecResponseChunk{URL: fmt.Sprintf(\"dummyURL%v\", i+1), RowCount: rowsInChunk})\n\t}\n\trows := new(snowflakeRows)\n\tsc := &snowflakeConn{\n\t\tcfg: &Config{},\n\t}\n\trows.sc = sc\n\trows.ctx = context.Background()\n\trows.ChunkDownloader = &snowflakeChunkDownloader{\n\t\tsc:            sc,\n\t\tctx:           context.Background(),\n\t\tTotal:         int64(len(cc) + numChunks*rowsInChunk),\n\t\tChunkMetas:    cm,\n\t\tTotalRowIndex: int64(-1),\n\t\tQrmk:          \"HOHOHO\",\n\t\tFuncDownload:  downloadChunkTestErrorFail,\n\t\tRowSet:        rowSetType{RowType: rt, JSON: cc},\n\t}\n\tassertNilF(t, rows.ChunkDownloader.start())\n\tcnt := 0\n\tdest := make([]driver.Value, 2)\n\tvar err error\n\tfor err != io.EOF {\n\t\terr := rows.Next(dest)\n\t\tif err == io.EOF {\n\t\t\tbreak\n\t\t}\n\t\tif err != nil {\n\t\t\tlogger.Infof(\n\t\t\t\t\"failure was expected because the number of rows is wrong. expected: %v, got: %v\", 715, cnt)\n\t\t\tbreak\n\t\t}\n\t\tcnt++\n\t}\n}\n\nfunc getChunkTestInvalidResponseBody(_ context.Context, _ *snowflakeConn, _ string, _ map[string]string, _ time.Duration) (\n\t*http.Response, error) {\n\treturn &http.Response{\n\t\tStatusCode: http.StatusOK,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}},\n\t}, nil\n}\n\nfunc TestDownloadChunkInvalidResponseBody(t *testing.T) {\n\tnumChunks := 2\n\tcm := make([]query.ExecResponseChunk, 0)\n\tfor i := range numChunks {\n\t\tcm = append(cm, query.ExecResponseChunk{URL: fmt.Sprintf(\n\t\t\t\"dummyURL%v\", i+1), RowCount: rowsInChunk})\n\t}\n\tscd := &snowflakeChunkDownloader{\n\t\tsc: &snowflakeConn{\n\t\t\trest: &snowflakeRestful{RequestTimeout: sfconfig.DefaultRequestTimeout},\n\t\t},\n\t\tctx:                context.Background(),\n\t\tChunkMetas:         cm,\n\t\tTotalRowIndex:      int64(-1),\n\t\tQrmk:               \"HOHOHO\",\n\t\tFuncDownload:       downloadChunk,\n\t\tFuncDownloadHelper: downloadChunkHelper,\n\t\tFuncGet:            getChunkTestInvalidResponseBody,\n\t}\n\tscd.ChunksMutex = &sync.Mutex{}\n\tscd.DoneDownloadCond = sync.NewCond(scd.ChunksMutex)\n\tscd.Chunks = make(map[int][]chunkRowType)\n\tscd.ChunksError = make(chan *chunkError, 1)\n\tscd.FuncDownload(scd.ctx, scd, 1)\n\tselect {\n\tcase errc := <-scd.ChunksError:\n\t\tassertEqualF(t, errc.Index, 1, \"the error should have been caused by chunk idx 1\")\n\tdefault:\n\t\tt.Fatal(\"should have caused an error and queued in scd.ChunksError\")\n\t}\n}\n\nfunc getChunkTestErrorStatus(_ context.Context, _ *snowflakeConn, _ string, _ map[string]string, _ time.Duration) (\n\t*http.Response, error) {\n\treturn &http.Response{\n\t\tStatusCode: http.StatusBadGateway,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}},\n\t}, nil\n}\n\nfunc TestDownloadChunkErrorStatus(t *testing.T) {\n\tnumChunks := 2\n\tcm := make([]query.ExecResponseChunk, 0)\n\tfor i := range 
numChunks {\n\t\tcm = append(cm, query.ExecResponseChunk{URL: fmt.Sprintf(\n\t\t\t\"dummyURL%v\", i+1), RowCount: rowsInChunk})\n\t}\n\tscd := &snowflakeChunkDownloader{\n\t\tsc: &snowflakeConn{\n\t\t\trest: &snowflakeRestful{RequestTimeout: sfconfig.DefaultRequestTimeout},\n\t\t},\n\t\tctx:                context.Background(),\n\t\tChunkMetas:         cm,\n\t\tTotalRowIndex:      int64(-1),\n\t\tQrmk:               \"HOHOHO\",\n\t\tFuncDownload:       downloadChunk,\n\t\tFuncDownloadHelper: downloadChunkHelper,\n\t\tFuncGet:            getChunkTestErrorStatus,\n\t}\n\tscd.ChunksMutex = &sync.Mutex{}\n\tscd.DoneDownloadCond = sync.NewCond(scd.ChunksMutex)\n\tscd.Chunks = make(map[int][]chunkRowType)\n\tscd.ChunksError = make(chan *chunkError, 1)\n\tscd.FuncDownload(scd.ctx, scd, 1)\n\tselect {\n\tcase errc := <-scd.ChunksError:\n\t\tassertEqualF(t, errc.Index, 1, \"the error should have been caused by chunk idx 1\")\n\t\tserr, ok := errc.Error.(*SnowflakeError)\n\t\tassertTrueF(t, ok, fmt.Sprintf(\"should have been snowflake error. err: %v\", errc.Error))\n\t\tassertEqualF(t, serr.Number, ErrFailedToGetChunk, \"message error code is not correct\")\n\tdefault:\n\t\tt.Fatal(\"should have caused an error and queued in scd.ChunksError\")\n\t}\n}\n\nfunc TestLocationChangesAfterAlterSession(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE location_timestamp_ltz (val timestamp_ltz)\")\n\t\tdefer dbt.mustExec(\"DROP TABLE location_timestamp_ltz\")\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tdbt.mustExec(\"INSERT INTO location_timestamp_ltz VALUES('2023-08-09 10:00:00')\")\n\t\trows1 := dbt.mustQuery(\"SELECT * FROM location_timestamp_ltz\")\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows1.Close())\n\t\t}()\n\t\tassertTrueF(t, rows1.Next(), \"cannot read a record\")\n\t\tvar t1 time.Time\n\t\tassertNilF(t, rows1.Scan(&t1))\n\t\tassertEqualF(t, t1.Location().String(), \"Europe/Warsaw\", \"should return time in Warsaw timezone\")\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Pacific/Honolulu'\")\n\t\trows2 := dbt.mustQuery(\"SELECT * FROM location_timestamp_ltz\")\n\t\tdefer func() {\n\t\t\tassertNilF(t, rows2.Close())\n\t\t}()\n\t\tassertTrueF(t, rows2.Next(), \"cannot read a record\")\n\t\tvar t2 time.Time\n\t\tassertNilF(t, rows2.Scan(&t2))\n\t\tassertEqualF(t, t2.Location().String(), \"Pacific/Honolulu\", \"should return time in Honolulu timezone\")\n\t})\n}\n"
  },
  {
    "path": "s3_storage_client.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"cmp\"\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"github.com/aws/aws-sdk-go-v2/aws\"\n\t\"github.com/aws/aws-sdk-go-v2/credentials\"\n\t\"github.com/aws/aws-sdk-go-v2/feature/s3/manager\"\n\t\"github.com/aws/aws-sdk-go-v2/service/s3\"\n\t\"github.com/aws/smithy-go\"\n\t\"github.com/aws/smithy-go/logging\"\n\t\"io\"\n\t\"net/http\"\n\t\"os\"\n\t\"strings\"\n)\n\nconst (\n\tsfcDigest  = \"sfc-digest\"\n\tamzMatdesc = \"x-amz-matdesc\"\n\tamzKey     = \"x-amz-key\"\n\tamzIv      = \"x-amz-iv\"\n\n\tnotFound             = \"NotFound\"\n\texpiredToken         = \"ExpiredToken\"\n\terrNoWsaeconnaborted = \"10053\"\n)\n\ntype snowflakeS3Client struct {\n\tcfg       *Config\n\ttelemetry *snowflakeTelemetry\n}\n\ntype s3Location struct {\n\tbucketName string\n\ts3Path     string\n}\n\n// S3LoggingMode allows to configure which logs should be included.\n// By default no logs are included.\n// See https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/aws#ClientLogMode for allowed values.\n// Deprecated: will be moved to DSN/Config in a future release.\nvar S3LoggingMode aws.ClientLogMode\n\nfunc (util *snowflakeS3Client) createClient(info *execResponseStageInfo, useAccelerateEndpoint bool, telemetry *snowflakeTelemetry) (cloudClient, error) {\n\tstageCredentials := info.Creds\n\ts3Logger := logging.LoggerFunc(s3LoggingFunc)\n\tendPoint := getS3CustomEndpoint(info)\n\n\ttransport, err := newTransportFactory(util.cfg, telemetry).createTransport(transportConfigFor(transportTypeCloudProvider))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn s3.New(s3.Options{\n\t\tRegion: info.Region,\n\t\tCredentials: aws.NewCredentialsCache(credentials.NewStaticCredentialsProvider(\n\t\t\tstageCredentials.AwsKeyID,\n\t\t\tstageCredentials.AwsSecretKey,\n\t\t\tstageCredentials.AwsToken)),\n\t\tBaseEndpoint:  endPoint,\n\t\tUseAccelerate: useAccelerateEndpoint,\n\t\tHTTPClient: &http.Client{\n\t\t\tTransport: 
transport,\n\t\t},\n\t\tClientLogMode: S3LoggingMode,\n\t\tLogger:        s3Logger,\n\t}), nil\n}\n\n// to be used with S3 transferAccelerateConfigWithUtil\nfunc (util *snowflakeS3Client) createClientWithConfig(info *execResponseStageInfo, useAccelerateEndpoint bool, cfg *Config, telemetry *snowflakeTelemetry) (cloudClient, error) {\n\t// copy snowflakeFileTransferAgent's config onto the cloud client so we could decide which Transport to use\n\tutil.cfg = cfg\n\tutil.telemetry = telemetry\n\treturn util.createClient(info, useAccelerateEndpoint, telemetry)\n}\n\nfunc getS3CustomEndpoint(info *execResponseStageInfo) *string {\n\tvar endPoint *string\n\tisRegionalURLEnabled := info.UseRegionalURL || info.UseS3RegionalURL\n\tif info.EndPoint != \"\" {\n\t\ttmp := fmt.Sprintf(\"https://%s\", info.EndPoint)\n\t\tendPoint = &tmp\n\t} else if info.Region != \"\" && isRegionalURLEnabled {\n\t\tdomainSuffixForRegionalURL := \"amazonaws.com\"\n\t\tif strings.HasPrefix(strings.ToLower(info.Region), \"cn-\") {\n\t\t\tdomainSuffixForRegionalURL = \"amazonaws.com.cn\"\n\t\t}\n\t\ttmp := fmt.Sprintf(\"https://s3.%s.%s\", info.Region, domainSuffixForRegionalURL)\n\t\tendPoint = &tmp\n\t}\n\treturn endPoint\n}\n\nfunc s3LoggingFunc(classification logging.Classification, format string, v ...any) {\n\tswitch classification {\n\tcase logging.Debug:\n\t\tlogger.WithField(\"logger\", \"S3\").Debugf(format, v...)\n\tcase logging.Warn:\n\t\tlogger.WithField(\"logger\", \"S3\").Warnf(format, v...)\n\t}\n}\n\ntype s3HeaderAPI interface {\n\tHeadObject(ctx context.Context, params *s3.HeadObjectInput, optFns ...func(*s3.Options)) (*s3.HeadObjectOutput, error)\n}\n\n// cloudUtil implementation\nfunc (util *snowflakeS3Client) getFileHeader(ctx context.Context, meta *fileMetadata, filename string) (*fileHeader, error) {\n\theadObjInput, err := util.getS3Object(meta, filename)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tvar s3Cli s3HeaderAPI\n\ts3Cli, ok := meta.client.(*s3.Client)\n\tif !ok 
{\n\t\treturn nil, errors.New(\"could not parse client to s3.Client\")\n\t}\n\t// for testing only\n\tif meta.mockHeader != nil {\n\t\ts3Cli = meta.mockHeader\n\t}\n\tout, err := withCloudStorageTimeout(ctx, util.cfg, func(ctx context.Context) (*s3.HeadObjectOutput, error) {\n\t\treturn s3Cli.HeadObject(ctx, headObjInput)\n\t})\n\tif err != nil {\n\t\tvar ae smithy.APIError\n\t\tif errors.As(err, &ae) {\n\t\t\tif ae.ErrorCode() == notFound {\n\t\t\t\tmeta.resStatus = notFoundFile\n\t\t\t\treturn nil, errors.New(\"could not find file\")\n\t\t\t} else if ae.ErrorCode() == expiredToken {\n\t\t\t\tmeta.resStatus = renewToken\n\t\t\t\treturn nil, errors.New(\"received expired token. renewing\")\n\t\t\t}\n\t\t\tmeta.resStatus = errStatus\n\t\t\tmeta.lastError = err\n\t\t\treturn nil, fmt.Errorf(\"error while retrieving header, errorCode=%v. %w\", ae.ErrorCode(), err)\n\t\t}\n\t\tmeta.resStatus = errStatus\n\t\tmeta.lastError = err\n\t\treturn nil, fmt.Errorf(\"unexpected error while retrieving header: %w\", err)\n\t}\n\n\tmeta.resStatus = uploaded\n\tvar encMeta encryptMetadata\n\tif out.Metadata[amzKey] != \"\" {\n\t\tencMeta = encryptMetadata{\n\t\t\tout.Metadata[amzKey],\n\t\t\tout.Metadata[amzIv],\n\t\t\tout.Metadata[amzMatdesc],\n\t\t}\n\t}\n\tcontentLength := convertContentLength(out.ContentLength)\n\treturn &fileHeader{\n\t\tout.Metadata[sfcDigest],\n\t\tcontentLength,\n\t\t&encMeta,\n\t}, nil\n}\n\n// SNOW-974548 remove this function after upgrading AWS SDK\nfunc convertContentLength(contentLength any) int64 {\n\tswitch t := contentLength.(type) {\n\tcase int64:\n\t\treturn t\n\tcase *int64:\n\t\tif t != nil {\n\t\t\treturn *t\n\t\t}\n\t}\n\treturn 0\n}\n\ntype s3UploadAPI interface {\n\tUpload(ctx context.Context, params *s3.PutObjectInput, optFns ...func(*manager.Uploader)) (*manager.UploadOutput, error)\n}\n\n// cloudUtil implementation\nfunc (util *snowflakeS3Client) uploadFile(\n\tctx context.Context,\n\tdataFile string,\n\tmeta 
*fileMetadata,\n\tmaxConcurrency int,\n\tmultiPartThreshold int64) error {\n\ts3Meta := map[string]string{\n\t\thttpHeaderContentType: httpHeaderValueOctetStream,\n\t\tsfcDigest:             meta.sha256Digest,\n\t}\n\tif meta.encryptMeta != nil {\n\t\ts3Meta[amzIv] = meta.encryptMeta.iv\n\t\ts3Meta[amzKey] = meta.encryptMeta.key\n\t\ts3Meta[amzMatdesc] = meta.encryptMeta.matdesc\n\t}\n\n\ts3loc, err := util.extractBucketNameAndPath(meta.stageInfo.Location)\n\tif err != nil {\n\t\treturn err\n\t}\n\ts3path := s3loc.s3Path + strings.TrimLeft(meta.dstFileName, \"/\")\n\n\tclient, ok := meta.client.(*s3.Client)\n\tif !ok {\n\t\treturn &SnowflakeError{\n\t\t\tMessage: \"failed to cast to s3 client\",\n\t\t}\n\t}\n\tvar uploader s3UploadAPI\n\tuploader = manager.NewUploader(client, func(u *manager.Uploader) {\n\t\tu.Concurrency = maxConcurrency\n\t\tu.PartSize = int64Max(multiPartThreshold, manager.DefaultUploadPartSize)\n\t})\n\t// for testing only\n\tif meta.mockUploader != nil {\n\t\tuploader = meta.mockUploader\n\t}\n\n\t_, err = withCloudStorageTimeout(ctx, util.cfg, func(ctx context.Context) (any, error) {\n\t\tif meta.srcStream != nil {\n\t\t\tuploadStream := cmp.Or(meta.realSrcStream, meta.srcStream)\n\t\t\treturn uploader.Upload(ctx, &s3.PutObjectInput{\n\t\t\t\tBucket:   &s3loc.bucketName,\n\t\t\t\tKey:      &s3path,\n\t\t\t\tBody:     bytes.NewBuffer(uploadStream.Bytes()),\n\t\t\t\tMetadata: s3Meta,\n\t\t\t})\n\t\t}\n\t\tvar file *os.File\n\t\tfile, err = os.Open(dataFile)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tdefer func() {\n\t\t\tif err = file.Close(); err != nil {\n\t\t\t\tlogger.Warnf(\"failed to close %v file: %v\", dataFile, err)\n\t\t\t}\n\t\t}()\n\t\treturn uploader.Upload(ctx, &s3.PutObjectInput{\n\t\t\tBucket:   &s3loc.bucketName,\n\t\t\tKey:      &s3path,\n\t\t\tBody:     file,\n\t\t\tMetadata: s3Meta,\n\t\t})\n\n\t})\n\n\tif err != nil {\n\t\tvar ae smithy.APIError\n\t\tif errors.As(err, &ae) {\n\t\t\tif ae.ErrorCode() == 
expiredToken {\n\t\t\t\tmeta.resStatus = renewToken\n\t\t\t\treturn err\n\t\t\t} else if strings.Contains(ae.ErrorCode(), errNoWsaeconnaborted) {\n\t\t\t\tmeta.lastError = err\n\t\t\t\tmeta.resStatus = needRetryWithLowerConcurrency\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\tmeta.lastError = err\n\t\tmeta.resStatus = needRetry\n\t\treturn fmt.Errorf(\"error while uploading file. %w\", err)\n\t}\n\tmeta.dstFileSize = meta.uploadSize\n\tmeta.resStatus = uploaded\n\treturn nil\n}\n\ntype s3DownloadAPI interface {\n\tDownload(ctx context.Context, w io.WriterAt, params *s3.GetObjectInput, optFns ...func(*manager.Downloader)) (int64, error)\n}\n\n// cloudUtil implementation\nfunc (util *snowflakeS3Client) nativeDownloadFile(\n\tctx context.Context,\n\tmeta *fileMetadata,\n\tfullDstFileName string,\n\tmaxConcurrency int64,\n\tpartSize int64) error {\n\ts3Obj, _ := util.getS3Object(meta, meta.srcFileName)\n\tclient, ok := meta.client.(*s3.Client)\n\tif !ok {\n\t\treturn &SnowflakeError{\n\t\t\tMessage: \"failed to cast to s3 client\",\n\t\t}\n\t}\n\tlogger.Debugf(\"S3 Client: Send Get Request to the Bucket: %v\", meta.stageInfo.Location)\n\n\tvar downloader s3DownloadAPI\n\tdownloader = manager.NewDownloader(client, func(u *manager.Downloader) {\n\t\tu.Concurrency = int(maxConcurrency)\n\t\tu.PartSize = int64Max(partSize, manager.DefaultDownloadPartSize)\n\t})\n\t// for testing only\n\tif meta.mockDownloader != nil {\n\t\tdownloader = meta.mockDownloader\n\t}\n\n\t_, err := withCloudStorageTimeout(ctx, util.cfg, func(ctx context.Context) (any, error) {\n\t\tif isFileGetStream(ctx) {\n\t\t\tbuf := manager.NewWriteAtBuffer([]byte{})\n\t\t\tif _, err := downloader.Download(ctx, buf, &s3.GetObjectInput{\n\t\t\t\tBucket: s3Obj.Bucket,\n\t\t\t\tKey:    s3Obj.Key,\n\t\t\t}); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tmeta.dstStream = bytes.NewBuffer(buf.Bytes())\n\t\t} else {\n\t\t\tf, err := os.OpenFile(fullDstFileName, os.O_CREATE|os.O_WRONLY, 
readWriteFileMode)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tdefer func() {\n\t\t\t\tif err = f.Close(); err != nil {\n\t\t\t\t\tlogger.Warnf(\"failed to close %v file: %v\", fullDstFileName, err)\n\t\t\t\t}\n\t\t\t}()\n\t\t\tif _, err = downloader.Download(ctx, f, &s3.GetObjectInput{\n\t\t\t\tBucket: s3Obj.Bucket,\n\t\t\t\tKey:    s3Obj.Key,\n\t\t\t}); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\t\treturn nil, nil\n\t})\n\n\tif err != nil {\n\t\tvar ae smithy.APIError\n\t\tif errors.As(err, &ae) {\n\t\t\tif ae.ErrorCode() == expiredToken {\n\t\t\t\tmeta.resStatus = renewToken\n\t\t\t\treturn err\n\t\t\t} else if strings.Contains(ae.ErrorCode(), errNoWsaeconnaborted) {\n\t\t\t\tmeta.lastError = err\n\t\t\t\tmeta.resStatus = needRetryWithLowerConcurrency\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tmeta.lastError = err\n\t\t\tmeta.resStatus = errStatus\n\t\t\treturn fmt.Errorf(\"error while downloading file, errorCode=%v. %w\", ae.ErrorCode(), err)\n\t\t}\n\t\tmeta.lastError = err\n\t\tmeta.resStatus = needRetry\n\t\treturn fmt.Errorf(\"error while downloading file. 
%w\", err)\n\t}\n\tmeta.resStatus = downloaded\n\treturn nil\n}\n\nfunc (util *snowflakeS3Client) extractBucketNameAndPath(location string) (*s3Location, error) {\n\tstageLocation, err := expandUser(location)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tbucketName := stageLocation\n\ts3Path := \"\"\n\n\tif before, after, ok := strings.Cut(stageLocation, \"/\"); ok {\n\t\tbucketName = before\n\t\ts3Path = after\n\t\tif s3Path != \"\" && !strings.HasSuffix(s3Path, \"/\") {\n\t\t\ts3Path += \"/\"\n\t\t}\n\t}\n\treturn &s3Location{bucketName, s3Path}, nil\n}\n\nfunc (util *snowflakeS3Client) getS3Object(meta *fileMetadata, filename string) (*s3.HeadObjectInput, error) {\n\ts3loc, err := util.extractBucketNameAndPath(meta.stageInfo.Location)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\ts3path := s3loc.s3Path + strings.TrimLeft(filename, \"/\")\n\treturn &s3.HeadObjectInput{\n\t\tBucket: &s3loc.bucketName,\n\t\tKey:    &s3path,\n\t}, nil\n}\n"
  },
  {
    "path": "s3_storage_client_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"path\"\n\t\"strconv\"\n\t\"testing\"\n\n\t\"github.com/aws/aws-sdk-go-v2/feature/s3/manager\"\n\t\"github.com/aws/aws-sdk-go-v2/service/s3\"\n\t\"github.com/aws/smithy-go\"\n)\n\ntype tcBucketPath struct {\n\tin     string\n\tbucket string\n\tpath   string\n}\n\nfunc TestExtractBucketNameAndPath(t *testing.T) {\n\ts3util := new(snowflakeS3Client)\n\ttestcases := []tcBucketPath{\n\t\t{\"sfc-eng-regression/test_sub_dir/\", \"sfc-eng-regression\", \"test_sub_dir/\"},\n\t\t{\"sfc-eng-regression/dir/test_stg/test_sub_dir/\", \"sfc-eng-regression\", \"dir/test_stg/test_sub_dir/\"},\n\t\t{\"sfc-eng-regression/\", \"sfc-eng-regression\", \"\"},\n\t\t{\"sfc-eng-regression//\", \"sfc-eng-regression\", \"/\"},\n\t\t{\"sfc-eng-regression///\", \"sfc-eng-regression\", \"//\"},\n\t}\n\tfor _, test := range testcases {\n\t\tt.Run(test.in, func(t *testing.T) {\n\t\t\ts3Loc, err := s3util.extractBucketNameAndPath(test.in)\n\t\t\tif err != nil {\n\t\t\t\tt.Error(err)\n\t\t\t}\n\t\t\tif s3Loc.bucketName != test.bucket {\n\t\t\t\tt.Errorf(\"failed. in: %v, expected: %v, got: %v\", test.in, test.bucket, s3Loc.bucketName)\n\t\t\t}\n\t\t\tif s3Loc.s3Path != test.path {\n\t\t\t\tt.Errorf(\"failed. 
in: %v, expected: %v, got: %v\", test.in, test.path, s3Loc.s3Path)\n\t\t\t}\n\t\t})\n\t}\n}\n\ntype mockUploadObjectAPI func(ctx context.Context, params *s3.PutObjectInput, optFns ...func(*manager.Uploader)) (*manager.UploadOutput, error)\n\nfunc (m mockUploadObjectAPI) Upload(\n\tctx context.Context,\n\tparams *s3.PutObjectInput,\n\toptFns ...func(*manager.Uploader)) (*manager.UploadOutput, error) {\n\treturn m(ctx, params, optFns...)\n}\n\nfunc TestUploadOneFileToS3WSAEConnAborted(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"sfc-customer-stage/rwyi-testacco/users/9220/\",\n\t\tLocationType: \"S3\",\n\t}\n\tinitialParallel := int64(100)\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\ts3Cli, err := new(snowflakeS3Client).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"S3\",\n\t\tnoSleepingTime:    false,\n\t\tparallel:          initialParallel,\n\t\tclient:            s3Cli,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\tsrcFileName:       path.Join(dir, \"/test_data/put_get_1.txt\"),\n\t\tencryptMeta:       testEncryptionMeta(),\n\t\toverwrite:         true,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockUploader: mockUploadObjectAPI(func(ctx context.Context, params *s3.PutObjectInput, optFns ...func(*manager.Uploader)) (*manager.UploadOutput, error) {\n\t\t\treturn nil, &smithy.GenericAPIError{\n\t\t\t\tCode:    errNoWsaeconnaborted,\n\t\t\t\tMessage: \"mock err, connection aborted\",\n\t\t\t}\n\t\t}),\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t}}\n\n\tuploadMeta.realSrcFileName = uploadMeta.srcFileName\n\tfi, err := os.Stat(uploadMeta.srcFileName)\n\tif err != nil 
{\n\t\tt.Error(err)\n\t}\n\tuploadMeta.uploadSize = fi.Size()\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tif err == nil {\n\t\tt.Error(\"should have raised an error\")\n\t}\n\tif uploadMeta.lastMaxConcurrency == 0 {\n\t\tt.Fatalf(\"expected concurrency. got: 0\")\n\t}\n\tif uploadMeta.lastMaxConcurrency != int(initialParallel/defaultMaxRetry) {\n\t\tt.Fatalf(\"expected last max concurrency to be: %v, got: %v\",\n\t\t\tint(initialParallel/defaultMaxRetry), uploadMeta.lastMaxConcurrency)\n\t}\n\n\tinitialParallel = 4\n\tuploadMeta.parallel = initialParallel\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tif err == nil {\n\t\tt.Error(\"should have raised an error\")\n\t}\n\tif uploadMeta.lastMaxConcurrency == 0 {\n\t\tt.Fatalf(\"expected non-zero last max concurrency. got: %v\",\n\t\t\tuploadMeta.lastMaxConcurrency)\n\t}\n\tif uploadMeta.lastMaxConcurrency != 1 {\n\t\tt.Fatalf(\"expected last max concurrency to be: 1, got: %v\",\n\t\t\tuploadMeta.lastMaxConcurrency)\n\t}\n}\n\nfunc TestUploadOneFileToS3ConnReset(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"sfc-teststage/rwyitestacco/users/1234/\",\n\t\tLocationType: \"S3\",\n\t}\n\tinitialParallel := int64(100)\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\ts3Cli, err := new(snowflakeS3Client).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"S3\",\n\t\tnoSleepingTime:    true,\n\t\tparallel:          initialParallel,\n\t\tclient:            s3Cli,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\tsrcFileName:       path.Join(dir, \"/test_data/put_get_1.txt\"),\n\t\tencryptMeta:       testEncryptionMeta(),\n\t\toverwrite:         true,\n\t\toptions: 
&SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockUploader: mockUploadObjectAPI(func(ctx context.Context, params *s3.PutObjectInput, optFns ...func(*manager.Uploader)) (*manager.UploadOutput, error) {\n\t\t\treturn nil, &smithy.GenericAPIError{\n\t\t\t\tCode:    strconv.Itoa(-1),\n\t\t\t\tMessage: \"mock err, connection aborted\",\n\t\t\t}\n\t\t}),\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tuploadMeta.realSrcFileName = uploadMeta.srcFileName\n\tfi, err := os.Stat(uploadMeta.srcFileName)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta.uploadSize = fi.Size()\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tif err == nil {\n\t\tt.Error(\"should have raised an error\")\n\t}\n\tif uploadMeta.lastMaxConcurrency != 0 {\n\t\tt.Fatalf(\"expected no concurrency. got: %v\",\n\t\t\tuploadMeta.lastMaxConcurrency)\n\t}\n}\n\nfunc TestUploadFileWithS3UploadFailedError(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"sfc-teststage/rwyitestacco/users/1234/\",\n\t\tLocationType: \"S3\",\n\t}\n\tinitialParallel := int64(100)\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\ts3Cli, err := new(snowflakeS3Client).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"S3\",\n\t\tnoSleepingTime:    true,\n\t\tparallel:          initialParallel,\n\t\tclient:            s3Cli,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\tsrcFileName:       path.Join(dir, \"/test_data/put_get_1.txt\"),\n\t\tencryptMeta:       testEncryptionMeta(),\n\t\toverwrite:         true,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: 
multiPartThreshold,\n\t\t},\n\t\tmockUploader: mockUploadObjectAPI(func(ctx context.Context, params *s3.PutObjectInput, optFns ...func(*manager.Uploader)) (*manager.UploadOutput, error) {\n\t\t\treturn nil, &smithy.GenericAPIError{\n\t\t\t\tCode: expiredToken,\n\t\t\t\tMessage: \"An error occurred (ExpiredToken) when calling the \" +\n\t\t\t\t\t\"operation: The provided token has expired.\",\n\t\t\t}\n\t\t}),\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tuploadMeta.realSrcFileName = uploadMeta.srcFileName\n\tfi, err := os.Stat(uploadMeta.srcFileName)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tuploadMeta.uploadSize = fi.Size()\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tif uploadMeta.resStatus != renewToken {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\trenewToken, uploadMeta.resStatus)\n\t}\n}\n\ntype mockHeaderAPI func(ctx context.Context, params *s3.HeadObjectInput, optFns ...func(*s3.Options)) (*s3.HeadObjectOutput, error)\n\nfunc (m mockHeaderAPI) HeadObject(\n\tctx context.Context,\n\tparams *s3.HeadObjectInput,\n\toptFns ...func(*s3.Options)) (*s3.HeadObjectOutput, error) {\n\treturn m(ctx, params, optFns...)\n}\n\nfunc TestGetHeadExpiryError(t *testing.T) {\n\tmeta := fileMetadata{\n\t\tclient:    s3.New(s3.Options{}),\n\t\tstageInfo: &execResponseStageInfo{Location: \"\"},\n\t\tmockHeader: mockHeaderAPI(func(ctx context.Context, params *s3.HeadObjectInput, optFns ...func(*s3.Options)) (*s3.HeadObjectOutput, error) {\n\t\t\treturn nil, &smithy.GenericAPIError{\n\t\t\t\tCode: expiredToken,\n\t\t\t}\n\t\t}),\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\tif header, err := (&snowflakeS3Client{cfg: &Config{}}).getFileHeader(context.Background(), &meta, \"file.txt\"); header != nil || err == nil {\n\t\tt.Fatalf(\"expected 
null header, got: %v\", header)\n\t}\n\tif meta.resStatus != renewToken {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\trenewToken, meta.resStatus)\n\t}\n}\n\nfunc TestGetHeaderUnexpectedError(t *testing.T) {\n\tmeta := fileMetadata{\n\t\tclient:    s3.New(s3.Options{}),\n\t\tstageInfo: &execResponseStageInfo{Location: \"\"},\n\t\tmockHeader: mockHeaderAPI(func(ctx context.Context, params *s3.HeadObjectInput, optFns ...func(*s3.Options)) (*s3.HeadObjectOutput, error) {\n\t\t\treturn nil, &smithy.GenericAPIError{\n\t\t\t\tCode: \"-1\",\n\t\t\t}\n\t\t}),\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\tif header, err := (&snowflakeS3Client{cfg: &Config{}}).getFileHeader(context.Background(), &meta, \"file.txt\"); header != nil || err == nil {\n\t\tt.Fatalf(\"expected null header, got: %v\", header)\n\t}\n\tif meta.resStatus != errStatus {\n\t\tt.Fatalf(\"expected %v result status, got: %v\", errStatus, meta.resStatus)\n\t}\n}\n\nfunc TestGetHeaderNonApiError(t *testing.T) {\n\tmeta := fileMetadata{\n\t\tclient:    s3.New(s3.Options{}),\n\t\tstageInfo: &execResponseStageInfo{Location: \"\"},\n\t\tmockHeader: mockHeaderAPI(func(ctx context.Context, params *s3.HeadObjectInput, optFns ...func(*s3.Options)) (*s3.HeadObjectOutput, error) {\n\t\t\treturn nil, errors.New(\"something went wrong here\")\n\t\t}),\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\theader, err := (&snowflakeS3Client{cfg: &Config{}}).getFileHeader(context.Background(), &meta, \"file.txt\")\n\tassertNilE(t, header, fmt.Sprintf(\"expected header to be nil, actual: %v\", header))\n\tassertNotNilE(t, err, \"expected err to not be nil\")\n\tassertEqualE(t, meta.resStatus, errStatus, fmt.Sprintf(\"expected %v result status for non-APIerror, got: %v\", errStatus, meta.resStatus))\n}\n\nfunc TestGetHeaderNotFoundError(t *testing.T) {\n\tmeta := 
fileMetadata{\n\t\tclient:    s3.New(s3.Options{}),\n\t\tstageInfo: &execResponseStageInfo{Location: \"\"},\n\t\tmockHeader: mockHeaderAPI(func(ctx context.Context, params *s3.HeadObjectInput, optFns ...func(*s3.Options)) (*s3.HeadObjectOutput, error) {\n\t\t\treturn nil, &smithy.GenericAPIError{\n\t\t\t\tCode: notFound,\n\t\t\t}\n\t\t}),\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\t_, err := (&snowflakeS3Client{cfg: &Config{}}).getFileHeader(context.Background(), &meta, \"file.txt\")\n\tif err != nil && err.Error() != \"could not find file\" {\n\t\tt.Error(err)\n\t}\n\n\tif meta.resStatus != notFoundFile {\n\t\tt.Fatalf(\"expected %v result status, got: %v\", notFoundFile, meta.resStatus)\n\t}\n}\n\ntype mockDownloadObjectAPI func(ctx context.Context, w io.WriterAt, params *s3.GetObjectInput, optFns ...func(*manager.Downloader)) (int64, error)\n\nfunc (m mockDownloadObjectAPI) Download(\n\tctx context.Context,\n\tw io.WriterAt,\n\tparams *s3.GetObjectInput,\n\toptFns ...func(*manager.Downloader)) (int64, error) {\n\treturn m(ctx, w, params, optFns...)\n}\n\nfunc TestDownloadFileWithS3TokenExpired(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"sfc-teststage/rwyitestacco/users/1234/\",\n\t\tLocationType: \"S3\",\n\t}\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\ts3Cli, err := new(snowflakeS3Client).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tdownloadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"S3\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            s3Cli,\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\toverwrite:         true,\n\t\tsrcFileName:       \"data1.txt.gz\",\n\t\tlocalLocation:     dir,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockDownloader: 
mockDownloadObjectAPI(func(ctx context.Context, w io.WriterAt, params *s3.GetObjectInput, optFns ...func(*manager.Downloader)) (int64, error) {\n\t\t\treturn 0, &smithy.GenericAPIError{\n\t\t\t\tCode: expiredToken,\n\t\t\t\tMessage: \"An error occurred (ExpiredToken) when calling the \" +\n\t\t\t\t\t\"operation: The provided token has expired.\",\n\t\t\t}\n\t\t}),\n\t\tmockHeader: mockHeaderAPI(func(ctx context.Context, params *s3.HeadObjectInput, optFns ...func(*s3.Options)) (*s3.HeadObjectOutput, error) {\n\t\t\treturn &s3.HeadObjectOutput{}, nil\n\t\t}),\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\terr = new(remoteStorageUtil).downloadOneFile(context.Background(), &downloadMeta)\n\tif err == nil {\n\t\tt.Error(\"should have raised an error\")\n\t}\n\tif downloadMeta.resStatus != renewToken {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\trenewToken, downloadMeta.resStatus)\n\t}\n}\n\nfunc TestDownloadFileWithS3ConnReset(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"sfc-teststage/rwyitestacco/users/1234/\",\n\t\tLocationType: \"S3\",\n\t}\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\ts3Cli, err := new(snowflakeS3Client).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tdownloadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"S3\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            s3Cli,\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\toverwrite:         true,\n\t\tsrcFileName:       \"data1.txt.gz\",\n\t\tlocalLocation:     dir,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockDownloader: mockDownloadObjectAPI(func(ctx context.Context, w io.WriterAt, params *s3.GetObjectInput, optFns ...func(*manager.Downloader)) (int64, error) {\n\t\t\treturn 0, 
&smithy.GenericAPIError{\n\t\t\t\tCode:    strconv.Itoa(-1),\n\t\t\t\tMessage: \"mock err, connection aborted\",\n\t\t\t}\n\t\t}),\n\t\tmockHeader: mockHeaderAPI(func(ctx context.Context, params *s3.HeadObjectInput, optFns ...func(*s3.Options)) (*s3.HeadObjectOutput, error) {\n\t\t\treturn &s3.HeadObjectOutput{}, nil\n\t\t}),\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\terr = new(remoteStorageUtil).downloadOneFile(context.Background(), &downloadMeta)\n\tif err == nil {\n\t\tt.Error(\"should have raised an error\")\n\t}\n\tif downloadMeta.lastMaxConcurrency != 0 {\n\t\tt.Fatalf(\"expected no concurrency. got: %v\",\n\t\t\tdownloadMeta.lastMaxConcurrency)\n\t}\n}\n\nfunc TestDownloadOneFileToS3WSAEConnAborted(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"sfc-teststage/rwyitestacco/users/1234/\",\n\t\tLocationType: \"S3\",\n\t}\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\ts3Cli, err := new(snowflakeS3Client).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tdownloadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"S3\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            s3Cli,\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\toverwrite:         true,\n\t\tsrcFileName:       \"data1.txt.gz\",\n\t\tlocalLocation:     dir,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockDownloader: mockDownloadObjectAPI(func(ctx context.Context, w io.WriterAt, params *s3.GetObjectInput, optFns ...func(*manager.Downloader)) (int64, error) {\n\t\t\treturn 0, &smithy.GenericAPIError{\n\t\t\t\tCode:    errNoWsaeconnaborted,\n\t\t\t\tMessage: \"mock err, connection aborted\",\n\t\t\t}\n\t\t}),\n\t\tmockHeader: mockHeaderAPI(func(ctx context.Context, params *s3.HeadObjectInput, optFns 
...func(*s3.Options)) (*s3.HeadObjectOutput, error) {\n\t\t\treturn &s3.HeadObjectOutput{}, nil\n\t\t}),\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\terr = new(remoteStorageUtil).downloadOneFile(context.Background(), &downloadMeta)\n\tif err == nil {\n\t\tt.Error(\"should have raised an error\")\n\t}\n\n\tif downloadMeta.resStatus != needRetryWithLowerConcurrency {\n\t\tt.Fatalf(\"expected %v result status, got: %v\",\n\t\t\tneedRetryWithLowerConcurrency, downloadMeta.resStatus)\n\t}\n}\n\nfunc TestDownloadOneFileToS3Failed(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"sfc-teststage/rwyitestacco/users/1234/\",\n\t\tLocationType: \"S3\",\n\t}\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\ts3Cli, err := new(snowflakeS3Client).createClient(&info, false, &snowflakeTelemetry{})\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\n\tdownloadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"S3\",\n\t\tnoSleepingTime:    true,\n\t\tclient:            s3Cli,\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\toverwrite:         true,\n\t\tsrcFileName:       \"data1.txt.gz\",\n\t\tlocalLocation:     dir,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockDownloader: mockDownloadObjectAPI(func(ctx context.Context, w io.WriterAt, params *s3.GetObjectInput, optFns ...func(*manager.Downloader)) (int64, error) {\n\t\t\treturn 0, errors.New(\"failed to download file\")\n\t\t}),\n\t\tmockHeader: mockHeaderAPI(func(ctx context.Context, params *s3.HeadObjectInput, optFns ...func(*s3.Options)) (*s3.HeadObjectOutput, error) {\n\t\t\treturn &s3.HeadObjectOutput{}, nil\n\t\t}),\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\terr = 
new(remoteStorageUtil).downloadOneFile(context.Background(), &downloadMeta)\n\tassertNotNilF(t, err, \"should have raised an error\")\n\n\tassertEqualF(t, downloadMeta.resStatus, needRetry, \"result status should match\")\n}\n\nfunc TestUploadFileToS3ClientCastFail(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"sfc-customer-stage/rwyi-testacco/users/9220/\",\n\t\tLocationType: \"S3\",\n\t}\n\tdir, err := os.Getwd()\n\tassertNilF(t, err)\n\n\tazureCli, err := new(snowflakeAzureClient).createClient(&info, false, &snowflakeTelemetry{})\n\tassertNilF(t, err)\n\tuploadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"S3\",\n\t\tnoSleepingTime:    false,\n\t\tclient:            azureCli,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\tsrcFileName:       path.Join(dir, \"/test_data/put_get_1.txt\"),\n\t\tencryptMeta:       testEncryptionMeta(),\n\t\toverwrite:         true,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tuploadMeta.realSrcFileName = uploadMeta.srcFileName\n\tfi, err := os.Stat(uploadMeta.srcFileName)\n\tassertNilF(t, err)\n\tuploadMeta.uploadSize = fi.Size()\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tassertNotNilF(t, err, \"should have failed\")\n}\n\nfunc TestGetHeaderClientCastFail(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"sfc-customer-stage/rwyi-testacco/users/9220/\",\n\t\tLocationType: \"S3\",\n\t}\n\tazureCli, err := new(snowflakeAzureClient).createClient(&info, false, &snowflakeTelemetry{})\n\tassertNilF(t, err)\n\n\tmeta := fileMetadata{\n\t\tclient:    azureCli,\n\t\tstageInfo: &execResponseStageInfo{Location: \"\"},\n\t\tmockHeader: mockHeaderAPI(func(ctx context.Context, params *s3.HeadObjectInput, optFns ...func(*s3.Options)) (*s3.HeadObjectOutput, error) {\n\t\t\treturn nil, &smithy.GenericAPIError{\n\t\t\t\tCode: notFound,\n\t\t\t}\n\t\t}),\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\t_, err = new(snowflakeS3Client).getFileHeader(context.Background(), &meta, \"file.txt\")\n\tassertNotNilF(t, err, \"should have failed\")\n}\n\nfunc TestS3UploadRetryWithHeaderNotFound(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"sfc-customer-stage/rwyi-testacco/users/9220/\",\n\t\tLocationType: \"S3\",\n\t}\n\tinitialParallel := int64(100)\n\tdir, err := os.Getwd()\n\tassertNilF(t, err)\n\n\ts3Cli, err := new(snowflakeS3Client).createClient(&info, false, &snowflakeTelemetry{})\n\tassertNilF(t, err)\n\tuploadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"S3\",\n\t\tnoSleepingTime:    true,\n\t\tparallel:          initialParallel,\n\t\tclient:            s3Cli,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\tsrcFileName:       path.Join(dir, \"/test_data/put_get_1.txt\"),\n\t\tencryptMeta:       testEncryptionMeta(),\n\t\toverwrite:         true,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockUploader: mockUploadObjectAPI(func(ctx context.Context, params *s3.PutObjectInput, optFns ...func(*manager.Uploader)) (*manager.UploadOutput, error) {\n\t\t\treturn &manager.UploadOutput{\n\t\t\t\tLocation: \"https://sfc-customer-stage/rwyi-testacco/users/9220/data1.txt.gz\",\n\t\t\t}, nil\n\t\t}),\n\t\tmockHeader: mockHeaderAPI(func(ctx context.Context, params *s3.HeadObjectInput, optFns ...func(*s3.Options)) (*s3.HeadObjectOutput, error) {\n\t\t\treturn nil, &smithy.GenericAPIError{\n\t\t\t\tCode: notFound,\n\t\t\t}\n\t\t}),\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: &snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tuploadMeta.realSrcFileName = uploadMeta.srcFileName\n\tfi, err := os.Stat(uploadMeta.srcFileName)\n\tassertNilF(t, err)\n\tuploadMeta.uploadSize = fi.Size()\n\n\terr = (&remoteStorageUtil{cfg: &Config{}}).uploadOneFileWithRetry(context.Background(), &uploadMeta)\n\tassertNilF(t, err)\n\n\tassertEqualF(t, uploadMeta.resStatus, errStatus, \"result status should match\")\n}\n\nfunc TestS3UploadStreamFailed(t *testing.T) {\n\tinfo := execResponseStageInfo{\n\t\tLocation:     \"sfc-customer-stage/rwyi-testacco/users/9220/\",\n\t\tLocationType: \"S3\",\n\t}\n\tinitialParallel := int64(100)\n\tsrc := []byte{65, 66, 67}\n\n\ts3Cli, err := new(snowflakeS3Client).createClient(&info, false, &snowflakeTelemetry{})\n\tassertNilF(t, err)\n\n\tuploadMeta := fileMetadata{\n\t\tname:              \"data1.txt.gz\",\n\t\tstageLocationType: \"S3\",\n\t\tnoSleepingTime:    true,\n\t\tparallel:          initialParallel,\n\t\tclient:            s3Cli,\n\t\tsha256Digest:      \"123456789abcdef\",\n\t\tstageInfo:         &info,\n\t\tdstFileName:       \"data1.txt.gz\",\n\t\tsrcStream:         bytes.NewBuffer(src),\n\t\tencryptMeta:       testEncryptionMeta(),\n\t\toverwrite:         true,\n\t\toptions: &SnowflakeFileTransferOptions{\n\t\t\tMultiPartThreshold: multiPartThreshold,\n\t\t},\n\t\tmockUploader: mockUploadObjectAPI(func(ctx context.Context, params *s3.PutObjectInput, optFns ...func(*manager.Uploader)) (*manager.UploadOutput, error) {\n\t\t\treturn nil, errors.New(\"unexpected error uploading file\")\n\t\t}),\n\t\tsfa: &snowflakeFileTransferAgent{\n\t\t\tsc: 
&snowflakeConn{\n\t\t\t\tcfg: &Config{},\n\t\t\t},\n\t\t},\n\t}\n\n\tuploadMeta.realSrcStream = uploadMeta.srcStream\n\n\terr = new(remoteStorageUtil).uploadOneFile(context.Background(), &uploadMeta)\n\tassertNotNilF(t, err, \"should have failed\")\n}\n\nfunc TestConvertContentLength(t *testing.T) {\n\tsomeInt := int64(1)\n\ttcs := []struct {\n\t\tcontentLength any\n\t\tdesc          string\n\t\texpected      int64\n\t}{\n\t\t{\n\t\t\tcontentLength: someInt,\n\t\t\tdesc:          \"int\",\n\t\t\texpected:      1,\n\t\t},\n\t\t{\n\t\t\tcontentLength: &someInt,\n\t\t\tdesc:          \"pointer\",\n\t\t\texpected:      1,\n\t\t},\n\t\t{\n\t\t\tcontentLength: float64(1),\n\t\t\tdesc:          \"another type\",\n\t\t\texpected:      0,\n\t\t},\n\t}\n\tfor _, tc := range tcs {\n\t\tt.Run(tc.desc, func(t *testing.T) {\n\t\t\tactual := convertContentLength(tc.contentLength)\n\t\t\tassertEqualF(t, actual, tc.expected, fmt.Sprintf(\"expected %v (%T) but got %v (%T)\", tc.expected, tc.expected, actual, actual))\n\t\t})\n\t}\n}\n\nfunc TestGetS3Endpoint(t *testing.T) {\n\ttestcases := []struct {\n\t\tdesc string\n\t\tin   execResponseStageInfo\n\t\tout  string\n\t}{\n\t\t{\n\t\t\tdesc: \"when UseRegionalURL is valid and the region does not start with cn-\",\n\t\t\tin: execResponseStageInfo{\n\t\t\t\tUseS3RegionalURL: false,\n\t\t\t\tUseRegionalURL:   true,\n\t\t\t\tEndPoint:         \"\",\n\t\t\t\tRegion:           \"WEST-1\",\n\t\t\t},\n\t\t\tout: \"https://s3.WEST-1.amazonaws.com\",\n\t\t},\n\t\t{\n\t\t\tdesc: \"when UseS3RegionalURL is valid and the region does not start with cn-\",\n\t\t\tin: execResponseStageInfo{\n\t\t\t\tUseS3RegionalURL: true,\n\t\t\t\tUseRegionalURL:   false,\n\t\t\t\tEndPoint:         \"\",\n\t\t\t\tRegion:           \"WEST-1\",\n\t\t\t},\n\t\t\tout: \"https://s3.WEST-1.amazonaws.com\",\n\t\t},\n\t\t{\n\t\t\tdesc: \"when endPoint is enabled and the region does not start with cn-\",\n\t\t\tin: 
execResponseStageInfo{\n\t\t\t\tUseS3RegionalURL: false,\n\t\t\t\tUseRegionalURL:   false,\n\t\t\t\tEndPoint:         \"s3.endpoint\",\n\t\t\t\tRegion:           \"mockLocation\",\n\t\t\t},\n\t\t\tout: \"https://s3.endpoint\",\n\t\t},\n\t\t{\n\t\t\tdesc: \"when endPoint is enabled and the region starts with cn-\",\n\t\t\tin: execResponseStageInfo{\n\t\t\t\tUseS3RegionalURL: false,\n\t\t\t\tUseRegionalURL:   false,\n\t\t\t\tEndPoint:         \"s3.endpoint\",\n\t\t\t\tRegion:           \"cn-mockLocation\",\n\t\t\t},\n\t\t\tout: \"https://s3.endpoint\",\n\t\t},\n\t\t{\n\t\t\tdesc: \"when useS3RegionalURL is valid and domain starts with cn\",\n\t\t\tin: execResponseStageInfo{\n\t\t\t\tUseS3RegionalURL: true,\n\t\t\t\tUseRegionalURL:   false,\n\t\t\t\tEndPoint:         \"\",\n\t\t\t\tRegion:           \"cn-mockLocation\",\n\t\t\t},\n\t\t\tout: \"https://s3.cn-mockLocation.amazonaws.com.cn\",\n\t\t},\n\t\t{\n\t\t\tdesc: \"when useRegionalURL is valid and domain starts with cn\",\n\t\t\tin: execResponseStageInfo{\n\t\t\t\tUseS3RegionalURL: false,\n\t\t\t\tUseRegionalURL:   true,\n\t\t\t\tEndPoint:         \"\",\n\t\t\t\tRegion:           \"cn-mockLocation\",\n\t\t\t},\n\t\t\tout: \"https://s3.cn-mockLocation.amazonaws.com.cn\",\n\t\t},\n\t\t{\n\t\t\tdesc: \"when endPoint is specified, both UseRegionalURL and UseS3RegionalURL are valid, and the region starts with cn\",\n\t\t\tin: execResponseStageInfo{\n\t\t\t\tUseS3RegionalURL: true,\n\t\t\t\tUseRegionalURL:   true,\n\t\t\t\tEndPoint:         \"s3.endpoint\",\n\t\t\t\tRegion:           \"cn-mockLocation\",\n\t\t\t},\n\t\t\tout: \"https://s3.endpoint\",\n\t\t},\n\t}\n\n\tfor _, test := 
range testcases {\n\t\tt.Run(test.desc, func(t *testing.T) {\n\t\t\tendpoint := getS3CustomEndpoint(&test.in)\n\t\t\tassertEqualE(t, *endpoint, test.out, fmt.Sprintf(\"in: %v\", test.in))\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "secret_detector.go",
    "content": "package gosnowflake\n\nimport loggerinternal \"github.com/snowflakedb/gosnowflake/v2/internal/logger\"\n\n// maskSecrets masks secrets in text (unexported for internal use within main package)\nfunc maskSecrets(text string) string {\n\treturn loggerinternal.MaskSecrets(text)\n}\n"
  },
  {
    "path": "secret_detector_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"fmt\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/golang-jwt/jwt/v5\"\n)\n\nconst (\n\tlongToken = \"_Y1ZNETTn5/qfUWj3Jedby7gipDzQs=UKyJH9DS=nFzzWnfZKGV+C7GopWC\" + // pragma: allowlist secret\n\t\t\"GD4LjOLLFZKOE26LXHDt3pTi4iI1qwKuSpf/FmClCMBSissVsU3Ei590FP0lPQQhcSG\" + // pragma: allowlist secret\n\t\t\"cDu69ZL_1X6e9h5z62t/iY7ZkII28n2qU=nrBJUgPRCIbtJQkVJXIuOHjX4G5yUEKjZ\" + // pragma: allowlist secret\n\t\t\"BAx4w6=_lqtt67bIA=o7D=oUSjfywsRFoloNIkBPXCwFTv+1RVUHgVA2g8A9Lw5XdJY\" + // pragma: allowlist secret\n\t\t\"uI8vhg=f0bKSq7AhQ2Bh\"\n\trandomPassword     = `Fh[+2J~AcqeqW%?`\n\tfalsePositiveToken = \"2020-04-30 23:06:04,069 - MainThread auth.py:397\" +\n\t\t\" - write_temporary_credential() - DEBUG - no ID token is given when \" +\n\t\t\"try to store temporary credential\"\n)\n\n// generateTestJWT creates a test JWT token for masking tests using the JWT library\nfunc generateTestJWT(t *testing.T) string {\n\t// Create claims for the test JWT\n\tclaims := jwt.MapClaims{\n\t\t\"sub\":  \"test123\",\n\t\t\"name\": \"Test User\",\n\t\t\"exp\":  time.Now().Add(time.Hour).Unix(),\n\t\t\"iat\":  time.Now().Unix(),\n\t}\n\n\t// Create the token with HS256 signing method\n\ttoken := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)\n\n\t// Sign the token with a test secret\n\ttestSecret := []byte(\"test-secret-for-masking-validation\")\n\ttokenString, err := token.SignedString(testSecret)\n\tif err != nil {\n\t\t// Fail the test if token signing fails\n\t\tt.Fatalf(\"Failed to generate test JWT: %s\", err)\n\t}\n\n\treturn tokenString\n}\n\nfunc TestSecretsDetector(t *testing.T) {\n\ttestCases := []struct {\n\t\tname     string\n\t\tinput    string\n\t\texpected string\n\t}{\n\t\t// Token masking tests\n\t\t{\"Token with equals\", fmt.Sprintf(\"Token =%s\", longToken), \"Token =****\"},\n\t\t{\"idToken with colon space\", fmt.Sprintf(\"idToken : %s\", longToken), \"idToken : ****\"},\n\t\t{\"sessionToken 
with colon space\", fmt.Sprintf(\"sessionToken : %s\", longToken), \"sessionToken : ****\"},\n\t\t{\"masterToken with colon space\", fmt.Sprintf(\"masterToken : %s\", longToken), \"masterToken : ****\"},\n\t\t{\"accessToken with colon space\", fmt.Sprintf(\"accessToken : %s\", longToken), \"accessToken : ****\"},\n\t\t{\"refreshToken with colon space\", fmt.Sprintf(\"refreshToken : %s\", longToken), \"refreshToken : ****\"},\n\t\t{\"programmaticAccessToken with colon space\", fmt.Sprintf(\"programmaticAccessToken : %s\", longToken), \"programmaticAccessToken : ****\"},\n\t\t{\"programmatic_access_token with colon space\", fmt.Sprintf(\"programmatic_access_token : %s\", longToken), \"programmatic_access_token : ****\"},\n\t\t{\"JWT - with Bearer prefix\", fmt.Sprintf(\"Bearer %s\", generateTestJWT(t)), \"Bearer ****\"},\n\t\t{\"JWT - with JWT prefix\", fmt.Sprintf(\"JWT %s\", generateTestJWT(t)), \"JWT ****\"},\n\n\t\t// Password masking tests\n\t\t{\"password with colon\", fmt.Sprintf(\"password:%s\", randomPassword), \"password:****\"},\n\t\t{\"PASSWORD uppercase with colon\", fmt.Sprintf(\"PASSWORD:%s\", randomPassword), \"PASSWORD:****\"},\n\t\t{\"PaSsWoRd mixed case with colon\", fmt.Sprintf(\"PaSsWoRd:%s\", randomPassword), \"PaSsWoRd:****\"},\n\t\t{\"password with equals and spaces\", fmt.Sprintf(\"password = %s\", randomPassword), \"password = ****\"},\n\t\t{\"pwd with colon\", fmt.Sprintf(\"pwd:%s\", randomPassword), \"pwd:****\"},\n\n\t\t// Mixed token and password tests\n\t\t{\n\t\t\t\"token and password mixed\",\n\t\t\tfmt.Sprintf(\"token=%s foo bar baz password:%s\", longToken, randomPassword),\n\t\t\t\"token=**** foo bar baz password:****\",\n\t\t},\n\t\t{\n\t\t\t\"PWD and TOKEN mixed\",\n\t\t\tfmt.Sprintf(\"PWD = %s blah blah blah TOKEN:%s\", randomPassword, longToken),\n\t\t\t\"PWD = **** blah blah blah TOKEN:****\",\n\t\t},\n\n\t\t// Client secret tests\n\t\t{\"clientSecret with values\", \"clientSecret abc oauthClientSECRET=def\", \"clientSecret 
**** oauthClientSECRET=****\"},\n\n\t\t// False positive test\n\t\t{\"false positive should not be masked\", falsePositiveToken, falsePositiveToken},\n\t}\n\n\tfor _, tc := range testCases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tresult := maskSecrets(tc.input)\n\t\t\tassertEqualE(t, result, tc.expected)\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "secure_storage_manager.go",
    "content": "package gosnowflake\n\nimport (\n\t\"crypto/sha256\"\n\t\"encoding/hex\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"os/user\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n)\n\ntype tokenType string\n\nconst (\n\tidToken           tokenType = \"ID_TOKEN\"\n\tmfaToken          tokenType = \"MFA_TOKEN\"\n\toauthAccessToken  tokenType = \"OAUTH_ACCESS_TOKEN\"\n\toauthRefreshToken tokenType = \"OAUTH_REFRESH_TOKEN\"\n)\n\nconst (\n\tcredCacheDirEnv   = \"SF_TEMPORARY_CREDENTIAL_CACHE_DIR\"\n\tcredCacheFileName = \"credential_cache_v1.json\"\n)\n\ntype cacheDirConf struct {\n\tenvVar       string\n\tpathSegments []string\n}\n\nvar defaultLinuxCacheDirConf = []cacheDirConf{\n\t{envVar: credCacheDirEnv, pathSegments: []string{}},\n\t{envVar: \"XDG_CACHE_DIR\", pathSegments: []string{\"snowflake\"}},\n\t{envVar: \"HOME\", pathSegments: []string{\".cache\", \"snowflake\"}},\n}\n\ntype secureTokenSpec struct {\n\thost, user string\n\ttokenType  tokenType\n}\n\nfunc (t *secureTokenSpec) buildKey() (string, error) {\n\treturn buildCredentialsKey(t.host, t.user, t.tokenType)\n}\n\nfunc newMfaTokenSpec(host, user string) *secureTokenSpec {\n\treturn &secureTokenSpec{\n\t\thost,\n\t\tuser,\n\t\tmfaToken,\n\t}\n}\n\nfunc newIDTokenSpec(host, user string) *secureTokenSpec {\n\treturn &secureTokenSpec{\n\t\thost,\n\t\tuser,\n\t\tidToken,\n\t}\n}\n\nfunc newOAuthAccessTokenSpec(host, user string) *secureTokenSpec {\n\treturn &secureTokenSpec{\n\t\thost,\n\t\tuser,\n\t\toauthAccessToken,\n\t}\n}\n\nfunc newOAuthRefreshTokenSpec(host, user string) *secureTokenSpec {\n\treturn &secureTokenSpec{\n\t\thost,\n\t\tuser,\n\t\toauthRefreshToken,\n\t}\n}\n\ntype secureStorageManager interface {\n\tsetCredential(tokenSpec *secureTokenSpec, value string)\n\tgetCredential(tokenSpec *secureTokenSpec) string\n\tdeleteCredential(tokenSpec *secureTokenSpec)\n}\n\nvar credentialsStorage = newSecureStorageManager()\n\nfunc 
newSecureStorageManager() secureStorageManager {\n\treturn defaultOsSpecificSecureStorageManager()\n}\n\ntype fileBasedSecureStorageManager struct {\n\tcredDirPath string\n}\n\nfunc newFileBasedSecureStorageManager() (*fileBasedSecureStorageManager, error) {\n\tcredDirPath, err := buildCredCacheDirPath(defaultLinuxCacheDirConf)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tssm := &fileBasedSecureStorageManager{\n\t\tcredDirPath: credDirPath,\n\t}\n\treturn ssm, nil\n}\n\nfunc lookupCacheDir(envVar string, pathSegments ...string) (string, error) {\n\tenvVal := os.Getenv(envVar)\n\tif envVal == \"\" {\n\t\treturn \"\", fmt.Errorf(\"environment variable %s not set\", envVar)\n\t}\n\n\tfileInfo, err := os.Stat(envVal)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"failed to stat %s=%s, due to %v\", envVar, envVal, err)\n\t}\n\n\tif !fileInfo.IsDir() {\n\t\treturn \"\", fmt.Errorf(\"environment variable %s=%s is not a directory\", envVar, envVal)\n\t}\n\n\tcacheDir := filepath.Join(envVal, filepath.Join(pathSegments...))\n\tparentOfCacheDir := cacheDir[:strings.LastIndex(cacheDir, \"/\")]\n\n\tif err = os.MkdirAll(parentOfCacheDir, os.FileMode(0755)); err != nil {\n\t\treturn \"\", err\n\t}\n\n\t// We don't check if permissions are incorrect here if a directory exists, because we check it later.\n\tif err = os.Mkdir(cacheDir, os.FileMode(0700)); err != nil && !errors.Is(err, os.ErrExist) {\n\t\treturn \"\", err\n\t}\n\n\treturn cacheDir, nil\n}\n\nfunc buildCredCacheDirPath(confs []cacheDirConf) (string, error) {\n\tfor _, conf := range confs {\n\t\tpath, err := lookupCacheDir(conf.envVar, conf.pathSegments...)\n\t\tif err != nil {\n\t\t\tlogger.Debugf(\"Skipping %s in cache directory lookup due to %v\", conf.envVar, err)\n\t\t} else {\n\t\t\tlogger.Debugf(\"Using %s as cache directory\", path)\n\t\t\treturn path, nil\n\t\t}\n\t}\n\n\treturn \"\", errors.New(\"no credentials cache directory found\")\n}\n\nfunc (ssm *fileBasedSecureStorageManager) getTokens(data 
map[string]any) map[string]any {\n\tval, ok := data[\"tokens\"]\n\tif !ok {\n\t\treturn map[string]any{}\n\t}\n\n\ttokens, ok := val.(map[string]any)\n\tif !ok {\n\t\treturn map[string]any{}\n\t}\n\n\treturn tokens\n}\n\nfunc (ssm *fileBasedSecureStorageManager) withLock(action func(cacheFile *os.File)) {\n\terr := ssm.lockFile()\n\tif err != nil {\n\t\tlogger.Warnf(\"Unable to lock cache. %v\", err)\n\t\treturn\n\t}\n\tdefer ssm.unlockFile()\n\n\tssm.withCacheFile(action)\n}\n\nfunc (ssm *fileBasedSecureStorageManager) withCacheFile(action func(*os.File)) {\n\tcacheFile, err := os.OpenFile(ssm.credFilePath(), os.O_CREATE|os.O_RDWR, 0600)\n\tif err != nil {\n\t\tlogger.Warnf(\"cannot access %v. %v\", ssm.credFilePath(), err)\n\t\treturn\n\t}\n\tdefer func(file *os.File) {\n\t\tif err := file.Close(); err != nil {\n\t\t\tlogger.Warnf(\"cannot release file descriptor for %v. %v\", ssm.credFilePath(), err)\n\t\t}\n\t}(cacheFile)\n\n\tcacheDir, err := os.Open(ssm.credDirPath)\n\tif err != nil {\n\t\tlogger.Warnf(\"cannot access %v. %v\", ssm.credDirPath, err)\n\t\treturn\n\t}\n\tdefer func(file *os.File) {\n\t\tif err := file.Close(); err != nil {\n\t\t\tlogger.Warnf(\"cannot release file descriptor for %v. %v\", ssm.credDirPath, err)\n\t\t}\n\t}(cacheDir)\n\n\tif err := ensureFileOwner(cacheFile); err != nil {\n\t\tlogger.Warnf(\"failed to ensure owner for temporary cache file. %v\", err)\n\t\treturn\n\t}\n\tif err := ensureFilePermissions(cacheFile, 0600); err != nil {\n\t\tlogger.Warnf(\"failed to ensure permission for temporary cache file. %v\", err)\n\t\treturn\n\t}\n\tif err := ensureFileOwner(cacheDir); err != nil {\n\t\tlogger.Warnf(\"failed to ensure owner for temporary cache dir. %v\", err)\n\t\treturn\n\t}\n\tif err := ensureFilePermissions(cacheDir, 0700|os.ModeDir); err != nil {\n\t\tlogger.Warnf(\"failed to ensure permission for temporary cache dir. 
%v\", err)\n\t\treturn\n\t}\n\n\taction(cacheFile)\n}\n\nfunc (ssm *fileBasedSecureStorageManager) setCredential(tokenSpec *secureTokenSpec, value string) {\n\tif value == \"\" {\n\t\tlogger.Debug(\"no token provided\")\n\t\treturn\n\t}\n\tcredentialsKey, err := tokenSpec.buildKey()\n\tif err != nil {\n\t\tlogger.Warnf(\"cannot build token spec: %v\", err)\n\t\treturn\n\t}\n\n\tssm.withLock(func(cacheFile *os.File) {\n\t\tcredCache, err := ssm.readTemporaryCacheFile(cacheFile)\n\t\tif err != nil {\n\t\t\tlogger.Warnf(\"Error while reading cache file. %v\", err)\n\t\t\treturn\n\t\t}\n\t\ttokens := ssm.getTokens(credCache)\n\t\ttokens[credentialsKey] = value\n\t\tcredCache[\"tokens\"] = tokens\n\t\terr = ssm.writeTemporaryCacheFile(credCache, cacheFile)\n\t\tif err != nil {\n\t\t\tlogger.Warnf(\"Set credential failed. Unable to write cache. %v\", err)\n\t\t} else {\n\t\t\tlogger.Debugf(\"Set credential succeeded. Authentication type: %v, User: %v,  file location: %v\", tokenSpec.tokenType, tokenSpec.user, ssm.credFilePath())\n\t\t}\n\t})\n}\n\nfunc (ssm *fileBasedSecureStorageManager) lockPath() string {\n\treturn filepath.Join(ssm.credDirPath, credCacheFileName+\".lck\")\n}\n\nfunc (ssm *fileBasedSecureStorageManager) lockFile() error {\n\tconst numRetries = 10\n\tconst retryInterval = 100 * time.Millisecond\n\tlockPath := ssm.lockPath()\n\n\tlockFile, err := os.Open(lockPath)\n\tif err != nil && !errors.Is(err, os.ErrNotExist) {\n\t\treturn fmt.Errorf(\"failed to open %v. err: %v\", lockPath, err)\n\t}\n\tdefer func() {\n\t\tif lockFile != nil {\n\t\t\terr = lockFile.Close()\n\t\t\tif err != nil {\n\t\t\t\tlogger.Debugf(\"error while closing lock file. %v\", err)\n\t\t\t}\n\t\t}\n\t}()\n\n\tif err == nil { // file exists\n\t\tfileInfo, err := lockFile.Stat()\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"failed to stat %v and determine if lock is stale. 
err: %v\", lockPath, err)\n\t\t}\n\n\t\townerUID, err := provideFileOwner(lockFile)\n\t\tif err != nil && !errors.Is(err, os.ErrNotExist) {\n\t\t\treturn err\n\t\t}\n\t\tcurrentUser, err := user.Current()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif strconv.Itoa(int(ownerUID)) != currentUser.Uid {\n\t\t\treturn errors.New(\"incorrect owner of \" + lockFile.Name())\n\t\t}\n\n\t\t// removing stale lock\n\t\tnow := time.Now()\n\t\tif fileInfo.ModTime().Add(time.Second).UnixNano() < now.UnixNano() {\n\t\t\tlogger.Debugf(\"removing credentials cache lock file, stale for %vms\", (now.UnixNano()-fileInfo.ModTime().UnixNano())/1000/1000)\n\t\t\terr = os.Remove(lockPath)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"failed to remove %v while trying to remove stale lock. err: %v\", lockPath, err)\n\t\t\t}\n\t\t}\n\t}\n\n\tlocked := false\n\tfor range numRetries {\n\t\terr := os.Mkdir(lockPath, 0700)\n\t\tif err != nil {\n\t\t\tif errors.Is(err, os.ErrExist) {\n\t\t\t\ttime.Sleep(retryInterval)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\treturn fmt.Errorf(\"failed to create cache lock: %v, err: %v\", lockPath, err)\n\t\t}\n\t\tlocked = true\n\t\tbreak\n\t}\n\tif !locked {\n\t\treturn fmt.Errorf(\"failed to lock cache. lockPath: %v\", lockPath)\n\t}\n\treturn nil\n}\n\nfunc (ssm *fileBasedSecureStorageManager) unlockFile() {\n\tlockPath := ssm.lockPath()\n\terr := os.Remove(lockPath)\n\tif err != nil {\n\t\tlogger.Warnf(\"Failed to unlock cache lock: %v. %v\", lockPath, err)\n\t}\n}\n\nfunc (ssm *fileBasedSecureStorageManager) getCredential(tokenSpec *secureTokenSpec) string {\n\tcredentialsKey, err := tokenSpec.buildKey()\n\tif err != nil {\n\t\tlogger.Warnf(\"cannot build token spec: %v\", err)\n\t\treturn \"\"\n\t}\n\n\tret := \"\"\n\tssm.withLock(func(cacheFile *os.File) {\n\t\tcredCache, err := ssm.readTemporaryCacheFile(cacheFile)\n\t\tif err != nil {\n\t\t\tlogger.Warnf(\"Error while reading cache file. 
%v\", err)\n\t\t\treturn\n\t\t}\n\t\tcred, ok := ssm.getTokens(credCache)[credentialsKey]\n\t\tif !ok {\n\t\t\treturn\n\t\t}\n\n\t\tcredStr, ok := cred.(string)\n\t\tif !ok {\n\t\t\treturn\n\t\t}\n\n\t\tret = credStr\n\t})\n\treturn ret\n}\n\nfunc (ssm *fileBasedSecureStorageManager) credFilePath() string {\n\treturn filepath.Join(ssm.credDirPath, credCacheFileName)\n}\n\nfunc ensureFileOwner(f *os.File) error {\n\townerUID, err := provideFileOwner(f)\n\tif err != nil {\n\t\tif errors.Is(err, os.ErrNotExist) {\n\t\t\treturn nil\n\t\t}\n\t\treturn err\n\t}\n\tcurrentUser, err := user.Current()\n\tif err != nil {\n\t\treturn err\n\t}\n\tif strconv.Itoa(int(ownerUID)) != currentUser.Uid {\n\t\treturn errors.New(\"incorrect owner of \" + f.Name())\n\t}\n\treturn nil\n}\n\nfunc ensureFilePermissions(f *os.File, expectedMode os.FileMode) error {\n\tfileInfo, err := f.Stat()\n\tif err != nil {\n\t\treturn err\n\t}\n\tif fileInfo.Mode().Perm() != expectedMode&os.ModePerm {\n\t\treturn fmt.Errorf(\"incorrect permissions(%v, expected %v) for credential file\", fileInfo.Mode(), expectedMode)\n\t}\n\treturn nil\n}\n\nfunc (ssm *fileBasedSecureStorageManager) readTemporaryCacheFile(cacheFile *os.File) (map[string]any, error) {\n\tjsonData, err := io.ReadAll(cacheFile)\n\tif err != nil {\n\t\tlogger.Warnf(\"Failed to read credential cache file. %v.\\n\", err)\n\t\treturn map[string]any{}, nil\n\t}\n\tif _, err = cacheFile.Seek(0, 0); err != nil {\n\t\treturn map[string]any{}, fmt.Errorf(\"cannot seek to the beginning of a cache file. %v\", err)\n\t}\n\n\tif len(jsonData) == 0 {\n\t\t// Happens when the file didn't exist before.\n\t\treturn map[string]any{}, nil\n\t}\n\n\tcredentialsMap := map[string]any{}\n\terr = json.Unmarshal(jsonData, &credentialsMap)\n\tif err != nil {\n\t\treturn map[string]any{}, fmt.Errorf(\"failed to unmarshal credential cache file. 
%v\", err)\n\t}\n\n\treturn credentialsMap, nil\n}\n\nfunc (ssm *fileBasedSecureStorageManager) deleteCredential(tokenSpec *secureTokenSpec) {\n\tcredentialsKey, err := tokenSpec.buildKey()\n\tif err != nil {\n\t\tlogger.Warnf(\"cannot build token spec: %v\", err)\n\t\treturn\n\t}\n\n\tssm.withLock(func(cacheFile *os.File) {\n\t\tcredCache, err := ssm.readTemporaryCacheFile(cacheFile)\n\t\tif err != nil {\n\t\t\tlogger.Warnf(\"Error while reading cache file. %v\", err)\n\t\t\treturn\n\t\t}\n\t\tdelete(ssm.getTokens(credCache), credentialsKey)\n\n\t\terr = ssm.writeTemporaryCacheFile(credCache, cacheFile)\n\t\tif err != nil {\n\t\t\tlogger.Warnf(\"Delete credential failed. Unable to write cache. %v\", err)\n\t\t} else {\n\t\t\tlogger.Debugf(\"Delete credential succeeded. Authentication type: %v, User: %v,  file location: %v\", tokenSpec.tokenType, tokenSpec.user, ssm.credFilePath())\n\t\t}\n\t})\n}\n\nfunc (ssm *fileBasedSecureStorageManager) writeTemporaryCacheFile(cache map[string]any, cacheFile *os.File) error {\n\tbytes, err := json.Marshal(cache)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"failed to marshal credential cache map. %w\", err)\n\t}\n\n\tif err = cacheFile.Truncate(0); err != nil {\n\t\treturn fmt.Errorf(\"error while truncating credentials cache. 
%v\", err)\n\t}\n\t_, err = cacheFile.Write(bytes)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"failed to write the credential cache file: %w\", err)\n\t}\n\treturn nil\n}\n\nfunc buildCredentialsKey(host, user string, credType tokenType) (string, error) {\n\tif host == \"\" {\n\t\treturn \"\", errors.New(\"host is not provided to store in token cache, skipping\")\n\t}\n\tif user == \"\" {\n\t\treturn \"\", errors.New(\"user is not provided to store in token cache, skipping\")\n\t}\n\tplainCredKey := host + \":\" + user + \":\" + string(credType)\n\tchecksum := sha256.New()\n\tchecksum.Write([]byte(plainCredKey))\n\treturn hex.EncodeToString(checksum.Sum(nil)), nil\n}\n\ntype noopSecureStorageManager struct {\n}\n\nfunc newNoopSecureStorageManager() *noopSecureStorageManager {\n\treturn &noopSecureStorageManager{}\n}\n\nfunc (ssm *noopSecureStorageManager) setCredential(_ *secureTokenSpec, _ string) {\n}\n\nfunc (ssm *noopSecureStorageManager) getCredential(_ *secureTokenSpec) string {\n\treturn \"\"\n}\n\nfunc (ssm *noopSecureStorageManager) deleteCredential(_ *secureTokenSpec) {\n}\n\ntype threadSafeSecureStorageManager struct {\n\tmu       *sync.Mutex\n\tdelegate secureStorageManager\n}\n\nfunc (ssm *threadSafeSecureStorageManager) setCredential(tokenSpec *secureTokenSpec, value string) {\n\tssm.mu.Lock()\n\tdefer ssm.mu.Unlock()\n\tssm.delegate.setCredential(tokenSpec, value)\n}\n\nfunc (ssm *threadSafeSecureStorageManager) getCredential(tokenSpec *secureTokenSpec) string {\n\tssm.mu.Lock()\n\tdefer ssm.mu.Unlock()\n\treturn ssm.delegate.getCredential(tokenSpec)\n}\n\nfunc (ssm *threadSafeSecureStorageManager) deleteCredential(tokenSpec *secureTokenSpec) {\n\tssm.mu.Lock()\n\tdefer ssm.mu.Unlock()\n\tssm.delegate.deleteCredential(tokenSpec)\n}\n"
  },
  {
    "path": "secure_storage_manager_linux.go",
    "content": "//go:build linux\n\npackage gosnowflake\n\nimport (\n\t\"runtime\"\n\t\"sync\"\n)\n\nfunc defaultOsSpecificSecureStorageManager() secureStorageManager {\n\tlogger.Debugf(\"OS is %v, using file based secure storage manager.\", runtime.GOOS)\n\tssm, err := newFileBasedSecureStorageManager()\n\tif err != nil {\n\t\tlogger.Debugf(\"failed to create credentials cache dir: %v. Not storing credentials locally.\", err)\n\t\treturn newNoopSecureStorageManager()\n\t}\n\treturn &threadSafeSecureStorageManager{&sync.Mutex{}, ssm}\n}\n"
  },
  {
    "path": "secure_storage_manager_notlinux.go",
    "content": "//go:build !linux\n\npackage gosnowflake\n\nimport (\n\t\"github.com/99designs/keyring\"\n\t\"runtime\"\n\t\"strings\"\n\t\"sync\"\n)\n\nfunc defaultOsSpecificSecureStorageManager() secureStorageManager {\n\tswitch runtime.GOOS {\n\tcase \"darwin\", \"windows\":\n\t\tlogger.Debugf(\"OS is %v, using keyring based secure storage manager.\", runtime.GOOS)\n\t\treturn &threadSafeSecureStorageManager{&sync.Mutex{}, newKeyringBasedSecureStorageManager()}\n\tdefault:\n\t\tlogger.Debugf(\"OS %v does not support credentials cache\", runtime.GOOS)\n\t\treturn newNoopSecureStorageManager()\n\t}\n}\n\ntype keyringSecureStorageManager struct {\n}\n\nfunc newKeyringBasedSecureStorageManager() *keyringSecureStorageManager {\n\treturn &keyringSecureStorageManager{}\n}\n\nfunc (ssm *keyringSecureStorageManager) setCredential(tokenSpec *secureTokenSpec, value string) {\n\tif value == \"\" {\n\t\tlogger.Debug(\"no token provided\")\n\t} else {\n\t\tcredentialsKey, err := tokenSpec.buildKey()\n\t\tif err != nil {\n\t\t\tlogger.Warnf(\"cannot build token spec: %v\", err)\n\t\t\treturn\n\t\t}\n\t\tswitch runtime.GOOS {\n\t\tcase \"windows\":\n\t\t\tring, _ := keyring.Open(keyring.Config{\n\t\t\t\tWinCredPrefix: strings.ToUpper(tokenSpec.host),\n\t\t\t\tServiceName:   strings.ToUpper(tokenSpec.user),\n\t\t\t})\n\t\t\titem := keyring.Item{\n\t\t\t\tKey:  credentialsKey,\n\t\t\t\tData: []byte(value),\n\t\t\t}\n\t\t\tif err := ring.Set(item); err != nil {\n\t\t\t\tlogger.Debugf(\"Failed to write to Windows credential manager. Err: %v\", err)\n\t\t\t}\n\t\tcase \"darwin\":\n\t\t\tring, _ := keyring.Open(keyring.Config{\n\t\t\t\tServiceName: credentialsKey,\n\t\t\t})\n\t\t\taccount := strings.ToUpper(tokenSpec.user)\n\t\t\titem := keyring.Item{\n\t\t\t\tKey:  account,\n\t\t\t\tData: []byte(value),\n\t\t\t}\n\t\t\tif err := ring.Set(item); err != nil {\n\t\t\t\tlogger.Debugf(\"Failed to write to keychain. 
Err: %v\", err)\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc (ssm *keyringSecureStorageManager) getCredential(tokenSpec *secureTokenSpec) string {\n\tcred := \"\"\n\tcredentialsKey, err := tokenSpec.buildKey()\n\tif err != nil {\n\t\tlogger.Warnf(\"cannot build token spec: %v\", err)\n\t\treturn \"\"\n\t}\n\tswitch runtime.GOOS {\n\tcase \"windows\":\n\t\tring, _ := keyring.Open(keyring.Config{\n\t\t\tWinCredPrefix: strings.ToUpper(tokenSpec.host),\n\t\t\tServiceName:   strings.ToUpper(tokenSpec.user),\n\t\t})\n\t\ti, err := ring.Get(credentialsKey)\n\t\tif err != nil {\n\t\t\tlogger.Debugf(\"Failed to read credentialsKey or could not find it in Windows Credential Manager. Error: %v\", err)\n\t\t} else {\n\t\t\tcred = string(i.Data)\n\t\t}\n\tcase \"darwin\":\n\t\tring, _ := keyring.Open(keyring.Config{\n\t\t\tServiceName: credentialsKey,\n\t\t})\n\t\taccount := strings.ToUpper(tokenSpec.user)\n\t\ti, err := ring.Get(account)\n\t\tif err != nil {\n\t\t\tlogger.Debugf(\"Failed to find the item in keychain or item does not exist. Error: %v\", err)\n\t\t} else {\n\t\t\tcred = string(i.Data)\n\t\t}\n\t\tif cred == \"\" {\n\t\t\tlogger.Debug(\"Returned credential is empty\")\n\t\t} else {\n\t\t\tlogger.Debug(\"Successfully read token. Returning as string\")\n\t\t}\n\t}\n\treturn cred\n}\n\nfunc (ssm *keyringSecureStorageManager) deleteCredential(tokenSpec *secureTokenSpec) {\n\tcredentialsKey, err := tokenSpec.buildKey()\n\tif err != nil {\n\t\tlogger.Warnf(\"cannot build token spec: %v\", err)\n\t\treturn\n\t}\n\tswitch runtime.GOOS {\n\tcase \"windows\":\n\t\tring, _ := keyring.Open(keyring.Config{\n\t\t\tWinCredPrefix: strings.ToUpper(tokenSpec.host),\n\t\t\tServiceName:   strings.ToUpper(tokenSpec.user),\n\t\t})\n\t\terr := ring.Remove(credentialsKey)\n\t\tif err != nil {\n\t\t\tlogger.Debugf(\"Failed to delete credentialsKey in Windows Credential Manager. 
Error: %v\", err)\n\t\t}\n\tcase \"darwin\":\n\t\tring, _ := keyring.Open(keyring.Config{\n\t\t\tServiceName: credentialsKey,\n\t\t})\n\t\taccount := strings.ToUpper(tokenSpec.user)\n\t\terr := ring.Remove(account)\n\t\tif err != nil {\n\t\t\tlogger.Debugf(\"Failed to delete credentialsKey in keychain. Error: %v\", err)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "secure_storage_manager_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"encoding/json\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc TestBuildCredCacheDirPath(t *testing.T) {\n\tskipOnWindows(t, \"permission model is different\")\n\ttestRoot1, err := os.MkdirTemp(\"\", \"\")\n\tassertNilF(t, err)\n\tdefer os.RemoveAll(testRoot1)\n\ttestRoot2, err := os.MkdirTemp(\"\", \"\")\n\tassertNilF(t, err)\n\tdefer os.RemoveAll(testRoot2)\n\n\tenv1 := overrideEnv(\"CACHE_DIR_TEST_NOT_EXISTING\", \"/tmp/not_existing_dir\")\n\tdefer env1.rollback()\n\tenv2 := overrideEnv(\"CACHE_DIR_TEST_1\", testRoot1)\n\tdefer env2.rollback()\n\tenv3 := overrideEnv(\"CACHE_DIR_TEST_2\", testRoot2)\n\tdefer env3.rollback()\n\n\tt.Run(\"cannot find any dir\", func(t *testing.T) {\n\t\t_, err := buildCredCacheDirPath([]cacheDirConf{\n\t\t\t{envVar: \"CACHE_DIR_TEST_NOT_EXISTING\"},\n\t\t})\n\t\tassertEqualE(t, err.Error(), \"no credentials cache directory found\")\n\t\t_, err = os.Stat(\"/tmp/not_existing_dir\")\n\t\tassertStringContainsE(t, err.Error(), \"no such file or directory\")\n\t})\n\n\tt.Run(\"should use first dir that exists\", func(t *testing.T) {\n\t\tpath, err := buildCredCacheDirPath([]cacheDirConf{\n\t\t\t{envVar: \"CACHE_DIR_TEST_NOT_EXISTING\"},\n\t\t\t{envVar: \"CACHE_DIR_TEST_1\"},\n\t\t})\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, path, testRoot1)\n\t\tstat, err := os.Stat(testRoot1)\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, stat.Mode(), 0700|os.ModeDir)\n\t})\n\n\tt.Run(\"should use first dir that exists and append segments\", func(t *testing.T) {\n\t\tpath, err := buildCredCacheDirPath([]cacheDirConf{\n\t\t\t{envVar: \"CACHE_DIR_TEST_NOT_EXISTING\"},\n\t\t\t{envVar: \"CACHE_DIR_TEST_2\", pathSegments: []string{\"sub1\", \"sub2\"}},\n\t\t})\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, path, filepath.Join(testRoot2, \"sub1\", \"sub2\"))\n\t\tstat, err := os.Stat(testRoot2)\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, stat.Mode(), 0700|os.ModeDir)\n\t})\n}\n\nfunc 
TestSnowflakeFileBasedSecureStorageManager(t *testing.T) {\n\tskipOnWindows(t, \"file system permission is different\")\n\tcredCacheDir, err := os.MkdirTemp(\"\", \"\")\n\tassertNilF(t, err)\n\tassertNilF(t, os.MkdirAll(credCacheDir, os.ModePerm))\n\tcredCacheDirEnvOverride := overrideEnv(credCacheDirEnv, credCacheDir)\n\tdefer credCacheDirEnvOverride.rollback()\n\tssm, err := newFileBasedSecureStorageManager()\n\tassertNilF(t, err)\n\n\tt.Run(\"store single token\", func(t *testing.T) {\n\t\ttokenSpec := newMfaTokenSpec(\"host.com\", \"johndoe\")\n\t\tcred := \"token123\"\n\t\tssm.setCredential(tokenSpec, cred)\n\t\tassertEqualE(t, ssm.getCredential(tokenSpec), cred)\n\t\tssm.deleteCredential(tokenSpec)\n\t\tassertEqualE(t, ssm.getCredential(tokenSpec), \"\")\n\t})\n\n\tt.Run(\"store tokens of different types, hosts and users\", func(t *testing.T) {\n\t\tmfaTokenSpec := newMfaTokenSpec(\"host.com\", \"johndoe\")\n\t\tmfaCred := \"token12\"\n\t\tidTokenSpec := newIDTokenSpec(\"host.com\", \"johndoe\")\n\t\tidCred := \"token34\"\n\t\tidTokenSpec2 := newIDTokenSpec(\"host.org\", \"johndoe\")\n\t\tidCred2 := \"token56\"\n\t\tidTokenSpec3 := newIDTokenSpec(\"host.com\", \"someoneelse\")\n\t\tidCred3 := \"token78\"\n\t\tssm.setCredential(mfaTokenSpec, mfaCred)\n\t\tssm.setCredential(idTokenSpec, idCred)\n\t\tssm.setCredential(idTokenSpec2, idCred2)\n\t\tssm.setCredential(idTokenSpec3, idCred3)\n\t\tassertEqualE(t, ssm.getCredential(mfaTokenSpec), mfaCred)\n\t\tassertEqualE(t, ssm.getCredential(idTokenSpec), idCred)\n\t\tassertEqualE(t, ssm.getCredential(idTokenSpec2), idCred2)\n\t\tassertEqualE(t, ssm.getCredential(idTokenSpec3), idCred3)\n\t\tssm.deleteCredential(mfaTokenSpec)\n\t\tassertEqualE(t, ssm.getCredential(mfaTokenSpec), \"\")\n\t\tassertEqualE(t, ssm.getCredential(idTokenSpec), idCred)\n\t\tassertEqualE(t, ssm.getCredential(idTokenSpec2), idCred2)\n\t\tassertEqualE(t, ssm.getCredential(idTokenSpec3), idCred3)\n\t})\n\n\tt.Run(\"override single token\", func(t 
*testing.T) {\n\t\tmfaTokenSpec := newMfaTokenSpec(\"host.com\", \"johndoe\")\n\t\tmfaCred := \"token123\"\n\t\tidTokenSpec := newIDTokenSpec(\"host.com\", \"johndoe\")\n\t\tidCred := \"token456\"\n\t\tssm.setCredential(mfaTokenSpec, mfaCred)\n\t\tssm.setCredential(idTokenSpec, idCred)\n\t\tassertEqualE(t, ssm.getCredential(mfaTokenSpec), mfaCred)\n\t\tmfaCredOverride := \"token789\"\n\t\tssm.setCredential(mfaTokenSpec, mfaCredOverride)\n\t\tassertEqualE(t, ssm.getCredential(mfaTokenSpec), mfaCredOverride)\n\t\tssm.setCredential(idTokenSpec, idCred)\n\t})\n\n\tt.Run(\"unlock stale cache\", func(t *testing.T) {\n\t\ttokenSpec := newMfaTokenSpec(\"stale\", \"cache\")\n\t\tassertNilF(t, os.Mkdir(ssm.lockPath(), 0700))\n\t\ttime.Sleep(1000 * time.Millisecond)\n\t\tssm.setCredential(tokenSpec, \"unlocked\")\n\t\tassertEqualE(t, ssm.getCredential(tokenSpec), \"unlocked\")\n\t})\n\n\tt.Run(\"wait for other process to unlock cache\", func(t *testing.T) {\n\t\ttokenSpec := newMfaTokenSpec(\"stale\", \"cache\")\n\t\tstartTime := time.Now()\n\t\tassertNilF(t, os.Mkdir(ssm.lockPath(), 0700))\n\t\ttime.Sleep(500 * time.Millisecond)\n\t\tgo func() {\n\t\t\ttime.Sleep(500 * time.Millisecond)\n\t\t\tassertNilF(t, os.Remove(ssm.lockPath()))\n\t\t}()\n\t\tssm.setCredential(tokenSpec, \"unlocked\")\n\t\ttotalDurationMillis := time.Since(startTime).Milliseconds()\n\t\tassertEqualE(t, ssm.getCredential(tokenSpec), \"unlocked\")\n\t\tassertTrueE(t, totalDurationMillis > 1000 && totalDurationMillis < 1200)\n\t})\n\n\tt.Run(\"should not modify keys other than tokens\", func(t *testing.T) {\n\t\tcontent := []byte(`{\n\t\t\t\"otherKey\": \"otherValue\"\n\t\t}`)\n\t\terr = os.WriteFile(ssm.credFilePath(), content, 0600)\n\t\tassertNilF(t, err)\n\t\tssm.setCredential(newMfaTokenSpec(\"somehost.com\", \"someUser\"), \"someToken\")\n\t\tresult, err := os.ReadFile(ssm.credFilePath())\n\t\tassertNilF(t, err)\n\t\tassertStringContainsE(t, string(result), 
`\"otherKey\":\"otherValue\"`)\n\t})\n\n\tt.Run(\"should not modify file if it has wrong permission\", func(t *testing.T) {\n\t\ttokenSpec := newMfaTokenSpec(\"somehost.com\", \"someUser\")\n\t\tssm.setCredential(tokenSpec, \"initialValue\")\n\t\tassertEqualE(t, ssm.getCredential(tokenSpec), \"initialValue\")\n\t\terr = os.Chmod(ssm.credFilePath(), 0644)\n\t\tassertNilF(t, err)\n\t\tdefer func() {\n\t\t\tassertNilE(t, os.Chmod(ssm.credFilePath(), 0600))\n\t\t}()\n\t\tssm.setCredential(tokenSpec, \"newValue\")\n\t\tassertEqualE(t, ssm.getCredential(tokenSpec), \"\")\n\t\tfileContent, err := os.ReadFile(ssm.credFilePath())\n\t\tassertNilF(t, err)\n\t\tvar m map[string]any\n\t\terr = json.Unmarshal(fileContent, &m)\n\t\tassertNilF(t, err)\n\t\tcacheKey, err := tokenSpec.buildKey()\n\t\tassertNilF(t, err)\n\t\ttokens := m[\"tokens\"].(map[string]any)\n\t\tassertEqualE(t, tokens[cacheKey], \"initialValue\")\n\t})\n\n\tt.Run(\"should not modify file if its dir has wrong permission\", func(t *testing.T) {\n\t\ttokenSpec := newMfaTokenSpec(\"somehost.com\", \"someUser\")\n\t\tssm.setCredential(tokenSpec, \"initialValue\")\n\t\tassertEqualE(t, ssm.getCredential(tokenSpec), \"initialValue\")\n\t\terr = os.Chmod(ssm.credDirPath, 0777)\n\t\tassertNilF(t, err)\n\t\tdefer func() {\n\t\t\tassertNilE(t, os.Chmod(ssm.credDirPath, 0700))\n\t\t}()\n\t\tssm.setCredential(tokenSpec, \"newValue\")\n\t\tassertEqualE(t, ssm.getCredential(tokenSpec), \"\")\n\t\tfileContent, err := os.ReadFile(ssm.credFilePath())\n\t\tassertNilF(t, err)\n\t\tvar m map[string]any\n\t\terr = json.Unmarshal(fileContent, &m)\n\t\tassertNilF(t, err)\n\t\tcacheKey, err := tokenSpec.buildKey()\n\t\tassertNilF(t, err)\n\t\ttokens := m[\"tokens\"].(map[string]any)\n\t\tassertEqualE(t, tokens[cacheKey], \"initialValue\")\n\t})\n}\n\nfunc TestSetAndGetCredential(t *testing.T) {\n\tskipOnMissingHome(t)\n\tfor _, tokenSpec := range []*secureTokenSpec{\n\t\tnewMfaTokenSpec(\"testhost\", 
\"testuser\"),\n\t\tnewIDTokenSpec(\"testhost\", \"testuser\"),\n\t} {\n\t\tt.Run(string(tokenSpec.tokenType), func(t *testing.T) {\n\t\t\tskipOnMac(t, \"keyring asks for password\")\n\t\t\tfakeToken := \"test token\"\n\t\t\tcredentialsStorage.setCredential(tokenSpec, fakeToken)\n\t\t\tassertEqualE(t, credentialsStorage.getCredential(tokenSpec), fakeToken)\n\n\t\t\t// delete credential and check it no longer exists\n\t\t\tcredentialsStorage.deleteCredential(tokenSpec)\n\t\t\tassertEqualE(t, credentialsStorage.getCredential(tokenSpec), \"\")\n\t\t})\n\t}\n}\n\nfunc TestSkipStoringCredentialIfUserIsEmpty(t *testing.T) {\n\ttokenSpecs := []*secureTokenSpec{\n\t\tnewMfaTokenSpec(\"mfaHost.com\", \"\"),\n\t\tnewIDTokenSpec(\"idHost.com\", \"\"),\n\t}\n\n\tfor _, tokenSpec := range tokenSpecs {\n\t\tt.Run(tokenSpec.host, func(t *testing.T) {\n\t\t\tcredentialsStorage.setCredential(tokenSpec, \"non-empty-value\")\n\t\t\tassertEqualE(t, credentialsStorage.getCredential(tokenSpec), \"\")\n\t\t})\n\t}\n}\n\nfunc TestSkipStoringCredentialIfHostIsEmpty(t *testing.T) {\n\ttokenSpecs := []*secureTokenSpec{\n\t\tnewMfaTokenSpec(\"\", \"mfaUser\"),\n\t\tnewIDTokenSpec(\"\", \"idUser\"),\n\t}\n\n\tfor _, tokenSpec := range tokenSpecs {\n\t\tt.Run(tokenSpec.user, func(t *testing.T) {\n\t\t\tcredentialsStorage.setCredential(tokenSpec, \"non-empty-value\")\n\t\t\tassertEqualE(t, credentialsStorage.getCredential(tokenSpec), \"\")\n\t\t})\n\t}\n}\n\nfunc TestStoreTemporaryCredential(t *testing.T) {\n\tif runningOnGithubAction() {\n\t\tt.Skip(\"cannot write to github file system\")\n\t}\n\n\ttestcases := []struct {\n\t\ttokenSpec *secureTokenSpec\n\t\tvalue     string\n\t}{\n\t\t{newMfaTokenSpec(\"testhost\", \"testuser\"), \"mfa token\"},\n\t\t{newIDTokenSpec(\"testhost\", \"testuser\"), \"id token\"},\n\t\t{newOAuthAccessTokenSpec(\"testhost\", \"testuser\"), \"access 
token\"},\n\t\t{newOAuthRefreshTokenSpec(\"testhost\", \"testuser\"), \"refresh token\"},\n\t}\n\n\tssm, err := newFileBasedSecureStorageManager()\n\tassertNilF(t, err)\n\n\tfor _, test := range testcases {\n\t\tt.Run(test.value, func(t *testing.T) {\n\t\t\tssm.setCredential(test.tokenSpec, test.value)\n\t\t\tassertEqualE(t, ssm.getCredential(test.tokenSpec), test.value)\n\t\t\tssm.deleteCredential(test.tokenSpec)\n\t\t\tassertEqualE(t, ssm.getCredential(test.tokenSpec), \"\")\n\t\t})\n\t}\n}\n\nfunc TestBuildCredentialsKey(t *testing.T) {\n\ttestcases := []struct {\n\t\thost     string\n\t\tuser     string\n\t\tcredType tokenType\n\t\tout      string\n\t}{\n\t\t{\"testaccount.snowflakecomputing.com\", \"testuser\", \"mfaToken\", \"c4e781475e7a5e74aca87cd462afafa8cc48ebff6f6ccb5054b894dae5eb6345\"}, // pragma: allowlist secret\n\t\t{\"testaccount.snowflakecomputing.com\", \"testuser\", \"IdToken\", \"5014e26489992b6ea56b50e936ba85764dc51338f60441bdd4a69eac7e15bada\"},  // pragma: allowlist secret\n\t}\n\tfor _, test := range testcases {\n\t\ttarget, err := buildCredentialsKey(test.host, test.user, test.credType)\n\t\tassertNilF(t, err)\n\t\tassertEqualF(t, target, test.out, \"failed to convert target\")\n\t}\n}\n"
  },
  {
    "path": "sflog/interface.go",
    "content": "// Package sflog defines the logging interface for Snowflake's Go driver.\n// If you want to implement a custom logger, you should implement the SFLogger interface defined in this package.\npackage sflog\n\nimport (\n\t\"context\"\n\t\"io\"\n)\n\n// ClientLogContextHook is a client-defined hook that can be used to insert log\n// fields based on the Context.\ntype ClientLogContextHook func(context.Context) string\n\n// LogEntry allows for logging using a snapshot of field values.\n// No implementation-specific logging details should be placed into this interface.\ntype LogEntry interface {\n\tTracef(format string, args ...any)\n\tDebugf(format string, args ...any)\n\tInfof(format string, args ...any)\n\tWarnf(format string, args ...any)\n\tErrorf(format string, args ...any)\n\tFatalf(format string, args ...any)\n\n\tTrace(msg string)\n\tDebug(msg string)\n\tInfo(msg string)\n\tWarn(msg string)\n\tError(msg string)\n\tFatal(msg string)\n}\n\n// SFLogger is the Snowflake logger interface, which abstracts away the underlying logging mechanism.\n// No implementation-specific logging details should be placed into this interface.\ntype SFLogger interface {\n\tLogEntry\n\tWithField(key string, value any) LogEntry\n\tWithFields(fields map[string]any) LogEntry\n\n\tSetLogLevel(level string) error\n\tSetLogLevelInt(level Level) error\n\tGetLogLevel() string\n\tGetLogLevelInt() Level\n\tWithContext(ctx context.Context) LogEntry\n\tSetOutput(output io.Writer)\n}\n"
  },
  {
    "path": "sflog/levels.go",
    "content": "package sflog\n\nimport (\n\t\"fmt\"\n\t\"math\"\n\t\"strings\"\n)\n\n// Level represents the log level for a log message. It extends slog's standard levels with custom levels.\ntype Level int\n\n// Custom level constants that extend slog's standard levels\nconst (\n\tLevelTrace = Level(-8)\n\tLevelDebug = Level(-4)\n\tLevelInfo  = Level(0)\n\tLevelWarn  = Level(4)\n\tLevelError = Level(8)\n\tLevelFatal = Level(12)\n\tLevelOff   = Level(math.MaxInt)\n)\n\n// ParseLevel converts a string level to Level\nfunc ParseLevel(level string) (Level, error) {\n\tswitch strings.ToUpper(level) {\n\tcase \"TRACE\":\n\t\treturn LevelTrace, nil\n\tcase \"DEBUG\":\n\t\treturn LevelDebug, nil\n\tcase \"INFO\":\n\t\treturn LevelInfo, nil\n\tcase \"WARN\":\n\t\treturn LevelWarn, nil\n\tcase \"ERROR\":\n\t\treturn LevelError, nil\n\tcase \"FATAL\":\n\t\treturn LevelFatal, nil\n\tcase \"OFF\":\n\t\treturn LevelOff, nil\n\tdefault:\n\t\treturn LevelInfo, fmt.Errorf(\"unknown log level: %s\", level)\n\t}\n}\n\n// LevelToString converts Level to string\nfunc LevelToString(level Level) (string, error) {\n\tswitch level {\n\tcase LevelTrace:\n\t\treturn \"TRACE\", nil\n\tcase LevelDebug:\n\t\treturn \"DEBUG\", nil\n\tcase LevelInfo:\n\t\treturn \"INFO\", nil\n\tcase LevelWarn:\n\t\treturn \"WARN\", nil\n\tcase LevelError:\n\t\treturn \"ERROR\", nil\n\tcase LevelFatal:\n\t\treturn \"FATAL\", nil\n\tcase LevelOff:\n\t\treturn \"OFF\", nil\n\tdefault:\n\t\treturn \"\", fmt.Errorf(\"unknown log level: %d\", level)\n\t}\n}\n"
  },
  {
    "path": "sflog/slog.go",
    "content": "package sflog\n\nimport \"log/slog\"\n\n// SFSlogLogger is an optional interface for advanced slog handler configuration.\n// This interface is separate from SFLogger to maintain framework-agnostic design.\n// Users can type-assert the logger to check if slog handler configuration is supported.\n//\n// Example usage:\n//\n//\tlogger := gosnowflake.GetLogger()\n//\tif slogLogger, ok := logger.(gosnowflake.SFSlogLogger); ok {\n//\t    customHandler := slog.NewJSONHandler(os.Stdout, nil)\n//\t    slogLogger.SetHandler(customHandler)\n//\t}\ntype SFSlogLogger interface {\n\tSetHandler(handler slog.Handler) error\n}\n"
  },
  {
    "path": "sqlstate.go",
    "content": "package gosnowflake\n\nconst (\n\t// SQLStateNumericValueOutOfRange is a SQL State code indicating a numeric value is out of range.\n\tSQLStateNumericValueOutOfRange = \"22003\"\n\t// SQLStateInvalidDataTimeFormat is a SQL State code indicating the date/time format is invalid.\n\tSQLStateInvalidDataTimeFormat = \"22007\"\n\t// SQLStateConnectionWasNotEstablished is a SQL State code indicating the connection was not established.\n\tSQLStateConnectionWasNotEstablished = \"08001\"\n\t// SQLStateConnectionRejected is a SQL State code indicating the connection was rejected.\n\tSQLStateConnectionRejected = \"08004\"\n\t// SQLStateConnectionFailure is a SQL State code indicating the connection failed.\n\tSQLStateConnectionFailure = \"08006\"\n\t// SQLStateFeatureNotSupported is a SQL State code indicating the feature is not supported.\n\tSQLStateFeatureNotSupported = \"0A000\"\n)\n"
  },
  {
    "path": "statement.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql/driver\"\n\t\"errors\"\n\t\"fmt\"\n\t\"time\"\n)\n\n// SnowflakeStmt represents a prepared statement in the driver.\ntype SnowflakeStmt interface {\n\tGetQueryID() string\n}\n\ntype snowflakeStmt struct {\n\tsc          *snowflakeConn\n\tquery       string\n\tlastQueryID string\n}\n\nfunc (stmt *snowflakeStmt) Close() error {\n\tlogger.WithContext(stmt.sc.ctx).Info(\"Stmt.Close\")\n\t// noop\n\treturn nil\n}\n\nfunc (stmt *snowflakeStmt) NumInput() int {\n\tlogger.WithContext(stmt.sc.ctx).Info(\"Stmt.NumInput\")\n\t// Go Snowflake doesn't know the number of binding parameters.\n\treturn -1\n}\n\nfunc (stmt *snowflakeStmt) ExecContext(ctx context.Context, args []driver.NamedValue) (driver.Result, error) {\n\tlogger.WithContext(stmt.sc.ctx).Info(\"Stmt.ExecContext\")\n\treturn stmt.execInternal(ctx, args)\n}\n\nfunc (stmt *snowflakeStmt) QueryContext(ctx context.Context, args []driver.NamedValue) (driver.Rows, error) {\n\tlogger.WithContext(stmt.sc.ctx).Info(\"Stmt.QueryContext\")\n\trows, err := stmt.sc.QueryContext(ctx, stmt.query, args)\n\tif err != nil {\n\t\tstmt.setQueryIDFromError(err)\n\t\treturn nil, err\n\t}\n\tr, ok := rows.(SnowflakeRows)\n\tif !ok {\n\t\treturn nil, fmt.Errorf(\"interface conversion. 
expected type SnowflakeRows but got %T\", rows)\n\t}\n\tstmt.lastQueryID = r.GetQueryID()\n\treturn rows, nil\n}\n\nfunc (stmt *snowflakeStmt) Exec(args []driver.Value) (driver.Result, error) {\n\tlogger.WithContext(stmt.sc.ctx).Info(\"Stmt.Exec\")\n\treturn stmt.execInternal(context.Background(), toNamedValues(args))\n}\n\nfunc (stmt *snowflakeStmt) execInternal(ctx context.Context, args []driver.NamedValue) (driver.Result, error) {\n\tlogger.WithContext(stmt.sc.ctx).Debug(\"Stmt.execInternal\")\n\tif ctx == nil {\n\t\tctx = context.Background()\n\t}\n\tstmtCtx := context.WithValue(ctx, executionType, executionTypeStatement)\n\ttimer := time.Now()\n\tresult, err := stmt.sc.ExecContext(stmtCtx, stmt.query, args)\n\tif err != nil {\n\t\tstmt.setQueryIDFromError(err)\n\t\tlogger.WithContext(ctx).Errorf(\"QueryID: %v failed to execute because of the error %v. It took %v.\", stmt.lastQueryID, err, time.Since(timer).String())\n\t\treturn nil, err\n\t}\n\trnr, ok := result.(*snowflakeResultNoRows)\n\tif ok {\n\t\tstmt.lastQueryID = rnr.GetQueryID()\n\t\tlogger.WithContext(ctx).Debugf(\"Query ID: %v has no result. It took %v.\", stmt.lastQueryID, time.Since(timer).String())\n\t\treturn driver.ResultNoRows, nil\n\t}\n\tr, ok := result.(SnowflakeResult)\n\tif !ok {\n\t\treturn nil, fmt.Errorf(\"interface conversion. expected type SnowflakeResult but got %T\", result)\n\t}\n\tstmt.lastQueryID = r.GetQueryID()\n\tlogger.WithContext(ctx).Debugf(\"Query ID: %v succeeded. It took %v.\", stmt.lastQueryID, time.Since(timer).String())\n\n\treturn result, err\n}\n\nfunc (stmt *snowflakeStmt) Query(args []driver.Value) (driver.Rows, error) {\n\tlogger.WithContext(stmt.sc.ctx).Info(\"Stmt.Query\")\n\ttimer := time.Now()\n\trows, err := stmt.sc.Query(stmt.query, args)\n\tif err != nil {\n\t\tlogger.WithContext(stmt.sc.ctx).Errorf(\"QueryID: %v failed to execute because of the error %v. 
It took %v.\", stmt.lastQueryID, err, time.Since(timer).String())\n\t\tstmt.setQueryIDFromError(err)\n\t\treturn nil, err\n\t}\n\tr, ok := rows.(SnowflakeRows)\n\tif !ok {\n\t\tlogger.WithContext(stmt.sc.ctx).Errorf(\"Query ID: %v failed to convert the rows to SnowflakeRows. It took %v.\", stmt.lastQueryID, time.Since(timer).String())\n\t\treturn nil, fmt.Errorf(\"interface conversion. expected type SnowflakeRows but got %T\", rows)\n\t}\n\tstmt.lastQueryID = r.GetQueryID()\n\tlogger.WithContext(stmt.sc.ctx).Debugf(\"Query ID: %v succeeded. It took %v.\", stmt.lastQueryID, time.Since(timer).String())\n\treturn rows, err\n}\n\nfunc (stmt *snowflakeStmt) GetQueryID() string {\n\treturn stmt.lastQueryID\n}\n\nfunc (stmt *snowflakeStmt) setQueryIDFromError(err error) {\n\tvar snowflakeError *SnowflakeError\n\tif errors.As(err, &snowflakeError) {\n\t\tstmt.lastQueryID = snowflakeError.QueryID\n\t}\n}\n"
  },
  {
    "path": "statement_test.go",
    "content": "//lint:file-ignore SA1019 Ignore deprecated methods. We should leave them as-is to keep backward compatibility.\n\npackage gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc openDB(t *testing.T) *sql.DB {\n\tvar db *sql.DB\n\tvar err error\n\n\tif db, err = sql.Open(\"snowflake\", dsn); err != nil {\n\t\tt.Fatalf(\"failed to open db. %v\", err)\n\t}\n\n\treturn db\n}\n\nfunc openConn(t *testing.T, config *testConfig) (*sql.DB, *sql.Conn) {\n\tvar db *sql.DB\n\tvar conn *sql.Conn\n\tvar err error\n\n\tif db, err = sql.Open(\"snowflake\", config.dsn); err != nil {\n\t\tt.Fatalf(\"failed to open db. %v, err: %v\", dsn, err)\n\t}\n\tif conn, err = db.Conn(context.Background()); err != nil {\n\t\tt.Fatalf(\"failed to open connection: %v\", err)\n\t}\n\treturn db, conn\n}\n\nfunc TestExecStmt(t *testing.T) {\n\tdqlQuery := \"SELECT 1\"\n\tdmlQuery := \"INSERT INTO TestDDLExec VALUES (1)\"\n\tddlQuery := \"CREATE OR REPLACE TABLE TestDDLExec (num NUMBER)\"\n\tmultiStmtQuery := \"DELETE FROM TestDDLExec;\\n\" +\n\t\t\"SELECT 1;\\n\" +\n\t\t\"SELECT 2;\"\n\tctx := context.Background()\n\tmultiStmtCtx := WithMultiStatement(ctx, 3)\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(ddlQuery)\n\t\tdefer dbt.mustExec(\"DROP TABLE IF EXISTS TestDDLExec\")\n\t\ttestcases := []struct {\n\t\t\tname  string\n\t\t\tquery string\n\t\t\tf     func(stmt driver.Stmt) (any, error)\n\t\t}{\n\t\t\t{\n\t\t\t\tname:  \"dql Exec\",\n\t\t\t\tquery: dqlQuery,\n\t\t\t\tf: func(stmt driver.Stmt) (any, error) {\n\t\t\t\t\treturn stmt.Exec(nil)\n\t\t\t\t},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:  \"dql ExecContext\",\n\t\t\t\tquery: dqlQuery,\n\t\t\t\tf: func(stmt driver.Stmt) (any, error) {\n\t\t\t\t\treturn stmt.(driver.StmtExecContext).ExecContext(ctx, nil)\n\t\t\t\t},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:  \"ddl Exec\",\n\t\t\t\tquery: ddlQuery,\n\t\t\t\tf: 
func(stmt driver.Stmt) (any, error) {\n\t\t\t\t\treturn stmt.Exec(nil)\n\t\t\t\t},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:  \"ddl ExecContext\",\n\t\t\t\tquery: ddlQuery,\n\t\t\t\tf: func(stmt driver.Stmt) (any, error) {\n\t\t\t\t\treturn stmt.(driver.StmtExecContext).ExecContext(ctx, nil)\n\t\t\t\t},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:  \"dml Exec\",\n\t\t\t\tquery: dmlQuery,\n\t\t\t\tf: func(stmt driver.Stmt) (any, error) {\n\t\t\t\t\treturn stmt.Exec(nil)\n\t\t\t\t},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:  \"dml ExecContext\",\n\t\t\t\tquery: dmlQuery,\n\t\t\t\tf: func(stmt driver.Stmt) (any, error) {\n\t\t\t\t\treturn stmt.(driver.StmtExecContext).ExecContext(ctx, nil)\n\t\t\t\t},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:  \"multistmt ExecContext\",\n\t\t\t\tquery: multiStmtQuery,\n\t\t\t\tf: func(stmt driver.Stmt) (any, error) {\n\t\t\t\t\treturn stmt.(driver.StmtExecContext).ExecContext(multiStmtCtx, nil)\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t\tfor _, tc := range testcases {\n\t\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t\terr := dbt.conn.Raw(func(x any) error {\n\t\t\t\t\tstmt, err := x.(driver.ConnPrepareContext).PrepareContext(ctx, tc.query)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tt.Error(err)\n\t\t\t\t\t}\n\t\t\t\t\tif stmt.(SnowflakeStmt).GetQueryID() != \"\" {\n\t\t\t\t\t\tt.Error(\"queryId should be empty before executing any query\")\n\t\t\t\t\t}\n\t\t\t\t\tif _, err := tc.f(stmt); err != nil {\n\t\t\t\t\t\tt.Errorf(\"should have not failed to execute the query, err: %s\\n\", err)\n\t\t\t\t\t}\n\t\t\t\t\tif stmt.(SnowflakeStmt).GetQueryID() == \"\" {\n\t\t\t\t\t\tt.Error(\"should have set the query id\")\n\t\t\t\t\t}\n\t\t\t\t\treturn nil\n\t\t\t\t})\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Fatal(err)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestFailedQueryIdInSnowflakeError(t *testing.T) {\n\tfailingQuery := \"SELECTT 1\"\n\tfailingExec := \"INSERT 1 INTO NON_EXISTENT_TABLE\"\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\ttestcases := []struct {\n\t\t\tname  
string\n\t\t\tquery string\n\t\t\tf     func(dbt *DBTest) (any, error)\n\t\t}{\n\t\t\t{\n\t\t\t\tname: \"query\",\n\t\t\t\tf: func(dbt *DBTest) (any, error) {\n\t\t\t\t\treturn dbt.query(failingQuery)\n\t\t\t\t},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname: \"exec\",\n\t\t\t\tf: func(dbt *DBTest) (any, error) {\n\t\t\t\t\treturn dbt.exec(failingExec)\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\n\t\tfor _, tc := range testcases {\n\t\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t\t_, err := tc.f(dbt)\n\t\t\t\tif err == nil {\n\t\t\t\t\tt.Error(\"should have failed\")\n\t\t\t\t}\n\t\t\t\tvar snowflakeError *SnowflakeError\n\t\t\t\tif !errors.As(err, &snowflakeError) {\n\t\t\t\t\tt.Error(\"should be a SnowflakeError\")\n\t\t\t\t}\n\t\t\t\tif snowflakeError.QueryID == \"\" {\n\t\t\t\t\tt.Error(\"QueryID should be set\")\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestSetFailedQueryId(t *testing.T) {\n\tctx := context.Background()\n\tfailingQuery := \"SELECTT 1\"\n\tfailingExec := \"INSERT 1 INTO NON_EXISTENT_TABLE\"\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\ttestcases := []struct {\n\t\t\tname  string\n\t\t\tquery string\n\t\t\tf     func(stmt driver.Stmt) (any, error)\n\t\t}{\n\t\t\t{\n\t\t\t\tname:  \"query\",\n\t\t\t\tquery: failingQuery,\n\t\t\t\tf: func(stmt driver.Stmt) (any, error) {\n\t\t\t\t\treturn stmt.Query(nil)\n\t\t\t\t},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:  \"exec\",\n\t\t\t\tquery: failingExec,\n\t\t\t\tf: func(stmt driver.Stmt) (any, error) {\n\t\t\t\t\treturn stmt.Exec(nil)\n\t\t\t\t},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:  \"queryContext\",\n\t\t\t\tquery: failingQuery,\n\t\t\t\tf: func(stmt driver.Stmt) (any, error) {\n\t\t\t\t\treturn stmt.(driver.StmtQueryContext).QueryContext(ctx, nil)\n\t\t\t\t},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:  \"execContext\",\n\t\t\t\tquery: failingExec,\n\t\t\t\tf: func(stmt driver.Stmt) (any, error) {\n\t\t\t\t\treturn stmt.(driver.StmtExecContext).ExecContext(ctx, nil)\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\n\t\tfor _, tc := range testcases 
{\n\t\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t\terr := dbt.conn.Raw(func(x any) error {\n\t\t\t\t\tstmt, err := x.(driver.ConnPrepareContext).PrepareContext(ctx, tc.query)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tt.Error(err)\n\t\t\t\t\t}\n\t\t\t\t\tif stmt.(SnowflakeStmt).GetQueryID() != \"\" {\n\t\t\t\t\t\tt.Error(\"queryId should be empty before executing any query\")\n\t\t\t\t\t}\n\t\t\t\t\tif _, err := tc.f(stmt); err == nil {\n\t\t\t\t\t\tt.Error(\"should have failed to execute the query\")\n\t\t\t\t\t}\n\t\t\t\t\tif stmt.(SnowflakeStmt).GetQueryID() == \"\" {\n\t\t\t\t\t\tt.Error(\"should have set the query id\")\n\t\t\t\t\t}\n\t\t\t\t\treturn nil\n\t\t\t\t})\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Fatal(err)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestAsyncFailQueryId(t *testing.T) {\n\tctx := WithAsyncMode(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\terr := dbt.conn.Raw(func(x any) error {\n\t\t\tstmt, err := x.(driver.ConnPrepareContext).PrepareContext(ctx, \"SELECTT 1\")\n\t\t\tif err != nil {\n\t\t\t\tt.Error(err)\n\t\t\t}\n\t\t\tif stmt.(SnowflakeStmt).GetQueryID() != \"\" {\n\t\t\t\tt.Error(\"queryId should be empty before executing any query\")\n\t\t\t}\n\t\t\trows, err := stmt.(driver.StmtQueryContext).QueryContext(ctx, nil)\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"should not fail the initial request\")\n\t\t\t}\n\t\t\tif rows.(SnowflakeRows).GetStatus() != QueryStatusInProgress {\n\t\t\t\tt.Error(\"should be in progress\")\n\t\t\t}\n\t\t\t// Wait for the query to complete\n\t\t\tassertNotNilE(t, rows.Next(nil))\n\t\t\tif rows.(SnowflakeRows).GetStatus() != QueryFailed {\n\t\t\t\tt.Error(\"should have failed\")\n\t\t\t}\n\t\t\tif rows.(SnowflakeRows).GetQueryID() != stmt.(SnowflakeStmt).GetQueryID() {\n\t\t\t\tt.Error(\"last query id should be the same as rows query id\")\n\t\t\t}\n\t\t\treturn nil\n\t\t})\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t})\n}\n\nfunc TestGetQueryID(t *testing.T) {\n\tctx := 
context.Background()\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tif err := dbt.conn.Raw(func(x any) error {\n\t\t\trows, err := x.(driver.QueryerContext).QueryContext(ctx, \"select 1\", nil)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tdefer rows.Close()\n\n\t\t\tif _, err = x.(driver.QueryerContext).QueryContext(ctx, \"selectt 1\", nil); err == nil {\n\t\t\t\tt.Fatal(\"should have failed to execute query\")\n\t\t\t}\n\t\t\tif driverErr, ok := err.(*SnowflakeError); ok {\n\t\t\t\tif driverErr.Number != 1003 {\n\t\t\t\t\tt.Fatalf(\"incorrect error code. expected: 1003, got: %v\", driverErr.Number)\n\t\t\t\t}\n\t\t\t\tif driverErr.QueryID == \"\" {\n\t\t\t\t\tt.Fatal(\"should have an associated query ID\")\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tt.Fatal(\"should have been able to cast to Snowflake Error\")\n\t\t\t}\n\t\t\treturn nil\n\t\t}); err != nil {\n\t\t\tt.Fatalf(\"failed to prepare statement. err: %v\", err)\n\t\t}\n\t})\n}\n\nfunc TestEmitQueryID(t *testing.T) {\n\tqueryIDChan := make(chan string, 1)\n\tnumrows := 100000\n\tctx := WithAsyncMode(context.Background())\n\tctx = WithQueryIDChan(ctx, queryIDChan)\n\n\tgoRoutineChan := make(chan string)\n\tgo func(grCh chan string, qIDch chan string) {\n\t\tqueryID := <-queryIDChan\n\t\tgrCh <- queryID\n\t}(goRoutineChan, queryIDChan)\n\n\tcnt := 0\n\tvar idx int\n\tvar v string\n\trunDBTest(t, func(dbt *DBTest) {\n\t\trows := dbt.mustQueryContext(ctx, fmt.Sprintf(selectRandomGenerator, numrows))\n\t\tdefer rows.Close()\n\n\t\tfor rows.Next() {\n\t\t\tif err := rows.Scan(&idx, &v); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tcnt++\n\t\t}\n\t\tlogger.Infof(\"NextResultSet: %v\", rows.NextResultSet())\n\t})\n\n\tqueryID := <-goRoutineChan\n\tif queryID == \"\" {\n\t\tt.Fatal(\"expected a nonempty query ID\")\n\t}\n\tif cnt != numrows {\n\t\tt.Errorf(\"number of rows didn't match. 
expected: %v, got: %v\", numrows, cnt)\n\t}\n}\n\n// End-to-end test to fetch result with queryID\nfunc TestE2EFetchResultByID(t *testing.T) {\n\tdb := openDB(t)\n\tdefer db.Close()\n\n\tif _, err := db.Exec(`create or replace table test_fetch_result(c1 number,\n\t\tc2 string) as select 10, 'z'`); err != nil {\n\t\tt.Fatalf(\"failed to create table: %v\", err)\n\t}\n\n\tctx := context.Background()\n\tconn, err := db.Conn(ctx)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tif err = conn.Raw(func(x any) error {\n\t\tstmt, err := x.(driver.ConnPrepareContext).PrepareContext(ctx, \"select * from test_fetch_result\")\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\trows1, err := stmt.(driver.StmtQueryContext).QueryContext(ctx, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tqid := rows1.(SnowflakeResult).GetQueryID()\n\n\t\tnewCtx := context.WithValue(context.Background(), fetchResultByID, qid)\n\t\trows2, err := db.QueryContext(newCtx, \"\")\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Fetch Query Result by ID failed: %v\", err)\n\t\t}\n\t\tvar c1 sql.NullInt64\n\t\tvar c2 sql.NullString\n\t\tfor rows2.Next() {\n\t\t\terr = rows2.Scan(&c1, &c2)\n\t\t}\n\t\tif c1.Int64 != 10 || c2.String != \"z\" {\n\t\t\tt.Fatalf(\"Query result is not expected: %v\", err)\n\t\t}\n\t\treturn nil\n\t}); err != nil {\n\t\tt.Fatalf(\"failed to drop table: %v\", err)\n\t}\n\n\tif _, err := db.Exec(\"drop table if exists test_fetch_result\"); err != nil {\n\t\tt.Fatalf(\"failed to drop table: %v\", err)\n\t}\n}\n\nfunc TestWithDescribeOnly(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tctx := WithDescribeOnly(context.Background())\n\t\trows := dbt.mustQueryContext(ctx, selectVariousTypes)\n\t\tdefer rows.Close()\n\t\tcols, err := rows.Columns()\n\t\tif err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t\ttypes, err := rows.ColumnTypes()\n\t\tif err != nil {\n\t\t\tt.Error(err)\n\t\t}\n\t\tfor i, col := range cols {\n\t\t\tif types[i].Name() != col {\n\t\t\t\tt.Fatalf(\"column name 
mismatch. expected: %v, got: %v\", col, types[i].Name())\n\t\t\t}\n\t\t}\n\t\tif rows.Next() {\n\t\t\tt.Fatal(\"there should not be any rows in describe only mode\")\n\t\t}\n\t})\n}\n\nfunc TestCallStatement(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tin1 := float64(1)\n\t\tin2 := \"[2,3]\"\n\t\texpected := \"1 \\\"[2,3]\\\" [2,3]\"\n\t\tvar out string\n\n\t\tdbt.mustExec(\"ALTER SESSION SET USE_STATEMENT_TYPE_CALL_FOR_STORED_PROC_CALLS = true\")\n\n\t\tdbt.mustExec(\"create or replace procedure \" +\n\t\t\t\"TEST_SP_CALL_STMT_ENABLED(in1 float, in2 variant) \" +\n\t\t\t\"returns string language javascript as $$ \" +\n\t\t\t\"let res = snowflake.execute({sqlText: 'select ? c1, ? c2', binds:[IN1, JSON.stringify(IN2)]}); \" +\n\t\t\t\"res.next(); \" +\n\t\t\t\"return res.getColumnValueAsString(1) + ' ' + res.getColumnValueAsString(2) + ' ' + IN2; \" +\n\t\t\t\"$$;\")\n\n\t\tstmt, err := dbt.conn.PrepareContext(context.Background(), \"call TEST_SP_CALL_STMT_ENABLED(?, to_variant(?))\")\n\t\tif err != nil {\n\t\t\tdbt.Errorf(\"failed to prepare query: %v\", err)\n\t\t}\n\t\tdefer stmt.Close()\n\t\terr = stmt.QueryRow(in1, in2).Scan(&out)\n\t\tif err != nil {\n\t\t\tdbt.Errorf(\"failed to scan: %v\", err)\n\t\t}\n\n\t\tif expected != out {\n\t\t\tdbt.Errorf(\"expected: %s, got: %s\", expected, out)\n\t\t}\n\n\t\tdbt.mustExec(\"drop procedure if exists TEST_SP_CALL_STMT_ENABLED(float, variant)\")\n\t})\n}\n\nfunc TestStmtExec(t *testing.T) {\n\tctx := context.Background()\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExecT(t, `create or replace table test_table(col1 int, col2 int)`)\n\n\t\tif err := dbt.conn.Raw(func(x any) error {\n\t\t\tstmt, err := x.(driver.ConnPrepareContext).PrepareContext(ctx, \"insert into test_table values (1, 2)\")\n\t\t\tassertNilF(t, err)\n\t\t\t_, err = stmt.(*snowflakeStmt).Exec(nil)\n\t\t\tassertNilF(t, err)\n\t\t\t_, err = 
stmt.(*snowflakeStmt).Query(nil)\n\t\t\tassertNilF(t, err)\n\t\t\treturn nil\n\t\t}); err != nil {\n\t\t\tt.Fatalf(\"failed to execute statement: %v\", err)\n\t\t}\n\n\t\tdbt.mustExecT(t, \"drop table if exists test_table\")\n\t})\n}\n\nfunc TestStmtExec_Error(t *testing.T) {\n\tctx := context.Background()\n\trunDBTest(t, func(dbt *DBTest) {\n\t\t// Create a test table\n\t\tdbt.mustExecT(t, `create or replace table test_table(col1 int, col2 int)`)\n\t\tdefer dbt.mustExecT(t, \"drop table if exists test_table\")\n\n\t\t// Attempt to execute an invalid statement\n\t\tif err := dbt.conn.Raw(func(x any) error {\n\t\t\tstmt, err := x.(driver.ConnPrepareContext).PrepareContext(ctx, \"insert into test_table values (?, ?)\")\n\t\t\tif err != nil {\n\t\t\t\tt.Fatalf(\"failed to prepare statement: %v\", err)\n\t\t\t}\n\n\t\t\t// Intentionally passing a string instead of an integer to cause an error\n\t\t\t_, err = stmt.(*snowflakeStmt).Exec([]driver.Value{\"invalid_data\", 2})\n\t\t\tif err == nil {\n\t\t\t\tt.Errorf(\"expected an error, but got none\")\n\t\t\t}\n\n\t\t\treturn nil\n\t\t}); err != nil {\n\t\t\tt.Fatalf(\"unexpected error: %v\", err)\n\t\t}\n\t})\n}\n\nfunc getStatusSuccessButInvalidJSONfunc(_ context.Context, _ *snowflakeRestful, _ *url.URL, _ map[string]string, _ time.Duration) (*http.Response, error) {\n\treturn &http.Response{\n\t\tStatusCode: http.StatusOK,\n\t\tBody:       &fakeResponseBody{body: []byte{0x12, 0x34}},\n\t}, nil\n}\n\nfunc TestUnitCheckQueryStatus(t *testing.T) {\n\tsc := getDefaultSnowflakeConn()\n\tctx := context.Background()\n\tqid := NewUUID()\n\n\tsr := &snowflakeRestful{\n\t\tFuncGet:       getStatusSuccessButInvalidJSONfunc,\n\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t}\n\tsc.rest = sr\n\t_, err := sc.checkQueryStatus(ctx, qid.String())\n\tif err == nil {\n\t\tt.Fatal(\"invalid json. 
should have failed\")\n\t}\n\tsc.rest.FuncGet = funcGetQueryRespFail\n\t_, err = sc.checkQueryStatus(ctx, qid.String())\n\tif err == nil {\n\t\tt.Fatal(\"should have failed\")\n\t}\n\n\tsc.rest.FuncGet = funcGetQueryRespError\n\t_, err = sc.checkQueryStatus(ctx, qid.String())\n\tif err == nil {\n\t\tt.Fatal(\"should have failed\")\n\t}\n\tdriverErr, ok := err.(*SnowflakeError)\n\tif !ok {\n\t\tt.Fatalf(\"should be snowflake error. err: %v\", err)\n\t}\n\tif driverErr.Number != ErrQueryStatus {\n\t\tt.Fatalf(\"unexpected error code. expected: %v, got: %v\", ErrQueryStatus, driverErr.Number)\n\t}\n}\n\nfunc TestStatementQueryIdForQueries(t *testing.T) {\n\tctx := context.Background()\n\n\ttestcases := []struct {\n\t\tname string\n\t\tf    func(stmt driver.Stmt) (driver.Rows, error)\n\t}{\n\t\t{\n\t\t\t\"query\",\n\t\t\tfunc(stmt driver.Stmt) (driver.Rows, error) {\n\t\t\t\treturn stmt.Query(nil)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\t\"queryContext\",\n\t\t\tfunc(stmt driver.Stmt) (driver.Rows, error) {\n\t\t\t\treturn stmt.(driver.StmtQueryContext).QueryContext(ctx, nil)\n\t\t\t},\n\t\t},\n\t}\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tfor _, tc := range testcases {\n\t\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t\terr := dbt.conn.Raw(func(x any) error {\n\t\t\t\t\tstmt, err := x.(driver.ConnPrepareContext).PrepareContext(ctx, \"SELECT 1\")\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tt.Fatal(err)\n\t\t\t\t\t}\n\t\t\t\t\tif stmt.(SnowflakeStmt).GetQueryID() != \"\" {\n\t\t\t\t\t\tt.Error(\"queryId should be empty before executing any query\")\n\t\t\t\t\t}\n\t\t\t\t\tfirstQuery, err := tc.f(stmt)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tt.Fatal(err)\n\t\t\t\t\t}\n\t\t\t\t\tif stmt.(SnowflakeStmt).GetQueryID() == \"\" {\n\t\t\t\t\t\tt.Error(\"queryId should not be empty after executing query\")\n\t\t\t\t\t}\n\t\t\t\t\tif stmt.(SnowflakeStmt).GetQueryID() != firstQuery.(SnowflakeRows).GetQueryID() {\n\t\t\t\t\t\tt.Error(\"queryId should be equal among query result and prepared 
statement\")\n\t\t\t\t\t}\n\t\t\t\t\tsecondQuery, err := tc.f(stmt)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tt.Fatal(err)\n\t\t\t\t\t}\n\t\t\t\t\tif stmt.(SnowflakeStmt).GetQueryID() == \"\" {\n\t\t\t\t\t\tt.Error(\"queryId should not be empty after executing query\")\n\t\t\t\t\t}\n\t\t\t\t\tif stmt.(SnowflakeStmt).GetQueryID() != secondQuery.(SnowflakeRows).GetQueryID() {\n\t\t\t\t\t\tt.Error(\"queryId should be equal among query result and prepared statement\")\n\t\t\t\t\t}\n\t\t\t\t\treturn nil\n\t\t\t\t})\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Fatal(err)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestStatementQuery(t *testing.T) {\n\tctx := context.Background()\n\n\ttestcases := []struct {\n\t\tname    string\n\t\tquery   string\n\t\tf       func(stmt driver.Stmt) (driver.Rows, error)\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\t\"validQuery\",\n\t\t\t\"SELECT 1\",\n\t\t\tfunc(stmt driver.Stmt) (driver.Rows, error) {\n\t\t\t\treturn stmt.Query(nil)\n\t\t\t},\n\t\t\tfalse,\n\t\t},\n\t\t{\n\t\t\t\"validQueryContext\",\n\t\t\t\"SELECT 1\",\n\t\t\tfunc(stmt driver.Stmt) (driver.Rows, error) {\n\t\t\t\treturn stmt.(driver.StmtQueryContext).QueryContext(ctx, nil)\n\t\t\t},\n\t\t\tfalse,\n\t\t},\n\t\t{\n\t\t\t\"invalidQuery\",\n\t\t\t\"SELECT * FROM non_existing_table\",\n\t\t\tfunc(stmt driver.Stmt) (driver.Rows, error) {\n\t\t\t\treturn stmt.Query(nil)\n\t\t\t},\n\t\t\ttrue,\n\t\t},\n\t\t{\n\t\t\t\"invalidQueryContext\",\n\t\t\t\"SELECT * FROM non_existing_table\",\n\t\t\tfunc(stmt driver.Stmt) (driver.Rows, error) {\n\t\t\t\treturn stmt.(driver.StmtQueryContext).QueryContext(ctx, nil)\n\t\t\t},\n\t\t\ttrue,\n\t\t},\n\t}\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tfor _, tc := range testcases {\n\t\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t\terr := dbt.conn.Raw(func(x any) error {\n\t\t\t\t\tstmt, err := x.(driver.ConnPrepareContext).PrepareContext(ctx, tc.query)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tif tc.wantErr {\n\t\t\t\t\t\t\treturn nil // expected 
error\n\t\t\t\t\t\t}\n\t\t\t\t\t\tt.Fatal(err)\n\t\t\t\t\t}\n\n\t\t\t\t\t_, err = tc.f(stmt)\n\t\t\t\t\tif (err != nil) != tc.wantErr {\n\t\t\t\t\t\tt.Fatalf(\"error = %v, wantErr %v\", err, tc.wantErr)\n\t\t\t\t\t}\n\n\t\t\t\t\treturn nil\n\t\t\t\t})\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Fatal(err)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestStatementQueryIdForExecs(t *testing.T) {\n\tctx := context.Background()\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"CREATE TABLE TestStatementQueryIdForExecs (v INTEGER)\")\n\t\tdefer dbt.mustExec(\"DROP TABLE IF EXISTS TestStatementQueryIdForExecs\")\n\n\t\ttestcases := []struct {\n\t\t\tname string\n\t\t\tf    func(stmt driver.Stmt) (driver.Result, error)\n\t\t}{\n\t\t\t{\n\t\t\t\t\"exec\",\n\t\t\t\tfunc(stmt driver.Stmt) (driver.Result, error) {\n\t\t\t\t\treturn stmt.Exec(nil)\n\t\t\t\t},\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"execContext\",\n\t\t\t\tfunc(stmt driver.Stmt) (driver.Result, error) {\n\t\t\t\t\treturn stmt.(driver.StmtExecContext).ExecContext(ctx, nil)\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\n\t\tfor _, tc := range testcases {\n\t\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t\terr := dbt.conn.Raw(func(x any) error {\n\t\t\t\t\tstmt, err := x.(driver.ConnPrepareContext).PrepareContext(ctx, \"INSERT INTO TestStatementQueryIdForExecs VALUES (1)\")\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tt.Fatal(err)\n\t\t\t\t\t}\n\t\t\t\t\tif stmt.(SnowflakeStmt).GetQueryID() != \"\" {\n\t\t\t\t\t\tt.Error(\"queryId should be empty before executing any query\")\n\t\t\t\t\t}\n\t\t\t\t\tfirstExec, err := tc.f(stmt)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tt.Fatal(err)\n\t\t\t\t\t}\n\t\t\t\t\tif stmt.(SnowflakeStmt).GetQueryID() == \"\" {\n\t\t\t\t\t\tt.Error(\"queryId should not be empty after executing query\")\n\t\t\t\t\t}\n\t\t\t\t\tif stmt.(SnowflakeStmt).GetQueryID() != firstExec.(SnowflakeResult).GetQueryID() {\n\t\t\t\t\t\tt.Error(\"queryId should be equal among query result and prepared 
statement\")\n\t\t\t\t\t}\n\t\t\t\t\tsecondExec, err := tc.f(stmt)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tt.Fatal(err)\n\t\t\t\t\t}\n\t\t\t\t\tif stmt.(SnowflakeStmt).GetQueryID() == \"\" {\n\t\t\t\t\t\tt.Error(\"queryId should not be empty after executing query\")\n\t\t\t\t\t}\n\t\t\t\t\tif stmt.(SnowflakeStmt).GetQueryID() != secondExec.(SnowflakeResult).GetQueryID() {\n\t\t\t\t\t\tt.Error(\"queryId should be equal among query result and prepared statement\")\n\t\t\t\t\t}\n\t\t\t\t\treturn nil\n\t\t\t\t})\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Fatal(err)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestStatementQueryExecs(t *testing.T) {\n\tctx := context.Background()\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"CREATE TABLE TestStatementQueryExecs (v INTEGER)\")\n\t\tdefer dbt.mustExec(\"DROP TABLE IF EXISTS TestStatementForExecs\")\n\n\t\ttestcases := []struct {\n\t\t\tname    string\n\t\t\tquery   string\n\t\t\tf       func(stmt driver.Stmt) (driver.Result, error)\n\t\t\twantErr bool\n\t\t}{\n\t\t\t{\n\t\t\t\t\"validExec\",\n\t\t\t\t\"INSERT INTO TestStatementQueryExecs VALUES (1)\",\n\t\t\t\tfunc(stmt driver.Stmt) (driver.Result, error) {\n\t\t\t\t\treturn stmt.Exec(nil)\n\t\t\t\t},\n\t\t\t\tfalse,\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"validExecContext\",\n\t\t\t\t\"INSERT INTO TestStatementQueryExecs VALUES (1)\",\n\t\t\t\tfunc(stmt driver.Stmt) (driver.Result, error) {\n\t\t\t\t\treturn stmt.(driver.StmtExecContext).ExecContext(ctx, nil)\n\t\t\t\t},\n\t\t\t\tfalse,\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"invalidExec\",\n\t\t\t\t\"INSERT INTO TestStatementQueryExecs VALUES ('invalid_data')\",\n\t\t\t\tfunc(stmt driver.Stmt) (driver.Result, error) {\n\t\t\t\t\treturn stmt.Exec(nil)\n\t\t\t\t},\n\t\t\t\ttrue,\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"invalidExecContext\",\n\t\t\t\t\"INSERT INTO TestStatementQueryExecs VALUES ('invalid_data')\",\n\t\t\t\tfunc(stmt driver.Stmt) (driver.Result, error) {\n\t\t\t\t\treturn stmt.(driver.StmtExecContext).ExecContext(ctx, 
nil)\n\t\t\t\t},\n\t\t\t\ttrue,\n\t\t\t},\n\t\t}\n\n\t\tfor _, tc := range testcases {\n\t\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t\terr := dbt.conn.Raw(func(x any) error {\n\t\t\t\t\tstmt, err := x.(driver.ConnPrepareContext).PrepareContext(ctx, tc.query)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tif tc.wantErr {\n\t\t\t\t\t\t\treturn nil // expected error\n\t\t\t\t\t\t}\n\t\t\t\t\t\tt.Fatal(err)\n\t\t\t\t\t}\n\n\t\t\t\t\t_, err = tc.f(stmt)\n\t\t\t\t\tif (err != nil) != tc.wantErr {\n\t\t\t\t\t\tt.Fatalf(\"error = %v, wantErr %v\", err, tc.wantErr)\n\t\t\t\t\t}\n\n\t\t\t\t\treturn nil\n\t\t\t\t})\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Fatal(err)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestWithQueryTag(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\ttestQueryTag := \"TEST QUERY TAG\"\n\t\tctx := WithQueryTag(context.Background(), testQueryTag)\n\n\t\t// This query itself will be part of the history and will have the query tag\n\t\trows := dbt.mustQueryContext(\n\t\t\tctx,\n\t\t\t\"SELECT QUERY_TAG FROM table(information_schema.query_history_by_session())\")\n\t\tdefer rows.Close()\n\n\t\tassertTrueF(t, rows.Next())\n\t\tvar tag sql.NullString\n\t\terr := rows.Scan(&tag)\n\t\tassertNilF(t, err)\n\t\tassertTrueF(t, tag.Valid, \"no QUERY_TAG set\")\n\t\tassertEqualF(t, tag.String, testQueryTag)\n\t})\n}\n"
  },
  {
    "path": "storage_client.go",
"content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"math\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"time\"\n)\n\nconst (\n\tdefaultConcurrency = 1\n\tdefaultMaxRetry    = 5\n)\n\n// implemented by localUtil and remoteStorageUtil\ntype storageUtil interface {\n\tcreateClient(*execResponseStageInfo, bool, *Config, *snowflakeTelemetry) (cloudClient, error)\n\tuploadOneFileWithRetry(context.Context, *fileMetadata) error\n\tdownloadOneFile(context.Context, *fileMetadata) error\n}\n\n// implemented by snowflakeS3Client, snowflakeAzureClient and snowflakeGcsClient\ntype cloudUtil interface {\n\tcreateClient(*execResponseStageInfo, bool, *snowflakeTelemetry) (cloudClient, error)\n\tgetFileHeader(context.Context, *fileMetadata, string) (*fileHeader, error)\n\tuploadFile(context.Context, string, *fileMetadata, int, int64) error\n\tnativeDownloadFile(context.Context, *fileMetadata, string, int64, int64) error\n}\n\ntype cloudClient any\n\ntype remoteStorageUtil struct {\n\tcfg       *Config\n\ttelemetry *snowflakeTelemetry\n}\n\nfunc (rsu *remoteStorageUtil) getNativeCloudType(cli string, cfg *Config) cloudUtil {\n\tif cloudType(cli) == s3Client {\n\t\tlogger.Info(\"Using S3 client for remote storage\")\n\t\treturn &snowflakeS3Client{\n\t\t\tcfg,\n\t\t\trsu.telemetry,\n\t\t}\n\t} else if cloudType(cli) == azureClient {\n\t\tlogger.Info(\"Using Azure client for remote storage\")\n\t\treturn &snowflakeAzureClient{\n\t\t\tcfg,\n\t\t\trsu.telemetry,\n\t\t}\n\t} else if cloudType(cli) == gcsClient {\n\t\tlogger.Info(\"Using GCS client for remote storage\")\n\t\treturn &snowflakeGcsClient{\n\t\t\tcfg,\n\t\t\trsu.telemetry,\n\t\t}\n\t}\n\treturn nil\n}\n\n// call cloud utils' native create client methods\nfunc (rsu *remoteStorageUtil) createClient(info *execResponseStageInfo, useAccelerateEndpoint bool, cfg *Config, telemetry *snowflakeTelemetry) (cloudClient, error) {\n\tutilClass := rsu.getNativeCloudType(info.LocationType, cfg)\n\treturn 
utilClass.createClient(info, useAccelerateEndpoint, telemetry)\n}\n\nfunc (rsu *remoteStorageUtil) uploadOneFile(ctx context.Context, meta *fileMetadata) error {\n\tutilClass := rsu.getNativeCloudType(meta.stageInfo.LocationType, meta.sfa.sc.cfg)\n\tmaxConcurrency := int(meta.parallel)\n\tvar lastErr error\n\tvar timer time.Time\n\tvar elapsedTime string\n\tmaxRetry := defaultMaxRetry\n\tlogger.Debugf(\n\t\t\"Started Uploading. File: %v, location: %v\", meta.realSrcFileName, meta.stageInfo.Location)\n\tfor retry := range maxRetry {\n\t\ttimer = time.Now()\n\t\tif !meta.overwrite {\n\t\t\theader, err := utilClass.getFileHeader(ctx, meta, meta.dstFileName)\n\t\t\tif meta.resStatus == notFoundFile {\n\t\t\t\terr := utilClass.uploadFile(ctx, meta.realSrcFileName, meta, maxConcurrency, meta.options.MultiPartThreshold)\n\t\t\t\tif err != nil {\n\t\t\t\t\tlogger.Warnf(\"Error uploading %v. err: %v\", meta.realSrcFileName, err)\n\t\t\t\t}\n\t\t\t} else if err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif header != nil && meta.resStatus == uploaded {\n\t\t\t\tmeta.dstFileSize = 0\n\t\t\t\tmeta.resStatus = skipped\n\t\t\t\treturn nil\n\t\t\t}\n\t\t}\n\t\tif meta.overwrite || meta.resStatus == notFoundFile {\n\t\t\terr := utilClass.uploadFile(ctx, meta.realSrcFileName, meta, maxConcurrency, meta.options.MultiPartThreshold)\n\t\t\tif err != nil {\n\t\t\t\tlogger.Warnf(\"Error uploading %v. err: %v\", meta.realSrcFileName, err)\n\t\t\t}\n\t\t}\n\t\telapsedTime = time.Since(timer).String()\n\t\tswitch meta.resStatus {\n\t\tcase uploaded, renewToken, renewPresignedURL:\n\t\t\tlogger.Debugf(\"Uploading file: %v finished in %v ms with the status: %v.\", meta.realSrcFileName, elapsedTime, meta.resStatus)\n\t\t\treturn nil\n\t\tcase needRetry:\n\t\t\tif !meta.noSleepingTime {\n\t\t\t\tsleepingTime := intMin(int(math.Exp2(float64(retry))), 16)\n\t\t\t\tlogger.Debugf(\"Need to retry for uploading file: %v. 
Current retry: %v, Sleeping time: %v.\", meta.realSrcFileName, retry, sleepingTime)\n\t\t\t\ttime.Sleep(time.Second * time.Duration(sleepingTime))\n\t\t\t} else {\n\t\t\t\tlogger.Debugf(\"Need to retry for uploading file: %v. Current retry: %v without the sleeping time.\", meta.realSrcFileName, retry)\n\t\t\t}\n\t\tcase needRetryWithLowerConcurrency:\n\t\t\tmaxConcurrency = int(meta.parallel) - (retry * int(meta.parallel) / maxRetry)\n\t\t\tmaxConcurrency = intMax(defaultConcurrency, maxConcurrency)\n\t\t\tmeta.lastMaxConcurrency = maxConcurrency\n\t\t\tif !meta.noSleepingTime {\n\t\t\t\tsleepingTime := intMin(int(math.Exp2(float64(retry))), 16)\n\t\t\t\tlogger.Debugf(\"Need to retry with lower concurrency for uploading file: %v. Current retry: %v, Sleeping time: %v.\", meta.realSrcFileName, retry, sleepingTime)\n\t\t\t\ttime.Sleep(time.Second * time.Duration(sleepingTime))\n\t\t\t} else {\n\t\t\t\tlogger.Debugf(\"Need to retry with lower concurrency for uploading file: %v. Current retry: %v without the sleeping time.\", meta.realSrcFileName, retry)\n\t\t\t}\n\t\t}\n\t\tlastErr = meta.lastError\n\t}\n\tif lastErr != nil {\n\t\tlogger.Errorf(`Failed to upload file: %v, with error: %v`, meta.realSrcFileName, lastErr)\n\t\treturn lastErr\n\t}\n\treturn fmt.Errorf(\"unknown error uploading %v\", meta.realSrcFileName)\n}\n\nfunc (rsu *remoteStorageUtil) uploadOneFileWithRetry(ctx context.Context, meta *fileMetadata) error {\n\tutilClass := rsu.getNativeCloudType(meta.stageInfo.LocationType, rsu.cfg)\n\tretryOuter := true\n\tfor range 10 {\n\t\t// retry\n\t\tif err := rsu.uploadOneFile(ctx, meta); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tretryInner := true\n\t\tif meta.resStatus == uploaded || meta.resStatus == skipped {\n\t\t\tfor range 10 {\n\t\t\t\tstatus := meta.resStatus\n\t\t\t\tif _, err := utilClass.getFileHeader(ctx, meta, meta.dstFileName); err != nil {\n\t\t\t\t\tlogger.Warnf(\"error while getting file %v header. 
%v\", meta.dstFileSize, err)\n\t\t\t\t}\n\t\t\t\t// check file header status and verify upload/skip\n\t\t\t\tif meta.resStatus == notFoundFile {\n\t\t\t\t\tif !meta.noSleepingTime {\n\t\t\t\t\t\ttime.Sleep(time.Second) // wait 1 second for S3 eventual consistency\n\t\t\t\t\t}\n\t\t\t\t\tcontinue\n\t\t\t\t} else {\n\t\t\t\t\tretryInner = false\n\t\t\t\t\tmeta.resStatus = status\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif !retryInner {\n\t\t\tretryOuter = false\n\t\t\tbreak\n\t\t} else {\n\t\t\tcontinue\n\t\t}\n\t}\n\tif retryOuter {\n\t\t// wanted to continue retrying but could not upload/find file\n\t\tmeta.resStatus = errStatus\n\t}\n\treturn nil\n}\n\nfunc (rsu *remoteStorageUtil) downloadOneFile(ctx context.Context, meta *fileMetadata) error {\n\tfullDstFileName := path.Join(meta.localLocation, baseName(meta.dstFileName))\n\tfullDstFileName, err := expandUser(fullDstFileName)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif !filepath.IsAbs(fullDstFileName) {\n\t\tcwd, err := os.Getwd()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tfullDstFileName = filepath.Join(cwd, fullDstFileName)\n\t}\n\tbaseDir, err := getDirectory()\n\tif err != nil {\n\t\treturn err\n\t}\n\tif _, err = os.Stat(baseDir); os.IsNotExist(err) {\n\t\tif err = os.MkdirAll(baseDir, os.ModePerm); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tutilClass := rsu.getNativeCloudType(meta.stageInfo.LocationType, meta.sfa.sc.cfg)\n\theader, err := utilClass.getFileHeader(ctx, meta, meta.srcFileName)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif header != nil {\n\t\tmeta.srcFileSize = header.contentLength\n\t}\n\n\tmaxConcurrency := meta.parallel\n\tpartSize := meta.options.MultiPartThreshold\n\tvar lastErr error\n\tmaxRetry := defaultMaxRetry\n\n\ttimer := time.Now()\n\tfor range maxRetry {\n\t\ttempDownloadFile := fullDstFileName + \".tmp\"\n\t\tdefer func() {\n\t\t\t// Clean up temp file if it still exists\n\t\t\tif _, statErr := os.Stat(tempDownloadFile); statErr == nil 
{\n\t\t\t\tlogger.Debugf(\"Cleaning up temporary download file: %s\", tempDownloadFile)\n\t\t\t\tif removeErr := os.Remove(tempDownloadFile); removeErr != nil {\n\t\t\t\t\tlogger.Warnf(\"Failed to clean up temporary file %s: %v\", tempDownloadFile, removeErr)\n\t\t\t\t}\n\t\t\t}\n\t\t}()\n\n\t\tif err = utilClass.nativeDownloadFile(ctx, meta, tempDownloadFile, maxConcurrency, partSize); err != nil {\n\t\t\tlogger.Errorf(\"Failed to download file to temporary location %s: %v\", tempDownloadFile, err)\n\t\t\treturn err\n\t\t}\n\t\tif meta.resStatus == downloaded {\n\t\t\tlogger.Debugf(\"Downloading file: %v finished in %v ms. File size: %v\", meta.srcFileName, time.Since(timer).String(), meta.srcFileSize)\n\t\t\tif meta.encryptionMaterial != nil {\n\t\t\t\tif meta.presignedURL != nil {\n\t\t\t\t\theader, err = utilClass.getFileHeader(ctx, meta, meta.srcFileName)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tlogger.Errorf(\"Failed to get file header for %s: %v\", meta.srcFileName, err)\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\ttimer = time.Now()\n\t\t\t\tif isFileGetStream(ctx) {\n\t\t\t\t\ttotalFileSize, err := decryptStreamCBC(header.encryptionMetadata,\n\t\t\t\t\t\tmeta.encryptionMaterial, 0, meta.dstStream, meta.sfa.streamBuffer)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tlogger.Errorf(\"Stream decryption failed for %s - temp file will be cleaned up to prevent corrupted data: %v\", meta.srcFileName, err)\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tlogger.Debugf(\"Total file size: %d\", totalFileSize)\n\t\t\t\t\tif totalFileSize < 0 || totalFileSize > meta.sfa.streamBuffer.Len() {\n\t\t\t\t\t\treturn fmt.Errorf(\"invalid total file size: %d\", totalFileSize)\n\t\t\t\t\t}\n\t\t\t\t\tmeta.sfa.streamBuffer.Truncate(totalFileSize)\n\t\t\t\t\tmeta.dstFileSize = int64(totalFileSize)\n\t\t\t\t} else {\n\t\t\t\t\tif err = rsu.processEncryptedFileToDestination(meta, header, tempDownloadFile, fullDstFileName); err != nil {\n\t\t\t\t\t\treturn 
err\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tlogger.Debugf(\"Decrypting file: %v finished in %v ms.\", meta.srcFileName, time.Since(timer).String())\n\n\t\t\t} else {\n\t\t\t\t// file is not encrypted\n\t\t\t\tif !isFileGetStream(ctx) {\n\t\t\t\t\t// if we have a real file, and not a stream, move the file\n\t\t\t\t\tif err = os.Rename(tempDownloadFile, fullDstFileName); err != nil {\n\t\t\t\t\t\treturn fmt.Errorf(\"failed to move downloaded file to destination: %w\", err)\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\t// if we have a stream and no encryption, just reuse the stream\n\t\t\t\t\tmeta.sfa.streamBuffer = meta.dstStream\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !isFileGetStream(ctx) {\n\t\t\t\tif fi, err := os.Stat(fullDstFileName); err == nil {\n\t\t\t\t\tmeta.dstFileSize = fi.Size()\n\t\t\t\t} else {\n\t\t\t\t\tlogger.Warnf(\"Failed to get file size for %s: %v\", fullDstFileName, err)\n\t\t\t\t}\n\t\t\t}\n\t\t\tlogger.Debugf(\"File download completed successfully for %s (size: %d bytes)\", meta.srcFileName, meta.dstFileSize)\n\t\t\treturn nil\n\t\t}\n\t\tlastErr = meta.lastError\n\t}\n\tif lastErr != nil {\n\t\tlogger.Errorf(`Failed to download file: %v, with error: %v`, meta.srcFileName, lastErr)\n\t\treturn lastErr\n\t}\n\treturn fmt.Errorf(\"unknown error downloading %v\", fullDstFileName)\n}\n\nfunc (rsu *remoteStorageUtil) processEncryptedFileToDestination(meta *fileMetadata, header *fileHeader, tempDownloadFile, fullDstFileName string) error {\n\t// Clean up the temp download file on any exit path\n\tdefer func() {\n\t\tif _, statErr := os.Stat(tempDownloadFile); statErr == nil {\n\t\t\tlogger.Debugf(\"Cleaning up temporary download file: %s\", tempDownloadFile)\n\t\t\terr := os.Remove(tempDownloadFile)\n\t\t\tif err != nil {\n\t\t\t\tlogger.Warnf(\"Failed to clean up temporary download file %s: %v\", tempDownloadFile, err)\n\t\t\t}\n\t\t}\n\t}()\n\n\ttmpDstFileName, err := decryptFileCBC(header.encryptionMetadata, meta.encryptionMaterial, tempDownloadFile, 0, 
meta.tmpDir)\n\t// Ensure cleanup of the decrypted temp file if decryption or rename fails\n\tdefer func() {\n\t\tif _, statErr := os.Stat(tmpDstFileName); statErr == nil {\n\t\t\terr := os.Remove(tmpDstFileName)\n\t\t\tif err != nil {\n\t\t\t\tlogger.Warnf(\"Failed to clean up temporary decrypted file %s: %v\", tmpDstFileName, err)\n\t\t\t}\n\t\t}\n\t}()\n\tif err != nil {\n\t\tlogger.Errorf(\"File decryption failed for %s: %v\", meta.srcFileName, err)\n\t\treturn err\n\t}\n\n\tif err = os.Rename(tmpDstFileName, fullDstFileName); err != nil {\n\t\tlogger.Errorf(\"Failed to move decrypted file from %s to final destination %s: %v\", tmpDstFileName, fullDstFileName, err)\n\t\treturn err\n\t}\n\tlogger.Debugf(\"Successfully decrypted and moved file to %s\", fullDstFileName)\n\treturn nil\n}\n"
  },
  {
    "path": "storage_client_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"testing\"\n)\n\n// TestProcessEncryptedFileToDestination_DecryptionFailure tests that temporary files\n// are cleaned up when decryption fails due to invalid encryption data\nfunc TestProcessEncryptedFileToDestination_DecryptionFailure(t *testing.T) {\n\ttmpDir := t.TempDir()\n\tfullDstFileName := filepath.Join(tmpDir, \"final_destination.txt\")\n\ttempDownloadFile := fullDstFileName + \".tmp\"\n\tassertNilF(t, os.WriteFile(tempDownloadFile, []byte(\"invalid encrypted content\"), 0644), \"Failed to create temp download file\")\n\n\t// Create metadata with invalid encryption material to trigger decryption failure\n\tmeta := &fileMetadata{\n\t\tencryptionMaterial: &snowflakeFileEncryption{\n\t\t\tQueryStageMasterKey: \"invalid_key\", // Invalid key to cause decryption failure\n\t\t\tQueryID:             \"test-query-id\",\n\t\t\tSMKID:               12345,\n\t\t},\n\t\ttmpDir:      tmpDir,\n\t\tsrcFileName: \"test_file.txt\",\n\t}\n\n\t// Create header with invalid encryption metadata\n\theader := &fileHeader{\n\t\tencryptionMetadata: &encryptMetadata{\n\t\t\tkey:     \"invalid_key_data\", // Invalid encryption data\n\t\t\tiv:      \"invalid_iv_data\",\n\t\t\tmatdesc: `{\"smkId\":\"12345\",\"queryId\":\"test-query-id\",\"keySize\":\"256\"}`,\n\t\t},\n\t}\n\n\t// Test: decryption should fail due to invalid encryption data\n\trsu := &remoteStorageUtil{}\n\terr := rsu.processEncryptedFileToDestination(meta, header, tempDownloadFile, fullDstFileName)\n\tassertNotNilF(t, err, \"Expected decryption to fail with invalid encryption data\")\n\n\t// Verify that the final destination file was not created\n\t_, err = os.Stat(fullDstFileName)\n\tassertTrueF(t, os.IsNotExist(err), \"Final destination file should not exist after decryption failure\")\n\n\t// Verify the temp download file was cleaned up even though decryption failed\n\t_, err = 
os.Stat(tempDownloadFile)\n\tassertTrueF(t, os.IsNotExist(err), \"Temp download file should be cleaned up even after decryption failure\")\n\n\tverifyNoTmpFilesLeftBehind(t, fullDstFileName)\n}\n\n// TestProcessEncryptedFileToDestination_Success tests successful decryption and file handling\nfunc TestProcessEncryptedFileToDestination_Success(t *testing.T) {\n\ttmpDir := t.TempDir()\n\n\t// Create test data and encrypt it properly\n\tinputData := \"test data for successful encryption/decryption\"\n\tinputFile := filepath.Join(tmpDir, \"input.txt\")\n\tassertNilF(t, os.WriteFile(inputFile, []byte(inputData), 0644), \"Failed to create input file\")\n\n\t// Create valid encryption material\n\tencMat := &snowflakeFileEncryption{\n\t\tQueryStageMasterKey: \"ztke8tIdVt1zmlQIZm0BMA==\",\n\t\tQueryID:             \"test-query-id\",\n\t\tSMKID:               12345,\n\t}\n\n\t// Encrypt the file to create valid encrypted content\n\tmetadata, encryptedFile, err := encryptFileCBC(encMat, inputFile, 0, tmpDir)\n\tassertNilF(t, err, \"Failed to encrypt test file\")\n\tdefer os.Remove(encryptedFile)\n\n\t// Create final destination path\n\tfullDstFileName := filepath.Join(tmpDir, \"final_destination.txt\")\n\n\t// Create metadata for decryption\n\tmeta := &fileMetadata{\n\t\tencryptionMaterial: encMat,\n\t\ttmpDir:             tmpDir,\n\t\tsrcFileName:        \"test_file.txt\",\n\t}\n\n\theader := &fileHeader{\n\t\tencryptionMetadata: metadata,\n\t}\n\n\t// Test: successful decryption and file move\n\trsu := &remoteStorageUtil{}\n\terr = rsu.processEncryptedFileToDestination(meta, header, encryptedFile, fullDstFileName)\n\tassertNilF(t, err, \"Expected successful decryption and file move\")\n\n\t// Verify that the final destination file was created with correct content\n\tfinalContent, err := os.ReadFile(fullDstFileName)\n\tassertNilF(t, err, \"Failed to read final destination file\")\n\tassertEqualF(t, string(finalContent), inputData, \"Final file content should match original 
input\")\n\n\t// Verify the final destination file exists and has correct content\n\t_, err = os.Stat(fullDstFileName)\n\tassertNilF(t, err, \"Final destination file should exist\")\n\n\tverifyNoTmpFilesLeftBehind(t, fullDstFileName)\n}\n\nfunc verifyNoTmpFilesLeftBehind(t *testing.T, fullDstFileName string) {\n\tdestDir := filepath.Dir(fullDstFileName)\n\tfiles, err := os.ReadDir(destDir)\n\tassertNilF(t, err, \"Failed to read destination directory\")\n\n\ttmpFileCount := 0\n\tfor _, file := range files {\n\t\tif strings.HasSuffix(file.Name(), \".tmp\") {\n\t\t\ttmpFileCount++\n\t\t}\n\t}\n\tassertEqualF(t, tmpFileCount, 0, \"No .tmp files should remain in destination directory after successful operation\")\n}\n"
  },
  {
    "path": "storage_file_util_test.go",
    "content": "package gosnowflake\n\nfunc testEncryptionMeta() *encryptMetadata {\n\tconst mockMatDesc = \"{\\\"queryid\\\":\\\"01abc874-0406-1bf0-0000-53b10668e056\\\",\\\"smkid\\\":\\\"92019681909886\\\",\\\"key\\\":\\\"128\\\"}\"\n\treturn &encryptMetadata{\n\t\tkey:     \"testencryptedkey12345678910==\",\n\t\tiv:      \"testIVkey12345678910==\",\n\t\tmatdesc: mockMatDesc,\n\t}\n}\n"
  },
  {
    "path": "structured_type.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"encoding/hex\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/query\"\n\t\"github.com/snowflakedb/gosnowflake/v2/internal/types\"\n\t\"math/big\"\n\t\"reflect\"\n\t\"slices\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\t\"unicode\"\n)\n\n// ObjectType Empty marker of an object used in column type ScanType function\ntype ObjectType struct {\n}\n\nvar structuredObjectWriterType = reflect.TypeFor[StructuredObjectWriter]()\n\n// StructuredObject is a representation of structured object for reading.\ntype StructuredObject interface {\n\tGetString(fieldName string) (string, error)\n\tGetNullString(fieldName string) (sql.NullString, error)\n\tGetByte(fieldName string) (byte, error)\n\tGetNullByte(fieldName string) (sql.NullByte, error)\n\tGetInt16(fieldName string) (int16, error)\n\tGetNullInt16(fieldName string) (sql.NullInt16, error)\n\tGetInt32(fieldName string) (int32, error)\n\tGetNullInt32(fieldName string) (sql.NullInt32, error)\n\tGetInt64(fieldName string) (int64, error)\n\tGetNullInt64(fieldName string) (sql.NullInt64, error)\n\tGetBigInt(fieldName string) (*big.Int, error)\n\tGetFloat32(fieldName string) (float32, error)\n\tGetFloat64(fieldName string) (float64, error)\n\tGetNullFloat64(fieldName string) (sql.NullFloat64, error)\n\tGetBigFloat(fieldName string) (*big.Float, error)\n\tGetBool(fieldName string) (bool, error)\n\tGetNullBool(fieldName string) (sql.NullBool, error)\n\tGetBytes(fieldName string) ([]byte, error)\n\tGetTime(fieldName string) (time.Time, error)\n\tGetNullTime(fieldName string) (sql.NullTime, error)\n\tGetStruct(fieldName string, scanner sql.Scanner) (sql.Scanner, error)\n\tGetRaw(fieldName string) (any, error)\n\tScanTo(sc sql.Scanner) error\n}\n\n// StructuredObjectWriter is an interface to implement, when binding structured objects.\ntype StructuredObjectWriter interface 
{\n\tWrite(sowc StructuredObjectWriterContext) error\n}\n\n// StructuredObjectWriterContext is a helper interface to write particular fields of structured object.\ntype StructuredObjectWriterContext interface {\n\tWriteString(fieldName string, value string) error\n\tWriteNullString(fieldName string, value sql.NullString) error\n\tWriteByt(fieldName string, value byte) error // WriteByte name is prohibited by go vet\n\tWriteNullByte(fieldName string, value sql.NullByte) error\n\tWriteInt16(fieldName string, value int16) error\n\tWriteNullInt16(fieldName string, value sql.NullInt16) error\n\tWriteInt32(fieldName string, value int32) error\n\tWriteNullInt32(fieldName string, value sql.NullInt32) error\n\tWriteInt64(fieldName string, value int64) error\n\tWriteNullInt64(fieldName string, value sql.NullInt64) error\n\tWriteFloat32(fieldName string, value float32) error\n\tWriteFloat64(fieldName string, value float64) error\n\tWriteNullFloat64(fieldName string, value sql.NullFloat64) error\n\tWriteBytes(fieldName string, value []byte) error\n\tWriteBool(fieldName string, value bool) error\n\tWriteNullBool(fieldName string, value sql.NullBool) error\n\tWriteTime(fieldName string, value time.Time, tsmode []byte) error\n\tWriteNullTime(fieldName string, value sql.NullTime, tsmode []byte) error\n\tWriteStruct(fieldName string, value StructuredObjectWriter) error\n\tWriteNullableStruct(fieldName string, value StructuredObjectWriter, typ reflect.Type) error\n\t// WriteRaw is used for inserting slices and maps only.\n\tWriteRaw(fieldName string, value any, tsmode ...[]byte) error\n\t// WriteNullRaw is used for inserting nil slices and maps only.\n\tWriteNullRaw(fieldName string, typ reflect.Type, tsmode ...[]byte) error\n\tWriteAll(sow StructuredObjectWriter) error\n}\n\n// NilMapTypes is used to define types when binding nil maps.\ntype NilMapTypes struct {\n\tKey   reflect.Type\n\tValue reflect.Type\n}\n\ntype structuredObjectWriterEntry struct {\n\tname      string\n\ttyp    
   string\n\tnullable  bool\n\tlength    int\n\tscale     int\n\tprecision int\n\tfields    []query.FieldMetadata\n}\n\nfunc (e *structuredObjectWriterEntry) toFieldMetadata() query.FieldMetadata {\n\treturn query.FieldMetadata{\n\t\tName:      e.name,\n\t\tType:      e.typ,\n\t\tNullable:  e.nullable,\n\t\tLength:    e.length,\n\t\tScale:     e.scale,\n\t\tPrecision: e.precision,\n\t\tFields:    e.fields,\n\t}\n}\n\ntype structuredObjectWriterContext struct {\n\tvalues  map[string]any\n\tentries []structuredObjectWriterEntry\n\tparams  *syncParams\n}\n\nfunc (sowc *structuredObjectWriterContext) init(params *syncParams) {\n\tsowc.values = make(map[string]any)\n\tsowc.params = params\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteString(fieldName string, value string) error {\n\treturn sowc.writeString(fieldName, value)\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteNullString(fieldName string, value sql.NullString) error {\n\tif value.Valid {\n\t\treturn sowc.WriteString(fieldName, value.String)\n\t}\n\treturn sowc.writeString(fieldName, nil)\n}\n\nfunc (sowc *structuredObjectWriterContext) writeString(fieldName string, value any) error {\n\treturn sowc.write(value, structuredObjectWriterEntry{\n\t\tname:     fieldName,\n\t\ttyp:      \"text\",\n\t\tnullable: true,\n\t\tlength:   134217728,\n\t})\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteByt(fieldName string, value byte) error {\n\treturn sowc.writeFixed(fieldName, value)\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteNullByte(fieldName string, value sql.NullByte) error {\n\tif value.Valid {\n\t\treturn sowc.writeFixed(fieldName, value.Byte)\n\t}\n\treturn sowc.writeFixed(fieldName, nil)\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteInt16(fieldName string, value int16) error {\n\treturn sowc.writeFixed(fieldName, value)\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteNullInt16(fieldName string, value sql.NullInt16) error {\n\tif value.Valid {\n\t\treturn 
sowc.writeFixed(fieldName, value.Int16)\n\t}\n\treturn sowc.writeFixed(fieldName, nil)\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteInt32(fieldName string, value int32) error {\n\treturn sowc.writeFixed(fieldName, value)\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteNullInt32(fieldName string, value sql.NullInt32) error {\n\tif value.Valid {\n\t\treturn sowc.writeFixed(fieldName, value.Int32)\n\t}\n\treturn sowc.writeFixed(fieldName, nil)\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteInt64(fieldName string, value int64) error {\n\treturn sowc.writeFixed(fieldName, value)\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteNullInt64(fieldName string, value sql.NullInt64) error {\n\tif value.Valid {\n\t\treturn sowc.writeFixed(fieldName, value.Int64)\n\t}\n\treturn sowc.writeFixed(fieldName, nil)\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteFloat32(fieldName string, value float32) error {\n\treturn sowc.writeFloat(fieldName, value)\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteFloat64(fieldName string, value float64) error {\n\treturn sowc.writeFloat(fieldName, value)\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteNullFloat64(fieldName string, value sql.NullFloat64) error {\n\tif value.Valid {\n\t\treturn sowc.writeFloat(fieldName, value.Float64)\n\t}\n\treturn sowc.writeFloat(fieldName, nil)\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteBool(fieldName string, value bool) error {\n\treturn sowc.writeBool(fieldName, value)\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteNullBool(fieldName string, value sql.NullBool) error {\n\tif value.Valid {\n\t\treturn sowc.writeBool(fieldName, value.Bool)\n\t}\n\treturn sowc.writeBool(fieldName, nil)\n}\n\nfunc (sowc *structuredObjectWriterContext) writeBool(fieldName string, value any) error {\n\treturn sowc.write(value, structuredObjectWriterEntry{\n\t\tname:     fieldName,\n\t\ttyp:      \"boolean\",\n\t\tnullable: true,\n\t})\n}\n\nfunc (sowc 
*structuredObjectWriterContext) WriteBytes(fieldName string, value []byte) error {\n\tvar res *string\n\tif value != nil {\n\t\tr := hex.EncodeToString(value)\n\t\tres = &r\n\t}\n\treturn sowc.write(res, structuredObjectWriterEntry{\n\t\tname:     fieldName,\n\t\ttyp:      \"binary\",\n\t\tnullable: true,\n\t})\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteTime(fieldName string, value time.Time, tsmode []byte) error {\n\tsnowflakeType, err := dataTypeMode(tsmode)\n\tif err != nil {\n\t\treturn err\n\t}\n\ttyp := types.DriverTypeToSnowflake[snowflakeType]\n\tsfFormat, err := dateTimeInputFormatByType(typ, sowc.params)\n\tif err != nil {\n\t\treturn err\n\t}\n\tgoFormat, err := snowflakeFormatToGoFormat(sfFormat)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn sowc.writeTime(fieldName, value.Format(goFormat), typ)\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteNullTime(fieldName string, value sql.NullTime, tsmode []byte) error {\n\tif value.Valid {\n\t\treturn sowc.WriteTime(fieldName, value.Time, tsmode)\n\t}\n\tsnowflakeType, err := dataTypeMode(tsmode)\n\tif err != nil {\n\t\treturn err\n\t}\n\ttyp := types.DriverTypeToSnowflake[snowflakeType]\n\treturn sowc.writeTime(fieldName, nil, typ)\n}\n\nfunc (sowc *structuredObjectWriterContext) writeTime(fieldName string, value any, typ string) error {\n\treturn sowc.write(value, structuredObjectWriterEntry{\n\t\tname:     fieldName,\n\t\ttyp:      strings.ToLower(typ),\n\t\tnullable: true,\n\t\tscale:    9,\n\t})\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteStruct(fieldName string, value StructuredObjectWriter) error {\n\tif reflect.ValueOf(value).IsNil() {\n\t\treturn fmt.Errorf(\"%s is nil, use WriteNullableStruct instead\", fieldName)\n\t}\n\tchildSowc := structuredObjectWriterContext{}\n\tchildSowc.init(sowc.params)\n\terr := value.Write(&childSowc)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn sowc.write(childSowc.values, structuredObjectWriterEntry{\n\t\tname:     fieldName,\n\t\ttyp:  
    \"object\",\n\t\tnullable: true,\n\t\tfields:   childSowc.toFields(),\n\t})\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteNullableStruct(structFieldName string, value StructuredObjectWriter, typ reflect.Type) error {\n\tif value == nil || reflect.ValueOf(value).IsNil() {\n\t\tchildSowc, err := buildSowcFromType(sowc.params, typ)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn sowc.write(nil, structuredObjectWriterEntry{\n\t\t\tname:     structFieldName,\n\t\t\ttyp:      \"OBJECT\",\n\t\t\tnullable: true,\n\t\t\tfields:   childSowc.toFields(),\n\t\t})\n\t}\n\treturn sowc.WriteStruct(structFieldName, value)\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteRaw(fieldName string, value any, dataTypeModes ...[]byte) error {\n\tdataTypeModeSingle := DataTypeArray\n\tif len(dataTypeModes) == 1 && dataTypeModes[0] != nil {\n\t\tdataTypeModeSingle = dataTypeModes[0]\n\t}\n\ttsmode, err := dataTypeMode(dataTypeModeSingle)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tswitch reflect.ValueOf(value).Kind() {\n\tcase reflect.Slice:\n\t\tmetadata, err := goTypeToFieldMetadata(reflect.TypeOf(value).Elem(), tsmode, sowc.params)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn sowc.write(value, structuredObjectWriterEntry{\n\t\t\tname:     fieldName,\n\t\t\ttyp:      \"ARRAY\",\n\t\t\tnullable: true,\n\t\t\tfields:   []query.FieldMetadata{metadata},\n\t\t})\n\tcase reflect.Map:\n\t\tkeyMetadata, err := goTypeToFieldMetadata(reflect.TypeOf(value).Key(), tsmode, sowc.params)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tvalueMetadata, err := goTypeToFieldMetadata(reflect.TypeOf(value).Elem(), tsmode, sowc.params)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn sowc.write(value, structuredObjectWriterEntry{\n\t\t\tname:     fieldName,\n\t\t\ttyp:      \"MAP\",\n\t\t\tnullable: true,\n\t\t\tfields:   []query.FieldMetadata{keyMetadata, valueMetadata},\n\t\t})\n\t}\n\treturn fmt.Errorf(\"unsupported raw type: %T\", value)\n}\n\nfunc (sowc 
*structuredObjectWriterContext) WriteNullRaw(fieldName string, typ reflect.Type, dataTypeModes ...[]byte) error {\n\tdataTypeModeSingle := DataTypeArray\n\tif len(dataTypeModes) == 1 && dataTypeModes[0] != nil {\n\t\tdataTypeModeSingle = dataTypeModes[0]\n\t}\n\ttsmode, err := dataTypeMode(dataTypeModeSingle)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif typ.Kind() == reflect.Slice || typ.Kind() == reflect.Map {\n\t\tmetadata, err := goTypeToFieldMetadata(typ, tsmode, sowc.params)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif err := sowc.write(nil, structuredObjectWriterEntry{\n\t\t\tname:     fieldName,\n\t\t\ttyp:      metadata.Type,\n\t\t\tnullable: true,\n\t\t\tfields:   metadata.Fields,\n\t\t}); err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t}\n\treturn fmt.Errorf(\"cannot use %v as nillable field\", typ.Kind().String())\n}\n\nfunc buildSowcFromType(params *syncParams, typ reflect.Type) (*structuredObjectWriterContext, error) {\n\tchildSowc := &structuredObjectWriterContext{}\n\tchildSowc.init(params)\n\tif typ.Kind() == reflect.Pointer {\n\t\ttyp = typ.Elem()\n\t}\n\tfor i := 0; i < typ.NumField(); i++ {\n\t\tfield := typ.Field(i)\n\t\tfieldName := getSfFieldName(field)\n\t\tif field.Type.Kind() == reflect.String {\n\t\t\tif err := childSowc.writeString(fieldName, nil); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t} else if field.Type.Kind() == reflect.Uint8 || field.Type.Kind() == reflect.Int16 || field.Type.Kind() == reflect.Int32 || field.Type.Kind() == reflect.Int64 {\n\t\t\tif err := childSowc.writeFixed(fieldName, nil); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t} else if field.Type.Kind() == reflect.Float32 || field.Type.Kind() == reflect.Float64 {\n\t\t\tif err := childSowc.writeFloat(fieldName, nil); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t} else if field.Type.Kind() == reflect.Bool {\n\t\t\tif err := childSowc.writeBool(fieldName, nil); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t} else if 
(field.Type.Kind() == reflect.Slice || field.Type.Kind() == reflect.Array) && field.Type.Elem().Kind() == reflect.Uint8 {\n\t\t\tif err := childSowc.WriteBytes(fieldName, nil); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t} else if field.Type.Kind() == reflect.Struct || field.Type.Kind() == reflect.Pointer {\n\t\t\tt := field.Type\n\t\t\tif field.Type.Kind() == reflect.Pointer {\n\t\t\t\tt = field.Type.Elem()\n\t\t\t}\n\t\t\tif t.AssignableTo(reflect.TypeFor[sql.NullString]()) {\n\t\t\t\tif err := childSowc.WriteNullString(fieldName, sql.NullString{}); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t} else if t.AssignableTo(reflect.TypeFor[sql.NullByte]()) {\n\t\t\t\tif err := childSowc.WriteNullByte(fieldName, sql.NullByte{}); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t} else if t.AssignableTo(reflect.TypeFor[sql.NullInt16]()) {\n\t\t\t\tif err := childSowc.WriteNullInt16(fieldName, sql.NullInt16{}); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t} else if t.AssignableTo(reflect.TypeFor[sql.NullInt32]()) {\n\t\t\t\tif err := childSowc.WriteNullInt32(fieldName, sql.NullInt32{}); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t} else if t.AssignableTo(reflect.TypeFor[sql.NullInt64]()) {\n\t\t\t\tif err := childSowc.WriteNullInt64(fieldName, sql.NullInt64{}); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t} else if t.AssignableTo(reflect.TypeFor[sql.NullFloat64]()) {\n\t\t\t\tif err := childSowc.WriteNullFloat64(fieldName, sql.NullFloat64{}); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t} else if t.AssignableTo(reflect.TypeFor[sql.NullBool]()) {\n\t\t\t\tif err := childSowc.WriteNullBool(fieldName, sql.NullBool{}); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t} else if t.AssignableTo(reflect.TypeFor[sql.NullTime]()) || t.AssignableTo(reflect.TypeFor[time.Time]()) {\n\t\t\t\ttimeSnowflakeType, err := getTimeSnowflakeType(field)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, 
err\n\t\t\t\t}\n\t\t\t\tif timeSnowflakeType == nil {\n\t\t\t\t\treturn nil, fmt.Errorf(\"field %v does not have proper sf tag\", fieldName)\n\t\t\t\t}\n\t\t\t\tif err := childSowc.WriteNullTime(fieldName, sql.NullTime{}, timeSnowflakeType); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t} else if field.Type.AssignableTo(structuredObjectWriterType) {\n\t\t\t\tif err := childSowc.WriteNullableStruct(fieldName, nil, field.Type); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t} else if t.Implements(reflect.TypeFor[driver.Valuer]()) {\n\t\t\t\tif err := childSowc.WriteNullString(fieldName, sql.NullString{}); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\treturn nil, fmt.Errorf(\"field %s has unsupported type\", field.Name)\n\t\t\t}\n\t\t} else if field.Type.Kind() == reflect.Slice || field.Type.Kind() == reflect.Map {\n\t\t\ttimeSnowflakeType, err := getTimeSnowflakeType(field)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tif err := childSowc.WriteNullRaw(fieldName, field.Type, timeSnowflakeType); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\t}\n\treturn childSowc, nil\n}\n\nfunc (sowc *structuredObjectWriterContext) writeFixed(fieldName string, value any) error {\n\treturn sowc.write(value, structuredObjectWriterEntry{\n\t\tname:      fieldName,\n\t\ttyp:       \"fixed\",\n\t\tnullable:  true,\n\t\tprecision: 38,\n\t\tscale:     0,\n\t})\n}\n\nfunc (sowc *structuredObjectWriterContext) writeFloat(fieldName string, value any) error {\n\treturn sowc.write(value, structuredObjectWriterEntry{\n\t\tname:      fieldName,\n\t\ttyp:       \"real\",\n\t\tnullable:  true,\n\t\tprecision: 38,\n\t\tscale:     0,\n\t})\n}\n\nfunc (sowc *structuredObjectWriterContext) write(value any, entry structuredObjectWriterEntry) error {\n\tsowc.values[entry.name] = value\n\tsowc.entries = append(sowc.entries, entry)\n\treturn nil\n}\n\nfunc (sowc *structuredObjectWriterContext) WriteAll(sow 
StructuredObjectWriter) error {\n\ttyp := reflect.TypeOf(sow)\n\tif typ.Kind() == reflect.Pointer {\n\t\ttyp = typ.Elem()\n\t}\n\tval := reflect.Indirect(reflect.ValueOf(sow))\n\tfor i := 0; i < typ.NumField(); i++ {\n\t\tfield := typ.Field(i)\n\t\tif shouldIgnoreField(field) {\n\t\t\tcontinue\n\t\t}\n\t\tfieldName := getSfFieldName(field)\n\t\tif field.Type.Kind() == reflect.String {\n\t\t\tif err := sowc.WriteString(fieldName, val.Field(i).String()); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t} else if field.Type.Kind() == reflect.Uint8 {\n\t\t\tif err := sowc.WriteByt(fieldName, byte(val.Field(i).Uint())); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t} else if field.Type.Kind() == reflect.Int16 {\n\t\t\tif err := sowc.WriteInt16(fieldName, int16(val.Field(i).Int())); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t} else if field.Type.Kind() == reflect.Int32 {\n\t\t\tif err := sowc.WriteInt32(fieldName, int32(val.Field(i).Int())); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t} else if field.Type.Kind() == reflect.Int64 {\n\t\t\tif err := sowc.WriteInt64(fieldName, val.Field(i).Int()); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t} else if field.Type.Kind() == reflect.Float32 {\n\t\t\tif err := sowc.WriteFloat32(fieldName, float32(val.Field(i).Float())); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t} else if field.Type.Kind() == reflect.Float64 {\n\t\t\tif err := sowc.WriteFloat64(fieldName, val.Field(i).Float()); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t} else if field.Type.Kind() == reflect.Bool {\n\t\t\tif err := sowc.WriteBool(fieldName, val.Field(i).Bool()); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t} else if (field.Type.Kind() == reflect.Array || field.Type.Kind() == reflect.Slice) && field.Type.Elem().Kind() == reflect.Uint8 {\n\t\t\tif err := sowc.WriteBytes(fieldName, val.Field(i).Bytes()); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t} else if field.Type.Kind() == reflect.Struct || field.Type.Kind() == reflect.Pointer {\n\t\t\tif 
v, ok := val.Field(i).Interface().(time.Time); ok {\n\t\t\t\ttimeSnowflakeType, err := getTimeSnowflakeType(typ.Field(i))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tif timeSnowflakeType == nil {\n\t\t\t\t\treturn fmt.Errorf(\"field %v does not have a proper sf tag\", fieldName)\n\t\t\t\t}\n\t\t\t\tif err := sowc.WriteTime(fieldName, v, timeSnowflakeType); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t} else if v, ok := val.Field(i).Interface().(sql.NullString); ok {\n\t\t\t\tif err := sowc.WriteNullString(fieldName, v); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t} else if v, ok := val.Field(i).Interface().(sql.NullByte); ok {\n\t\t\t\tif err := sowc.WriteNullByte(fieldName, v); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t} else if v, ok := val.Field(i).Interface().(sql.NullInt16); ok {\n\t\t\t\tif err := sowc.WriteNullInt16(fieldName, v); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t} else if v, ok := val.Field(i).Interface().(sql.NullInt32); ok {\n\t\t\t\tif err := sowc.WriteNullInt32(fieldName, v); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t} else if v, ok := val.Field(i).Interface().(sql.NullInt64); ok {\n\t\t\t\tif err := sowc.WriteNullInt64(fieldName, v); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t} else if v, ok := val.Field(i).Interface().(sql.NullFloat64); ok {\n\t\t\t\tif err := sowc.WriteNullFloat64(fieldName, v); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t} else if v, ok := val.Field(i).Interface().(sql.NullBool); ok {\n\t\t\t\tif err := sowc.WriteNullBool(fieldName, v); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t} else if v, ok := val.Field(i).Interface().(sql.NullTime); ok {\n\t\t\t\ttimeSnowflakeType, err := getTimeSnowflakeType(typ.Field(i))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tif timeSnowflakeType == nil {\n\t\t\t\t\treturn fmt.Errorf(\"field %v does not have a proper sf tag\", fieldName)\n\t\t\t\t}\n\t\t\t\tif err := 
sowc.WriteNullTime(fieldName, v, timeSnowflakeType); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t} else if v, ok := val.Field(i).Interface().(StructuredObjectWriter); ok {\n\t\t\t\tif reflect.ValueOf(v).IsNil() {\n\t\t\t\t\tif err := sowc.WriteNullableStruct(fieldName, nil, reflect.TypeOf(v)); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tchildSowc := &structuredObjectWriterContext{}\n\t\t\t\t\tchildSowc.init(sowc.params)\n\t\t\t\t\tif err := v.Write(childSowc); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif err := sowc.write(childSowc.values, structuredObjectWriterEntry{\n\t\t\t\t\t\tname:     fieldName,\n\t\t\t\t\t\ttyp:      \"OBJECT\",\n\t\t\t\t\t\tnullable: true,\n\t\t\t\t\t\tfields:   childSowc.toFields(),\n\t\t\t\t\t}); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} else if field.Type.Kind() == reflect.Slice || field.Type.Kind() == reflect.Map {\n\t\t\tvar timeSfType []byte\n\t\t\tvar err error\n\t\t\tif field.Type.Elem().AssignableTo(reflect.TypeFor[time.Time]()) || field.Type.Elem().AssignableTo(reflect.TypeFor[sql.NullTime]()) {\n\t\t\t\ttimeSfType, err = getTimeSnowflakeType(typ.Field(i))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t\tif err := sowc.WriteRaw(fieldName, val.Field(i).Interface(), timeSfType); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t} else {\n\t\t\treturn fmt.Errorf(\"field %s has unsupported type\", field.Name)\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (sowc *structuredObjectWriterContext) toFields() []query.FieldMetadata {\n\tfieldMetadatas := make([]query.FieldMetadata, len(sowc.entries))\n\tfor i, entry := range sowc.entries {\n\t\tfieldMetadatas[i] = entry.toFieldMetadata()\n\t}\n\treturn fieldMetadatas\n}\n\n// ArrayOfScanners Helper type for scanning array of sql.Scanner values.\ntype ArrayOfScanners[T sql.Scanner] []T\n\nfunc (st *ArrayOfScanners[T]) Scan(val any) error {\n\tif val == nil {\n\t\treturn 
nil\n\t}\n\tsts := val.([]*structuredType)\n\t*st = make([]T, len(sts))\n\tvar t T\n\tfor i, s := range sts {\n\t\t(*st)[i] = reflect.New(reflect.TypeOf(t).Elem()).Interface().(T)\n\t\tif err := (*st)[i].Scan(s); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\n// ScanArrayOfScanners is a helper function for scanning arrays of sql.Scanner values.\n// Example:\n//\n//\tvar res []*simpleObject\n//\terr := rows.Scan(ScanArrayOfScanners(&res))\nfunc ScanArrayOfScanners[T sql.Scanner](value *[]T) *ArrayOfScanners[T] {\n\treturn (*ArrayOfScanners[T])(value)\n}\n\n// MapOfScanners Helper type for scanning map of sql.Scanner values.\ntype MapOfScanners[K comparable, V sql.Scanner] map[K]V\n\nfunc (st *MapOfScanners[K, V]) Scan(val any) error {\n\tif val == nil {\n\t\treturn nil\n\t}\n\tsts := val.(map[K]*structuredType)\n\t*st = make(map[K]V)\n\tvar someV V\n\tfor k, v := range sts {\n\t\tif v != nil && !reflect.ValueOf(v).IsNil() {\n\t\t\t(*st)[k] = reflect.New(reflect.TypeOf(someV).Elem()).Interface().(V)\n\t\t\tif err := (*st)[k].Scan(sts[k]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t} else {\n\t\t\t(*st)[k] = reflect.Zero(reflect.TypeOf(someV)).Interface().(V)\n\t\t}\n\t}\n\treturn nil\n}\n\n// ScanMapOfScanners is a helper function for scanning maps of sql.Scanner values.\n// Example:\n//\n//\tvar res map[string]*simpleObject\n//\terr := rows.Scan(ScanMapOfScanners(&res))\nfunc ScanMapOfScanners[K comparable, V sql.Scanner](m *map[K]V) *MapOfScanners[K, V] {\n\treturn (*MapOfScanners[K, V])(m)\n}\n\ntype structuredType struct {\n\tvalues        map[string]any\n\tfieldMetadata []query.FieldMetadata\n\tparams        *syncParams\n}\n\nfunc getType[T any](st *structuredType, fieldName string, emptyValue T) (T, bool, error) {\n\tv, ok := st.values[fieldName]\n\tif !ok {\n\t\treturn emptyValue, false, errors.New(\"field \" + fieldName + \" does not exist\")\n\t}\n\tif v == nil {\n\t\treturn emptyValue, true, nil\n\t}\n\tv, ok = v.(T)\n\tif !ok 
{\n\t\treturn emptyValue, false, fmt.Errorf(\"cannot convert field %v to %T\", fieldName, emptyValue)\n\t}\n\treturn v.(T), false, nil\n}\n\nfunc (st *structuredType) GetString(fieldName string) (string, error) {\n\tnullString, err := st.GetNullString(fieldName)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tif !nullString.Valid {\n\t\treturn \"\", fmt.Errorf(\"nil value for %v, use GetNullString instead\", fieldName)\n\t}\n\treturn nullString.String, nil\n}\n\nfunc (st *structuredType) GetNullString(fieldName string) (sql.NullString, error) {\n\ts, wasNil, err := getType[string](st, fieldName, \"\")\n\tif err != nil {\n\t\treturn sql.NullString{}, err\n\t}\n\tif wasNil {\n\t\treturn sql.NullString{Valid: false}, err\n\t}\n\treturn sql.NullString{Valid: true, String: s}, nil\n}\n\nfunc (st *structuredType) GetByte(fieldName string) (byte, error) {\n\tnullByte, err := st.GetNullByte(fieldName)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\tif !nullByte.Valid {\n\t\treturn 0, fmt.Errorf(\"nil value for %v, use GetNullByte instead\", fieldName)\n\t}\n\treturn nullByte.Byte, nil\n}\n\nfunc (st *structuredType) GetNullByte(fieldName string) (sql.NullByte, error) {\n\tb, err := st.GetNullInt64(fieldName)\n\tif err != nil {\n\t\treturn sql.NullByte{}, err\n\t}\n\tif !b.Valid {\n\t\treturn sql.NullByte{Valid: false}, nil\n\t}\n\treturn sql.NullByte{Valid: true, Byte: byte(b.Int64)}, nil\n}\n\nfunc (st *structuredType) GetInt16(fieldName string) (int16, error) {\n\tnullInt16, err := st.GetNullInt16(fieldName)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\tif !nullInt16.Valid {\n\t\treturn 0, fmt.Errorf(\"nil value for %v, use GetNullInt16 instead\", fieldName)\n\t}\n\treturn nullInt16.Int16, nil\n}\n\nfunc (st *structuredType) GetNullInt16(fieldName string) (sql.NullInt16, error) {\n\tb, err := st.GetNullInt64(fieldName)\n\tif err != nil {\n\t\treturn sql.NullInt16{}, err\n\t}\n\tif !b.Valid {\n\t\treturn sql.NullInt16{Valid: false}, nil\n\t}\n\treturn sql.NullInt16{Valid: 
true, Int16: int16(b.Int64)}, nil\n}\n\nfunc (st *structuredType) GetInt32(fieldName string) (int32, error) {\n\tnullInt32, err := st.GetNullInt32(fieldName)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\tif !nullInt32.Valid {\n\t\treturn 0, fmt.Errorf(\"nil value for %v, use GetNullInt32 instead\", fieldName)\n\t}\n\treturn nullInt32.Int32, nil\n}\n\nfunc (st *structuredType) GetNullInt32(fieldName string) (sql.NullInt32, error) {\n\tb, err := st.GetNullInt64(fieldName)\n\tif err != nil {\n\t\treturn sql.NullInt32{}, err\n\t}\n\tif !b.Valid {\n\t\treturn sql.NullInt32{Valid: false}, nil\n\t}\n\treturn sql.NullInt32{Valid: true, Int32: int32(b.Int64)}, nil\n}\n\nfunc (st *structuredType) GetInt64(fieldName string) (int64, error) {\n\tnullInt64, err := st.GetNullInt64(fieldName)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\tif !nullInt64.Valid {\n\t\treturn 0, fmt.Errorf(\"nil value for %v, use GetNullInt64 instead\", fieldName)\n\t}\n\treturn nullInt64.Int64, nil\n}\n\nfunc (st *structuredType) GetNullInt64(fieldName string) (sql.NullInt64, error) {\n\ti64, wasNil, err := getType[int64](st, fieldName, 0)\n\tif wasNil {\n\t\treturn sql.NullInt64{Valid: false}, err\n\t}\n\tif err == nil {\n\t\treturn sql.NullInt64{Valid: true, Int64: i64}, nil\n\t}\n\tif s, _, err := getType[string](st, fieldName, \"\"); err == nil {\n\t\ti, err := strconv.ParseInt(s, 10, 64)\n\t\tif err != nil {\n\t\t\treturn sql.NullInt64{Valid: false}, err\n\t\t}\n\t\treturn sql.NullInt64{Valid: true, Int64: i}, nil\n\t} else if b, _, err := getType[float64](st, fieldName, 0); err == nil {\n\t\treturn sql.NullInt64{Valid: true, Int64: int64(b)}, nil\n\t} else if b, _, err := getType[json.Number](st, fieldName, \"\"); err == nil {\n\t\ti, err := strconv.ParseInt(string(b), 10, 64)\n\t\tif err != nil {\n\t\t\treturn sql.NullInt64{Valid: false}, err\n\t\t}\n\t\treturn sql.NullInt64{Valid: true, Int64: i}, err\n\t} else {\n\t\treturn sql.NullInt64{Valid: false}, fmt.Errorf(\"cannot cast column %v to 
int64\", fieldName)\n\t}\n}\n\nfunc (st *structuredType) GetBigInt(fieldName string) (*big.Int, error) {\n\tb, wasNull, err := getType[*big.Int](st, fieldName, new(big.Int))\n\tif wasNull {\n\t\treturn nil, nil\n\t}\n\treturn b, err\n}\n\nfunc (st *structuredType) GetFloat32(fieldName string) (float32, error) {\n\tf32, err := st.GetFloat64(fieldName)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\treturn float32(f32), err\n}\n\nfunc (st *structuredType) GetFloat64(fieldName string) (float64, error) {\n\tnullFloat64, err := st.GetNullFloat64(fieldName)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\tif !nullFloat64.Valid {\n\t\treturn 0, fmt.Errorf(\"nil value for %v, use GetNullFloat64 instead\", fieldName)\n\t}\n\treturn nullFloat64.Float64, nil\n}\n\nfunc (st *structuredType) GetNullFloat64(fieldName string) (sql.NullFloat64, error) {\n\tf, wasNull, err := getType[float64](st, fieldName, 0)\n\tif wasNull {\n\t\treturn sql.NullFloat64{Valid: false}, nil\n\t}\n\tif err == nil {\n\t\treturn sql.NullFloat64{Valid: true, Float64: f}, nil\n\t}\n\ts, _, err := getType[string](st, fieldName, \"\")\n\tif err == nil {\n\t\tf64, err := strconv.ParseFloat(s, 64)\n\t\tif err != nil {\n\t\t\treturn sql.NullFloat64{}, err\n\t\t}\n\t\treturn sql.NullFloat64{Valid: true, Float64: f64}, err\n\t}\n\tjsonNumber, _, err := getType[json.Number](st, fieldName, \"\")\n\tif err != nil {\n\t\treturn sql.NullFloat64{}, err\n\t}\n\tf64, err := strconv.ParseFloat(string(jsonNumber), 64)\n\tif err != nil {\n\t\treturn sql.NullFloat64{}, err\n\t}\n\treturn sql.NullFloat64{Valid: true, Float64: f64}, nil\n}\n\nfunc (st *structuredType) GetBigFloat(fieldName string) (*big.Float, error) {\n\tfloat, wasNull, err := getType[*big.Float](st, fieldName, new(big.Float))\n\tif wasNull {\n\t\treturn nil, nil\n\t}\n\treturn float, err\n}\n\nfunc (st *structuredType) GetBool(fieldName string) (bool, error) {\n\tnullBool, err := st.GetNullBool(fieldName)\n\tif err != nil {\n\t\treturn false, 
err\n\t}\n\tif !nullBool.Valid {\n\t\treturn false, fmt.Errorf(\"nil value for %v, use GetNullBool instead\", fieldName)\n\t}\n\treturn nullBool.Bool, nil\n}\n\nfunc (st *structuredType) GetNullBool(fieldName string) (sql.NullBool, error) {\n\tb, wasNull, err := getType[bool](st, fieldName, false)\n\tif wasNull {\n\t\treturn sql.NullBool{Valid: false}, nil\n\t}\n\tif err != nil {\n\t\treturn sql.NullBool{}, err\n\t}\n\treturn sql.NullBool{Valid: true, Bool: b}, nil\n}\n\nfunc (st *structuredType) GetBytes(fieldName string) ([]byte, error) {\n\tif bi, _, err := getType[[]byte](st, fieldName, nil); err == nil {\n\t\treturn bi, nil\n\t} else if bi, _, err := getType[string](st, fieldName, \"\"); err == nil {\n\t\treturn hex.DecodeString(bi)\n\t}\n\tbytes, _, err := getType[[]byte](st, fieldName, []byte{})\n\treturn bytes, err\n}\n\nfunc (st *structuredType) GetTime(fieldName string) (time.Time, error) {\n\tnullTime, err := st.GetNullTime(fieldName)\n\tif err != nil {\n\t\treturn time.Time{}, err\n\t}\n\tif !nullTime.Valid {\n\t\treturn time.Time{}, fmt.Errorf(\"nil value for %v, use GetNullTime instead\", fieldName)\n\t}\n\treturn nullTime.Time, nil\n}\n\nfunc (st *structuredType) GetNullTime(fieldName string) (sql.NullTime, error) {\n\ts, wasNull, err := getType[string](st, fieldName, \"\")\n\tif wasNull {\n\t\treturn sql.NullTime{Valid: false}, nil\n\t}\n\tif err == nil {\n\t\tfieldMetadata, err := st.fieldMetadataByFieldName(fieldName)\n\t\tif err != nil {\n\t\t\treturn sql.NullTime{}, err\n\t\t}\n\t\tformat, err := dateTimeOutputFormatByType(fieldMetadata.Type, st.params)\n\t\tif err != nil {\n\t\t\treturn sql.NullTime{}, err\n\t\t}\n\t\tgoFormat, err := snowflakeFormatToGoFormat(format)\n\t\tif err != nil {\n\t\t\treturn sql.NullTime{}, err\n\t\t}\n\t\tparsed, err := time.Parse(goFormat, s)\n\t\treturn sql.NullTime{Valid: true, Time: parsed}, err\n\t}\n\ttime, _, err := getType[time.Time](st, fieldName, time.Time{})\n\tif err != nil {\n\t\treturn sql.NullTime{}, 
err\n\t}\n\treturn sql.NullTime{Valid: true, Time: time}, nil\n}\n\nfunc (st *structuredType) GetStruct(fieldName string, scanner sql.Scanner) (sql.Scanner, error) {\n\tchildSt, wasNull, err := getType[*structuredType](st, fieldName, &structuredType{})\n\tif wasNull {\n\t\treturn nil, nil\n\t}\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\terr = scanner.Scan(childSt)\n\treturn scanner, err\n}\n\nfunc (st *structuredType) GetRaw(fieldName string) (any, error) {\n\treturn st.values[fieldName], nil\n}\n\nfunc (st *structuredType) ScanTo(sc sql.Scanner) error {\n\tv := reflect.Indirect(reflect.ValueOf(sc))\n\tt := v.Type()\n\tfor i := 0; i < t.NumField(); i++ {\n\t\tfield := t.Field(i)\n\t\tif shouldIgnoreField(field) {\n\t\t\tcontinue\n\t\t}\n\t\tswitch field.Type.Kind() {\n\t\tcase reflect.String:\n\t\t\ts, err := st.GetString(getSfFieldName(field))\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tv.FieldByName(field.Name).SetString(s)\n\t\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\t\ti, err := st.GetInt64(getSfFieldName(field))\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tv.FieldByName(field.Name).SetInt(i)\n\t\tcase reflect.Uint8:\n\t\t\tb, err := st.GetByte(getSfFieldName(field))\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tv.FieldByName(field.Name).SetUint(uint64(b))\n\t\tcase reflect.Float32, reflect.Float64:\n\t\t\tf, err := st.GetFloat64(getSfFieldName(field))\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tv.FieldByName(field.Name).SetFloat(f)\n\t\tcase reflect.Bool:\n\t\t\tb, err := st.GetBool(getSfFieldName(field))\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tv.FieldByName(field.Name).SetBool(b)\n\t\tcase reflect.Slice, reflect.Array:\n\t\t\tswitch field.Type.Elem().Kind() {\n\t\t\tcase reflect.Uint8:\n\t\t\t\tb, err := st.GetBytes(getSfFieldName(field))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn 
err\n\t\t\t\t}\n\t\t\t\tv.FieldByName(field.Name).SetBytes(b)\n\t\t\tdefault:\n\t\t\t\traw, err := st.GetRaw(getSfFieldName(field))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tif raw != nil {\n\t\t\t\t\tv.FieldByName(field.Name).Set(reflect.ValueOf(raw))\n\t\t\t\t}\n\t\t\t}\n\t\tcase reflect.Map:\n\t\t\traw, err := st.GetRaw(getSfFieldName(field))\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif raw != nil {\n\t\t\t\tv.FieldByName(field.Name).Set(reflect.ValueOf(raw))\n\t\t\t}\n\t\tcase reflect.Struct:\n\t\t\ta := v.FieldByName(field.Name).Interface()\n\t\t\tif _, ok := a.(time.Time); ok {\n\t\t\t\ttime, err := st.GetTime(getSfFieldName(field))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tv.FieldByName(field.Name).Set(reflect.ValueOf(time))\n\t\t\t} else if _, ok := a.(sql.Scanner); ok {\n\t\t\t\tscanner := reflect.New(reflect.TypeOf(a)).Interface().(sql.Scanner)\n\t\t\t\ts, err := st.GetStruct(getSfFieldName(field), scanner)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tv.FieldByName(field.Name).Set(reflect.Indirect(reflect.ValueOf(s)))\n\t\t\t} else if _, ok := a.(sql.NullString); ok {\n\t\t\t\tns, err := st.GetNullString(getSfFieldName(field))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tv.FieldByName(field.Name).Set(reflect.ValueOf(ns))\n\t\t\t} else if _, ok := a.(sql.NullByte); ok {\n\t\t\t\tnb, err := st.GetNullByte(getSfFieldName(field))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tv.FieldByName(field.Name).Set(reflect.ValueOf(nb))\n\t\t\t} else if _, ok := a.(sql.NullBool); ok {\n\t\t\t\tnb, err := st.GetNullBool(getSfFieldName(field))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tv.FieldByName(field.Name).Set(reflect.ValueOf(nb))\n\t\t\t} else if _, ok := a.(sql.NullInt16); ok {\n\t\t\t\tni, err := st.GetNullInt16(getSfFieldName(field))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn 
err\n\t\t\t\t}\n\t\t\t\tv.FieldByName(field.Name).Set(reflect.ValueOf(ni))\n\t\t\t} else if _, ok := a.(sql.NullInt32); ok {\n\t\t\t\tni, err := st.GetNullInt32(getSfFieldName(field))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tv.FieldByName(field.Name).Set(reflect.ValueOf(ni))\n\t\t\t} else if _, ok := a.(sql.NullInt64); ok {\n\t\t\t\tni, err := st.GetNullInt64(getSfFieldName(field))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tv.FieldByName(field.Name).Set(reflect.ValueOf(ni))\n\t\t\t} else if _, ok := a.(sql.NullFloat64); ok {\n\t\t\t\tnf, err := st.GetNullFloat64(getSfFieldName(field))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tv.FieldByName(field.Name).Set(reflect.ValueOf(nf))\n\t\t\t} else if _, ok := a.(sql.NullTime); ok {\n\t\t\t\tnt, err := st.GetNullTime(getSfFieldName(field))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tv.FieldByName(field.Name).Set(reflect.ValueOf(nt))\n\t\t\t}\n\t\tcase reflect.Pointer:\n\t\t\tswitch field.Type.Elem().Kind() {\n\t\t\tcase reflect.Struct:\n\t\t\t\ta := reflect.New(field.Type.Elem()).Interface()\n\t\t\t\ts, err := st.GetStruct(getSfFieldName(field), a.(sql.Scanner))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tif s != nil {\n\t\t\t\t\tv.FieldByName(field.Name).Set(reflect.ValueOf(s))\n\t\t\t\t}\n\t\t\tdefault:\n\t\t\t\treturn errors.New(\"only struct pointers are supported\")\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (st *structuredType) fieldMetadataByFieldName(fieldName string) (query.FieldMetadata, error) {\n\tfor _, fm := range st.fieldMetadata {\n\t\tif fm.Name == fieldName {\n\t\t\treturn fm, nil\n\t\t}\n\t}\n\treturn query.FieldMetadata{}, errors.New(\"no metadata for field \" + fieldName)\n}\n\nfunc structuredTypesEnabled(ctx context.Context) bool {\n\tv := ctx.Value(enableStructuredTypes)\n\tif v == nil {\n\t\treturn false\n\t}\n\td, ok := v.(bool)\n\treturn ok && d\n}\n\nfunc 
embeddedValuesNullableEnabled(ctx context.Context) bool {\n\tv := ctx.Value(embeddedValuesNullable)\n\tif v == nil {\n\t\treturn false\n\t}\n\td, ok := v.(bool)\n\treturn ok && d\n}\n\nfunc getSfFieldName(field reflect.StructField) string {\n\tsfTag := field.Tag.Get(\"sf\")\n\tif sfTag != \"\" {\n\t\treturn strings.Split(sfTag, \",\")[0]\n\t}\n\tr := []rune(field.Name)\n\tr[0] = unicode.ToLower(r[0])\n\treturn string(r)\n}\n\nfunc shouldIgnoreField(field reflect.StructField) bool {\n\tsfTag := strings.ToLower(field.Tag.Get(\"sf\"))\n\tif sfTag == \"\" {\n\t\treturn false\n\t}\n\treturn slices.Contains(strings.Split(sfTag, \",\")[1:], \"ignore\")\n}\n\nfunc getTimeSnowflakeType(field reflect.StructField) ([]byte, error) {\n\tsfTag := strings.ToLower(field.Tag.Get(\"sf\"))\n\tif sfTag == \"\" {\n\t\treturn nil, nil\n\t}\n\tvalues := strings.Split(sfTag, \",\")[1:]\n\tif slices.Contains(values, \"time\") {\n\t\treturn DataTypeTime, nil\n\t} else if slices.Contains(values, \"date\") {\n\t\treturn DataTypeDate, nil\n\t} else if slices.Contains(values, \"ltz\") {\n\t\treturn DataTypeTimestampLtz, nil\n\t} else if slices.Contains(values, \"ntz\") {\n\t\treturn DataTypeTimestampNtz, nil\n\t} else if slices.Contains(values, \"tz\") {\n\t\treturn DataTypeTimestampTz, nil\n\t}\n\treturn nil, nil\n}\n"
  },
  {
    "path": "structured_type_arrow_batches_test.go",
    "content": "package gosnowflake_test\n\nimport (\n\t\"context\"\n\t\"crypto/rsa\"\n\t\"crypto/x509\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"encoding/pem\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"reflect\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/apache/arrow-go/v18/arrow\"\n\t\"github.com/apache/arrow-go/v18/arrow/array\"\n\t\"github.com/apache/arrow-go/v18/arrow/memory\"\n\n\tsf \"github.com/snowflakedb/gosnowflake/v2\"\n\t\"github.com/snowflakedb/gosnowflake/v2/arrowbatches\"\n)\n\nfunc arrowTestRepoRoot(t *testing.T) string {\n\tt.Helper()\n\tdir, err := os.Getwd()\n\tif err != nil {\n\t\tt.Fatalf(\"failed to get working directory: %v\", err)\n\t}\n\tfor {\n\t\tif _, err = os.Stat(filepath.Join(dir, \"go.mod\")); err == nil {\n\t\t\treturn dir\n\t\t}\n\t\tif !os.IsNotExist(err) {\n\t\t\tt.Fatalf(\"failed to stat go.mod in %q: %v\", dir, err)\n\t\t}\n\t\tparent := filepath.Dir(dir)\n\t\tif parent == dir {\n\t\t\tt.Fatal(\"could not find repository root (no go.mod found)\")\n\t\t}\n\t\tdir = parent\n\t}\n}\n\nfunc arrowTestReadPrivateKey(t *testing.T, path string) *rsa.PrivateKey {\n\tt.Helper()\n\tif !filepath.IsAbs(path) {\n\t\tpath = filepath.Join(arrowTestRepoRoot(t), path)\n\t}\n\tdata, err := os.ReadFile(path)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to read private key file %q: %v\", path, err)\n\t}\n\tblock, _ := pem.Decode(data)\n\tif block == nil {\n\t\tt.Fatalf(\"failed to decode PEM block from %q\", path)\n\t}\n\tkey, err := x509.ParsePKCS8PrivateKey(block.Bytes)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to parse private key from %q: %v\", path, err)\n\t}\n\trsaKey, ok := key.(*rsa.PrivateKey)\n\tif !ok {\n\t\tt.Fatalf(\"private key in %q is not RSA (got %T)\", path, key)\n\t}\n\treturn rsaKey\n}\n\n// arrowTestConn manages a Snowflake connection for arrow batch tests.\ntype arrowTestConn struct {\n\tdb   *sql.DB\n\tconn *sql.Conn\n}\n\nfunc openArrowTestConn(t *testing.T) *arrowTestConn 
{\n\tt.Helper()\n\tconfigParams := []*sf.ConfigParam{\n\t\t{Name: \"Account\", EnvName: \"SNOWFLAKE_TEST_ACCOUNT\", FailOnMissing: true},\n\t\t{Name: \"User\", EnvName: \"SNOWFLAKE_TEST_USER\", FailOnMissing: true},\n\t\t{Name: \"Host\", EnvName: \"SNOWFLAKE_TEST_HOST\", FailOnMissing: false},\n\t\t{Name: \"Port\", EnvName: \"SNOWFLAKE_TEST_PORT\", FailOnMissing: false},\n\t\t{Name: \"Protocol\", EnvName: \"SNOWFLAKE_TEST_PROTOCOL\", FailOnMissing: false},\n\t\t{Name: \"Warehouse\", EnvName: \"SNOWFLAKE_TEST_WAREHOUSE\", FailOnMissing: false},\n\t}\n\tisJWT := os.Getenv(\"SNOWFLAKE_TEST_AUTHENTICATOR\") == \"SNOWFLAKE_JWT\"\n\tif !isJWT {\n\t\tconfigParams = append(configParams,\n\t\t\t&sf.ConfigParam{Name: \"Password\", EnvName: \"SNOWFLAKE_TEST_PASSWORD\", FailOnMissing: true},\n\t\t)\n\t}\n\tcfg, err := sf.GetConfigFromEnv(configParams)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to get config from environment: %v\", err)\n\t}\n\tif isJWT {\n\t\tprivKeyPath := os.Getenv(\"SNOWFLAKE_TEST_PRIVATE_KEY\")\n\t\tif privKeyPath == \"\" {\n\t\t\tt.Fatal(\"SNOWFLAKE_TEST_PRIVATE_KEY must be set for JWT authentication\")\n\t\t}\n\t\tcfg.PrivateKey = arrowTestReadPrivateKey(t, privKeyPath)\n\t\tcfg.Authenticator = sf.AuthTypeJwt\n\t}\n\ttz := \"UTC\"\n\tif cfg.Params == nil {\n\t\tcfg.Params = make(map[string]*string)\n\t}\n\tcfg.Params[\"timezone\"] = &tz\n\tdsn, err := sf.DSN(cfg)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to create DSN: %v\", err)\n\t}\n\tdb, err := sql.Open(\"snowflake\", dsn)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to open db: %v\", err)\n\t}\n\tconn, err := db.Conn(context.Background())\n\tif err != nil {\n\t\tdb.Close()\n\t\tt.Fatalf(\"failed to get connection: %v\", err)\n\t}\n\treturn &arrowTestConn{db: db, conn: conn}\n}\n\nfunc (tc *arrowTestConn) close() {\n\ttc.conn.Close()\n\ttc.db.Close()\n}\n\nfunc (tc *arrowTestConn) exec(t *testing.T, query string) {\n\tt.Helper()\n\t_, err := tc.conn.ExecContext(context.Background(), query)\n\tif err != 
nil {\n\t\tt.Fatalf(\"exec %q failed: %v\", query, err)\n\t}\n}\n\nfunc (tc *arrowTestConn) enableStructuredTypes(t *testing.T) {\n\tt.Helper()\n\ttc.exec(t, \"ALTER SESSION SET ENABLE_STRUCTURED_TYPES_IN_CLIENT_RESPONSE = true\")\n\ttc.exec(t, \"ALTER SESSION SET IGNORE_CLIENT_VESRION_IN_STRUCTURED_TYPES_RESPONSE = true\")\n}\n\nfunc (tc *arrowTestConn) forceNativeArrow(t *testing.T) {\n\tt.Helper()\n\ttc.exec(t, \"ALTER SESSION SET GO_QUERY_RESULT_FORMAT = ARROW\")\n\ttc.exec(t, \"ALTER SESSION SET ENABLE_STRUCTURED_TYPES_NATIVE_ARROW_FORMAT = true\")\n\ttc.exec(t, \"ALTER SESSION SET FORCE_ENABLE_STRUCTURED_TYPES_NATIVE_ARROW_FORMAT = true\")\n}\n\nfunc (tc *arrowTestConn) queryArrowBatches(t *testing.T, ctx context.Context, query string) ([]*arrowbatches.ArrowBatch, func()) {\n\tt.Helper()\n\tvar rows driver.Rows\n\tvar err error\n\terr = tc.conn.Raw(func(x any) error {\n\t\tqueryer, ok := x.(driver.QueryerContext)\n\t\tif !ok {\n\t\t\treturn fmt.Errorf(\"connection does not implement QueryerContext\")\n\t\t}\n\t\trows, err = queryer.QueryContext(ctx, query, nil)\n\t\treturn err\n\t})\n\tif err != nil {\n\t\tt.Fatalf(\"failed to execute query: %v\", err)\n\t}\n\tsfRows, ok := rows.(sf.SnowflakeRows)\n\tif !ok {\n\t\trows.Close()\n\t\tt.Fatalf(\"rows do not implement SnowflakeRows\")\n\t}\n\tbatches, err := arrowbatches.GetArrowBatches(sfRows)\n\tif err != nil {\n\t\trows.Close()\n\t\tt.Fatalf(\"GetArrowBatches failed: %v\", err)\n\t}\n\tif len(batches) == 0 {\n\t\trows.Close()\n\t\tt.Fatal(\"expected at least one batch\")\n\t}\n\treturn batches, func() { rows.Close() }\n}\n\nfunc (tc *arrowTestConn) fetchFirst(t *testing.T, ctx context.Context, query string) ([]arrow.Record, func()) {\n\tt.Helper()\n\tbatches, closeRows := tc.queryArrowBatches(t, ctx, query)\n\trecords, err := batches[0].Fetch()\n\tif err != nil {\n\t\tcloseRows()\n\t\tt.Fatalf(\"Fetch failed: %v\", err)\n\t}\n\tif records == nil || len(*records) == 0 {\n\t\tcloseRows()\n\t\tt.Fatal(\"expected 
at least one record\")\n\t}\n\treturn *records, closeRows\n}\n\nfunc equalIgnoringWhitespace(a, b string) bool {\n\treturn strings.ReplaceAll(strings.ReplaceAll(a, \" \", \"\"), \"\\n\", \"\") ==\n\t\tstrings.ReplaceAll(strings.ReplaceAll(b, \" \", \"\"), \"\\n\", \"\")\n}\n\nfunc TestStructuredTypeInArrowBatchesSimple(t *testing.T) {\n\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\tdefer pool.AssertSize(t, 0)\n\tctx := sf.WithArrowAllocator(arrowbatches.WithArrowBatches(context.Background()), pool)\n\n\ttc := openArrowTestConn(t)\n\tdefer tc.close()\n\ttc.enableStructuredTypes(t)\n\ttc.forceNativeArrow(t)\n\n\trecords, closeRows := tc.fetchFirst(t, ctx,\n\t\t\"SELECT 1, {'s': 'some string'}::OBJECT(s VARCHAR)\")\n\tdefer closeRows()\n\n\tfor _, record := range records {\n\t\tdefer record.Release()\n\t\tif v := record.Column(0).(*array.Int8).Value(0); v != int8(1) {\n\t\t\tt.Errorf(\"expected column 0 = 1, got %v\", v)\n\t\t}\n\t\tif v := record.Column(1).(*array.Struct).Field(0).(*array.String).Value(0); v != \"some string\" {\n\t\t\tt.Errorf(\"expected 'some string', got %q\", v)\n\t\t}\n\t}\n}\n\nfunc TestStructuredTypeInArrowBatchesAllTypes(t *testing.T) {\n\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\tdefer pool.AssertSize(t, 0)\n\tctx := sf.WithArrowAllocator(arrowbatches.WithArrowBatches(context.Background()), pool)\n\n\ttc := openArrowTestConn(t)\n\tdefer tc.close()\n\ttc.enableStructuredTypes(t)\n\ttc.forceNativeArrow(t)\n\n\trecords, closeRows := tc.fetchFirst(t, ctx,\n\t\t\"SELECT 1, {'s': 'some string', 'i': 1, 'time': '11:22:33'::TIME, 'date': '2024-04-16'::DATE, \"+\n\t\t\t\"'ltz': '2024-04-16 11:22:33'::TIMESTAMPLTZ, 'tz': '2025-04-16 22:33:11 +0100'::TIMESTAMPTZ, \"+\n\t\t\t\"'ntz': '2026-04-16 15:22:31'::TIMESTAMPNTZ}::OBJECT(s VARCHAR, i INTEGER, time TIME, date DATE, \"+\n\t\t\t\"ltz TIMESTAMPLTZ, tz TIMESTAMPTZ, ntz TIMESTAMPNTZ)\")\n\tdefer closeRows()\n\n\tfor _, record := range records {\n\t\tdefer 
record.Release()\n\t\tif v := record.Column(0).(*array.Int8).Value(0); v != int8(1) {\n\t\t\tt.Errorf(\"expected column 0 = 1, got %v\", v)\n\t\t}\n\t\tst := record.Column(1).(*array.Struct)\n\t\tif v := st.Field(0).(*array.String).Value(0); v != \"some string\" {\n\t\t\tt.Errorf(\"expected 'some string', got %q\", v)\n\t\t}\n\t\tif v := st.Field(1).(*array.Int64).Value(0); v != 1 {\n\t\t\tt.Errorf(\"expected i=1, got %v\", v)\n\t\t}\n\t\tif v := st.Field(2).(*array.Time64).Value(0).ToTime(arrow.Nanosecond); !v.Equal(time.Date(1970, 1, 1, 11, 22, 33, 0, time.UTC)) {\n\t\t\tt.Errorf(\"expected time 11:22:33, got %v\", v)\n\t\t}\n\t\tif v := st.Field(3).(*array.Date32).Value(0).ToTime(); !v.Equal(time.Date(2024, 4, 16, 0, 0, 0, 0, time.UTC)) {\n\t\t\tt.Errorf(\"expected date 2024-04-16, got %v\", v)\n\t\t}\n\t\tif v := st.Field(4).(*array.Timestamp).Value(0).ToTime(arrow.Nanosecond); !v.Equal(time.Date(2024, 4, 16, 11, 22, 33, 0, time.UTC)) {\n\t\t\tt.Errorf(\"expected ltz 2024-04-16 11:22:33 UTC, got %v\", v)\n\t\t}\n\t\tif v := st.Field(5).(*array.Timestamp).Value(0).ToTime(arrow.Nanosecond); !v.Equal(time.Date(2025, 4, 16, 21, 33, 11, 0, time.UTC)) {\n\t\t\tt.Errorf(\"expected tz 2025-04-16 21:33:11 UTC, got %v\", v)\n\t\t}\n\t\tif v := st.Field(6).(*array.Timestamp).Value(0).ToTime(arrow.Nanosecond); !v.Equal(time.Date(2026, 4, 16, 15, 22, 31, 0, time.UTC)) {\n\t\t\tt.Errorf(\"expected ntz 2026-04-16 15:22:31, got %v\", v)\n\t\t}\n\t}\n}\n\nfunc TestStructuredTypeInArrowBatchesWithTimestampOptionAndHigherPrecisionAndUtf8Validation(t *testing.T) {\n\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\tdefer pool.AssertSize(t, 0)\n\tctx := arrowbatches.WithUtf8Validation(\n\t\tsf.WithHigherPrecision(\n\t\t\tarrowbatches.WithTimestampOption(\n\t\t\t\tsf.WithArrowAllocator(arrowbatches.WithArrowBatches(context.Background()), pool),\n\t\t\t\tarrowbatches.UseOriginalTimestamp,\n\t\t\t),\n\t\t),\n\t)\n\n\ttc := openArrowTestConn(t)\n\tdefer 
tc.close()\n\ttc.enableStructuredTypes(t)\n\ttc.forceNativeArrow(t)\n\n\trecords, closeRows := tc.fetchFirst(t, ctx,\n\t\t\"SELECT 1, {'i': 123, 'f': 12.34, 'n0': 321, 'n19': 1.5, 's': 'some string', \"+\n\t\t\t\"'bi': TO_BINARY('616263', 'HEX'), 'bool': true, 'time': '11:22:33', \"+\n\t\t\t\"'date': '2024-04-18', 'ntz': '2024-04-01 11:22:33', \"+\n\t\t\t\"'tz': '2024-04-02 11:22:33 +0100', 'ltz': '2024-04-03 11:22:33'}::\"+\n\t\t\t\"OBJECT(i INTEGER, f DOUBLE, n0 NUMBER(38, 0), n19 NUMBER(38, 19), \"+\n\t\t\t\"s VARCHAR, bi BINARY, bool BOOLEAN, time TIME, date DATE, \"+\n\t\t\t\"ntz TIMESTAMP_NTZ, tz TIMESTAMP_TZ, ltz TIMESTAMP_LTZ)\")\n\tdefer closeRows()\n\n\tfor _, record := range records {\n\t\tdefer record.Release()\n\t\tif v := record.Column(0).(*array.Int8).Value(0); v != int8(1) {\n\t\t\tt.Errorf(\"expected column 0 = 1, got %v\", v)\n\t\t}\n\t\tst := record.Column(1).(*array.Struct)\n\t\tif v := st.Field(0).(*array.Decimal128).Value(0).LowBits(); v != uint64(123) {\n\t\t\tt.Errorf(\"expected i=123, got %v\", v)\n\t\t}\n\t\tif v := st.Field(1).(*array.Float64).Value(0); v != 12.34 {\n\t\t\tt.Errorf(\"expected f=12.34, got %v\", v)\n\t\t}\n\t\tif v := st.Field(2).(*array.Decimal128).Value(0).LowBits(); v != uint64(321) {\n\t\t\tt.Errorf(\"expected n0=321, got %v\", v)\n\t\t}\n\t\tif v := st.Field(3).(*array.Decimal128).Value(0).LowBits(); v != uint64(15000000000000000000) {\n\t\t\tt.Errorf(\"expected n19=15000000000000000000, got %v\", v)\n\t\t}\n\t\tif v := st.Field(4).(*array.String).Value(0); v != \"some string\" {\n\t\t\tt.Errorf(\"expected 'some string', got %q\", v)\n\t\t}\n\t\tif v := st.Field(5).(*array.Binary).Value(0); !reflect.DeepEqual(v, []byte{'a', 'b', 'c'}) {\n\t\t\tt.Errorf(\"expected 'abc' binary, got %v\", v)\n\t\t}\n\t\tif v := st.Field(6).(*array.Boolean).Value(0); v != true {\n\t\t\tt.Errorf(\"expected true, got %v\", v)\n\t\t}\n\t\tif v := st.Field(7).(*array.Time64).Value(0).ToTime(arrow.Nanosecond); !v.Equal(time.Date(1970, 1, 1, 
11, 22, 33, 0, time.UTC)) {\n\t\t\tt.Errorf(\"expected time 11:22:33, got %v\", v)\n\t\t}\n\t\tif v := st.Field(8).(*array.Date32).Value(0).ToTime(); !v.Equal(time.Date(2024, 4, 18, 0, 0, 0, 0, time.UTC)) {\n\t\t\tt.Errorf(\"expected date 2024-04-18, got %v\", v)\n\t\t}\n\t\t// With UseOriginalTimestamp, timestamps remain as raw structs (epoch + fraction)\n\t\tif v := st.Field(9).(*array.Struct).Field(0).(*array.Int64).Value(0); v != int64(1711970553) {\n\t\t\tt.Errorf(\"expected ntz epoch=1711970553, got %v\", v)\n\t\t}\n\t\tif v := st.Field(9).(*array.Struct).Field(1).(*array.Int32).Value(0); v != int32(0) {\n\t\t\tt.Errorf(\"expected ntz fraction=0, got %v\", v)\n\t\t}\n\t\tif v := st.Field(10).(*array.Struct).Field(0).(*array.Int64).Value(0); v != int64(1712053353) {\n\t\t\tt.Errorf(\"expected tz epoch=1712053353, got %v\", v)\n\t\t}\n\t\tif v := st.Field(10).(*array.Struct).Field(1).(*array.Int32).Value(0); v != int32(0) {\n\t\t\tt.Errorf(\"expected tz fraction=0, got %v\", v)\n\t\t}\n\t\tif v := st.Field(11).(*array.Struct).Field(0).(*array.Int64).Value(0); v != int64(1712143353) {\n\t\t\tt.Errorf(\"expected ltz epoch=1712143353, got %v\", v)\n\t\t}\n\t\tif v := st.Field(11).(*array.Struct).Field(1).(*array.Int32).Value(0); v != int32(0) {\n\t\t\tt.Errorf(\"expected ltz fraction=0, got %v\", v)\n\t\t}\n\t}\n}\n\nfunc TestStructuredTypeInArrowBatchesWithEmbeddedObject(t *testing.T) {\n\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\tdefer pool.AssertSize(t, 0)\n\tctx := sf.WithArrowAllocator(arrowbatches.WithArrowBatches(context.Background()), pool)\n\n\ttc := openArrowTestConn(t)\n\tdefer tc.close()\n\ttc.enableStructuredTypes(t)\n\ttc.forceNativeArrow(t)\n\n\trecords, closeRows := tc.fetchFirst(t, ctx,\n\t\t\"SELECT {'o': {'s': 'some string'}}::OBJECT(o OBJECT(s VARCHAR))\")\n\tdefer closeRows()\n\n\tfor _, record := range records {\n\t\tdefer record.Release()\n\t\tif v := 
record.Column(0).(*array.Struct).Field(0).(*array.Struct).Field(0).(*array.String).Value(0); v != \"some string\" {\n\t\t\tt.Errorf(\"expected 'some string', got %q\", v)\n\t\t}\n\t}\n}\n\nfunc TestStructuredTypeInArrowBatchesAsNull(t *testing.T) {\n\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\tdefer pool.AssertSize(t, 0)\n\tctx := sf.WithArrowAllocator(arrowbatches.WithArrowBatches(context.Background()), pool)\n\n\ttc := openArrowTestConn(t)\n\tdefer tc.close()\n\ttc.enableStructuredTypes(t)\n\ttc.forceNativeArrow(t)\n\n\trecords, closeRows := tc.fetchFirst(t, ctx,\n\t\t\"SELECT {'s': 'some string'}::OBJECT(s VARCHAR) UNION SELECT null ORDER BY 1\")\n\tdefer closeRows()\n\n\tfor _, record := range records {\n\t\tdefer record.Release()\n\t\tif record.Column(0).IsNull(0) {\n\t\t\tt.Error(\"expected first row to be non-null\")\n\t\t}\n\t\tif !record.Column(0).IsNull(1) {\n\t\t\tt.Error(\"expected second row to be null\")\n\t\t}\n\t}\n}\n\nfunc TestStructuredArrayInArrowBatches(t *testing.T) {\n\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\tdefer pool.AssertSize(t, 0)\n\tctx := sf.WithArrowAllocator(arrowbatches.WithArrowBatches(context.Background()), pool)\n\n\ttc := openArrowTestConn(t)\n\tdefer tc.close()\n\ttc.enableStructuredTypes(t)\n\ttc.forceNativeArrow(t)\n\n\trecords, closeRows := tc.fetchFirst(t, ctx,\n\t\t\"SELECT [1, 2, 3]::ARRAY(INTEGER) UNION SELECT [4, 5, 6]::ARRAY(INTEGER) ORDER BY 1\")\n\tdefer closeRows()\n\n\trecord := records[0]\n\tdefer record.Release()\n\n\tlistCol := record.Column(0).(*array.List)\n\tvals := listCol.ListValues().(*array.Int64)\n\texpectedVals := []int64{1, 2, 3, 4, 5, 6}\n\tfor i, exp := range expectedVals {\n\t\tif v := vals.Value(i); v != exp {\n\t\t\tt.Errorf(\"list value[%d]: expected %d, got %d\", i, exp, v)\n\t\t}\n\t}\n\texpectedOffsets := []int32{0, 3, 6}\n\tfor i, exp := range expectedOffsets {\n\t\tif v := listCol.Offsets()[i]; v != exp {\n\t\t\tt.Errorf(\"offset[%d]: expected %d, 
got %d\", i, exp, v)\n\t\t}\n\t}\n}\n\nfunc TestStructuredMapInArrowBatches(t *testing.T) {\n\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\tdefer pool.AssertSize(t, 0)\n\tctx := sf.WithArrowAllocator(arrowbatches.WithArrowBatches(context.Background()), pool)\n\n\ttc := openArrowTestConn(t)\n\tdefer tc.close()\n\ttc.enableStructuredTypes(t)\n\ttc.forceNativeArrow(t)\n\n\trecords, closeRows := tc.fetchFirst(t, ctx,\n\t\t\"SELECT {'a': 'b', 'c': 'd'}::MAP(VARCHAR, VARCHAR)\")\n\tdefer closeRows()\n\n\tfor _, record := range records {\n\t\tdefer record.Release()\n\t\tm := record.Column(0).(*array.Map)\n\t\tkeys := m.Keys().(*array.String)\n\t\titems := m.Items().(*array.String)\n\t\tif v := keys.Value(0); v != \"a\" {\n\t\t\tt.Errorf(\"expected key 'a', got %q\", v)\n\t\t}\n\t\tif v := keys.Value(1); v != \"c\" {\n\t\t\tt.Errorf(\"expected key 'c', got %q\", v)\n\t\t}\n\t\tif v := items.Value(0); v != \"b\" {\n\t\t\tt.Errorf(\"expected item 'b', got %q\", v)\n\t\t}\n\t\tif v := items.Value(1); v != \"d\" {\n\t\t\tt.Errorf(\"expected item 'd', got %q\", v)\n\t\t}\n\t}\n}\n\nfunc TestSelectingNullObjectsInArrowBatches(t *testing.T) {\n\ttestcases := []string{\n\t\t\"select null::object(v VARCHAR)\",\n\t\t\"select null::object\",\n\t}\n\n\ttc := openArrowTestConn(t)\n\tdefer tc.close()\n\ttc.enableStructuredTypes(t)\n\n\tfor _, query := range testcases {\n\t\tt.Run(query, func(t *testing.T) {\n\t\t\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\t\t\tdefer pool.AssertSize(t, 0)\n\t\t\tctx := sf.WithArrowAllocator(arrowbatches.WithArrowBatches(context.Background()), pool)\n\n\t\t\trecords, closeRows := tc.fetchFirst(t, ctx, query)\n\t\t\tdefer closeRows()\n\n\t\t\tfor _, record := range records {\n\t\t\t\tdefer record.Release()\n\t\t\t\tif record.NumRows() != 1 {\n\t\t\t\t\tt.Fatalf(\"wrong number of rows: expected 1, got %d\", record.NumRows())\n\t\t\t\t}\n\t\t\t\tif record.NumCols() != 1 {\n\t\t\t\t\tt.Fatalf(\"wrong number of cols: 
expected 1, got %d\", record.NumCols())\n\t\t\t\t}\n\t\t\t\tif !record.Column(0).IsNull(0) {\n\t\t\t\t\tt.Error(\"expected null value\")\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestSelectingSemistructuredTypesInArrowBatches(t *testing.T) {\n\ttestcases := []struct {\n\t\tname               string\n\t\tquery              string\n\t\texpected           string\n\t\twithUtf8Validation bool\n\t}{\n\t\t{\n\t\t\tname:               \"semistructured object with utf8 validation\",\n\t\t\twithUtf8Validation: true,\n\t\t\texpected:           `{\"s\":\"someString\"}`,\n\t\t\tquery:              \"SELECT {'s':'someString'}::OBJECT\",\n\t\t},\n\t\t{\n\t\t\tname:               \"semistructured object without utf8 validation\",\n\t\t\twithUtf8Validation: false,\n\t\t\texpected:           `{\"s\":\"someString\"}`,\n\t\t\tquery:              \"SELECT {'s':'someString'}::OBJECT\",\n\t\t},\n\t\t{\n\t\t\tname:               \"semistructured array without utf8 validation\",\n\t\t\twithUtf8Validation: false,\n\t\t\texpected:           `[1,2,3]`,\n\t\t\tquery:              \"SELECT [1, 2, 3]::ARRAY\",\n\t\t},\n\t\t{\n\t\t\tname:               \"semistructured array with utf8 validation\",\n\t\t\twithUtf8Validation: true,\n\t\t\texpected:           `[1,2,3]`,\n\t\t\tquery:              \"SELECT [1, 2, 3]::ARRAY\",\n\t\t},\n\t}\n\n\ttc := openArrowTestConn(t)\n\tdefer tc.close()\n\n\tfor _, tc2 := range testcases {\n\t\tt.Run(tc2.name, func(t *testing.T) {\n\t\t\tpool := memory.NewCheckedAllocator(memory.DefaultAllocator)\n\t\t\tdefer pool.AssertSize(t, 0)\n\t\t\tctx := sf.WithArrowAllocator(arrowbatches.WithArrowBatches(context.Background()), pool)\n\t\t\tif tc2.withUtf8Validation {\n\t\t\t\tctx = arrowbatches.WithUtf8Validation(ctx)\n\t\t\t}\n\n\t\t\trecords, closeRows := tc.fetchFirst(t, ctx, tc2.query)\n\t\t\tdefer closeRows()\n\n\t\t\tfor _, record := range records {\n\t\t\t\tdefer record.Release()\n\t\t\t\tif record.NumCols() != 1 {\n\t\t\t\t\tt.Fatalf(\"unexpected number of 
columns: %d\", record.NumCols())\n\t\t\t\t}\n\t\t\t\tif record.NumRows() != 1 {\n\t\t\t\t\tt.Fatalf(\"unexpected number of rows: %d\", record.NumRows())\n\t\t\t\t}\n\t\t\t\tstringCol, ok := record.Column(0).(*array.String)\n\t\t\t\tif !ok {\n\t\t\t\t\tt.Fatalf(\"wrong type for column, expected string, got %T\", record.Column(0))\n\t\t\t\t}\n\t\t\t\tif !equalIgnoringWhitespace(stringCol.Value(0), tc2.expected) {\n\t\t\t\t\tt.Errorf(\"expected %q, got %q\", tc2.expected, stringCol.Value(0))\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "structured_type_read_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"fmt\"\n\t\"math/big\"\n\t\"reflect\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n)\n\ntype objectWithAllTypes struct {\n\ts         string\n\tb         byte\n\ti16       int16\n\ti32       int32\n\ti64       int64\n\tf32       float32\n\tf64       float64\n\tnfraction float64\n\tbo        bool\n\tbi        []byte\n\tdate      time.Time `sf:\"date,date\"`\n\ttime      time.Time `sf:\"time,time\"`\n\tltz       time.Time `sf:\"ltz,ltz\"`\n\ttz        time.Time `sf:\"tz,tz\"`\n\tntz       time.Time `sf:\"ntz,ntz\"`\n\tso        *simpleObject\n\tsArr      []string\n\tf64Arr    []float64\n\tsomeMap   map[string]bool\n\tuuid      testUUID\n}\n\nfunc (o *objectWithAllTypes) Scan(val any) error {\n\tst, ok := val.(StructuredObject)\n\tif !ok {\n\t\treturn fmt.Errorf(\"expected StructuredObject, got %T\", val)\n\t}\n\n\tvar err error\n\tif o.s, err = st.GetString(\"s\"); err != nil {\n\t\treturn err\n\t}\n\tif o.b, err = st.GetByte(\"b\"); err != nil {\n\t\treturn err\n\t}\n\tif o.i16, err = st.GetInt16(\"i16\"); err != nil {\n\t\treturn err\n\t}\n\tif o.i32, err = st.GetInt32(\"i32\"); err != nil {\n\t\treturn err\n\t}\n\tif o.i64, err = st.GetInt64(\"i64\"); err != nil {\n\t\treturn err\n\t}\n\tif o.f32, err = st.GetFloat32(\"f32\"); err != nil {\n\t\treturn err\n\t}\n\tif o.f64, err = st.GetFloat64(\"f64\"); err != nil {\n\t\treturn err\n\t}\n\tif o.nfraction, err = st.GetFloat64(\"nfraction\"); err != nil {\n\t\treturn err\n\t}\n\tif o.bo, err = st.GetBool(\"bo\"); err != nil {\n\t\treturn err\n\t}\n\tif o.bi, err = st.GetBytes(\"bi\"); err != nil {\n\t\treturn err\n\t}\n\tif o.date, err = st.GetTime(\"date\"); err != nil {\n\t\treturn err\n\t}\n\tif o.time, err = st.GetTime(\"time\"); err != nil {\n\t\treturn err\n\t}\n\tif o.ltz, err = st.GetTime(\"ltz\"); err != nil {\n\t\treturn err\n\t}\n\tif o.tz, err = st.GetTime(\"tz\"); err != nil {\n\t\treturn err\n\t}\n\tif o.ntz, err = 
st.GetTime(\"ntz\"); err != nil {\n\t\treturn err\n\t}\n\tso, err := st.GetStruct(\"so\", &simpleObject{})\n\tif err != nil {\n\t\treturn err\n\t}\n\to.so = so.(*simpleObject)\n\tsArr, err := st.GetRaw(\"sArr\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tif sArr != nil {\n\t\to.sArr = sArr.([]string)\n\t}\n\tf64Arr, err := st.GetRaw(\"f64Arr\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tif f64Arr != nil {\n\t\to.f64Arr = f64Arr.([]float64)\n\t}\n\tsomeMap, err := st.GetRaw(\"someMap\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tif someMap != nil {\n\t\to.someMap = someMap.(map[string]bool)\n\t}\n\tuuidStr, err := st.GetString(\"uuid\")\n\tif err != nil {\n\t\treturn err\n\t}\n\n\to.uuid = parseTestUUID(uuidStr)\n\n\treturn nil\n}\n\nfunc (o objectWithAllTypes) Write(sowc StructuredObjectWriterContext) error {\n\tif err := sowc.WriteString(\"s\", o.s); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteByt(\"b\", o.b); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteInt16(\"i16\", o.i16); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteInt32(\"i32\", o.i32); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteInt64(\"i64\", o.i64); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteFloat32(\"f32\", o.f32); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteFloat64(\"f64\", o.f64); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteFloat64(\"nfraction\", o.nfraction); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteBool(\"bo\", o.bo); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteBytes(\"bi\", o.bi); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteTime(\"date\", o.date, DataTypeDate); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteTime(\"time\", o.time, DataTypeTime); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteTime(\"ltz\", o.ltz, DataTypeTimestampLtz); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteTime(\"ntz\", o.ntz, DataTypeTimestampNtz); err != nil 
{\n\t\treturn err\n\t}\n\tif err := sowc.WriteTime(\"tz\", o.tz, DataTypeTimestampTz); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteStruct(\"so\", o.so); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteRaw(\"sArr\", o.sArr); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteRaw(\"f64Arr\", o.f64Arr); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteRaw(\"someMap\", o.someMap); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteString(\"uuid\", o.uuid.String()); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\ntype simpleObject struct {\n\ts string\n\ti int32\n}\n\nfunc (so *simpleObject) Scan(val any) error {\n\tst, ok := val.(StructuredObject)\n\tif !ok {\n\t\treturn fmt.Errorf(\"expected StructuredObject, got %T\", val)\n\t}\n\n\tvar err error\n\tif so.s, err = st.GetString(\"s\"); err != nil {\n\t\treturn err\n\t}\n\tif so.i, err = st.GetInt32(\"i\"); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc (so *simpleObject) Write(sowc StructuredObjectWriterContext) error {\n\tif err := sowc.WriteString(\"s\", so.s); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteInt32(\"i\", so.i); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc TestObjectWithAllTypesAsString(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\tskipForStringingNativeArrow(t, format)\n\t\t\trows := dbt.mustQuery(\"SELECT {'s': 'some string', 'i32': 3}::OBJECT(s VARCHAR, i32 INTEGER)\")\n\t\t\tdefer rows.Close()\n\t\t\tassertTrueF(t, rows.Next())\n\t\t\tvar res string\n\t\t\terr := rows.Scan(&res)\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualIgnoringWhitespaceE(t, res, `{\"s\": \"some string\", \"i32\": 3}`)\n\t\t})\n\t})\n}\n\nfunc TestObjectWithAllTypesAsObject(t *testing.T) {\n\twarsawTz, err := time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) 
{\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\tuid := newTestUUID()\n\t\t\trows := dbt.mustQueryContextT(ctx, t, fmt.Sprintf(\"SELECT 1, {'s': 'some string', 'b': 1, 'i16': 2, 'i32': 3, 'i64': 9223372036854775807, 'f32': '1.1', 'f64': 2.2, 'nfraction': 3.3, 'bo': true, 'bi': TO_BINARY('616263', 'HEX'), 'date': '2024-03-21'::DATE, 'time': '13:03:02'::TIME, 'ltz': '2021-07-21 11:22:33'::TIMESTAMP_LTZ, 'tz': '2022-08-31 13:43:22 +0200'::TIMESTAMP_TZ, 'ntz': '2023-05-22 01:17:19'::TIMESTAMP_NTZ, 'so': {'s': 'child', 'i': 9}, 'sArr': ARRAY_CONSTRUCT('x', 'y', 'z'), 'f64Arr': ARRAY_CONSTRUCT(1.1, 2.2, 3.3), 'someMap': {'x': true, 'y': false}, 'uuid': '%s'}::OBJECT(s VARCHAR, b TINYINT, i16 SMALLINT, i32 INTEGER, i64 BIGINT, f32 FLOAT, f64 DOUBLE, nfraction NUMBER(38, 19), bo BOOLEAN, bi BINARY, date DATE, time TIME, ltz TIMESTAMP_LTZ, tz TIMESTAMP_TZ, ntz TIMESTAMP_NTZ, so OBJECT(s VARCHAR, i INTEGER), sArr ARRAY(VARCHAR), f64Arr ARRAY(DOUBLE), someMap MAP(VARCHAR, BOOLEAN), uuid VARCHAR)\", uid))\n\t\t\tdefer rows.Close()\n\t\t\trows.Next()\n\t\t\tvar ignore int\n\t\t\tvar res objectWithAllTypes\n\t\t\terr := rows.Scan(&ignore, &res)\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, res.s, \"some string\")\n\t\t\tassertEqualE(t, res.b, byte(1))\n\t\t\tassertEqualE(t, res.i16, int16(2))\n\t\t\tassertEqualE(t, res.i32, int32(3))\n\t\t\tassertEqualE(t, res.i64, int64(9223372036854775807))\n\t\t\tassertEqualE(t, res.f32, float32(1.1))\n\t\t\tassertEqualE(t, res.f64, 2.2)\n\t\t\tassertEqualE(t, res.nfraction, 3.3)\n\t\t\tassertEqualE(t, res.bo, true)\n\t\t\tassertBytesEqualE(t, res.bi, []byte{'a', 'b', 'c'})\n\t\t\tassertEqualE(t, res.date, time.Date(2024, time.March, 21, 0, 0, 0, 0, time.UTC))\n\t\t\tassertEqualE(t, res.time.Hour(), 13)\n\t\t\tassertEqualE(t, res.time.Minute(), 3)\n\t\t\tassertEqualE(t, res.time.Second(), 2)\n\t\t\tassertTrueE(t, 
res.ltz.Equal(time.Date(2021, time.July, 21, 11, 22, 33, 0, warsawTz)))\n\t\t\tassertTrueE(t, res.tz.Equal(time.Date(2022, time.August, 31, 13, 43, 22, 0, warsawTz)))\n\t\t\tassertTrueE(t, res.ntz.Equal(time.Date(2023, time.May, 22, 1, 17, 19, 0, time.UTC)))\n\t\t\tassertDeepEqualE(t, res.so, &simpleObject{s: \"child\", i: 9})\n\t\t\tassertDeepEqualE(t, res.sArr, []string{\"x\", \"y\", \"z\"})\n\t\t\tassertDeepEqualE(t, res.f64Arr, []float64{1.1, 2.2, 3.3})\n\t\t\tassertDeepEqualE(t, res.someMap, map[string]bool{\"x\": true, \"y\": false})\n\t\t\tassertEqualE(t, res.uuid.String(), uid.String())\n\t\t})\n\t})\n}\n\nfunc TestNullObject(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\tt.Run(\"null\", func(t *testing.T) {\n\t\t\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT null::OBJECT(s VARCHAR, b TINYINT, i16 SMALLINT, i32 INTEGER, i64 BIGINT, f32 FLOAT, f64 DOUBLE, nfraction NUMBER(38, 19), bo BOOLEAN, bi BINARY, date DATE, time TIME, ltz TIMESTAMP_LTZ, tz TIMESTAMP_TZ, ntz TIMESTAMP_NTZ, so OBJECT(s VARCHAR, i INTEGER), sArr ARRAY(VARCHAR), f64Arr ARRAY(DOUBLE), someMap MAP(VARCHAR, BOOLEAN), uuid VARCHAR)\")\n\t\t\t\tdefer rows.Close()\n\t\t\t\tassertTrueF(t, rows.Next())\n\t\t\t\tvar res *objectWithAllTypes\n\t\t\t\terr := rows.Scan(&res)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertNilE(t, res)\n\t\t\t})\n\t\t\tt.Run(\"not null\", func(t *testing.T) {\n\t\t\t\tuid := newTestUUID()\n\t\t\t\trows := dbt.mustQueryContextT(ctx, t, fmt.Sprintf(\"SELECT {'s': 'some string', 'b': 1, 'i16': 2, 'i32': 3, 'i64': 9223372036854775807, 'f32': '1.1', 'f64': 2.2, 'nfraction': 3.3, 'bo': true, 'bi': TO_BINARY('616263', 'HEX'), 'date': '2024-03-21'::DATE, 'time': '13:03:02'::TIME, 'ltz': '2021-07-21 11:22:33'::TIMESTAMP_LTZ, 'tz': '2022-08-31 13:43:22 +0200'::TIMESTAMP_TZ, 'ntz': '2023-05-22 01:17:19'::TIMESTAMP_NTZ, 'so': {'s': 'child', 
'i': 9}, 'sArr': ARRAY_CONSTRUCT('x', 'y', 'z'), 'f64Arr': ARRAY_CONSTRUCT(1.1, 2.2, 3.3), 'someMap': {'x': true, 'y': false}, 'uuid': '%s'}::OBJECT(s VARCHAR, b TINYINT, i16 SMALLINT, i32 INTEGER, i64 BIGINT, f32 FLOAT, f64 DOUBLE, nfraction NUMBER(38, 19), bo BOOLEAN, bi BINARY, date DATE, time TIME, ltz TIMESTAMP_LTZ, tz TIMESTAMP_TZ, ntz TIMESTAMP_NTZ, so OBJECT(s VARCHAR, i INTEGER), sArr ARRAY(VARCHAR), f64Arr ARRAY(DOUBLE), someMap MAP(VARCHAR, BOOLEAN), uuid VARCHAR)\", uid))\n\t\t\t\tdefer rows.Close()\n\t\t\t\tassertTrueF(t, rows.Next())\n\t\t\t\tvar res *objectWithAllTypes\n\t\t\t\terr := rows.Scan(&res)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertEqualE(t, res.s, \"some string\")\n\t\t\t})\n\t\t})\n\t})\n}\n\ntype objectWithAllTypesNullable struct {\n\ts       sql.NullString\n\tb       sql.NullByte\n\ti16     sql.NullInt16\n\ti32     sql.NullInt32\n\ti64     sql.NullInt64\n\tf64     sql.NullFloat64\n\tbo      sql.NullBool\n\tbi      []byte\n\tdate    sql.NullTime\n\ttime    sql.NullTime\n\tltz     sql.NullTime\n\ttz      sql.NullTime\n\tntz     sql.NullTime\n\tso      *simpleObject\n\tsArr    []string\n\tf64Arr  []float64\n\tsomeMap map[string]bool\n\tuuid    testUUID\n}\n\nfunc (o *objectWithAllTypesNullable) Scan(val any) error {\n\tst, ok := val.(StructuredObject)\n\tif !ok {\n\t\treturn fmt.Errorf(\"expected StructuredObject, got %T\", val)\n\t}\n\n\tvar err error\n\tif o.s, err = st.GetNullString(\"s\"); err != nil {\n\t\treturn err\n\t}\n\tif o.b, err = st.GetNullByte(\"b\"); err != nil {\n\t\treturn err\n\t}\n\tif o.i16, err = st.GetNullInt16(\"i16\"); err != nil {\n\t\treturn err\n\t}\n\tif o.i32, err = st.GetNullInt32(\"i32\"); err != nil {\n\t\treturn err\n\t}\n\tif o.i64, err = st.GetNullInt64(\"i64\"); err != nil {\n\t\treturn err\n\t}\n\tif o.f64, err = st.GetNullFloat64(\"f64\"); err != nil {\n\t\treturn err\n\t}\n\tif o.bo, err = st.GetNullBool(\"bo\"); err != nil {\n\t\treturn err\n\t}\n\tif o.bi, err = st.GetBytes(\"bi\"); err != nil 
{\n\t\treturn err\n\t}\n\tif o.date, err = st.GetNullTime(\"date\"); err != nil {\n\t\treturn err\n\t}\n\tif o.time, err = st.GetNullTime(\"time\"); err != nil {\n\t\treturn err\n\t}\n\tif o.ltz, err = st.GetNullTime(\"ltz\"); err != nil {\n\t\treturn err\n\t}\n\tif o.tz, err = st.GetNullTime(\"tz\"); err != nil {\n\t\treturn err\n\t}\n\tif o.ntz, err = st.GetNullTime(\"ntz\"); err != nil {\n\t\treturn err\n\t}\n\tso, err := st.GetStruct(\"so\", &simpleObject{})\n\tif err != nil {\n\t\treturn err\n\t}\n\tif so != nil {\n\t\to.so = so.(*simpleObject)\n\t} else {\n\t\to.so = nil\n\t}\n\tsArr, err := st.GetRaw(\"sArr\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tif sArr != nil {\n\t\to.sArr = sArr.([]string)\n\t}\n\tf64Arr, err := st.GetRaw(\"f64Arr\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tif f64Arr != nil {\n\t\to.f64Arr = f64Arr.([]float64)\n\t}\n\tsomeMap, err := st.GetRaw(\"someMap\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tif someMap != nil {\n\t\to.someMap = someMap.(map[string]bool)\n\t}\n\tuuidStr, err := st.GetNullString(\"uuid\")\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// only parse the UUID when the column was not NULL\n\tif uuidStr.Valid {\n\t\to.uuid = parseTestUUID(uuidStr.String)\n\t}\n\n\treturn nil\n}\n\nfunc (o *objectWithAllTypesNullable) Write(sowc StructuredObjectWriterContext) error {\n\tif err := sowc.WriteNullString(\"s\", o.s); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteNullByte(\"b\", o.b); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteNullInt16(\"i16\", o.i16); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteNullInt32(\"i32\", o.i32); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteNullInt64(\"i64\", o.i64); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteNullFloat64(\"f64\", o.f64); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteNullBool(\"bo\", o.bo); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteBytes(\"bi\", o.bi); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteNullTime(\"date\", o.date, DataTypeDate); err != nil 
{\n\t\treturn err\n\t}\n\tif err := sowc.WriteNullTime(\"time\", o.time, DataTypeTime); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteNullTime(\"ltz\", o.ltz, DataTypeTimestampLtz); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteNullTime(\"ntz\", o.ntz, DataTypeTimestampNtz); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteNullTime(\"tz\", o.tz, DataTypeTimestampTz); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteNullableStruct(\"so\", o.so, reflect.TypeFor[simpleObject]()); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteRaw(\"sArr\", o.sArr); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteRaw(\"f64Arr\", o.f64Arr); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteRaw(\"someMap\", o.someMap); err != nil {\n\t\treturn err\n\t}\n\tif err := sowc.WriteNullString(\"uuid\", sql.NullString{String: o.uuid.String(), Valid: true}); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc TestObjectWithAllTypesNullable(t *testing.T) {\n\twarsawTz, err := time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\tt.Run(\"null\", func(t *testing.T) {\n\t\t\t\trows := dbt.mustQueryContextT(ctx, t, \"select null, object_construct_keep_null('s', null, 'b', null, 'i16', null, 'i32', null, 'i64', null, 'f64', null, 'bo', null, 'bi', null, 'date', null, 'time', null, 'ltz', null, 'tz', null, 'ntz', null, 'so', null, 'sArr', null, 'f64Arr', null, 'someMap', null, 'uuid', null)::OBJECT(s VARCHAR, b TINYINT, i16 SMALLINT, i32 INTEGER, i64 BIGINT, f64 DOUBLE, bo BOOLEAN, bi BINARY, date DATE, time TIME, ltz TIMESTAMP_LTZ, tz TIMESTAMP_TZ, ntz TIMESTAMP_NTZ, so OBJECT(s VARCHAR, i INTEGER), sArr ARRAY(VARCHAR), f64Arr ARRAY(DOUBLE), someMap MAP(VARCHAR, BOOLEAN), uuid 
VARCHAR)\")\n\t\t\t\tdefer rows.Close()\n\t\t\t\tassertTrueF(t, rows.Next())\n\t\t\t\tvar ignore sql.NullInt32\n\t\t\t\tvar res objectWithAllTypesNullable\n\t\t\t\terr := rows.Scan(&ignore, &res)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertEqualE(t, ignore, sql.NullInt32{Valid: false})\n\t\t\t\tassertEqualE(t, res.s, sql.NullString{Valid: false})\n\t\t\t\tassertEqualE(t, res.b, sql.NullByte{Valid: false})\n\t\t\t\tassertEqualE(t, res.i16, sql.NullInt16{Valid: false})\n\t\t\t\tassertEqualE(t, res.i32, sql.NullInt32{Valid: false})\n\t\t\t\tassertEqualE(t, res.i64, sql.NullInt64{Valid: false})\n\t\t\t\tassertEqualE(t, res.f64, sql.NullFloat64{Valid: false})\n\t\t\t\tassertEqualE(t, res.bo, sql.NullBool{Valid: false})\n\t\t\t\tassertBytesEqualE(t, res.bi, nil)\n\t\t\t\tassertEqualE(t, res.date, sql.NullTime{Valid: false})\n\t\t\t\tassertEqualE(t, res.time, sql.NullTime{Valid: false})\n\t\t\t\tassertEqualE(t, res.ltz, sql.NullTime{Valid: false})\n\t\t\t\tassertEqualE(t, res.tz, sql.NullTime{Valid: false})\n\t\t\t\tassertEqualE(t, res.ntz, sql.NullTime{Valid: false})\n\t\t\t\tvar so *simpleObject\n\t\t\t\tassertDeepEqualE(t, res.so, so)\n\t\t\t\tassertEqualE(t, res.uuid, testUUID{})\n\t\t\t})\n\t\t\tt.Run(\"not null\", func(t *testing.T) {\n\t\t\t\tuuid := newTestUUID()\n\t\t\t\trows := dbt.mustQueryContextT(ctx, t, fmt.Sprintf(\"select 1, object_construct_keep_null('s', 'abc', 'b', 1, 'i16', 2, 'i32', 3, 'i64', 9223372036854775807, 'f64', 2.2, 'bo', true, 'bi', TO_BINARY('616263', 'HEX'), 'date', '2024-03-21'::DATE, 'time', '13:03:02'::TIME, 'ltz', '2021-07-21 11:22:33'::TIMESTAMP_LTZ, 'tz', '2022-08-31 13:43:22 +0200'::TIMESTAMP_TZ, 'ntz', '2023-05-22 01:17:19'::TIMESTAMP_NTZ, 'so', {'s': 'child', 'i': 9}::OBJECT, 'sArr', ARRAY_CONSTRUCT('x', 'y', 'z'), 'f64Arr', ARRAY_CONSTRUCT(1.1, 2.2, 3.3), 'someMap', {'x': true, 'y': false}, 'uuid', '%s')::OBJECT(s VARCHAR, b TINYINT, i16 SMALLINT, i32 INTEGER, i64 BIGINT, f64 DOUBLE, bo BOOLEAN, bi BINARY, date DATE, time TIME, 
ltz TIMESTAMP_LTZ, tz TIMESTAMP_TZ, ntz TIMESTAMP_NTZ, so OBJECT(s VARCHAR, i INTEGER), sArr ARRAY(VARCHAR), f64Arr ARRAY(DOUBLE), someMap MAP(VARCHAR, BOOLEAN), uuid VARCHAR)\", uuid))\n\t\t\t\tdefer rows.Close()\n\t\t\t\tassertTrueF(t, rows.Next())\n\t\t\t\tvar ignore sql.NullInt32\n\t\t\t\tvar res objectWithAllTypesNullable\n\t\t\t\terr := rows.Scan(&ignore, &res)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertEqualE(t, ignore, sql.NullInt32{Valid: true, Int32: 1})\n\t\t\t\tassertEqualE(t, res.s, sql.NullString{Valid: true, String: \"abc\"})\n\t\t\t\tassertEqualE(t, res.b, sql.NullByte{Valid: true, Byte: byte(1)})\n\t\t\t\tassertEqualE(t, res.i16, sql.NullInt16{Valid: true, Int16: int16(2)})\n\t\t\t\tassertEqualE(t, res.i32, sql.NullInt32{Valid: true, Int32: 3})\n\t\t\t\tassertEqualE(t, res.i64, sql.NullInt64{Valid: true, Int64: 9223372036854775807})\n\t\t\t\tassertEqualE(t, res.f64, sql.NullFloat64{Valid: true, Float64: 2.2})\n\t\t\t\tassertEqualE(t, res.bo, sql.NullBool{Valid: true, Bool: true})\n\t\t\t\tassertBytesEqualE(t, res.bi, []byte{'a', 'b', 'c'})\n\t\t\t\tassertEqualE(t, res.date, sql.NullTime{Valid: true, Time: time.Date(2024, time.March, 21, 0, 0, 0, 0, time.UTC)})\n\t\t\t\tassertTrueE(t, res.time.Valid)\n\t\t\t\tassertEqualE(t, res.time.Time.Hour(), 13)\n\t\t\t\tassertEqualE(t, res.time.Time.Minute(), 3)\n\t\t\t\tassertEqualE(t, res.time.Time.Second(), 2)\n\t\t\t\tassertTrueE(t, res.ltz.Valid)\n\t\t\t\tassertTrueE(t, res.ltz.Time.Equal(time.Date(2021, time.July, 21, 11, 22, 33, 0, warsawTz)))\n\t\t\t\tassertTrueE(t, res.tz.Valid)\n\t\t\t\tassertTrueE(t, res.tz.Time.Equal(time.Date(2022, time.August, 31, 13, 43, 22, 0, warsawTz)))\n\t\t\t\tassertTrueE(t, res.ntz.Valid)\n\t\t\t\tassertTrueE(t, res.ntz.Time.Equal(time.Date(2023, time.May, 22, 1, 17, 19, 0, time.UTC)))\n\t\t\t\tassertDeepEqualE(t, res.so, &simpleObject{s: \"child\", i: 9})\n\t\t\t\tassertDeepEqualE(t, res.sArr, []string{\"x\", \"y\", \"z\"})\n\t\t\t\tassertDeepEqualE(t, res.f64Arr, []float64{1.1, 2.2, 
3.3})\n\t\t\t\tassertDeepEqualE(t, res.someMap, map[string]bool{\"x\": true, \"y\": false})\n\t\t\t\tassertEqualE(t, res.uuid.String(), uuid.String())\n\t\t\t})\n\t\t})\n\t})\n}\n\ntype objectWithAllTypesSimpleScan struct {\n\tS         string\n\tB         byte\n\tI16       int16\n\tI32       int32\n\tI64       int64\n\tF32       float32\n\tF64       float64\n\tNfraction float64\n\tBo        bool\n\tBi        []byte\n\tDate      time.Time `sf:\"date,date\"`\n\tTime      time.Time `sf:\"time,time\"`\n\tLtz       time.Time `sf:\"ltz,ltz\"`\n\tTz        time.Time `sf:\"tz,tz\"`\n\tNtz       time.Time `sf:\"ntz,ntz\"`\n\tSo        *simpleObject\n\tSArr      []string\n\tF64Arr    []float64\n\tSomeMap   map[string]bool\n}\n\nfunc (so *objectWithAllTypesSimpleScan) Scan(val any) error {\n\tst, ok := val.(StructuredObject)\n\tif !ok {\n\t\treturn fmt.Errorf(\"expected StructuredObject, got %T\", val)\n\t}\n\n\treturn st.ScanTo(so)\n}\n\nfunc (so *objectWithAllTypesSimpleScan) Write(sowc StructuredObjectWriterContext) error {\n\treturn sowc.WriteAll(so)\n}\n\nfunc TestObjectWithAllTypesSimpleScan(t *testing.T) {\n\tuid := newTestUUID()\n\twarsawTz, err := time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\trows := dbt.mustQueryContextT(ctx, t, fmt.Sprintf(\"SELECT 1, {'s': 'some string', 'b': 1, 'i16': 2, 'i32': 3, 'i64': 9223372036854775807, 'f32': '1.1', 'f64': 2.2, 'nfraction': 3.3, 'bo': true, 'bi': TO_BINARY('616263', 'HEX'), 'date': '2024-03-21'::DATE, 'time': '13:03:02'::TIME, 'ltz': '2021-07-21 11:22:33'::TIMESTAMP_LTZ, 'tz': '2022-08-31 13:43:22 +0200'::TIMESTAMP_TZ, 'ntz': '2023-05-22 01:17:19'::TIMESTAMP_NTZ, 'so': {'s': 'child', 'i': 9}, 'sArr': ARRAY_CONSTRUCT('x', 'y', 'z'), 'f64Arr': ARRAY_CONSTRUCT(1.1, 2.2, 
3.3), 'someMap': {'x': true, 'y': false}, 'uuid': '%s'}::OBJECT(s VARCHAR, b TINYINT, i16 SMALLINT, i32 INTEGER, i64 BIGINT, f32 FLOAT, f64 DOUBLE, nfraction NUMBER(38, 19), bo BOOLEAN, bi BINARY, date DATE, time TIME, ltz TIMESTAMP_LTZ, tz TIMESTAMP_TZ, ntz TIMESTAMP_NTZ, so OBJECT(s VARCHAR, i INTEGER), sArr ARRAY(VARCHAR), f64Arr ARRAY(DOUBLE), someMap MAP(VARCHAR, BOOLEAN), uuid VARCHAR)\", uid))\n\t\t\tdefer rows.Close()\n\t\t\tassertTrueF(t, rows.Next())\n\t\t\tvar ignore int\n\t\t\tvar res objectWithAllTypesSimpleScan\n\t\t\terr := rows.Scan(&ignore, &res)\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, res.S, \"some string\")\n\t\t\tassertEqualE(t, res.B, byte(1))\n\t\t\tassertEqualE(t, res.I16, int16(2))\n\t\t\tassertEqualE(t, res.I32, int32(3))\n\t\t\tassertEqualE(t, res.I64, int64(9223372036854775807))\n\t\t\tassertEqualE(t, res.F32, float32(1.1))\n\t\t\tassertEqualE(t, res.F64, 2.2)\n\t\t\tassertEqualE(t, res.Nfraction, 3.3)\n\t\t\tassertEqualE(t, res.Bo, true)\n\t\t\tassertBytesEqualE(t, res.Bi, []byte{'a', 'b', 'c'})\n\t\t\tassertEqualE(t, res.Date, time.Date(2024, time.March, 21, 0, 0, 0, 0, time.UTC))\n\t\t\tassertEqualE(t, res.Time.Hour(), 13)\n\t\t\tassertEqualE(t, res.Time.Minute(), 3)\n\t\t\tassertEqualE(t, res.Time.Second(), 2)\n\t\t\tassertTrueE(t, res.Ltz.Equal(time.Date(2021, time.July, 21, 11, 22, 33, 0, warsawTz)))\n\t\t\tassertTrueE(t, res.Tz.Equal(time.Date(2022, time.August, 31, 13, 43, 22, 0, warsawTz)))\n\t\t\tassertTrueE(t, res.Ntz.Equal(time.Date(2023, time.May, 22, 1, 17, 19, 0, time.UTC)))\n\t\t\tassertDeepEqualE(t, res.So, &simpleObject{s: \"child\", i: 9})\n\t\t\tassertDeepEqualE(t, res.SArr, []string{\"x\", \"y\", \"z\"})\n\t\t\tassertDeepEqualE(t, res.F64Arr, []float64{1.1, 2.2, 3.3})\n\t\t\tassertDeepEqualE(t, res.SomeMap, map[string]bool{\"x\": true, \"y\": false})\n\t\t})\n\t})\n}\n\nfunc TestNullObjectSimpleScan(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) 
{\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\tt.Run(\"null\", func(t *testing.T) {\n\t\t\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT null::OBJECT(s VARCHAR, b TINYINT, i16 SMALLINT, i32 INTEGER, i64 BIGINT, f32 FLOAT, f64 DOUBLE, nfraction NUMBER(38, 19), bo BOOLEAN, bi BINARY, date DATE, time TIME, ltz TIMESTAMP_LTZ, tz TIMESTAMP_TZ, ntz TIMESTAMP_NTZ, so OBJECT(s VARCHAR, i INTEGER), sArr ARRAY(VARCHAR), f64Arr ARRAY(DOUBLE), someMap MAP(VARCHAR, BOOLEAN), uuid VARCHAR)\")\n\t\t\t\tdefer rows.Close()\n\t\t\t\tassertTrueF(t, rows.Next())\n\t\t\t\tvar res *objectWithAllTypesSimpleScan\n\t\t\t\terr := rows.Scan(&res)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertNilE(t, res)\n\t\t\t})\n\t\t\tt.Run(\"not null\", func(t *testing.T) {\n\t\t\t\tuid := newTestUUID()\n\t\t\t\trows := dbt.mustQueryContextT(ctx, t, fmt.Sprintf(\"SELECT {'s': 'some string', 'b': 1, 'i16': 2, 'i32': 3, 'i64': 9223372036854775807, 'f32': '1.1', 'f64': 2.2, 'nfraction': 3.3, 'bo': true, 'bi': TO_BINARY('616263', 'HEX'), 'date': '2024-03-21'::DATE, 'time': '13:03:02'::TIME, 'ltz': '2021-07-21 11:22:33'::TIMESTAMP_LTZ, 'tz': '2022-08-31 13:43:22 +0200'::TIMESTAMP_TZ, 'ntz': '2023-05-22 01:17:19'::TIMESTAMP_NTZ, 'so': {'s': 'child', 'i': 9}, 'sArr': ARRAY_CONSTRUCT('x', 'y', 'z'), 'f64Arr': ARRAY_CONSTRUCT(1.1, 2.2, 3.3), 'someMap': {'x': true, 'y': false}, 'uuid': '%s'}::OBJECT(s VARCHAR, b TINYINT, i16 SMALLINT, i32 INTEGER, i64 BIGINT, f32 FLOAT, f64 DOUBLE, nfraction NUMBER(38, 19), bo BOOLEAN, bi BINARY, date DATE, time TIME, ltz TIMESTAMP_LTZ, tz TIMESTAMP_TZ, ntz TIMESTAMP_NTZ, so OBJECT(s VARCHAR, i INTEGER), sArr ARRAY(VARCHAR), f64Arr ARRAY(DOUBLE), someMap MAP(VARCHAR, BOOLEAN), uuid VARCHAR)\", uid))\n\t\t\t\tdefer rows.Close()\n\t\t\t\tassertTrueF(t, rows.Next())\n\t\t\t\tvar res *objectWithAllTypesSimpleScan\n\t\t\t\terr := rows.Scan(&res)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertEqualE(t, res.S, \"some 
string\")\n\t\t\t})\n\t\t})\n\t})\n}\n\ntype objectWithAllTypesNullableSimpleScan struct {\n\tS       sql.NullString\n\tB       sql.NullByte\n\tI16     sql.NullInt16\n\tI32     sql.NullInt32\n\tI64     sql.NullInt64\n\tF64     sql.NullFloat64\n\tBo      sql.NullBool\n\tBi      []byte\n\tDate    sql.NullTime `sf:\"date,date\"`\n\tTime    sql.NullTime `sf:\"time,time\"`\n\tLtz     sql.NullTime `sf:\"ltz,ltz\"`\n\tTz      sql.NullTime `sf:\"tz,tz\"`\n\tNtz     sql.NullTime `sf:\"ntz,ntz\"`\n\tSo      *simpleObject\n\tSArr    []string\n\tF64Arr  []float64\n\tSomeMap map[string]bool\n}\n\nfunc (o *objectWithAllTypesNullableSimpleScan) Scan(val any) error {\n\tst, ok := val.(StructuredObject)\n\tif !ok {\n\t\treturn fmt.Errorf(\"expected StructuredObject, got %T\", val)\n\t}\n\n\treturn st.ScanTo(o)\n}\n\nfunc (o *objectWithAllTypesNullableSimpleScan) Write(sowc StructuredObjectWriterContext) error {\n\treturn sowc.WriteAll(o)\n}\n\nfunc TestObjectWithAllTypesSimpleScanNullable(t *testing.T) {\n\twarsawTz, err := time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\tt.Run(\"null\", func(t *testing.T) {\n\t\t\t\trows := dbt.mustQueryContextT(ctx, t, \"select null, object_construct_keep_null('s', null, 'b', null, 'i16', null, 'i32', null, 'i64', null, 'f64', null, 'bo', null, 'bi', null, 'date', null, 'time', null, 'ltz', null, 'tz', null, 'ntz', null, 'so', null, 'sArr', null, 'f64Arr', null, 'someMap', null)::OBJECT(s VARCHAR, b TINYINT, i16 SMALLINT, i32 INTEGER, i64 BIGINT, f64 DOUBLE, bo BOOLEAN, bi BINARY, date DATE, time TIME, ltz TIMESTAMP_LTZ, tz TIMESTAMP_TZ, ntz TIMESTAMP_NTZ, so OBJECT(s VARCHAR, i INTEGER), sArr ARRAY(VARCHAR), f64Arr ARRAY(DOUBLE), someMap MAP(VARCHAR, BOOLEAN))\")\n\t\t\t\tdefer 
rows.Close()\n\t\t\t\tassertTrueF(t, rows.Next())\n\t\t\t\tvar ignore sql.NullInt32\n\t\t\t\tvar res objectWithAllTypesNullableSimpleScan\n\t\t\t\terr := rows.Scan(&ignore, &res)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertEqualE(t, ignore, sql.NullInt32{Valid: false})\n\t\t\t\tassertEqualE(t, res.S, sql.NullString{Valid: false})\n\t\t\t\tassertEqualE(t, res.B, sql.NullByte{Valid: false})\n\t\t\t\tassertEqualE(t, res.I16, sql.NullInt16{Valid: false})\n\t\t\t\tassertEqualE(t, res.I32, sql.NullInt32{Valid: false})\n\t\t\t\tassertEqualE(t, res.I64, sql.NullInt64{Valid: false})\n\t\t\t\tassertEqualE(t, res.F64, sql.NullFloat64{Valid: false})\n\t\t\t\tassertEqualE(t, res.Bo, sql.NullBool{Valid: false})\n\t\t\t\tassertBytesEqualE(t, res.Bi, nil)\n\t\t\t\tassertEqualE(t, res.Date, sql.NullTime{Valid: false})\n\t\t\t\tassertEqualE(t, res.Time, sql.NullTime{Valid: false})\n\t\t\t\tassertEqualE(t, res.Ltz, sql.NullTime{Valid: false})\n\t\t\t\tassertEqualE(t, res.Tz, sql.NullTime{Valid: false})\n\t\t\t\tassertEqualE(t, res.Ntz, sql.NullTime{Valid: false})\n\t\t\t\tvar so *simpleObject\n\t\t\t\tassertDeepEqualE(t, res.So, so)\n\t\t\t})\n\t\t\tt.Run(\"not null\", func(t *testing.T) {\n\t\t\t\tuuid := newTestUUID()\n\t\t\t\trows := dbt.mustQueryContextT(ctx, t, fmt.Sprintf(\"select 1, object_construct_keep_null('s', 'abc', 'b', 1, 'i16', 2, 'i32', 3, 'i64', 9223372036854775807, 'f64', 2.2, 'bo', true, 'bi', TO_BINARY('616263', 'HEX'), 'date', '2024-03-21'::DATE, 'time', '13:03:02'::TIME, 'ltz', '2021-07-21 11:22:33'::TIMESTAMP_LTZ, 'tz', '2022-08-31 13:43:22 +0200'::TIMESTAMP_TZ, 'ntz', '2023-05-22 01:17:19'::TIMESTAMP_NTZ, 'so', {'s': 'child', 'i': 9}::OBJECT, 'sArr', ARRAY_CONSTRUCT('x', 'y', 'z'), 'f64Arr', ARRAY_CONSTRUCT(1.1, 2.2, 3.3), 'someMap', {'x': true, 'y': false}, 'uuid', '%s')::OBJECT(s VARCHAR, b TINYINT, i16 SMALLINT, i32 INTEGER, i64 BIGINT, f64 DOUBLE, bo BOOLEAN, bi BINARY, date DATE, time TIME, ltz TIMESTAMP_LTZ, tz TIMESTAMP_TZ, ntz TIMESTAMP_NTZ, so OBJECT(s VARCHAR, i 
INTEGER), sArr ARRAY(VARCHAR), f64Arr ARRAY(DOUBLE), someMap MAP(VARCHAR, BOOLEAN), uuid VARCHAR)\", uuid))\n\t\t\t\tdefer rows.Close()\n\t\t\t\tassertTrueF(t, rows.Next())\n\t\t\t\tvar ignore sql.NullInt32\n\t\t\t\tvar res objectWithAllTypesNullableSimpleScan\n\t\t\t\terr := rows.Scan(&ignore, &res)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertEqualE(t, ignore, sql.NullInt32{Valid: true, Int32: 1})\n\t\t\t\tassertEqualE(t, res.S, sql.NullString{Valid: true, String: \"abc\"})\n\t\t\t\tassertEqualE(t, res.B, sql.NullByte{Valid: true, Byte: byte(1)})\n\t\t\t\tassertEqualE(t, res.I16, sql.NullInt16{Valid: true, Int16: int16(2)})\n\t\t\t\tassertEqualE(t, res.I32, sql.NullInt32{Valid: true, Int32: 3})\n\t\t\t\tassertEqualE(t, res.I64, sql.NullInt64{Valid: true, Int64: 9223372036854775807})\n\t\t\t\tassertEqualE(t, res.F64, sql.NullFloat64{Valid: true, Float64: 2.2})\n\t\t\t\tassertEqualE(t, res.Bo, sql.NullBool{Valid: true, Bool: true})\n\t\t\t\tassertBytesEqualE(t, res.Bi, []byte{'a', 'b', 'c'})\n\t\t\t\tassertEqualE(t, res.Date, sql.NullTime{Valid: true, Time: time.Date(2024, time.March, 21, 0, 0, 0, 0, time.UTC)})\n\t\t\t\tassertTrueE(t, res.Time.Valid)\n\t\t\t\tassertEqualE(t, res.Time.Time.Hour(), 13)\n\t\t\t\tassertEqualE(t, res.Time.Time.Minute(), 3)\n\t\t\t\tassertEqualE(t, res.Time.Time.Second(), 2)\n\t\t\t\tassertTrueE(t, res.Ltz.Valid)\n\t\t\t\tassertTrueE(t, res.Ltz.Time.Equal(time.Date(2021, time.July, 21, 11, 22, 33, 0, warsawTz)))\n\t\t\t\tassertTrueE(t, res.Tz.Valid)\n\t\t\t\tassertTrueE(t, res.Tz.Time.Equal(time.Date(2022, time.August, 31, 13, 43, 22, 0, warsawTz)))\n\t\t\t\tassertTrueE(t, res.Ntz.Valid)\n\t\t\t\tassertTrueE(t, res.Ntz.Time.Equal(time.Date(2023, time.May, 22, 1, 17, 19, 0, time.UTC)))\n\t\t\t\tassertDeepEqualE(t, res.So, &simpleObject{s: \"child\", i: 9})\n\t\t\t\tassertDeepEqualE(t, res.SArr, []string{\"x\", \"y\", \"z\"})\n\t\t\t\tassertDeepEqualE(t, res.F64Arr, []float64{1.1, 2.2, 3.3})\n\t\t\t\tassertDeepEqualE(t, res.SomeMap, map[string]bool{\"x\": 
true, \"y\": false})\n\t\t\t})\n\t\t})\n\t})\n}\n\ntype objectWithCustomNameAndIgnoredField struct {\n\tSomeString string `sf:\"anotherName\"`\n\tIgnoreMe   string `sf:\"ignoreMe,ignore\"`\n}\n\nfunc (o *objectWithCustomNameAndIgnoredField) Scan(val any) error {\n\tst, ok := val.(StructuredObject)\n\tif !ok {\n\t\treturn fmt.Errorf(\"expected StructuredObject, got %T\", val)\n\t}\n\n\treturn st.ScanTo(o)\n}\n\nfunc (o *objectWithCustomNameAndIgnoredField) Write(sowc StructuredObjectWriterContext) error {\n\treturn sowc.WriteAll(o)\n}\n\nfunc TestObjectWithCustomName(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT {'anotherName': 'some string'}::OBJECT(anotherName VARCHAR)\")\n\t\t\tdefer rows.Close()\n\t\t\tassertTrueF(t, rows.Next())\n\t\t\tvar res objectWithCustomNameAndIgnoredField\n\t\t\terr := rows.Scan(&res)\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, res.SomeString, \"some string\")\n\t\t\tassertEqualE(t, res.IgnoreMe, \"\")\n\t\t})\n\t})\n}\n\nfunc TestObjectMetadataAsObject(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT {'a': 'b'}::OBJECT(a VARCHAR) as structured_type\")\n\t\t\tdefer rows.Close()\n\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, len(columnTypes), 1)\n\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[ObjectType]())\n\t\t\tassertEqualE(t, columnTypes[0].DatabaseTypeName(), \"OBJECT\")\n\t\t\tassertEqualE(t, columnTypes[0].Name(), \"STRUCTURED_TYPE\")\n\t\t})\n\t})\n}\n\nfunc TestObjectMetadataAsString(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format 
string) {\n\t\t\tskipForStringingNativeArrow(t, format)\n\t\t\trows := dbt.mustQueryT(t, \"SELECT {'a': 'b'}::OBJECT(a VARCHAR) as structured_type\")\n\t\t\tdefer rows.Close()\n\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, len(columnTypes), 1)\n\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[string]())\n\t\t\tassertEqualE(t, columnTypes[0].DatabaseTypeName(), \"OBJECT\")\n\t\t\tassertEqualE(t, columnTypes[0].Name(), \"STRUCTURED_TYPE\")\n\t\t})\n\t})\n}\n\nfunc TestObjectWithoutSchema(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\tif format == \"NATIVE_ARROW\" {\n\t\t\t\tt.Skip(\"Native arrow is not supported in objects without schema\")\n\t\t\t}\n\t\t\trows := dbt.mustQuery(\"SELECT {'a': 'b'}::OBJECT AS STRUCTURED_TYPE\")\n\t\t\tdefer rows.Close()\n\t\t\tassertTrueF(t, rows.Next())\n\t\t\tvar v string\n\t\t\terr := rows.Scan(&v)\n\t\t\tassertNilF(t, err)\n\t\t\tassertStringContainsE(t, v, `\"a\": \"b\"`)\n\t\t})\n\t})\n}\n\nfunc TestObjectWithoutSchemaMetadata(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\tif format == \"NATIVE_ARROW\" {\n\t\t\t\tt.Skip(\"Native arrow is not supported in objects without schema\")\n\t\t\t}\n\t\t\trows := dbt.mustQuery(\"SELECT {'a': 'b'}::OBJECT AS structured_type\")\n\t\t\tdefer rows.Close()\n\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, len(columnTypes), 1)\n\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[string]())\n\t\t\tassertEqualE(t, columnTypes[0].DatabaseTypeName(), \"OBJECT\")\n\t\t\tassertEqualE(t, columnTypes[0].Name(), \"STRUCTURED_TYPE\")\n\t\t})\n\t})\n}\n\nfunc TestArrayAndMetadataAsString(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) 
{\n\t\t\tskipForStringingNativeArrow(t, format)\n\t\t\trows := dbt.mustQueryT(t, \"SELECT ARRAY_CONSTRUCT(1, 2)::ARRAY(INTEGER) AS STRUCTURED_TYPE\")\n\t\t\tdefer rows.Close()\n\t\t\tassertTrueF(t, rows.Next())\n\t\t\tvar res string\n\t\t\terr := rows.Scan(&res)\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualIgnoringWhitespaceE(t, res, \"[1, 2]\")\n\n\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, len(columnTypes), 1)\n\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[string]())\n\t\t\tassertEqualE(t, columnTypes[0].DatabaseTypeName(), \"ARRAY\")\n\t\t\tassertEqualE(t, columnTypes[0].Name(), \"STRUCTURED_TYPE\")\n\t\t})\n\t})\n}\n\nfunc TestArrayAndMetadataAsArray(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\twarsawTz, err := time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\ttestcases := []struct {\n\t\t\t\tname      string\n\t\t\t\tquery     string\n\t\t\t\texpected1 any\n\t\t\t\texpected2 any\n\t\t\t\tactual    any\n\t\t\t}{\n\t\t\t\t{\n\t\t\t\t\tname:      \"integer\",\n\t\t\t\t\tquery:     \"SELECT ARRAY_CONSTRUCT(1, 2)::ARRAY(INTEGER) as structured_type UNION SELECT ARRAY_CONSTRUCT(4, 5, 6)::ARRAY(INTEGER) ORDER BY 1\",\n\t\t\t\t\texpected1: []int64{1, 2},\n\t\t\t\t\texpected2: []int64{4, 5, 6},\n\t\t\t\t\tactual:    []int64{},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tname:      \"double\",\n\t\t\t\t\tquery:     \"SELECT ARRAY_CONSTRUCT(1.1, 2.2)::ARRAY(DOUBLE) as structured_type UNION SELECT ARRAY_CONSTRUCT(3.3)::ARRAY(DOUBLE) ORDER BY 1\",\n\t\t\t\t\texpected1: []float64{1.1, 2.2},\n\t\t\t\t\texpected2: []float64{3.3},\n\t\t\t\t\tactual:    []float64{},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tname:      \"number - fixed integer\",\n\t\t\t\t\tquery:     \"SELECT ARRAY_CONSTRUCT(1, 
2)::ARRAY(NUMBER(38, 0)) as structured_type UNION SELECT ARRAY_CONSTRUCT(3)::ARRAY(NUMBER(38, 0)) ORDER BY 1\",\n\t\t\t\t\texpected1: []int64{1, 2},\n\t\t\t\t\texpected2: []int64{3},\n\t\t\t\t\tactual:    []int64{},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tname:      \"number - fixed fraction\",\n\t\t\t\t\tquery:     \"SELECT ARRAY_CONSTRUCT(1.1, 2.2)::ARRAY(NUMBER(38, 19)) as structured_type UNION SELECT ARRAY_CONSTRUCT()::ARRAY(NUMBER(38, 19)) ORDER BY 1\",\n\t\t\t\t\texpected1: []float64{},\n\t\t\t\t\texpected2: []float64{1.1, 2.2},\n\t\t\t\t\tactual:    []float64{},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tname:      \"string\",\n\t\t\t\t\tquery:     \"SELECT ARRAY_CONSTRUCT('a', 'b')::ARRAY(VARCHAR) as structured_type\",\n\t\t\t\t\texpected1: []string{\"a\", \"b\"},\n\t\t\t\t\tactual:    []string{},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tname:      \"time\",\n\t\t\t\t\tquery:     \"SELECT ARRAY_CONSTRUCT('13:03:02'::TIME, '05:13:22'::TIME)::ARRAY(TIME) as structured_type\",\n\t\t\t\t\texpected1: []time.Time{time.Date(0, 1, 1, 13, 3, 2, 0, time.UTC), time.Date(0, 1, 1, 5, 13, 22, 0, time.UTC)},\n\t\t\t\t\tactual:    []time.Time{},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tname:      \"date\",\n\t\t\t\t\tquery:     \"SELECT ARRAY_CONSTRUCT('2024-01-05'::DATE, '2001-11-12'::DATE)::ARRAY(DATE) as structured_type\",\n\t\t\t\t\texpected1: []time.Time{time.Date(2024, time.January, 5, 0, 0, 0, 0, time.UTC), time.Date(2001, time.November, 12, 0, 0, 0, 0, time.UTC)},\n\t\t\t\t\tactual:    []time.Time{},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tname:      \"timestamp_ntz\",\n\t\t\t\t\tquery:     \"SELECT ARRAY_CONSTRUCT('2024-01-05 11:22:33'::TIMESTAMP_NTZ, '2001-11-12 11:22:33'::TIMESTAMP_NTZ)::ARRAY(TIMESTAMP_NTZ) as structured_type\",\n\t\t\t\t\texpected1: []time.Time{time.Date(2024, time.January, 5, 11, 22, 33, 0, time.UTC), time.Date(2001, time.November, 12, 11, 22, 33, 0, time.UTC)},\n\t\t\t\t\tactual:    []time.Time{},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tname:      
\"timestamp_ltz\",\n\t\t\t\t\tquery:     \"SELECT ARRAY_CONSTRUCT('2024-01-05 11:22:33'::TIMESTAMP_LTZ, '2001-11-12 11:22:33'::TIMESTAMP_LTZ)::ARRAY(TIMESTAMP_LTZ) as structured_type\",\n\t\t\t\t\texpected1: []time.Time{time.Date(2024, time.January, 5, 11, 22, 33, 0, warsawTz), time.Date(2001, time.November, 12, 11, 22, 33, 0, warsawTz)},\n\t\t\t\t\tactual:    []time.Time{},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tname:      \"timestamp_tz\",\n\t\t\t\t\tquery:     \"SELECT ARRAY_CONSTRUCT('2024-01-05 11:22:33 +0100'::TIMESTAMP_TZ, '2001-11-12 11:22:33 +0100'::TIMESTAMP_TZ)::ARRAY(TIMESTAMP_TZ) as structured_type\",\n\t\t\t\t\texpected1: []time.Time{time.Date(2024, time.January, 5, 11, 22, 33, 0, warsawTz), time.Date(2001, time.November, 12, 11, 22, 33, 0, warsawTz)},\n\t\t\t\t\tactual:    []time.Time{},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tname:      \"bool\",\n\t\t\t\t\tquery:     \"SELECT ARRAY_CONSTRUCT(true, false)::ARRAY(boolean) as structured_type\",\n\t\t\t\t\texpected1: []bool{true, false},\n\t\t\t\t\tactual:    []bool{},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tname:      \"binary\",\n\t\t\t\t\tquery:     \"SELECT ARRAY_CONSTRUCT(TO_BINARY('616263', 'HEX'), TO_BINARY('646566', 'HEX'))::ARRAY(BINARY) as structured_type\",\n\t\t\t\t\texpected1: [][]byte{{'a', 'b', 'c'}, {'d', 'e', 'f'}},\n\t\t\t\t\tactual:    [][]byte{},\n\t\t\t\t},\n\t\t\t}\n\t\t\tfor _, tc := range testcases {\n\t\t\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t\t\trows := dbt.mustQueryContextT(ctx, t, tc.query)\n\t\t\t\t\tdefer rows.Close()\n\t\t\t\t\trows.Next()\n\t\t\t\t\terr := rows.Scan(&tc.actual)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tif _, ok := tc.actual.([]time.Time); ok {\n\t\t\t\t\t\tassertEqualE(t, len(tc.actual.([]time.Time)), len(tc.expected1.([]time.Time)))\n\t\t\t\t\t\tfor i := range tc.actual.([]time.Time) {\n\t\t\t\t\t\t\tif tc.name == \"time\" {\n\t\t\t\t\t\t\t\tassertEqualE(t, tc.actual.([]time.Time)[i].Hour(), tc.expected1.([]time.Time)[i].Hour())\n\t\t\t\t\t\t\t\tassertEqualE(t, 
tc.actual.([]time.Time)[i].Minute(), tc.expected1.([]time.Time)[i].Minute())\n\t\t\t\t\t\t\t\tassertEqualE(t, tc.actual.([]time.Time)[i].Second(), tc.expected1.([]time.Time)[i].Second())\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tassertTrueE(t, tc.actual.([]time.Time)[i].UTC().Equal(tc.expected1.([]time.Time)[i].UTC()))\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tassertDeepEqualE(t, tc.actual, tc.expected1)\n\t\t\t\t\t}\n\t\t\t\t\tif tc.expected2 != nil {\n\t\t\t\t\t\trows.Next()\n\t\t\t\t\t\terr := rows.Scan(&tc.actual)\n\t\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\t\tif _, ok := tc.actual.([]time.Time); ok {\n\t\t\t\t\t\t\tassertEqualE(t, len(tc.actual.([]time.Time)), len(tc.expected2.([]time.Time)))\n\t\t\t\t\t\t\tfor i := range tc.actual.([]time.Time) {\n\t\t\t\t\t\t\t\tassertTrueE(t, tc.actual.([]time.Time)[i].UTC().Equal(tc.expected2.([]time.Time)[i].UTC()))\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tassertDeepEqualE(t, tc.actual, tc.expected2)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tassertEqualE(t, len(columnTypes), 1)\n\t\t\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeOf(tc.expected1))\n\t\t\t\t\tassertEqualE(t, columnTypes[0].DatabaseTypeName(), \"ARRAY\")\n\t\t\t\t\tassertEqualE(t, columnTypes[0].Name(), \"STRUCTURED_TYPE\")\n\t\t\t\t})\n\t\t\t}\n\t\t})\n\t})\n}\n\nfunc TestArrayWithoutSchema(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\tif format == \"NATIVE_ARROW\" {\n\t\t\t\tt.Skip(\"Native arrow is not supported in arrays without schema\")\n\t\t\t}\n\t\t\trows := dbt.mustQuery(\"SELECT ARRAY_CONSTRUCT(1, 2)\")\n\t\t\tdefer rows.Close()\n\t\t\trows.Next()\n\t\t\tvar v string\n\t\t\terr := rows.Scan(&v)\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualIgnoringWhitespaceE(t, v, \"[1, 2]\")\n\t\t})\n\t})\n}\n\nfunc TestEmptyArraysAndNullArrays(t *testing.T) 
{\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT ARRAY_CONSTRUCT(1, 2)::ARRAY(INTEGER) as structured_type UNION SELECT ARRAY_CONSTRUCT()::ARRAY(INTEGER) UNION SELECT NULL UNION SELECT ARRAY_CONSTRUCT(4, 5, 6)::ARRAY(INTEGER) ORDER BY 1\")\n\t\t\tdefer rows.Close()\n\t\t\tcheckRow := func(rows *RowsExtended, expected *[]int64) {\n\t\t\t\tvar res *[]int64\n\t\t\t\trows.Next()\n\t\t\t\terr := rows.Scan(&res)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertDeepEqualE(t, res, expected)\n\t\t\t}\n\n\t\t\tcheckRow(rows, &[]int64{})\n\t\t\tcheckRow(rows, &[]int64{1, 2})\n\t\t\tcheckRow(rows, &[]int64{4, 5, 6})\n\t\t\tcheckRow(rows, nil)\n\t\t})\n\t})\n}\n\nfunc TestArrayWithoutSchemaMetadata(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\tif format == \"NATIVE_ARROW\" {\n\t\t\t\tt.Skip(\"Native arrow is not supported in arrays without schema\")\n\t\t\t}\n\t\t\trows := dbt.mustQuery(\"SELECT ARRAY_CONSTRUCT(1, 2) AS structured_type\")\n\t\t\tdefer rows.Close()\n\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, len(columnTypes), 1)\n\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[string]())\n\t\t\tassertEqualE(t, columnTypes[0].DatabaseTypeName(), \"ARRAY\")\n\t\t\tassertEqualE(t, columnTypes[0].Name(), \"STRUCTURED_TYPE\")\n\t\t})\n\t})\n}\n\nfunc TestArrayOfObjects(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT ARRAY_CONSTRUCT({'s': 's1', 'i': 9}, {'s': 's2', 'i': 8})::ARRAY(OBJECT(s VARCHAR, i INTEGER)) as structured_type UNION SELECT ARRAY_CONSTRUCT({'s': 's3', 'i': 
7})::ARRAY(OBJECT(s VARCHAR, i INTEGER)) ORDER BY 1\")\n\t\t\tdefer rows.Close()\n\t\t\trows.Next()\n\t\t\tvar res []*simpleObject\n\t\t\terr := rows.Scan(ScanArrayOfScanners(&res))\n\t\t\tassertNilF(t, err)\n\t\t\tassertDeepEqualE(t, res, []*simpleObject{{s: \"s3\", i: 7}})\n\t\t\trows.Next()\n\t\t\terr = rows.Scan(ScanArrayOfScanners(&res))\n\t\t\tassertNilF(t, err)\n\t\t\tassertDeepEqualE(t, res, []*simpleObject{{s: \"s1\", i: 9}, {s: \"s2\", i: 8}})\n\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, len(columnTypes), 1)\n\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[[]ObjectType]())\n\t\t\tassertEqualE(t, columnTypes[0].DatabaseTypeName(), \"ARRAY\")\n\t\t\tassertEqualE(t, columnTypes[0].Name(), \"STRUCTURED_TYPE\")\n\t\t})\n\t})\n}\n\nfunc TestArrayOfArrays(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\twarsawTz, err := time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\ttestcases := []struct {\n\t\tname     string\n\t\tquery    string\n\t\tactual   any\n\t\texpected any\n\t}{\n\t\t{\n\t\t\tname:     \"string\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(ARRAY_CONSTRUCT('a', 'b', 'c'), ARRAY_CONSTRUCT('d', 'e'))::ARRAY(ARRAY(VARCHAR))\",\n\t\t\tactual:   make([][]string, 2),\n\t\t\texpected: [][]string{{\"a\", \"b\", \"c\"}, {\"d\", \"e\"}},\n\t\t},\n\t\t{\n\t\t\tname:     \"int64\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(ARRAY_CONSTRUCT(1, 2), ARRAY_CONSTRUCT(3, 4))::ARRAY(ARRAY(INTEGER))\",\n\t\t\tactual:   make([][]int64, 2),\n\t\t\texpected: [][]int64{{1, 2}, {3, 4}},\n\t\t},\n\t\t{\n\t\t\tname:     \"float64 - fixed\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(ARRAY_CONSTRUCT(1.1, 2.2), ARRAY_CONSTRUCT(3.3, 4.4))::ARRAY(ARRAY(NUMBER(38, 19)))\",\n\t\t\tactual:   make([][]float64, 2),\n\t\t\texpected: [][]float64{{1.1, 2.2}, {3.3, 4.4}},\n\t\t},\n\t\t{\n\t\t\tname:     \"float64 - real\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(ARRAY_CONSTRUCT(1.1, 
2.2), ARRAY_CONSTRUCT(3.3, 4.4))::ARRAY(ARRAY(DOUBLE))\",\n\t\t\tactual:   make([][]float64, 2),\n\t\t\texpected: [][]float64{{1.1, 2.2}, {3.3, 4.4}},\n\t\t},\n\t\t{\n\t\t\tname:     \"bool\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(ARRAY_CONSTRUCT(true, false), ARRAY_CONSTRUCT(false, true, false))::ARRAY(ARRAY(BOOLEAN))\",\n\t\t\tactual:   make([][]bool, 2),\n\t\t\texpected: [][]bool{{true, false}, {false, true, false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"binary\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(ARRAY_CONSTRUCT(TO_BINARY('6162'), TO_BINARY('6364')), ARRAY_CONSTRUCT(TO_BINARY('6566'), TO_BINARY('6768')))::ARRAY(ARRAY(BINARY))\",\n\t\t\tactual:   make([][][]byte, 2),\n\t\t\texpected: [][][]byte{{{'a', 'b'}, {'c', 'd'}}, {{'e', 'f'}, {'g', 'h'}}},\n\t\t},\n\t\t{\n\t\t\tname:     \"date\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(ARRAY_CONSTRUCT('2024-01-01'::DATE, '2024-02-02'::DATE), ARRAY_CONSTRUCT('2024-03-03'::DATE, '2024-04-04'::DATE))::ARRAY(ARRAY(DATE))\",\n\t\t\tactual:   make([][]time.Time, 2),\n\t\t\texpected: [][]time.Time{{time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC), time.Date(2024, 2, 2, 0, 0, 0, 0, time.UTC)}, {time.Date(2024, 3, 3, 0, 0, 0, 0, time.UTC), time.Date(2024, 4, 4, 0, 0, 0, 0, time.UTC)}},\n\t\t},\n\t\t{\n\t\t\tname:     \"time\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(ARRAY_CONSTRUCT('01:01:01'::TIME, '02:02:02'::TIME), ARRAY_CONSTRUCT('03:03:03'::TIME, '04:04:04'::TIME))::ARRAY(ARRAY(TIME))\",\n\t\t\tactual:   make([][]time.Time, 2),\n\t\t\texpected: [][]time.Time{{time.Date(0, 1, 1, 1, 1, 1, 0, time.UTC), time.Date(0, 1, 1, 2, 2, 2, 0, time.UTC)}, {time.Date(0, 1, 1, 3, 3, 3, 0, time.UTC), time.Date(0, 1, 1, 4, 4, 4, 0, time.UTC)}},\n\t\t},\n\t\t{\n\t\t\tname:     \"timestamp_ltz\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(ARRAY_CONSTRUCT('2024-01-05 11:22:33'::TIMESTAMP_LTZ), ARRAY_CONSTRUCT('2001-11-12 11:22:33'::TIMESTAMP_LTZ))::ARRAY(ARRAY(TIMESTAMP_LTZ))\",\n\t\t\tactual:   make([][]time.Time, 2),\n\t\t\texpected: 
[][]time.Time{{time.Date(2024, time.January, 5, 11, 22, 33, 0, warsawTz)}, {time.Date(2001, time.November, 12, 11, 22, 33, 0, warsawTz)}},\n\t\t},\n\t\t{\n\t\t\tname:     \"timestamp_ntz\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(ARRAY_CONSTRUCT('2024-01-05 11:22:33'::TIMESTAMP_NTZ), ARRAY_CONSTRUCT('2001-11-12 11:22:33'::TIMESTAMP_NTZ))::ARRAY(ARRAY(TIMESTAMP_NTZ))\",\n\t\t\tactual:   make([][]time.Time, 2),\n\t\t\texpected: [][]time.Time{{time.Date(2024, time.January, 5, 11, 22, 33, 0, time.UTC)}, {time.Date(2001, time.November, 12, 11, 22, 33, 0, time.UTC)}},\n\t\t},\n\t\t{\n\t\t\tname:     \"timestamp_tz\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(ARRAY_CONSTRUCT('2024-01-05 11:22:33 +0100'::TIMESTAMP_TZ), ARRAY_CONSTRUCT('2001-11-12 11:22:33 +0100'::TIMESTAMP_TZ))::ARRAY(ARRAY(TIMESTAMP_TZ))\",\n\t\t\tactual:   make([][]time.Time, 2),\n\t\t\texpected: [][]time.Time{{time.Date(2024, time.January, 5, 11, 22, 33, 0, warsawTz)}, {time.Date(2001, time.November, 12, 11, 22, 33, 0, warsawTz)}},\n\t\t},\n\t}\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\tfor _, tc := range testcases {\n\t\t\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t\t\trows := dbt.mustQueryContextT(ctx, t, tc.query)\n\t\t\t\t\tdefer rows.Close()\n\t\t\t\t\trows.Next()\n\t\t\t\t\terr := rows.Scan(&tc.actual)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tif timesOfTimes, ok := tc.expected.([][]time.Time); ok {\n\t\t\t\t\t\tfor i, timeOfTimes := range timesOfTimes {\n\t\t\t\t\t\t\tfor j, tm := range timeOfTimes {\n\t\t\t\t\t\t\t\tif tc.name == \"time\" {\n\t\t\t\t\t\t\t\t\tassertEqualE(t, tm.Hour(), tc.actual.([][]time.Time)[i][j].Hour())\n\t\t\t\t\t\t\t\t\tassertEqualE(t, tm.Minute(), tc.actual.([][]time.Time)[i][j].Minute())\n\t\t\t\t\t\t\t\t\tassertEqualE(t, tm.Second(), tc.actual.([][]time.Time)[i][j].Second())\n\t\t\t\t\t\t\t\t} else 
{\n\t\t\t\t\t\t\t\t\tassertTrueE(t, tm.Equal(tc.actual.([][]time.Time)[i][j]))\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tassertDeepEqualE(t, tc.actual, tc.expected)\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t}\n\t\t})\n\t})\n}\n\nfunc TestMapAndMetadataAsString(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\tif format == \"NATIVE_ARROW\" {\n\t\t\t\tt.Skip(\"Native arrow is not supported in maps without schema\")\n\t\t\t}\n\t\t\trows := dbt.mustQuery(\"SELECT {'a': 'b', 'c': 'd'}::MAP(VARCHAR, VARCHAR) AS STRUCTURED_TYPE\")\n\t\t\tdefer rows.Close()\n\t\t\tassertTrueF(t, rows.Next())\n\t\t\tvar v string\n\t\t\terr := rows.Scan(&v)\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualIgnoringWhitespaceE(t, v, `{\"a\": \"b\", \"c\": \"d\"}`)\n\n\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, len(columnTypes), 1)\n\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[string]())\n\t\t\tassertEqualE(t, columnTypes[0].DatabaseTypeName(), \"MAP\")\n\t\t\tassertEqualE(t, columnTypes[0].Name(), \"STRUCTURED_TYPE\")\n\t\t})\n\t})\n}\n\nfunc TestMapAndMetadataAsMap(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\twarsawTz, err := time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\ttestcases := []struct {\n\t\t\tname      string\n\t\t\tquery     string\n\t\t\texpected1 any\n\t\t\texpected2 any\n\t\t\tactual    any\n\t\t}{\n\t\t\t{\n\t\t\t\tname:      \"string string\",\n\t\t\t\tquery:     \"SELECT {'a': 'x', 'b': 'y'}::MAP(VARCHAR, VARCHAR) as structured_type UNION SELECT {'c': 'z'}::MAP(VARCHAR, VARCHAR) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[string]string{\"a\": \"x\", \"b\": \"y\"},\n\t\t\t\texpected2: map[string]string{\"c\": \"z\"},\n\t\t\t\tactual:    
make(map[string]string),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"integer string\",\n\t\t\t\tquery:     \"SELECT {'1': 'x', '2': 'y'}::MAP(INTEGER, VARCHAR) as structured_type UNION SELECT {'3': 'z'}::MAP(INTEGER, VARCHAR) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[int64]string{int64(1): \"x\", int64(2): \"y\"},\n\t\t\t\texpected2: map[int64]string{int64(3): \"z\"},\n\t\t\t\tactual:    make(map[int64]string),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"string bool\",\n\t\t\t\tquery:     \"SELECT {'a': true, 'b': false}::MAP(VARCHAR, BOOLEAN) as structured_type UNION SELECT {'c': true}::MAP(VARCHAR, BOOLEAN) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[string]bool{\"a\": true, \"b\": false},\n\t\t\t\texpected2: map[string]bool{\"c\": true},\n\t\t\t\tactual:    make(map[string]bool),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"integer bool\",\n\t\t\t\tquery:     \"SELECT {'1': true, '2': false}::MAP(INTEGER, BOOLEAN) as structured_type UNION SELECT {'3': true}::MAP(INTEGER, BOOLEAN) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[int64]bool{int64(1): true, int64(2): false},\n\t\t\t\texpected2: map[int64]bool{int64(3): true},\n\t\t\t\tactual:    make(map[int64]bool),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"string integer\",\n\t\t\t\tquery:     \"SELECT {'a': 11, 'b': 22}::MAP(VARCHAR, INTEGER) as structured_type UNION SELECT {'c': 33}::MAP(VARCHAR, INTEGER) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[string]int64{\"a\": 11, \"b\": 22},\n\t\t\t\texpected2: map[string]int64{\"c\": 33},\n\t\t\t\tactual:    make(map[string]int64),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"integer integer\",\n\t\t\t\tquery:     \"SELECT {'1': 11, '2': 22}::MAP(INTEGER, INTEGER) as structured_type UNION SELECT {'3': 33}::MAP(INTEGER, INTEGER) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[int64]int64{int64(1): int64(11), int64(2): int64(22)},\n\t\t\t\texpected2: map[int64]int64{int64(3): int64(33)},\n\t\t\t\tactual:    make(map[int64]int64),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"string 
double\",\n\t\t\t\tquery:     \"SELECT {'a': 11.1, 'b': 22.2}::MAP(VARCHAR, DOUBLE) as structured_type UNION SELECT {'c': 33.3}::MAP(VARCHAR, DOUBLE) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[string]float64{\"a\": 11.1, \"b\": 22.2},\n\t\t\t\texpected2: map[string]float64{\"c\": 33.3},\n\t\t\t\tactual:    make(map[string]float64),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"integer double\",\n\t\t\t\tquery:     \"SELECT {'1': 11.1, '2': 22.2}::MAP(INTEGER, DOUBLE) as structured_type UNION SELECT {'3': 33.3}::MAP(INTEGER, DOUBLE) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[int64]float64{int64(1): 11.1, int64(2): 22.2},\n\t\t\t\texpected2: map[int64]float64{int64(3): 33.3},\n\t\t\t\tactual:    make(map[int64]float64),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"string number integer\",\n\t\t\t\tquery:     \"SELECT {'a': 11, 'b': 22}::MAP(VARCHAR, NUMBER(38, 0)) as structured_type UNION SELECT {'c': 33}::MAP(VARCHAR, NUMBER(38, 0)) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[string]int64{\"a\": 11, \"b\": 22},\n\t\t\t\texpected2: map[string]int64{\"c\": 33},\n\t\t\t\tactual:    make(map[string]int64),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"integer number integer\",\n\t\t\t\tquery:     \"SELECT {'1': 11, '2': 22}::MAP(INTEGER, NUMBER(38, 0)) as structured_type UNION SELECT {'3': 33}::MAP(INTEGER, NUMBER(38, 0)) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[int64]int64{int64(1): int64(11), int64(2): int64(22)},\n\t\t\t\texpected2: map[int64]int64{int64(3): int64(33)},\n\t\t\t\tactual:    make(map[int64]int64),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"string number fraction\",\n\t\t\t\tquery:     \"SELECT {'a': 11.1, 'b': 22.2}::MAP(VARCHAR, NUMBER(38, 19)) as structured_type UNION SELECT {'c': 33.3}::MAP(VARCHAR, NUMBER(38, 19)) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[string]float64{\"a\": 11.1, \"b\": 22.2},\n\t\t\t\texpected2: map[string]float64{\"c\": 33.3},\n\t\t\t\tactual:    make(map[string]float64),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"integer number 
fraction\",\n\t\t\t\tquery:     \"SELECT {'1': 11.1, '2': 22.2}::MAP(INTEGER, NUMBER(38, 19)) as structured_type UNION SELECT {'3': 33.3}::MAP(INTEGER, NUMBER(38, 19)) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[int64]float64{int64(1): 11.1, int64(2): 22.2},\n\t\t\t\texpected2: map[int64]float64{int64(3): 33.3},\n\t\t\t\tactual:    make(map[int64]float64),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"string binary\",\n\t\t\t\tquery:     \"SELECT {'a': TO_BINARY('616263', 'HEX'), 'b': TO_BINARY('646566', 'HEX')}::MAP(VARCHAR, BINARY) as structured_type UNION SELECT {'c': TO_BINARY('676869', 'HEX')}::MAP(VARCHAR, BINARY) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[string][]byte{\"a\": {'a', 'b', 'c'}, \"b\": {'d', 'e', 'f'}},\n\t\t\t\texpected2: map[string][]byte{\"c\": {'g', 'h', 'i'}},\n\t\t\t\tactual:    make(map[string][]byte),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"integer binary\",\n\t\t\t\tquery:     \"SELECT {'1': TO_BINARY('616263', 'HEX'), '2': TO_BINARY('646566', 'HEX')}::MAP(INTEGER, BINARY) as structured_type UNION SELECT {'3': TO_BINARY('676869', 'HEX')}::MAP(INTEGER, BINARY) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[int64][]byte{1: {'a', 'b', 'c'}, 2: {'d', 'e', 'f'}},\n\t\t\t\texpected2: map[int64][]byte{3: {'g', 'h', 'i'}},\n\t\t\t\tactual:    make(map[int64][]byte),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"string date\",\n\t\t\t\tquery:     \"SELECT {'a': '2024-04-02'::DATE, 'b': '2024-04-03'::DATE}::MAP(VARCHAR, DATE) as structured_type UNION SELECT {'c': '2024-04-04'::DATE}::MAP(VARCHAR, DATE) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[string]time.Time{\"a\": time.Date(2024, time.April, 2, 0, 0, 0, 0, time.UTC), \"b\": time.Date(2024, time.April, 3, 0, 0, 0, 0, time.UTC)},\n\t\t\t\texpected2: map[string]time.Time{\"c\": time.Date(2024, time.April, 4, 0, 0, 0, 0, time.UTC)},\n\t\t\t\tactual:    make(map[string]time.Time),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"integer date\",\n\t\t\t\tquery:     \"SELECT {'1': '2024-04-02'::DATE, '2': 
'2024-04-03'::DATE}::MAP(INTEGER, DATE) as structured_type UNION SELECT {'3': '2024-04-04'::DATE}::MAP(INTEGER, DATE) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[int64]time.Time{1: time.Date(2024, time.April, 2, 0, 0, 0, 0, time.UTC), 2: time.Date(2024, time.April, 3, 0, 0, 0, 0, time.UTC)},\n\t\t\t\texpected2: map[int64]time.Time{3: time.Date(2024, time.April, 4, 0, 0, 0, 0, time.UTC)},\n\t\t\t\tactual:    make(map[int64]time.Time),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"string time\",\n\t\t\t\tquery:     \"SELECT {'a': '13:03:02'::TIME, 'b': '13:03:03'::TIME}::MAP(VARCHAR, TIME) as structured_type UNION SELECT {'c': '13:03:04'::TIME}::MAP(VARCHAR, TIME) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[string]time.Time{\"a\": time.Date(0, 0, 0, 13, 3, 2, 0, time.UTC), \"b\": time.Date(0, 0, 0, 13, 3, 3, 0, time.UTC)},\n\t\t\t\texpected2: map[string]time.Time{\"c\": time.Date(0, 0, 0, 13, 3, 4, 0, time.UTC)},\n\t\t\t\tactual:    make(map[string]time.Time),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"integer time\",\n\t\t\t\tquery:     \"SELECT {'1': '13:03:02'::TIME, '2': '13:03:03'::TIME}::MAP(INTEGER, TIME) as structured_type UNION SELECT {'3': '13:03:04'::TIME}::MAP(INTEGER, TIME) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[int64]time.Time{1: time.Date(0, 0, 0, 13, 3, 2, 0, time.UTC), 2: time.Date(0, 0, 0, 13, 3, 3, 0, time.UTC)},\n\t\t\t\texpected2: map[int64]time.Time{3: time.Date(0, 0, 0, 13, 3, 4, 0, time.UTC)},\n\t\t\t\tactual:    make(map[int64]time.Time),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"string timestamp_ntz\",\n\t\t\t\tquery:     \"SELECT {'a': '2024-01-05 11:22:33'::TIMESTAMP_NTZ, 'b': '2024-01-06 11:22:33'::TIMESTAMP_NTZ}::MAP(VARCHAR, TIMESTAMP_NTZ) as structured_type UNION SELECT {'c': '2024-01-07 11:22:33'::TIMESTAMP_NTZ}::MAP(VARCHAR, TIMESTAMP_NTZ) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[string]time.Time{\"a\": time.Date(2024, time.January, 5, 11, 22, 33, 0, time.UTC), \"b\": time.Date(2024, time.January, 6, 11, 22, 33, 0, 
time.UTC)},\n\t\t\t\texpected2: map[string]time.Time{\"c\": time.Date(2024, time.January, 7, 11, 22, 33, 0, time.UTC)},\n\t\t\t\tactual:    make(map[string]time.Time),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"integer timestamp_ntz\",\n\t\t\t\tquery:     \"SELECT {'1': '2024-01-05 11:22:33'::TIMESTAMP_NTZ, '2': '2024-01-06 11:22:33'::TIMESTAMP_NTZ}::MAP(INTEGER, TIMESTAMP_NTZ) as structured_type UNION SELECT {'3': '2024-01-07 11:22:33'::TIMESTAMP_NTZ}::MAP(INTEGER, TIMESTAMP_NTZ) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[int64]time.Time{1: time.Date(2024, time.January, 5, 11, 22, 33, 0, time.UTC), 2: time.Date(2024, time.January, 6, 11, 22, 33, 0, time.UTC)},\n\t\t\t\texpected2: map[int64]time.Time{3: time.Date(2024, time.January, 7, 11, 22, 33, 0, time.UTC)},\n\t\t\t\tactual:    make(map[int64]time.Time),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"string timestamp_tz\",\n\t\t\t\tquery:     \"SELECT {'a': '2024-01-05 11:22:33 +0100'::TIMESTAMP_TZ, 'b': '2024-01-06 11:22:33 +0100'::TIMESTAMP_TZ}::MAP(VARCHAR, TIMESTAMP_TZ) as structured_type UNION SELECT {'c': '2024-01-07 11:22:33 +0100'::TIMESTAMP_TZ}::MAP(VARCHAR, TIMESTAMP_TZ) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[string]time.Time{\"a\": time.Date(2024, time.January, 5, 11, 22, 33, 0, warsawTz), \"b\": time.Date(2024, time.January, 6, 11, 22, 33, 0, warsawTz)},\n\t\t\t\texpected2: map[string]time.Time{\"c\": time.Date(2024, time.January, 7, 11, 22, 33, 0, warsawTz)},\n\t\t\t\tactual:    make(map[string]time.Time),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"integer timestamp_tz\",\n\t\t\t\tquery:     \"SELECT {'1': '2024-01-05 11:22:33 +0100'::TIMESTAMP_TZ, '2': '2024-01-06 11:22:33 +0100'::TIMESTAMP_TZ}::MAP(INTEGER, TIMESTAMP_TZ) as structured_type UNION SELECT {'3': '2024-01-07 11:22:33 +0100'::TIMESTAMP_TZ}::MAP(INTEGER, TIMESTAMP_TZ) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[int64]time.Time{1: time.Date(2024, time.January, 5, 11, 22, 33, 0, warsawTz), 2: time.Date(2024, time.January, 6, 11, 22, 33, 0, 
warsawTz)},\n\t\t\t\texpected2: map[int64]time.Time{3: time.Date(2024, time.January, 7, 11, 22, 33, 0, warsawTz)},\n\t\t\t\tactual:    make(map[int64]time.Time),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"string timestamp_ltz\",\n\t\t\t\tquery:     \"SELECT {'a': '2024-01-05 11:22:33'::TIMESTAMP_LTZ, 'b': '2024-01-06 11:22:33'::TIMESTAMP_LTZ}::MAP(VARCHAR, TIMESTAMP_LTZ) as structured_type UNION SELECT {'c': '2024-01-07 11:22:33'::TIMESTAMP_LTZ}::MAP(VARCHAR, TIMESTAMP_LTZ) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[string]time.Time{\"a\": time.Date(2024, time.January, 5, 11, 22, 33, 0, warsawTz), \"b\": time.Date(2024, time.January, 6, 11, 22, 33, 0, warsawTz)},\n\t\t\t\texpected2: map[string]time.Time{\"c\": time.Date(2024, time.January, 7, 11, 22, 33, 0, warsawTz)},\n\t\t\t\tactual:    make(map[string]time.Time),\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"integer timestamp_ltz\",\n\t\t\t\tquery:     \"SELECT {'1': '2024-01-05 11:22:33'::TIMESTAMP_LTZ, '2': '2024-01-06 11:22:33'::TIMESTAMP_LTZ}::MAP(INTEGER, TIMESTAMP_LTZ) as structured_type UNION SELECT {'3': '2024-01-07 11:22:33'::TIMESTAMP_LTZ}::MAP(INTEGER, TIMESTAMP_LTZ) ORDER BY 1 DESC\",\n\t\t\t\texpected1: map[int64]time.Time{1: time.Date(2024, time.January, 5, 11, 22, 33, 0, warsawTz), 2: time.Date(2024, time.January, 6, 11, 22, 33, 0, warsawTz)},\n\t\t\t\texpected2: map[int64]time.Time{3: time.Date(2024, time.January, 7, 11, 22, 33, 0, warsawTz)},\n\t\t\t\tactual:    make(map[int64]time.Time),\n\t\t\t},\n\t\t}\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\tfor _, tc := range testcases {\n\t\t\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t\t\trows := dbt.mustQueryContextT(ctx, t, tc.query)\n\t\t\t\t\tdefer rows.Close()\n\n\t\t\t\t\tcheckRow := func(expected any) {\n\t\t\t\t\t\trows.Next()\n\t\t\t\t\t\terr := rows.Scan(&tc.actual)\n\t\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\t\tif _, ok := expected.(map[string]time.Time); ok {\n\t\t\t\t\t\t\tassertEqualE(t, 
len(tc.actual.(map[string]time.Time)), len(expected.(map[string]time.Time)))\n\t\t\t\t\t\t\tfor k, v := range expected.(map[string]time.Time) {\n\t\t\t\t\t\t\t\tif strings.Contains(tc.name, \"time\") {\n\t\t\t\t\t\t\t\t\tassertEqualE(t, v.Hour(), tc.actual.(map[string]time.Time)[k].Hour())\n\t\t\t\t\t\t\t\t\tassertEqualE(t, v.Minute(), tc.actual.(map[string]time.Time)[k].Minute())\n\t\t\t\t\t\t\t\t\tassertEqualE(t, v.Second(), tc.actual.(map[string]time.Time)[k].Second())\n\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\tassertTrueE(t, v.UTC().Equal(tc.actual.(map[string]time.Time)[k].UTC()))\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else if _, ok := expected.(map[int64]time.Time); ok {\n\t\t\t\t\t\t\tassertEqualE(t, len(tc.actual.(map[int64]time.Time)), len(expected.(map[int64]time.Time)))\n\t\t\t\t\t\t\tfor k, v := range expected.(map[int64]time.Time) {\n\t\t\t\t\t\t\t\tif strings.Contains(tc.name, \"time\") {\n\t\t\t\t\t\t\t\t\tassertEqualE(t, v.Hour(), tc.actual.(map[int64]time.Time)[k].Hour())\n\t\t\t\t\t\t\t\t\tassertEqualE(t, v.Minute(), tc.actual.(map[int64]time.Time)[k].Minute())\n\t\t\t\t\t\t\t\t\tassertEqualE(t, v.Second(), tc.actual.(map[int64]time.Time)[k].Second())\n\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\tassertTrueE(t, v.UTC().Equal(tc.actual.(map[int64]time.Time)[k].UTC()))\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tassertDeepEqualE(t, tc.actual, expected)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tcheckRow(tc.expected1)\n\t\t\t\t\tcheckRow(tc.expected2)\n\n\t\t\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tassertEqualE(t, len(columnTypes), 1)\n\t\t\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeOf(tc.expected1))\n\t\t\t\t\tassertEqualE(t, columnTypes[0].DatabaseTypeName(), \"MAP\")\n\t\t\t\t\tassertEqualE(t, columnTypes[0].Name(), \"STRUCTURED_TYPE\")\n\t\t\t\t})\n\t\t\t}\n\t\t})\n\t})\n}\n\nfunc TestMapOfObjects(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT {'x': {'s': 'abc', 'i': 1}, 'y': {'s': 'def', 'i': 2}}::MAP(VARCHAR, OBJECT(s 
VARCHAR, i INTEGER))\")\n\t\t\tdefer rows.Close()\n\t\t\tvar res map[string]*simpleObject\n\t\t\trows.Next()\n\t\t\terr := rows.Scan(ScanMapOfScanners(&res))\n\t\t\tassertNilF(t, err)\n\t\t\tassertDeepEqualE(t, res, map[string]*simpleObject{\"x\": {s: \"abc\", i: 1}, \"y\": {s: \"def\", i: 2}})\n\t\t})\n\t})\n}\n\nfunc TestMapOfArrays(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\twarsawTz, err := time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\ttestcases := []struct {\n\t\tname     string\n\t\tquery    string\n\t\tactual   any\n\t\texpected any\n\t}{\n\t\t{\n\t\t\tname:     \"string\",\n\t\t\tquery:    \"SELECT {'x': ARRAY_CONSTRUCT('ab', 'cd'), 'y': ARRAY_CONSTRUCT('ef')}::MAP(VARCHAR, ARRAY(VARCHAR))\",\n\t\t\tactual:   make(map[string][]string),\n\t\t\texpected: map[string][]string{\"x\": {\"ab\", \"cd\"}, \"y\": {\"ef\"}},\n\t\t},\n\t\t{\n\t\t\tname:     \"fixed - scale == 0\",\n\t\t\tquery:    \"SELECT {'x': ARRAY_CONSTRUCT(1, 2), 'y': ARRAY_CONSTRUCT(3, 4)}::MAP(VARCHAR, ARRAY(INTEGER))\",\n\t\t\tactual:   make(map[string][]int64),\n\t\t\texpected: map[string][]int64{\"x\": {1, 2}, \"y\": {3, 4}},\n\t\t},\n\t\t{\n\t\t\tname:     \"fixed - scale != 0\",\n\t\t\tquery:    \"SELECT {'x': ARRAY_CONSTRUCT(1.1, 2.2), 'y': ARRAY_CONSTRUCT(3.3, 4.4)}::MAP(VARCHAR, ARRAY(NUMBER(38, 19)))\",\n\t\t\tactual:   make(map[string][]float64),\n\t\t\texpected: map[string][]float64{\"x\": {1.1, 2.2}, \"y\": {3.3, 4.4}},\n\t\t},\n\t\t{\n\t\t\tname:     \"real\",\n\t\t\tquery:    \"SELECT {'x': ARRAY_CONSTRUCT(1.1, 2.2), 'y': ARRAY_CONSTRUCT(3.3, 4.4)}::MAP(VARCHAR, ARRAY(DOUBLE))\",\n\t\t\tactual:   make(map[string][]float64),\n\t\t\texpected: map[string][]float64{\"x\": {1.1, 2.2}, \"y\": {3.3, 4.4}},\n\t\t},\n\t\t{\n\t\t\tname:     \"binary\",\n\t\t\tquery:    \"SELECT {'x': ARRAY_CONSTRUCT(TO_BINARY('6162')), 'y': ARRAY_CONSTRUCT(TO_BINARY('6364'), TO_BINARY('6566'))}::MAP(VARCHAR, ARRAY(BINARY))\",\n\t\t\tactual:   
make(map[string][][]byte),\n\t\t\texpected: map[string][][]byte{\"x\": {[]byte{'a', 'b'}}, \"y\": {[]byte{'c', 'd'}, []byte{'e', 'f'}}},\n\t\t},\n\t\t{\n\t\t\tname:     \"boolean\",\n\t\t\tquery:    \"SELECT {'x': ARRAY_CONSTRUCT(true, false), 'y': ARRAY_CONSTRUCT(false, true)}::MAP(VARCHAR, ARRAY(BOOLEAN))\",\n\t\t\tactual:   make(map[string][]bool),\n\t\t\texpected: map[string][]bool{\"x\": {true, false}, \"y\": {false, true}},\n\t\t},\n\t\t{\n\t\t\tname:     \"date\",\n\t\t\tquery:    \"SELECT {'a': ARRAY_CONSTRUCT('2024-04-02'::DATE, '2024-04-03'::DATE)}::MAP(VARCHAR, ARRAY(DATE))\",\n\t\t\texpected: map[string][]time.Time{\"a\": {time.Date(2024, time.April, 2, 0, 0, 0, 0, time.UTC), time.Date(2024, time.April, 3, 0, 0, 0, 0, time.UTC)}},\n\t\t\tactual:   make(map[string][]time.Time),\n\t\t},\n\t\t{\n\t\t\tname:     \"time\",\n\t\t\tquery:    \"SELECT {'a': ARRAY_CONSTRUCT('13:03:02'::TIME, '13:03:03'::TIME)}::MAP(VARCHAR, ARRAY(TIME))\",\n\t\t\texpected: map[string][]time.Time{\"a\": {time.Date(0, 0, 0, 13, 3, 2, 0, time.UTC), time.Date(0, 0, 0, 13, 3, 3, 0, time.UTC)}},\n\t\t\tactual:   make(map[string][]time.Time),\n\t\t},\n\t\t{\n\t\t\tname:     \"timestamp_ntz\",\n\t\t\tquery:    \"SELECT {'a': ARRAY_CONSTRUCT('2024-01-05 11:22:33'::TIMESTAMP_NTZ, '2024-01-06 11:22:33'::TIMESTAMP_NTZ)}::MAP(VARCHAR, ARRAY(TIMESTAMP_NTZ))\",\n\t\t\texpected: map[string][]time.Time{\"a\": {time.Date(2024, time.January, 5, 11, 22, 33, 0, time.UTC), time.Date(2024, time.January, 6, 11, 22, 33, 0, time.UTC)}},\n\t\t\tactual:   make(map[string][]time.Time),\n\t\t},\n\t\t{\n\t\t\tname:     \"string timestamp_tz\",\n\t\t\tquery:    \"SELECT {'a': ARRAY_CONSTRUCT('2024-01-05 11:22:33 +0100'::TIMESTAMP_TZ, '2024-01-06 11:22:33 +0100'::TIMESTAMP_TZ)}::MAP(VARCHAR, ARRAY(TIMESTAMP_TZ))\",\n\t\t\texpected: map[string][]time.Time{\"a\": {time.Date(2024, time.January, 5, 11, 22, 33, 0, warsawTz), time.Date(2024, time.January, 6, 11, 22, 33, 0, warsawTz)}},\n\t\t\tactual:   
make(map[string][]time.Time),\n\t\t},\n\t\t{\n\t\t\tname:     \"string timestamp_ltz\",\n\t\t\tquery:    \"SELECT {'a': ARRAY_CONSTRUCT('2024-01-05 11:22:33'::TIMESTAMP_LTZ, '2024-01-06 11:22:33'::TIMESTAMP_LTZ)}::MAP(VARCHAR, ARRAY(TIMESTAMP_LTZ))\",\n\t\t\texpected: map[string][]time.Time{\"a\": {time.Date(2024, time.January, 5, 11, 22, 33, 0, warsawTz), time.Date(2024, time.January, 6, 11, 22, 33, 0, warsawTz)}},\n\t\t\tactual:   make(map[string][]time.Time),\n\t\t},\n\t}\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\tfor _, tc := range testcases {\n\t\t\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t\t\trows := dbt.mustQueryContextT(ctx, t, tc.query)\n\t\t\t\t\tdefer rows.Close()\n\t\t\t\t\trows.Next()\n\t\t\t\t\terr := rows.Scan(&tc.actual)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tif expected, ok := tc.expected.(map[string][]time.Time); ok {\n\t\t\t\t\t\tfor k, v := range expected {\n\t\t\t\t\t\t\tfor i, expectedTime := range v {\n\t\t\t\t\t\t\t\tif tc.name == \"time\" {\n\t\t\t\t\t\t\t\t\tassertEqualE(t, expectedTime.Hour(), tc.actual.(map[string][]time.Time)[k][i].Hour())\n\t\t\t\t\t\t\t\t\tassertEqualE(t, expectedTime.Minute(), tc.actual.(map[string][]time.Time)[k][i].Minute())\n\t\t\t\t\t\t\t\t\tassertEqualE(t, expectedTime.Second(), tc.actual.(map[string][]time.Time)[k][i].Second())\n\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\tassertTrueE(t, expectedTime.Equal(tc.actual.(map[string][]time.Time)[k][i]))\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tassertDeepEqualE(t, tc.actual, tc.expected)\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t}\n\t\t})\n\t})\n}\n\nfunc TestNullAndEmptyMaps(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\trows := 
dbt.mustQueryContextT(ctx, t, \"SELECT {'a': 1}::MAP(VARCHAR, INTEGER) UNION SELECT NULL UNION SELECT {}::MAP(VARCHAR, INTEGER) UNION SELECT {'d': 4}::MAP(VARCHAR, INTEGER) ORDER BY 1\")\n\t\t\tdefer rows.Close()\n\t\t\tcheckRow := func(rows *RowsExtended, expected *map[string]int64) {\n\t\t\t\trows.Next()\n\t\t\t\tvar res *map[string]int64\n\t\t\t\terr := rows.Scan(&res)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertDeepEqualE(t, res, expected)\n\t\t\t}\n\t\t\tcheckRow(rows, &map[string]int64{})\n\t\t\tcheckRow(rows, &map[string]int64{\"d\": 4})\n\t\t\tcheckRow(rows, &map[string]int64{\"a\": 1})\n\t\t\tcheckRow(rows, nil)\n\t\t})\n\t})\n}\n\nfunc TestMapWithNullValues(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\twarsawTz, err := time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\ttestcases := []struct {\n\t\tname     string\n\t\tquery    string\n\t\tactual   any\n\t\texpected any\n\t}{\n\t\t{\n\t\t\tname:     \"string\",\n\t\t\tquery:    \"SELECT object_construct_keep_null('x', 'abc', 'y', null)::MAP(VARCHAR, VARCHAR)\",\n\t\t\tactual:   make(map[string]sql.NullString),\n\t\t\texpected: map[string]sql.NullString{\"x\": {Valid: true, String: \"abc\"}, \"y\": {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"bool\",\n\t\t\tquery:    \"SELECT object_construct_keep_null('x', true, 'y', null)::MAP(VARCHAR, BOOLEAN)\",\n\t\t\tactual:   make(map[string]sql.NullBool),\n\t\t\texpected: map[string]sql.NullBool{\"x\": {Valid: true, Bool: true}, \"y\": {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"fixed - scale == 0\",\n\t\t\tquery:    \"SELECT object_construct_keep_null('x', 1, 'y', null)::MAP(VARCHAR, BIGINT)\",\n\t\t\tactual:   make(map[string]sql.NullInt64),\n\t\t\texpected: map[string]sql.NullInt64{\"x\": {Valid: true, Int64: 1}, \"y\": {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"fixed - scale != 0\",\n\t\t\tquery:    \"SELECT object_construct_keep_null('x', 1.1, 'y', null)::MAP(VARCHAR, NUMBER(38, 
19))\",\n\t\t\tactual:   make(map[string]sql.NullFloat64),\n\t\t\texpected: map[string]sql.NullFloat64{\"x\": {Valid: true, Float64: 1.1}, \"y\": {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"real\",\n\t\t\tquery:    \"SELECT object_construct_keep_null('x', 1.1, 'y', null)::MAP(VARCHAR, DOUBLE)\",\n\t\t\tactual:   make(map[string]sql.NullFloat64),\n\t\t\texpected: map[string]sql.NullFloat64{\"x\": {Valid: true, Float64: 1.1}, \"y\": {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"binary\",\n\t\t\tquery:    \"SELECT object_construct_keep_null('x', TO_BINARY('616263'), 'y', null)::MAP(VARCHAR, BINARY)\",\n\t\t\tactual:   make(map[string][]byte),\n\t\t\texpected: map[string][]byte{\"x\": {'a', 'b', 'c'}, \"y\": nil},\n\t\t},\n\t\t{\n\t\t\tname:     \"date\",\n\t\t\tquery:    \"SELECT object_construct_keep_null('x', '2024-04-05'::DATE, 'y', null)::MAP(VARCHAR, DATE)\",\n\t\t\tactual:   make(map[string]sql.NullTime),\n\t\t\texpected: map[string]sql.NullTime{\"x\": {Valid: true, Time: time.Date(2024, time.April, 5, 0, 0, 0, 0, time.UTC)}, \"y\": {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"time\",\n\t\t\tquery:    \"SELECT object_construct_keep_null('x', '13:14:15'::TIME, 'y', null)::MAP(VARCHAR, TIME)\",\n\t\t\tactual:   make(map[string]sql.NullTime),\n\t\t\texpected: map[string]sql.NullTime{\"x\": {Valid: true, Time: time.Date(1, 0, 0, 13, 14, 15, 0, time.UTC)}, \"y\": {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"timestamp_tz\",\n\t\t\tquery:    \"SELECT object_construct_keep_null('x', '2022-08-31 13:43:22 +0200'::TIMESTAMP_TZ, 'y', null)::MAP(VARCHAR, TIMESTAMP_TZ)\",\n\t\t\tactual:   make(map[string]sql.NullTime),\n\t\t\texpected: map[string]sql.NullTime{\"x\": {Valid: true, Time: time.Date(2022, 8, 31, 13, 43, 22, 0, warsawTz)}, \"y\": {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"timestamp_ntz\",\n\t\t\tquery:    \"SELECT object_construct_keep_null('x', '2022-08-31 13:43:22'::TIMESTAMP_NTZ, 'y', null)::MAP(VARCHAR, 
TIMESTAMP_NTZ)\",\n\t\t\tactual:   make(map[string]sql.NullTime),\n\t\t\texpected: map[string]sql.NullTime{\"x\": {Valid: true, Time: time.Date(2022, 8, 31, 13, 43, 22, 0, time.UTC)}, \"y\": {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"timestamp_ltz\",\n\t\t\tquery:    \"SELECT object_construct_keep_null('x', '2022-08-31 13:43:22'::TIMESTAMP_LTZ, 'y', null)::MAP(VARCHAR, TIMESTAMP_LTZ)\",\n\t\t\tactual:   make(map[string]sql.NullTime),\n\t\t\texpected: map[string]sql.NullTime{\"x\": {Valid: true, Time: time.Date(2022, 8, 31, 13, 43, 22, 0, warsawTz)}, \"y\": {Valid: false}},\n\t\t},\n\t}\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\tfor _, tc := range testcases {\n\t\t\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t\t\trows := dbt.mustQueryContextT(WithEmbeddedValuesNullable(ctx), t, tc.query)\n\t\t\t\t\tdefer rows.Close()\n\t\t\t\t\trows.Next()\n\t\t\t\t\terr = rows.Scan(&tc.actual)\n\t\t\t\t\tassertNilF(t, err)\n\t\t\t\t\tswitch tc.name {\n\t\t\t\t\tcase \"time\":\n\t\t\t\t\t\tfor i, nt := range tc.actual.(map[string]sql.NullTime) {\n\t\t\t\t\t\t\tassertEqualE(t, nt.Valid, tc.expected.(map[string]sql.NullTime)[i].Valid)\n\t\t\t\t\t\t\tassertEqualE(t, nt.Time.Hour(), tc.expected.(map[string]sql.NullTime)[i].Time.Hour())\n\t\t\t\t\t\t\tassertEqualE(t, nt.Time.Minute(), tc.expected.(map[string]sql.NullTime)[i].Time.Minute())\n\t\t\t\t\t\t\tassertEqualE(t, nt.Time.Second(), tc.expected.(map[string]sql.NullTime)[i].Time.Second())\n\t\t\t\t\t\t}\n\t\t\t\t\tcase \"timestamp_tz\", \"timestamp_ltz\", \"timestamp_ntz\":\n\t\t\t\t\t\tfor i, nt := range tc.actual.(map[string]sql.NullTime) {\n\t\t\t\t\t\t\tassertEqualE(t, nt.Valid, tc.expected.(map[string]sql.NullTime)[i].Valid)\n\t\t\t\t\t\t\tassertTrueE(t, nt.Time.Equal(tc.expected.(map[string]sql.NullTime)[i].Time))\n\t\t\t\t\t\t}\n\t\t\t\t\tdefault:\n\t\t\t\t\t\tassertDeepEqualE(t, 
tc.actual, tc.expected)\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t}\n\t\t})\n\t})\n}\n\nfunc TestArraysWithNullValues(t *testing.T) {\n\twarsawTz, err := time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\ttestcases := []struct {\n\t\tname     string\n\t\tquery    string\n\t\tactual   any\n\t\texpected any\n\t}{\n\t\t{\n\t\t\tname:     \"string\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT('x', null, 'yz', null)::ARRAY(STRING)\",\n\t\t\tactual:   []sql.NullString{},\n\t\t\texpected: []sql.NullString{{Valid: true, String: \"x\"}, {Valid: false}, {Valid: true, String: \"yz\"}, {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"bool\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(true, null, false)::ARRAY(BOOLEAN)\",\n\t\t\tactual:   []sql.NullBool{},\n\t\t\texpected: []sql.NullBool{{Valid: true, Bool: true}, {Valid: false}, {Valid: true, Bool: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"fixed - scale == 0\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(null, 2, 3)::ARRAY(BIGINT)\",\n\t\t\tactual:   []sql.NullInt64{},\n\t\t\texpected: []sql.NullInt64{{Valid: false}, {Valid: true, Int64: 2}, {Valid: true, Int64: 3}},\n\t\t},\n\t\t{\n\t\t\tname:     \"fixed - scale != 0\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(1.3, 2.0, null, null)::ARRAY(NUMBER(38, 19))\",\n\t\t\tactual:   []sql.NullFloat64{},\n\t\t\texpected: []sql.NullFloat64{{Valid: true, Float64: 1.3}, {Valid: true, Float64: 2.0}, {Valid: false}, {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"real\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(1.9, 0.2, null)::ARRAY(DOUBLE)\",\n\t\t\tactual:   []sql.NullFloat64{},\n\t\t\texpected: []sql.NullFloat64{{Valid: true, Float64: 1.9}, {Valid: true, Float64: 0.2}, {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"binary\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(null, TO_BINARY('616263'))::ARRAY(BINARY)\",\n\t\t\tactual:   [][]byte{},\n\t\t\texpected: [][]byte{nil, {'a', 'b', 'c'}},\n\t\t},\n\t\t{\n\t\t\tname:     \"date\",\n\t\t\tquery:    \"SELECT 
ARRAY_CONSTRUCT('2024-04-05'::DATE, null)::ARRAY(DATE)\",\n\t\t\tactual:   []sql.NullTime{},\n\t\t\texpected: []sql.NullTime{{Valid: true, Time: time.Date(2024, time.April, 5, 0, 0, 0, 0, time.UTC)}, {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"time\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT('13:14:15'::TIME, null)::ARRAY(TIME)\",\n\t\t\tactual:   []sql.NullTime{},\n\t\t\texpected: []sql.NullTime{{Valid: true, Time: time.Date(1, 0, 0, 13, 14, 15, 0, time.UTC)}, {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"timestamp_tz\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT('2022-08-31 13:43:22 +0200'::TIMESTAMP_TZ, null)::ARRAY(TIMESTAMP_TZ)\",\n\t\t\tactual:   []sql.NullTime{},\n\t\t\texpected: []sql.NullTime{{Valid: true, Time: time.Date(2022, 8, 31, 13, 43, 22, 0, warsawTz)}, {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"timestamp_ntz\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT('2022-08-31 13:43:22'::TIMESTAMP_NTZ, null)::ARRAY(TIMESTAMP_NTZ)\",\n\t\t\tactual:   []sql.NullTime{},\n\t\t\texpected: []sql.NullTime{{Valid: true, Time: time.Date(2022, 8, 31, 13, 43, 22, 0, time.UTC)}, {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"timestamp_ltz\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT('2022-08-31 13:43:22'::TIMESTAMP_LTZ, null)::ARRAY(TIMESTAMP_LTZ)\",\n\t\t\tactual:   []sql.NullTime{},\n\t\t\texpected: []sql.NullTime{{Valid: true, Time: time.Date(2022, 8, 31, 13, 43, 22, 0, warsawTz)}, {Valid: false}},\n\t\t},\n\t\t{\n\t\t\tname:     \"array\",\n\t\t\tquery:    \"SELECT ARRAY_CONSTRUCT(ARRAY_CONSTRUCT(true, null), null, ARRAY_CONSTRUCT(null, false, true))::ARRAY(ARRAY(BOOLEAN))\",\n\t\t\tactual:   [][]sql.NullBool{},\n\t\t\texpected: [][]sql.NullBool{{{Valid: true, Bool: true}, {Valid: false}}, nil, {{Valid: false}, {Valid: true, Bool: false}, {Valid: true, Bool: true}}},\n\t\t},\n\t}\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 
'Europe/Warsaw'\")\n\t\tdbt.forceNativeArrow()\n\t\tdbt.enableStructuredTypes()\n\t\tfor _, tc := range testcases {\n\t\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t\trows := dbt.mustQueryContext(WithStructuredTypesEnabled(WithEmbeddedValuesNullable(context.Background())), tc.query)\n\t\t\t\tdefer rows.Close()\n\t\t\t\trows.Next()\n\t\t\t\terr := rows.Scan(&tc.actual)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tswitch tc.name {\n\t\t\t\tcase \"time\":\n\t\t\t\t\tfor i, nt := range tc.actual.([]sql.NullTime) {\n\t\t\t\t\t\tassertEqualE(t, nt.Valid, tc.expected.([]sql.NullTime)[i].Valid)\n\t\t\t\t\t\tassertEqualE(t, nt.Time.Hour(), tc.expected.([]sql.NullTime)[i].Time.Hour())\n\t\t\t\t\t\tassertEqualE(t, nt.Time.Minute(), tc.expected.([]sql.NullTime)[i].Time.Minute())\n\t\t\t\t\t\tassertEqualE(t, nt.Time.Second(), tc.expected.([]sql.NullTime)[i].Time.Second())\n\t\t\t\t\t}\n\t\t\t\tcase \"timestamp_tz\", \"timestamp_ltz\", \"timestamp_ntz\":\n\t\t\t\t\tfor i, nt := range tc.actual.([]sql.NullTime) {\n\t\t\t\t\t\tassertEqualE(t, nt.Valid, tc.expected.([]sql.NullTime)[i].Valid)\n\t\t\t\t\t\tassertTrueE(t, nt.Time.Equal(tc.expected.([]sql.NullTime)[i].Time))\n\t\t\t\t\t}\n\t\t\t\tdefault:\n\t\t\t\t\tassertDeepEqualE(t, tc.actual, tc.expected)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n\n}\n\nfunc TestArraysWithNullValuesHigherPrecision(t *testing.T) {\n\ttestcases := []struct {\n\t\tname     string\n\t\tquery    string\n\t\tactual   any\n\t\texpected any\n\t}{\n\t\t{\n\t\t\tname:   \"fixed - scale == 0\",\n\t\t\tquery:  \"SELECT ARRAY_CONSTRUCT(null, 2)::ARRAY(BIGINT)\",\n\t\t\tactual: []*big.Int{},\n\t\t},\n\t}\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tdbt.forceNativeArrow()\n\t\tdbt.enableStructuredTypes()\n\t\tfor _, tc := range testcases {\n\t\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t\tctx := WithHigherPrecision(WithStructuredTypesEnabled(WithEmbeddedValuesNullable(context.Background())))\n\t\t\t\trows := 
dbt.mustQueryContext(ctx, tc.query)\n\t\t\t\tdefer rows.Close()\n\t\t\t\trows.Next()\n\t\t\t\terr := rows.Scan(&tc.actual)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertNilF(t, tc.actual.([]*big.Int)[0])\n\t\t\t\tbigInt, _ := new(big.Int).SetString(\"2\", 10)\n\t\t\t\tassertEqualE(t, tc.actual.([]*big.Int)[1].Cmp(bigInt), 0)\n\t\t\t})\n\t\t}\n\t})\n\n}\n\ntype HigherPrecisionStruct struct {\n\ti *big.Int\n\tf *big.Float\n}\n\nfunc (hps *HigherPrecisionStruct) Scan(val any) error {\n\tst, ok := val.(StructuredObject)\n\tif !ok {\n\t\treturn fmt.Errorf(\"expected StructuredObject, got %T\", val)\n\t}\n\n\tvar err error\n\tif hps.i, err = st.GetBigInt(\"i\"); err != nil {\n\t\treturn err\n\t}\n\tif hps.f, err = st.GetBigFloat(\"f\"); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc TestWithHigherPrecision(t *testing.T) {\n\tctx := WithHigherPrecision(WithStructuredTypesEnabled(context.Background()))\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tforAllStructureTypeFormats(dbt, func(t *testing.T, format string) {\n\t\t\tif format != \"NATIVE_ARROW\" {\n\t\t\t\tt.Skip(\"JSON structured type does not support higher precision\")\n\t\t\t}\n\t\t\tt.Run(\"object\", func(t *testing.T) {\n\t\t\t\trows := dbt.mustQueryContext(ctx, \"SELECT {'i': 10000000000000000000000000000000000000::DECIMAL(38, 0), 'f': 1.2345678901234567890123456789012345678::DECIMAL(38, 37)}::OBJECT(i DECIMAL(38, 0), f DECIMAL(38, 37)) as structured_type\")\n\t\t\t\tdefer rows.Close()\n\t\t\t\trows.Next()\n\t\t\t\tvar v HigherPrecisionStruct\n\t\t\t\terr := rows.Scan(&v)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tbigInt, b := new(big.Int).SetString(\"10000000000000000000000000000000000000\", 10)\n\t\t\t\tassertTrueF(t, b)\n\t\t\t\tassertEqualE(t, bigInt.Cmp(v.i), 0)\n\t\t\t\tbigFloat, b := new(big.Float).SetPrec(v.f.Prec()).SetString(\"1.2345678901234567890123456789012345678\")\n\t\t\t\tassertTrueE(t, b)\n\t\t\t\tassertEqualE(t, bigFloat.Cmp(v.f), 0)\n\t\t\t\tcolumnTypes, err := 
rows.ColumnTypes()\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertEqualE(t, len(columnTypes), 1)\n\t\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[ObjectType]())\n\t\t\t\tassertEqualE(t, columnTypes[0].DatabaseTypeName(), \"OBJECT\")\n\t\t\t\tassertEqualE(t, columnTypes[0].Name(), \"STRUCTURED_TYPE\")\n\t\t\t})\n\t\t\tt.Run(\"array of big ints\", func(t *testing.T) {\n\t\t\t\trows := dbt.mustQueryContext(ctx, \"SELECT ARRAY_CONSTRUCT(10000000000000000000000000000000000000)::ARRAY(DECIMAL(38, 0)) as structured_type\")\n\t\t\t\tdefer rows.Close()\n\t\t\t\trows.Next()\n\t\t\t\tvar v *[]*big.Int\n\t\t\t\terr := rows.Scan(&v)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tbigInt, b := new(big.Int).SetString(\"10000000000000000000000000000000000000\", 10)\n\t\t\t\tassertTrueF(t, b)\n\t\t\t\tassertEqualE(t, bigInt.Cmp((*v)[0]), 0)\n\t\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertEqualE(t, len(columnTypes), 1)\n\t\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[[]*big.Int]())\n\t\t\t\tassertEqualE(t, columnTypes[0].DatabaseTypeName(), \"ARRAY\")\n\t\t\t\tassertEqualE(t, columnTypes[0].Name(), \"STRUCTURED_TYPE\")\n\t\t\t})\n\t\t\tt.Run(\"array of big floats\", func(t *testing.T) {\n\t\t\t\trows := dbt.mustQueryContext(ctx, \"SELECT ARRAY_CONSTRUCT(1.2345678901234567890123456789012345678)::ARRAY(DECIMAL(38, 37)) as structured_type\")\n\t\t\t\tdefer rows.Close()\n\t\t\t\trows.Next()\n\t\t\t\tvar v *[]*big.Float\n\t\t\t\terr := rows.Scan(&v)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tbigFloat, b := new(big.Float).SetPrec((*v)[0].Prec()).SetString(\"1.2345678901234567890123456789012345678\")\n\t\t\t\tassertTrueE(t, b)\n\t\t\t\tassertEqualE(t, bigFloat.Cmp((*v)[0]), 0)\n\t\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertEqualE(t, len(columnTypes), 1)\n\t\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[[]*big.Float]())\n\t\t\t\tassertEqualE(t, 
columnTypes[0].DatabaseTypeName(), \"ARRAY\")\n\t\t\t\tassertEqualE(t, columnTypes[0].Name(), \"STRUCTURED_TYPE\")\n\t\t\t})\n\t\t\tt.Run(\"map of string to big ints\", func(t *testing.T) {\n\t\t\t\trows := dbt.mustQueryContext(ctx, \"SELECT object_construct_keep_null('x', 10000000000000000000000000000000000000, 'y', null)::MAP(VARCHAR, DECIMAL(38, 0)) as structured_type\")\n\t\t\t\tdefer rows.Close()\n\t\t\t\trows.Next()\n\t\t\t\tvar v *map[string]*big.Int\n\t\t\t\terr := rows.Scan(&v)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tbigInt, b := new(big.Int).SetString(\"10000000000000000000000000000000000000\", 10)\n\t\t\t\tassertTrueF(t, b)\n\t\t\t\tassertEqualE(t, bigInt.Cmp((*v)[\"x\"]), 0)\n\t\t\t\tassertEqualE(t, (*v)[\"y\"], (*big.Int)(nil))\n\t\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertEqualE(t, len(columnTypes), 1)\n\t\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[map[string]*big.Int]())\n\t\t\t\tassertEqualE(t, columnTypes[0].DatabaseTypeName(), \"MAP\")\n\t\t\t\tassertEqualE(t, columnTypes[0].Name(), \"STRUCTURED_TYPE\")\n\t\t\t})\n\t\t\tt.Run(\"map of string to big floats\", func(t *testing.T) {\n\t\t\t\trows := dbt.mustQueryContext(ctx, \"SELECT {'x': 1.2345678901234567890123456789012345678, 'y': null}::MAP(VARCHAR, DECIMAL(38, 37)) as structured_type\")\n\t\t\t\tdefer rows.Close()\n\t\t\t\trows.Next()\n\t\t\t\tvar v *map[string]*big.Float\n\t\t\t\terr := rows.Scan(&v)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tbigFloat, b := new(big.Float).SetPrec((*v)[\"x\"].Prec()).SetString(\"1.2345678901234567890123456789012345678\")\n\t\t\t\tassertTrueE(t, b)\n\t\t\t\tassertEqualE(t, bigFloat.Cmp((*v)[\"x\"]), 0)\n\t\t\t\tassertEqualE(t, (*v)[\"y\"], (*big.Float)(nil))\n\t\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertEqualE(t, len(columnTypes), 1)\n\t\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[map[string]*big.Float]())\n\t\t\t\tassertEqualE(t, 
columnTypes[0].DatabaseTypeName(), \"MAP\")\n\t\t\t\tassertEqualE(t, columnTypes[0].Name(), \"STRUCTURED_TYPE\")\n\t\t\t})\n\t\t\tt.Run(\"map of int64 to big ints\", func(t *testing.T) {\n\t\t\t\trows := dbt.mustQueryContext(ctx, \"SELECT {'1': 10000000000000000000000000000000000000}::MAP(INTEGER, DECIMAL(38, 0)) as structured_type\")\n\t\t\t\tdefer rows.Close()\n\t\t\t\trows.Next()\n\t\t\t\tvar v *map[int64]*big.Int\n\t\t\t\terr := rows.Scan(&v)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tbigInt, b := new(big.Int).SetString(\"10000000000000000000000000000000000000\", 10)\n\t\t\t\tassertTrueF(t, b)\n\t\t\t\tassertEqualE(t, bigInt.Cmp((*v)[1]), 0)\n\t\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertEqualE(t, len(columnTypes), 1)\n\t\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[map[int64]*big.Int]())\n\t\t\t\tassertEqualE(t, columnTypes[0].DatabaseTypeName(), \"MAP\")\n\t\t\t\tassertEqualE(t, columnTypes[0].Name(), \"STRUCTURED_TYPE\")\n\t\t\t})\n\t\t\tt.Run(\"map of int64 to big floats\", func(t *testing.T) {\n\t\t\t\trows := dbt.mustQueryContext(ctx, \"SELECT {'1': 1.2345678901234567890123456789012345678}::MAP(INTEGER, DECIMAL(38, 37)) as structured_type\")\n\t\t\t\tdefer rows.Close()\n\t\t\t\trows.Next()\n\t\t\t\tvar v *map[int64]*big.Float\n\t\t\t\terr := rows.Scan(&v)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tbigFloat, b := new(big.Float).SetPrec((*v)[1].Prec()).SetString(\"1.2345678901234567890123456789012345678\")\n\t\t\t\tassertTrueE(t, b)\n\t\t\t\tassertEqualE(t, bigFloat.Cmp((*v)[1]), 0)\n\t\t\t\tcolumnTypes, err := rows.ColumnTypes()\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertEqualE(t, len(columnTypes), 1)\n\t\t\t\tassertEqualE(t, columnTypes[0].ScanType(), reflect.TypeFor[map[int64]*big.Float]())\n\t\t\t\tassertEqualE(t, columnTypes[0].DatabaseTypeName(), \"MAP\")\n\t\t\t\tassertEqualE(t, columnTypes[0].Name(), \"STRUCTURED_TYPE\")\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc forAllStructureTypeFormats(dbt *DBTest, f 
func(t *testing.T, format string)) {\n\tfor _, tc := range []struct {\n\t\tname        string\n\t\tforceFormat func(test *DBTest)\n\t}{\n\t\t{\n\t\t\tname: \"JSON\",\n\t\t\tforceFormat: func(test *DBTest) {\n\t\t\t\ttest.forceJSON()\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"ARROW\",\n\t\t\tforceFormat: func(test *DBTest) {\n\t\t\t\ttest.forceArrow()\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"NATIVE_ARROW\",\n\t\t\tforceFormat: func(test *DBTest) {\n\t\t\t\ttest.forceNativeArrow()\n\t\t\t},\n\t\t},\n\t} {\n\t\tdbt.Run(tc.name, func(t *testing.T) {\n\t\t\ttc.forceFormat(dbt)\n\t\t\tdbt.enableStructuredTypes()\n\t\t\tf(t, tc.name)\n\t\t})\n\t}\n}\n\nfunc skipForStringingNativeArrow(t *testing.T, format string) {\n\tif format == \"NATIVE_ARROW\" {\n\t\tt.Skip(\"returning native arrow structured types as string is currently not supported\")\n\t}\n}\n"
  },
  {
    "path": "structured_type_write_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"fmt\"\n\t\"reflect\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc TestBindingVariant(t *testing.T) {\n\tt.Skip(\"binding variant is currently not supported\")\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE TABLE test_variant_binding (var VARIANT)\")\n\t\tdefer func() {\n\t\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test_variant_binding\")\n\t\t}()\n\t\tdbt.mustExec(\"INSERT INTO test_variant_binding SELECT (?)\", DataTypeVariant, nil)\n\t\tdbt.mustExec(\"INSERT INTO test_variant_binding SELECT (?)\", DataTypeVariant, sql.NullString{Valid: false})\n\t\tdbt.mustExec(\"INSERT INTO test_variant_binding SELECT (?)\", DataTypeVariant, \"{'s': 'some string'}\")\n\t\tdbt.mustExec(\"INSERT INTO test_variant_binding SELECT (?)\", DataTypeVariant, sql.NullString{Valid: true, String: \"{'s': 'some string2'}\"})\n\t\trows := dbt.mustQuery(\"SELECT * FROM test_variant_binding\")\n\t\tdefer rows.Close()\n\t\tvar res sql.NullString\n\n\t\tassertTrueF(t, rows.Next())\n\t\terr := rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertFalseF(t, res.Valid)\n\n\t\tassertTrueF(t, rows.Next())\n\t\terr = rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertFalseF(t, res.Valid)\n\n\t\tassertTrueF(t, rows.Next())\n\t\terr = rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertTrueE(t, res.Valid)\n\t\tassertEqualIgnoringWhitespaceE(t, res.String, `{\"s\": \"some string\"}`)\n\n\t\tassertTrueF(t, rows.Next())\n\t\terr = rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertTrueE(t, res.Valid)\n\t\tassertEqualIgnoringWhitespaceE(t, res.String, `{\"s\": \"some string2\"}`)\n\t})\n}\n\nfunc TestBindingObjectWithoutSchema(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE TABLE test_object_binding (obj OBJECT)\")\n\t\tdefer func() {\n\t\t\tdbt.mustExec(\"DROP TABLE IF EXISTS 
test_object_binding\")\n\t\t}()\n\t\tdbt.mustExec(\"INSERT INTO test_object_binding SELECT (?)\", DataTypeObject, nil)\n\t\tdbt.mustExec(\"INSERT INTO test_object_binding SELECT (?)\", DataTypeObject, sql.NullString{Valid: false})\n\t\tdbt.mustExec(\"INSERT INTO test_object_binding SELECT (?)\", DataTypeObject, \"{'s': 'some string'}\")\n\t\tdbt.mustExec(\"INSERT INTO test_object_binding SELECT (?)\", DataTypeObject, sql.NullString{Valid: true, String: \"{'s': 'some string2'}\"})\n\t\trows := dbt.mustQuery(\"SELECT * FROM test_object_binding\")\n\t\tdefer rows.Close()\n\t\tvar res sql.NullString\n\n\t\tassertTrueF(t, rows.Next())\n\t\terr := rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertFalseF(t, res.Valid)\n\n\t\tassertTrueF(t, rows.Next())\n\t\terr = rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertFalseF(t, res.Valid)\n\n\t\tassertTrueF(t, rows.Next())\n\t\terr = rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertTrueE(t, res.Valid)\n\t\tassertEqualIgnoringWhitespaceE(t, res.String, `{\"s\": \"some string\"}`)\n\n\t\tassertTrueF(t, rows.Next())\n\t\terr = rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertTrueE(t, res.Valid)\n\t\tassertEqualIgnoringWhitespaceE(t, res.String, `{\"s\": \"some string2\"}`)\n\t})\n}\n\nfunc TestBindingArrayWithoutSchema(t *testing.T) {\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE TABLE test_array_binding (arr ARRAY)\")\n\t\tdefer func() {\n\t\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test_array_binding\")\n\t\t}()\n\t\tdbt.mustExec(\"INSERT INTO test_array_binding SELECT (?)\", DataTypeArray, nil)\n\t\tdbt.mustExec(\"INSERT INTO test_array_binding SELECT (?)\", DataTypeArray, sql.NullString{Valid: false})\n\t\tdbt.mustExec(\"INSERT INTO test_array_binding SELECT (?)\", DataTypeArray, \"[1, 2, 3]\")\n\t\tdbt.mustExec(\"INSERT INTO test_array_binding SELECT (?)\", DataTypeArray, sql.NullString{Valid: true, String: \"[1, 2, 3]\"})\n\t\tdbt.mustExec(\"INSERT INTO 
test_array_binding SELECT (?)\", DataTypeArray, []int{1, 2, 3})\n\t\trows := dbt.mustQuery(\"SELECT * FROM test_array_binding\")\n\t\tdefer rows.Close()\n\t\tvar res sql.NullString\n\n\t\tassertTrueF(t, rows.Next())\n\t\terr := rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertFalseF(t, res.Valid)\n\n\t\tassertTrueF(t, rows.Next())\n\t\terr = rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertFalseF(t, res.Valid)\n\n\t\tassertTrueF(t, rows.Next())\n\t\terr = rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertTrueE(t, res.Valid)\n\t\tassertEqualIgnoringWhitespaceE(t, res.String, `[1, 2, 3]`)\n\n\t\tassertTrueF(t, rows.Next())\n\t\terr = rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertTrueE(t, res.Valid)\n\t\tassertEqualIgnoringWhitespaceE(t, res.String, `[1, 2, 3]`)\n\n\t\tassertTrueF(t, rows.Next())\n\t\terr = rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertTrueE(t, res.Valid)\n\t\tassertEqualIgnoringWhitespaceE(t, res.String, `[1, 2, 3]`)\n\t})\n}\n\nfunc TestBindingObjectWithSchema(t *testing.T) {\n\twarsawTz, err := time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_object_binding (obj OBJECT(s VARCHAR, b TINYINT, i16 SMALLINT, i32 INTEGER, i64 BIGINT, f32 FLOAT, f64 DOUBLE, nfraction NUMBER(38, 9), bo boolean, bi BINARY, date DATE, time TIME, ltz TIMESTAMPLTZ, ntz TIMESTAMPNTZ, tz TIMESTAMPTZ, so OBJECT(s VARCHAR, i INTEGER), sArr ARRAY(VARCHAR), f64Arr ARRAY(DOUBLE), someMap MAP(VARCHAR, BOOLEAN), uuid VARCHAR))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test_object_binding\")\n\t\t}()\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMESTAMP_OUTPUT_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF9 TZHTZM'\")\n\t\to := objectWithAllTypes{\n\t\t\ts:         \"some string\",\n\t\t\tb:         1,\n\t\t\ti16:       2,\n\t\t\ti32:       3,\n\t\t\ti64:       4,\n\t\t\tf32:       1.1,\n\t\t\tf64:       2.2,\n\t\t\tnfraction: 3.3,\n\t\t\tbo:        true,\n\t\t\tbi:        []byte{'a', 
'b', 'c'},\n\t\t\tdate:      time.Date(2024, time.May, 24, 0, 0, 0, 0, time.UTC),\n\t\t\ttime:      time.Date(1, 1, 1, 11, 22, 33, 0, time.UTC),\n\t\t\tltz:       time.Date(2025, time.May, 24, 11, 22, 33, 44, warsawTz),\n\t\t\tntz:       time.Date(2026, time.May, 24, 11, 22, 33, 0, time.UTC),\n\t\t\ttz:        time.Date(2027, time.May, 24, 11, 22, 33, 44, warsawTz),\n\t\t\tso:        &simpleObject{s: \"another string\", i: 123},\n\t\t\tsArr:      []string{\"a\", \"b\"},\n\t\t\tf64Arr:    []float64{1.1, 2.2},\n\t\t\tsomeMap:   map[string]bool{\"a\": true, \"b\": false},\n\t\t\tuuid:      newTestUUID(),\n\t\t}\n\t\tdbt.mustExecT(t, \"INSERT INTO test_object_binding SELECT (?)\", o)\n\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT * FROM test_object_binding WHERE obj = ?\", o)\n\t\tdefer rows.Close()\n\n\t\tassertTrueE(t, rows.Next())\n\t\tvar res objectWithAllTypes\n\t\terr := rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, res.s, o.s)\n\t\tassertEqualE(t, res.b, o.b)\n\t\tassertEqualE(t, res.i16, o.i16)\n\t\tassertEqualE(t, res.i32, o.i32)\n\t\tassertEqualE(t, res.i64, o.i64)\n\t\tassertEqualE(t, res.f32, o.f32)\n\t\tassertEqualE(t, res.f64, o.f64)\n\t\tassertEqualE(t, res.nfraction, o.nfraction)\n\t\tassertEqualE(t, res.bo, o.bo)\n\t\tassertDeepEqualE(t, res.bi, o.bi)\n\t\tassertTrueE(t, res.date.Equal(o.date))\n\t\tassertEqualE(t, res.time.Hour(), o.time.Hour())\n\t\tassertEqualE(t, res.time.Minute(), o.time.Minute())\n\t\tassertEqualE(t, res.time.Second(), o.time.Second())\n\t\tassertTrueE(t, res.ltz.Equal(o.ltz))\n\t\tassertTrueE(t, res.tz.Equal(o.tz))\n\t\tassertTrueE(t, res.ntz.Equal(o.ntz))\n\t\tassertDeepEqualE(t, res.so, o.so)\n\t\tassertDeepEqualE(t, res.sArr, o.sArr)\n\t\tassertDeepEqualE(t, res.f64Arr, o.f64Arr)\n\t\tassertDeepEqualE(t, res.someMap, o.someMap)\n\t\tassertEqualE(t, res.uuid.String(), o.uuid.String())\n\t})\n}\n\nfunc TestBindingObjectWithNullableFieldsWithSchema(t *testing.T) {\n\twarsawTz, err := 
time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_object_binding (obj OBJECT(s VARCHAR, b TINYINT, i16 SMALLINT, i32 INTEGER, i64 BIGINT, f64 DOUBLE, bo boolean, bi BINARY, date DATE, time TIME, ltz TIMESTAMPLTZ, ntz TIMESTAMPNTZ, tz TIMESTAMPTZ, so OBJECT(s VARCHAR, i INTEGER), sArr ARRAY(VARCHAR), f64Arr ARRAY(DOUBLE), someMap MAP(VARCHAR, BOOLEAN), uuid VARCHAR))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test_object_binding\")\n\t\t}()\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMESTAMP_OUTPUT_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF9 TZHTZM'\")\n\t\tt.Run(\"not null\", func(t *testing.T) {\n\t\t\to := &objectWithAllTypesNullable{\n\t\t\t\ts:       sql.NullString{String: \"some string\", Valid: true},\n\t\t\t\tb:       sql.NullByte{Byte: 1, Valid: true},\n\t\t\t\ti16:     sql.NullInt16{Int16: 2, Valid: true},\n\t\t\t\ti32:     sql.NullInt32{Int32: 3, Valid: true},\n\t\t\t\ti64:     sql.NullInt64{Int64: 4, Valid: true},\n\t\t\t\tf64:     sql.NullFloat64{Float64: 2.2, Valid: true},\n\t\t\t\tbo:      sql.NullBool{Bool: true, Valid: true},\n\t\t\t\tbi:      []byte{'a', 'b', 'c'},\n\t\t\t\tdate:    sql.NullTime{Time: time.Date(2024, time.May, 24, 0, 0, 0, 0, time.UTC), Valid: true},\n\t\t\t\ttime:    sql.NullTime{Time: time.Date(1, 1, 1, 11, 22, 33, 0, time.UTC), Valid: true},\n\t\t\t\tltz:     sql.NullTime{Time: time.Date(2025, time.May, 24, 11, 22, 33, 44, warsawTz), Valid: true},\n\t\t\t\tntz:     sql.NullTime{Time: time.Date(2026, time.May, 24, 11, 22, 33, 0, time.UTC), Valid: true},\n\t\t\t\ttz:      sql.NullTime{Time: time.Date(2027, time.May, 24, 11, 22, 33, 44, warsawTz), Valid: true},\n\t\t\t\tso:      &simpleObject{s: \"another string\", i: 123},\n\t\t\t\tsArr:    []string{\"a\", 
\"b\"},\n\t\t\t\tf64Arr:  []float64{1.1, 2.2},\n\t\t\t\tsomeMap: map[string]bool{\"a\": true, \"b\": false},\n\t\t\t\tuuid:    newTestUUID(),\n\t\t\t}\n\t\t\tdbt.mustExecT(t, \"INSERT INTO test_object_binding SELECT (?)\", o)\n\t\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT * FROM test_object_binding WHERE obj = ?\", o)\n\t\t\tdefer rows.Close()\n\n\t\t\tassertTrueE(t, rows.Next())\n\t\t\tvar res objectWithAllTypesNullable\n\t\t\terr := rows.Scan(&res)\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, res.s, o.s)\n\t\t\tassertEqualE(t, res.b, o.b)\n\t\t\tassertEqualE(t, res.i16, o.i16)\n\t\t\tassertEqualE(t, res.i32, o.i32)\n\t\t\tassertEqualE(t, res.i64, o.i64)\n\t\t\tassertEqualE(t, res.f64, o.f64)\n\t\t\tassertEqualE(t, res.bo, o.bo)\n\t\t\tassertDeepEqualE(t, res.bi, o.bi)\n\t\t\tassertTrueE(t, res.date.Time.Equal(o.date.Time))\n\t\t\tassertEqualE(t, res.time.Time.Hour(), o.time.Time.Hour())\n\t\t\tassertEqualE(t, res.time.Time.Minute(), o.time.Time.Minute())\n\t\t\tassertEqualE(t, res.time.Time.Second(), o.time.Time.Second())\n\t\t\tassertTrueE(t, res.ltz.Time.Equal(o.ltz.Time))\n\t\t\tassertTrueE(t, res.tz.Time.Equal(o.tz.Time))\n\t\t\tassertTrueE(t, res.ntz.Time.Equal(o.ntz.Time))\n\t\t\tassertDeepEqualE(t, res.so, o.so)\n\t\t\tassertDeepEqualE(t, res.sArr, o.sArr)\n\t\t\tassertDeepEqualE(t, res.f64Arr, o.f64Arr)\n\t\t\tassertDeepEqualE(t, res.someMap, o.someMap)\n\t\t\tassertEqualE(t, res.uuid.String(), o.uuid.String())\n\t\t})\n\t\tt.Run(\"null\", func(t *testing.T) {\n\t\t\to := &objectWithAllTypesNullable{\n\t\t\t\ts:       sql.NullString{},\n\t\t\t\tb:       sql.NullByte{},\n\t\t\t\ti16:     sql.NullInt16{},\n\t\t\t\ti32:     sql.NullInt32{},\n\t\t\t\ti64:     sql.NullInt64{},\n\t\t\t\tf64:     sql.NullFloat64{},\n\t\t\t\tbo:      sql.NullBool{},\n\t\t\t\tbi:      nil,\n\t\t\t\tdate:    sql.NullTime{},\n\t\t\t\ttime:    sql.NullTime{},\n\t\t\t\tltz:     sql.NullTime{},\n\t\t\t\tntz:     sql.NullTime{},\n\t\t\t\ttz:      sql.NullTime{},\n\t\t\t\tso:  
    nil,\n\t\t\t\tsArr:    nil,\n\t\t\t\tf64Arr:  nil,\n\t\t\t\tsomeMap: nil,\n\t\t\t}\n\t\t\tdbt.mustExecT(t, \"INSERT INTO test_object_binding SELECT (?)\", o)\n\t\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT * FROM test_object_binding WHERE obj = ?\", o)\n\t\t\tdefer rows.Close()\n\n\t\t\tassertTrueE(t, rows.Next())\n\t\t\tvar res objectWithAllTypesNullable\n\t\t\terr := rows.Scan(&res)\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, res.s, o.s)\n\t\t\tassertEqualE(t, res.b, o.b)\n\t\t\tassertEqualE(t, res.i16, o.i16)\n\t\t\tassertEqualE(t, res.i32, o.i32)\n\t\t\tassertEqualE(t, res.i64, o.i64)\n\t\t\tassertEqualE(t, res.f64, o.f64)\n\t\t\tassertEqualE(t, res.bo, o.bo)\n\t\t\tassertDeepEqualE(t, res.bi, o.bi)\n\t\t\tassertTrueE(t, res.date.Time.Equal(o.date.Time))\n\t\t\tassertEqualE(t, res.time.Time.Hour(), o.time.Time.Hour())\n\t\t\tassertEqualE(t, res.time.Time.Minute(), o.time.Time.Minute())\n\t\t\tassertEqualE(t, res.time.Time.Second(), o.time.Time.Second())\n\t\t\tassertTrueE(t, res.ltz.Time.Equal(o.ltz.Time))\n\t\t\tassertTrueE(t, res.tz.Time.Equal(o.tz.Time))\n\t\t\tassertTrueE(t, res.ntz.Time.Equal(o.ntz.Time))\n\t\t\tassertDeepEqualE(t, res.so, o.so)\n\t\t\tassertDeepEqualE(t, res.sArr, o.sArr)\n\t\t\tassertDeepEqualE(t, res.f64Arr, o.f64Arr)\n\t\t\tassertDeepEqualE(t, res.someMap, o.someMap)\n\t\t})\n\t})\n}\n\nfunc TestBindingObjectWithSchemaSimpleWrite(t *testing.T) {\n\twarsawTz, err := time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_object_binding (obj OBJECT(s VARCHAR, b TINYINT, i16 SMALLINT, i32 INTEGER, i64 BIGINT, f32 FLOAT, f64 DOUBLE, nfraction NUMBER(38, 9), bo BOOLEAN, bi BINARY, date DATE, time TIME, ltz TIMESTAMP_LTZ, tz TIMESTAMP_TZ, ntz TIMESTAMP_NTZ, so OBJECT(s VARCHAR, i INTEGER), sArr ARRAY(VARCHAR), f64Arr ARRAY(DOUBLE), someMap 
MAP(VARCHAR, BOOLEAN)))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test_object_binding\")\n\t\t}()\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMESTAMP_OUTPUT_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF9 TZHTZM'\")\n\t\to := &objectWithAllTypesSimpleScan{\n\t\t\tS:         \"some string\",\n\t\t\tB:         1,\n\t\t\tI16:       2,\n\t\t\tI32:       3,\n\t\t\tI64:       4,\n\t\t\tF32:       1.1,\n\t\t\tF64:       2.2,\n\t\t\tNfraction: 3.3,\n\t\t\tBo:        true,\n\t\t\tBi:        []byte{'a', 'b', 'c'},\n\t\t\tDate:      time.Date(2024, time.May, 24, 0, 0, 0, 0, time.UTC),\n\t\t\tTime:      time.Date(1, 1, 1, 11, 22, 33, 0, time.UTC),\n\t\t\tLtz:       time.Date(2025, time.May, 24, 11, 22, 33, 44, warsawTz),\n\t\t\tNtz:       time.Date(2026, time.May, 24, 11, 22, 33, 0, time.UTC),\n\t\t\tTz:        time.Date(2027, time.May, 24, 11, 22, 33, 44, warsawTz),\n\t\t\tSo:        &simpleObject{s: \"another string\", i: 123},\n\t\t\tSArr:      []string{\"a\", \"b\"},\n\t\t\tF64Arr:    []float64{1.1, 2.2},\n\t\t\tSomeMap:   map[string]bool{\"a\": true, \"b\": false},\n\t\t}\n\t\tdbt.mustExecT(t, \"INSERT INTO test_object_binding SELECT (?)\", o)\n\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT * FROM test_object_binding WHERE obj = ?\", o)\n\t\tdefer rows.Close()\n\n\t\tassertTrueE(t, rows.Next())\n\t\tvar res objectWithAllTypesSimpleScan\n\t\terr := rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, res.S, o.S)\n\t\tassertEqualE(t, res.B, o.B)\n\t\tassertEqualE(t, res.I16, o.I16)\n\t\tassertEqualE(t, res.I32, o.I32)\n\t\tassertEqualE(t, res.I64, o.I64)\n\t\tassertEqualE(t, res.F32, o.F32)\n\t\tassertEqualE(t, res.F64, o.F64)\n\t\tassertEqualE(t, res.Nfraction, o.Nfraction)\n\t\tassertEqualE(t, res.Bo, o.Bo)\n\t\tassertDeepEqualE(t, res.Bi, o.Bi)\n\t\tassertTrueE(t, res.Date.Equal(o.Date))\n\t\tassertEqualE(t, res.Time.Hour(), o.Time.Hour())\n\t\tassertEqualE(t, res.Time.Minute(), 
o.Time.Minute())\n\t\tassertEqualE(t, res.Time.Second(), o.Time.Second())\n\t\tassertTrueE(t, res.Ltz.Equal(o.Ltz))\n\t\tassertTrueE(t, res.Tz.Equal(o.Tz))\n\t\tassertTrueE(t, res.Ntz.Equal(o.Ntz))\n\t\tassertDeepEqualE(t, res.So, o.So)\n\t\tassertDeepEqualE(t, res.SArr, o.SArr)\n\t\tassertDeepEqualE(t, res.F64Arr, o.F64Arr)\n\t\tassertDeepEqualE(t, res.SomeMap, o.SomeMap)\n\t})\n}\n\nfunc TestBindingObjectWithNullableFieldsWithSchemaSimpleWrite(t *testing.T) {\n\twarsawTz, err := time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.forceJSON()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_object_binding (obj OBJECT(s VARCHAR, b TINYINT, i16 SMALLINT, i32 INTEGER, i64 BIGINT, f64 DOUBLE, bo boolean, bi BINARY, date DATE, time TIME, ltz TIMESTAMPLTZ, tz TIMESTAMPTZ, ntz TIMESTAMPNTZ, so OBJECT(s VARCHAR, i INTEGER), sArr ARRAY(VARCHAR), f64Arr ARRAY(DOUBLE), someMap MAP(VARCHAR, BOOLEAN)))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test_object_binding\")\n\t\t}()\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMESTAMP_OUTPUT_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF9 TZHTZM'\")\n\t\tt.Run(\"not null\", func(t *testing.T) {\n\t\t\to := &objectWithAllTypesNullableSimpleScan{\n\t\t\t\tS:       sql.NullString{String: \"some string\", Valid: true},\n\t\t\t\tB:       sql.NullByte{Byte: 1, Valid: true},\n\t\t\t\tI16:     sql.NullInt16{Int16: 2, Valid: true},\n\t\t\t\tI32:     sql.NullInt32{Int32: 3, Valid: true},\n\t\t\t\tI64:     sql.NullInt64{Int64: 4, Valid: true},\n\t\t\t\tF64:     sql.NullFloat64{Float64: 2.2, Valid: true},\n\t\t\t\tBo:      sql.NullBool{Bool: true, Valid: true},\n\t\t\t\tBi:      []byte{'a', 'b', 'c'},\n\t\t\t\tDate:    sql.NullTime{Time: time.Date(2024, time.May, 24, 0, 0, 0, 0, time.UTC), Valid: true},\n\t\t\t\tTime:    
sql.NullTime{Time: time.Date(1, 1, 1, 11, 22, 33, 0, time.UTC), Valid: true},\n\t\t\t\tLtz:     sql.NullTime{Time: time.Date(2025, time.May, 24, 11, 22, 33, 44, warsawTz), Valid: true},\n\t\t\t\tNtz:     sql.NullTime{Time: time.Date(2026, time.May, 24, 11, 22, 33, 0, time.UTC), Valid: true},\n\t\t\t\tTz:      sql.NullTime{Time: time.Date(2027, time.May, 24, 11, 22, 33, 44, warsawTz), Valid: true},\n\t\t\t\tSo:      &simpleObject{s: \"another string\", i: 123},\n\t\t\t\tSArr:    []string{\"a\", \"b\"},\n\t\t\t\tF64Arr:  []float64{1.1, 2.2},\n\t\t\t\tSomeMap: map[string]bool{\"a\": true, \"b\": false},\n\t\t\t}\n\t\t\tdbt.mustExecT(t, \"INSERT INTO test_object_binding SELECT (?)\", o)\n\t\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT * FROM test_object_binding WHERE obj = ?\", o)\n\t\t\tdefer rows.Close()\n\n\t\t\tassertTrueE(t, rows.Next())\n\t\t\tvar res objectWithAllTypesNullableSimpleScan\n\t\t\terr := rows.Scan(&res)\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, res.S, o.S)\n\t\t\tassertEqualE(t, res.B, o.B)\n\t\t\tassertEqualE(t, res.I16, o.I16)\n\t\t\tassertEqualE(t, res.I32, o.I32)\n\t\t\tassertEqualE(t, res.I64, o.I64)\n\t\t\tassertEqualE(t, res.F64, o.F64)\n\t\t\tassertEqualE(t, res.Bo, o.Bo)\n\t\t\tassertDeepEqualE(t, res.Bi, o.Bi)\n\t\t\tassertTrueE(t, res.Date.Time.Equal(o.Date.Time))\n\t\t\tassertEqualE(t, res.Time.Time.Hour(), o.Time.Time.Hour())\n\t\t\tassertEqualE(t, res.Time.Time.Minute(), o.Time.Time.Minute())\n\t\t\tassertEqualE(t, res.Time.Time.Second(), o.Time.Time.Second())\n\t\t\tassertTrueE(t, res.Ltz.Time.Equal(o.Ltz.Time))\n\t\t\tassertTrueE(t, res.Tz.Time.Equal(o.Tz.Time))\n\t\t\tassertTrueE(t, res.Ntz.Time.Equal(o.Ntz.Time))\n\t\t\tassertDeepEqualE(t, res.So, o.So)\n\t\t\tassertDeepEqualE(t, res.SArr, o.SArr)\n\t\t\tassertDeepEqualE(t, res.F64Arr, o.F64Arr)\n\t\t\tassertDeepEqualE(t, res.SomeMap, o.SomeMap)\n\t\t})\n\t\tt.Run(\"null\", func(t *testing.T) {\n\t\t\to := &objectWithAllTypesNullableSimpleScan{\n\t\t\t\tS:       
sql.NullString{},\n\t\t\t\tB:       sql.NullByte{},\n\t\t\t\tI16:     sql.NullInt16{},\n\t\t\t\tI32:     sql.NullInt32{},\n\t\t\t\tI64:     sql.NullInt64{},\n\t\t\t\tF64:     sql.NullFloat64{},\n\t\t\t\tBo:      sql.NullBool{},\n\t\t\t\tBi:      nil,\n\t\t\t\tDate:    sql.NullTime{},\n\t\t\t\tTime:    sql.NullTime{},\n\t\t\t\tLtz:     sql.NullTime{},\n\t\t\t\tNtz:     sql.NullTime{},\n\t\t\t\tTz:      sql.NullTime{},\n\t\t\t\tSo:      nil,\n\t\t\t\tSArr:    nil,\n\t\t\t\tF64Arr:  nil,\n\t\t\t\tSomeMap: nil,\n\t\t\t}\n\t\t\tdbt.mustExecT(t, \"INSERT INTO test_object_binding SELECT (?)\", o)\n\t\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT * FROM test_object_binding WHERE obj = ?\", o)\n\t\t\tdefer rows.Close()\n\n\t\t\tassertTrueE(t, rows.Next())\n\t\t\tvar res objectWithAllTypesNullableSimpleScan\n\t\t\terr := rows.Scan(&res)\n\t\t\tassertNilF(t, err)\n\t\t\tassertEqualE(t, res.S, o.S)\n\t\t\tassertEqualE(t, res.B, o.B)\n\t\t\tassertEqualE(t, res.I16, o.I16)\n\t\t\tassertEqualE(t, res.I32, o.I32)\n\t\t\tassertEqualE(t, res.I64, o.I64)\n\t\t\tassertEqualE(t, res.F64, o.F64)\n\t\t\tassertEqualE(t, res.Bo, o.Bo)\n\t\t\tassertDeepEqualE(t, res.Bi, o.Bi)\n\t\t\tassertTrueE(t, res.Date.Time.Equal(o.Date.Time))\n\t\t\tassertEqualE(t, res.Time.Time.Hour(), o.Time.Time.Hour())\n\t\t\tassertEqualE(t, res.Time.Time.Minute(), o.Time.Time.Minute())\n\t\t\tassertEqualE(t, res.Time.Time.Second(), o.Time.Time.Second())\n\t\t\tassertTrueE(t, res.Ltz.Time.Equal(o.Ltz.Time))\n\t\t\tassertTrueE(t, res.Tz.Time.Equal(o.Tz.Time))\n\t\t\tassertTrueE(t, res.Ntz.Time.Equal(o.Ntz.Time))\n\t\t\tassertDeepEqualE(t, res.So, o.So)\n\t\t\tassertDeepEqualE(t, res.SArr, o.SArr)\n\t\t\tassertDeepEqualE(t, res.F64Arr, o.F64Arr)\n\t\t\tassertDeepEqualE(t, res.SomeMap, o.SomeMap)\n\t\t})\n\t})\n}\n\ntype objectWithAllTypesWrapper struct {\n\to *objectWithAllTypes\n}\n\nfunc (o *objectWithAllTypesWrapper) Scan(val any) error {\n\tst := val.(StructuredObject)\n\tvar owat *objectWithAllTypes\n\t_, 
err := st.GetStruct(\"o\", owat)\n\tif err != nil {\n\t\treturn err\n\t}\n\to.o = owat\n\treturn nil\n}\n\nfunc (o *objectWithAllTypesWrapper) Write(sowc StructuredObjectWriterContext) error {\n\treturn sowc.WriteNullableStruct(\"o\", o.o, reflect.TypeFor[objectWithAllTypes]())\n}\n\nfunc TestBindingObjectWithAllTypesNullable(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.forceJSON()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_object_binding (o OBJECT(o OBJECT(s VARCHAR, b TINYINT, i16 SMALLINT, i32 INTEGER, i64 BIGINT, f32 FLOAT, f64 DOUBLE, nfraction NUMBER(38, 9), bo boolean, bi BINARY, date DATE, time TIME, ltz TIMESTAMPLTZ, tz TIMESTAMPTZ, ntz TIMESTAMPNTZ, so OBJECT(s VARCHAR, i INTEGER), sArr ARRAY(VARCHAR), f64Arr ARRAY(DOUBLE), someMap MAP(VARCHAR, BOOLEAN), uuid VARCHAR)))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test_object_binding\")\n\t\t}()\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tdbt.mustExec(\"ALTER SESSION SET TIMESTAMP_OUTPUT_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF9 TZHTZM'\")\n\t\to := &objectWithAllTypesWrapper{}\n\t\tdbt.mustExec(\"INSERT INTO test_object_binding SELECT (?)\", o)\n\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT * FROM test_object_binding WHERE o = ?\", o)\n\t\tdefer rows.Close()\n\n\t\tassertTrueE(t, rows.Next())\n\t\tvar res objectWithAllTypesWrapper\n\t\terr := rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertDeepEqualE(t, o, &res)\n\t})\n}\n\nfunc TestBindingObjectWithSchemaWithCustomNameAndIgnoredField(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_object_binding (obj OBJECT(anotherName VARCHAR))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test_object_binding\")\n\t\t}()\n\t\to := 
&objectWithCustomNameAndIgnoredField{\n\t\t\tSomeString: \"some string\",\n\t\t\tIgnoreMe:   \"ignore me\",\n\t\t}\n\t\tdbt.mustExec(\"INSERT INTO test_object_binding SELECT (?)\", o)\n\t\trows := dbt.mustQueryContext(ctx, \"SELECT * FROM test_object_binding WHERE obj = ?\", o)\n\t\tdefer rows.Close()\n\n\t\tassertTrueE(t, rows.Next())\n\t\tvar res objectWithCustomNameAndIgnoredField\n\t\terr := rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertEqualE(t, res.SomeString, \"some string\")\n\t\tassertEqualE(t, res.IgnoreMe, \"\")\n\t})\n}\n\nfunc TestBindingNullStructuredObjects(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_object_binding (obj OBJECT(s VARCHAR, i INTEGER))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test_object_binding\")\n\t\t}()\n\t\tdbt.mustExec(\"INSERT INTO test_object_binding SELECT (?)\", DataTypeNilObject, reflect.TypeFor[simpleObject]())\n\n\t\trows := dbt.mustQueryContext(ctx, \"SELECT * FROM test_object_binding\")\n\t\tdefer rows.Close()\n\n\t\tassertTrueE(t, rows.Next())\n\t\tvar res *simpleObject\n\t\terr := rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertNilE(t, res)\n\t})\n}\n\nfunc TestBindingArrayWithSchema(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\ttestcases := []struct {\n\t\t\tname      string\n\t\t\tarrayType string\n\t\t\tvalues    []any\n\t\t\texpected  any\n\t\t}{\n\t\t\t{\n\t\t\t\tname:      \"byte - empty\",\n\t\t\t\tarrayType: \"TINYINT\",\n\t\t\t\tvalues:    []any{[]byte{}},\n\t\t\t\texpected:  []int64{},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"byte - not empty\",\n\t\t\t\tarrayType: \"TINYINT\",\n\t\t\t\tvalues:    []any{[]byte{1, 2, 3}},\n\t\t\t\texpected:  []int64{1, 2, 3},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      
\"int16\",\n\t\t\t\tarrayType: \"SMALLINT\",\n\t\t\t\tvalues:    []any{[]int16{1, 2, 3}},\n\t\t\t\texpected:  []int64{1, 2, 3},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"int16 - empty\",\n\t\t\t\tarrayType: \"SMALLINT\",\n\t\t\t\tvalues:    []any{[]int16{}},\n\t\t\t\texpected:  []int64{},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"int32\",\n\t\t\t\tarrayType: \"INTEGER\",\n\t\t\t\tvalues:    []any{[]int32{1, 2, 3}},\n\t\t\t\texpected:  []int64{1, 2, 3},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"int64\",\n\t\t\t\tarrayType: \"BIGINT\",\n\t\t\t\tvalues:    []any{[]int64{1, 2, 3}},\n\t\t\t\texpected:  []int64{1, 2, 3},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"float32\",\n\t\t\t\tarrayType: \"FLOAT\",\n\t\t\t\tvalues:    []any{[]float32{1.2, 3.4}},\n\t\t\t\texpected:  []float64{1.2, 3.4},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"float64\",\n\t\t\t\tarrayType: \"FLOAT\",\n\t\t\t\tvalues:    []any{[]float64{1.2, 3.4}},\n\t\t\t\texpected:  []float64{1.2, 3.4},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"bool\",\n\t\t\t\tarrayType: \"BOOLEAN\",\n\t\t\t\tvalues:    []any{[]bool{true, false}},\n\t\t\t\texpected:  []bool{true, false},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"binary\",\n\t\t\t\tarrayType: \"BINARY\",\n\t\t\t\tvalues:    []any{DataTypeBinary, [][]byte{{'a', 'b'}, {'c', 'd'}}},\n\t\t\t\texpected:  [][]byte{{'a', 'b'}, {'c', 'd'}},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"binary - empty\",\n\t\t\t\tarrayType: \"BINARY\",\n\t\t\t\tvalues:    []any{DataTypeBinary, [][]byte{}},\n\t\t\t\texpected:  [][]byte{},\n\t\t\t},\n\t\t\t{\n\t\t\t\tname:      \"date\",\n\t\t\t\tarrayType: \"DATE\",\n\t\t\t\tvalues:    []any{DataTypeDate, []time.Time{time.Date(2024, time.June, 4, 0, 0, 0, 0, time.UTC)}},\n\t\t\t\texpected:  []time.Time{time.Date(2024, time.June, 4, 0, 0, 0, 0, time.UTC)},\n\t\t\t},\n\t\t}\n\t\tfor _, tc := range testcases {\n\t\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\t\tdbt.mustExecT(t, fmt.Sprintf(\"CREATE OR REPLACE TABLE test_array_binding (arr ARRAY(%s))\", 
tc.arrayType))\n\t\t\t\tdefer func() {\n\t\t\t\t\tdbt.mustExecT(t, \"DROP TABLE IF EXISTS test_array_binding\")\n\t\t\t\t}()\n\n\t\t\t\tdbt.mustExecT(t, \"INSERT INTO test_array_binding SELECT (?)\", tc.values...)\n\n\t\t\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT * FROM test_array_binding\")\n\t\t\t\tdefer rows.Close()\n\n\t\t\t\tassertTrueE(t, rows.Next())\n\t\t\t\tvar res any\n\t\t\t\terr := rows.Scan(&res)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tassertDeepEqualE(t, res, tc.expected)\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestBindingArrayOfObjects(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_array_binding (arr ARRAY(OBJECT(s VARCHAR, i INTEGER)))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test_array_binding\")\n\t\t}()\n\n\t\tarr := []*simpleObject{{s: \"some string\", i: 123}}\n\t\tdbt.mustExec(\"INSERT INTO test_array_binding SELECT (?)\", arr)\n\n\t\trows := dbt.mustQueryContext(ctx, \"SELECT * FROM test_array_binding WHERE arr = ?\", arr)\n\t\tdefer rows.Close()\n\n\t\tassertTrueE(t, rows.Next())\n\t\tvar res []*simpleObject\n\t\terr := rows.Scan(ScanArrayOfScanners(&res))\n\t\tassertNilF(t, err)\n\t\tassertDeepEqualE(t, res, arr)\n\t})\n}\n\nfunc TestBindingEmptyArrayOfObjects(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_array_binding (arr ARRAY(OBJECT(s VARCHAR, i INTEGER)))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test_array_binding\")\n\t\t}()\n\n\t\tarr := []*simpleObject{}\n\t\tdbt.mustExec(\"INSERT INTO test_array_binding SELECT (?)\", arr)\n\n\t\trows := dbt.mustQueryContext(ctx, \"SELECT * FROM test_array_binding WHERE arr = ?\", arr)\n\t\tdefer rows.Close()\n\n\t\tassertTrueF(t, 
rows.Next())\n\t\tvar res []*simpleObject\n\t\terr := rows.Scan(ScanArrayOfScanners(&res))\n\t\tassertNilF(t, err)\n\t\tassertDeepEqualE(t, res, arr)\n\t})\n}\n\nfunc TestBindingNilArrayOfObjects(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_array_binding (arr ARRAY(OBJECT(s VARCHAR, i INTEGER)))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test_array_binding\")\n\t\t}()\n\n\t\tvar arr []*simpleObject\n\t\tdbt.mustExec(\"INSERT INTO test_array_binding SELECT (?)\", DataTypeNilArray, reflect.TypeFor[simpleObject]())\n\n\t\trows := dbt.mustQueryContext(ctx, \"SELECT * FROM test_array_binding\")\n\t\tdefer rows.Close()\n\n\t\tassertTrueF(t, rows.Next())\n\t\tvar res []*simpleObject\n\t\terr := rows.Scan(ScanArrayOfScanners(&res))\n\t\tassertNilF(t, err)\n\t\tassertDeepEqualE(t, res, arr)\n\t})\n}\n\nfunc TestBindingNilArrayOfInts(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_array_binding (arr ARRAY(INTEGER))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExec(\"DROP TABLE IF EXISTS test_array_binding\")\n\t\t}()\n\n\t\tvar arr *[]int64\n\t\tdbt.mustExec(\"INSERT INTO test_array_binding SELECT (?)\", DataTypeNilArray, reflect.TypeFor[int]())\n\n\t\trows := dbt.mustQueryContext(ctx, \"SELECT * FROM test_array_binding\")\n\t\tdefer rows.Close()\n\n\t\tassertTrueF(t, rows.Next())\n\t\tvar res *[]int64\n\t\terr := rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertDeepEqualE(t, res, arr)\n\t})\n}\n\nfunc TestBindingMap(t *testing.T) {\n\twarsawTz, err := time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\tctx := WithStructuredTypesEnabled(context.Background())\n\ttestcases := []struct {\n\t\ttableDefinition string\n\t\tvalues          
[]any\n\t\texpected        any\n\t\tisTimeOnly      bool\n\t}{\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, VARCHAR\",\n\t\t\tvalues: []any{map[string]string{\n\t\t\t\t\"a\": \"b\",\n\t\t\t\t\"c\": \"d\",\n\t\t\t}},\n\t\t\texpected: map[string]string{\n\t\t\t\t\"a\": \"b\",\n\t\t\t\t\"c\": \"d\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"INTEGER, VARCHAR\",\n\t\t\tvalues: []any{map[int64]string{\n\t\t\t\t1: \"b\",\n\t\t\t\t2: \"d\",\n\t\t\t}},\n\t\t\texpected: map[int64]string{\n\t\t\t\t1: \"b\",\n\t\t\t\t2: \"d\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, BOOLEAN\",\n\t\t\tvalues: []any{map[string]bool{\n\t\t\t\t\"a\": true,\n\t\t\t\t\"c\": false,\n\t\t\t}},\n\t\t\texpected: map[string]bool{\n\t\t\t\t\"a\": true,\n\t\t\t\t\"c\": false,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, INTEGER\",\n\t\t\tvalues: []any{map[string]int64{\n\t\t\t\t\"a\": 1,\n\t\t\t\t\"b\": 2,\n\t\t\t}},\n\t\t\texpected: map[string]int64{\n\t\t\t\t\"a\": 1,\n\t\t\t\t\"b\": 2,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, DOUBLE\",\n\t\t\tvalues: []any{map[string]float64{\n\t\t\t\t\"a\": 1.1,\n\t\t\t\t\"b\": 2.2,\n\t\t\t}},\n\t\t\texpected: map[string]float64{\n\t\t\t\t\"a\": 1.1,\n\t\t\t\t\"b\": 2.2,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"INTEGER, BINARY\",\n\t\t\tvalues: []any{DataTypeBinary, map[int64][]byte{\n\t\t\t\t1: {'a', 'b'},\n\t\t\t\t2: {'c', 'd'},\n\t\t\t}},\n\t\t\texpected: map[int64][]byte{\n\t\t\t\t1: {'a', 'b'},\n\t\t\t\t2: {'c', 'd'},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, BINARY\",\n\t\t\tvalues: []any{DataTypeBinary, map[string][]byte{\n\t\t\t\t\"a\": {'a', 'b'},\n\t\t\t\t\"b\": {'c', 'd'},\n\t\t\t}},\n\t\t\texpected: map[string][]byte{\n\t\t\t\t\"a\": {'a', 'b'},\n\t\t\t\t\"b\": {'c', 'd'},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, DATE\",\n\t\t\tvalues: []any{DataTypeDate, map[string]time.Time{\n\t\t\t\t\"a\": time.Date(2024, time.June, 25, 0, 0, 0, 0, 
time.UTC),\n\t\t\t\t\"b\": time.Date(2024, time.June, 26, 0, 0, 0, 0, time.UTC),\n\t\t\t}},\n\t\t\texpected: map[string]time.Time{\n\t\t\t\t\"a\": time.Date(2024, time.June, 25, 0, 0, 0, 0, time.UTC),\n\t\t\t\t\"b\": time.Date(2024, time.June, 26, 0, 0, 0, 0, time.UTC),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, TIME\",\n\t\t\tvalues: []any{DataTypeTime, map[string]time.Time{\n\t\t\t\t\"a\": time.Date(1, time.January, 1, 11, 22, 33, 0, time.UTC),\n\t\t\t\t\"b\": time.Date(2, time.January, 1, 22, 11, 44, 0, time.UTC),\n\t\t\t}},\n\t\t\texpected: map[string]time.Time{\n\t\t\t\t\"a\": time.Date(1, time.January, 1, 11, 22, 33, 0, time.UTC),\n\t\t\t\t\"b\": time.Date(2, time.January, 1, 22, 11, 44, 0, time.UTC),\n\t\t\t},\n\t\t\tisTimeOnly: true,\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, TIMESTAMPNTZ\",\n\t\t\tvalues: []any{DataTypeTimestampNtz, map[string]time.Time{\n\t\t\t\t\"a\": time.Date(2024, time.June, 25, 11, 22, 33, 0, time.UTC),\n\t\t\t\t\"b\": time.Date(2024, time.June, 26, 11, 22, 33, 0, time.UTC),\n\t\t\t}},\n\t\t\texpected: map[string]time.Time{\n\t\t\t\t\"a\": time.Date(2024, time.June, 25, 11, 22, 33, 0, time.UTC),\n\t\t\t\t\"b\": time.Date(2024, time.June, 26, 11, 22, 33, 0, time.UTC),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, TIMESTAMPTZ\",\n\t\t\tvalues: []any{DataTypeTimestampTz, map[string]time.Time{\n\t\t\t\t\"a\": time.Date(2024, time.June, 25, 11, 22, 33, 0, warsawTz),\n\t\t\t\t\"b\": time.Date(2024, time.June, 26, 11, 22, 33, 0, warsawTz),\n\t\t\t}},\n\t\t\texpected: map[string]time.Time{\n\t\t\t\t\"a\": time.Date(2024, time.June, 25, 11, 22, 33, 0, warsawTz),\n\t\t\t\t\"b\": time.Date(2024, time.June, 26, 11, 22, 33, 0, warsawTz),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, TIMESTAMPLTZ\",\n\t\t\tvalues: []any{DataTypeTimestampLtz, map[string]time.Time{\n\t\t\t\t\"a\": time.Date(2024, time.June, 25, 11, 22, 33, 0, warsawTz),\n\t\t\t\t\"b\": time.Date(2024, time.June, 26, 11, 22, 33, 0, 
warsawTz),\n\t\t\t}},\n\t\t\texpected: map[string]time.Time{\n\t\t\t\t\"a\": time.Date(2024, time.June, 25, 11, 22, 33, 0, warsawTz),\n\t\t\t\t\"b\": time.Date(2024, time.June, 26, 11, 22, 33, 0, warsawTz),\n\t\t\t},\n\t\t},\n\t}\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExecT(t, \"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tdbt.enableStructuredTypesBinding()\n\t\tfor _, tc := range testcases {\n\t\t\tt.Run(tc.tableDefinition, func(t *testing.T) {\n\t\t\t\tdbt.mustExecT(t, fmt.Sprintf(\"CREATE OR REPLACE TABLE test_map_binding (m MAP(%v))\", tc.tableDefinition))\n\t\t\t\tdefer func() {\n\t\t\t\t\tdbt.mustExecT(t, \"DROP TABLE IF EXISTS test_map_binding\")\n\t\t\t\t}()\n\n\t\t\t\tdbt.mustExecT(t, \"INSERT INTO test_map_binding SELECT (?)\", tc.values...)\n\n\t\t\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT * FROM test_map_binding WHERE m = ?\", tc.values...)\n\t\t\t\tdefer rows.Close()\n\n\t\t\t\tassertTrueE(t, rows.Next())\n\t\t\t\tvar res any\n\t\t\t\terr := rows.Scan(&res)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tif m, ok := tc.expected.(map[string]time.Time); ok {\n\t\t\t\t\tresTimes := res.(map[string]time.Time)\n\t\t\t\t\tfor k, v := range m {\n\t\t\t\t\t\tif tc.isTimeOnly {\n\t\t\t\t\t\t\tassertEqualE(t, resTimes[k].Hour(), v.Hour())\n\t\t\t\t\t\t\tassertEqualE(t, resTimes[k].Minute(), v.Minute())\n\t\t\t\t\t\t\tassertEqualE(t, resTimes[k].Second(), v.Second())\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tassertTrueE(t, resTimes[k].Equal(v))\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tassertDeepEqualE(t, res, tc.expected)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestBindingMapOfStructs(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_map_binding (m MAP(VARCHAR, OBJECT(s VARCHAR, i INTEGER)))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExecT(t, \"DROP TABLE IF EXISTS 
test_map_binding\")\n\t\t}()\n\t\tm := map[string]*simpleObject{\n\t\t\t\"a\": {\"abc\", 1},\n\t\t\t\"b\": nil,\n\t\t\t\"c\": {\"def\", 2},\n\t\t}\n\n\t\tdbt.mustExecT(t, \"INSERT INTO test_map_binding SELECT ?\", m)\n\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT * FROM test_map_binding WHERE m = ?\", m)\n\t\tdefer rows.Close()\n\n\t\trows.Next()\n\t\tvar res map[string]*simpleObject\n\t\terr := rows.Scan(ScanMapOfScanners(&res))\n\t\tassertNilF(t, err)\n\t\tassertDeepEqualE(t, res, m)\n\t})\n}\n\nfunc TestBindingMapOfWithAllValuesNil(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_map_binding (m MAP(VARCHAR, OBJECT(s VARCHAR, i INTEGER)))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExecT(t, \"DROP TABLE IF EXISTS test_map_binding\")\n\t\t}()\n\t\tm := map[string]*simpleObject{\n\t\t\t\"a\": nil,\n\t\t}\n\n\t\tdbt.mustExecT(t, \"INSERT INTO test_map_binding SELECT ?\", m)\n\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT * FROM test_map_binding WHERE m = ?\", m)\n\t\tdefer rows.Close()\n\n\t\trows.Next()\n\t\tvar res map[string]*simpleObject\n\t\terr := rows.Scan(ScanMapOfScanners(&res))\n\t\tassertNilF(t, err)\n\t\tassertDeepEqualE(t, res, m)\n\t})\n}\n\nfunc TestBindingEmptyMapOfStructs(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_map_binding (m MAP(VARCHAR, OBJECT(s VARCHAR, i INTEGER)))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExecT(t, \"DROP TABLE IF EXISTS test_map_binding\")\n\t\t}()\n\n\t\tm := map[string]*simpleObject{}\n\t\tdbt.mustExecT(t, \"INSERT INTO test_map_binding SELECT ?\", m)\n\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT * FROM test_map_binding WHERE m = ?\", m)\n\t\tdefer rows.Close()\n\n\t\tassertTrueF(t, rows.Next())\n\t\tvar res 
map[string]*simpleObject\n\t\terr := rows.Scan(ScanMapOfScanners(&res))\n\t\tassertNilF(t, err)\n\t\tassertDeepEqualE(t, res, m)\n\t})\n}\n\nfunc TestBindingEmptyMapOfInts(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_map_binding (m MAP(VARCHAR, INTEGER))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExecT(t, \"DROP TABLE IF EXISTS test_map_binding\")\n\t\t}()\n\n\t\tm := map[string]int64{}\n\t\tdbt.mustExecT(t, \"INSERT INTO test_map_binding SELECT ?\", m)\n\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT * FROM test_map_binding WHERE m = ?\", m)\n\t\tdefer rows.Close()\n\n\t\tassertTrueF(t, rows.Next())\n\t\tvar res map[string]int64\n\t\terr := rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertDeepEqualE(t, res, m)\n\t})\n}\n\nfunc TestBindingNilMapOfStructs(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_map_binding (m MAP(VARCHAR, OBJECT(s VARCHAR, i INTEGER)))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExecT(t, \"DROP TABLE IF EXISTS test_map_binding\")\n\t\t}()\n\n\t\tvar m map[string]*simpleObject\n\t\tdbt.mustExecT(t, \"INSERT INTO test_map_binding SELECT ?\", DataTypeNilMap, NilMapTypes{Key: reflect.TypeFor[string](), Value: reflect.TypeFor[*simpleObject]()})\n\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT * FROM test_map_binding\")\n\t\tdefer rows.Close()\n\n\t\tassertTrueF(t, rows.Next())\n\t\tvar res map[string]*simpleObject\n\t\terr := rows.Scan(ScanMapOfScanners(&res))\n\t\tassertNilF(t, err)\n\t\tassertDeepEqualE(t, res, m)\n\t})\n}\n\nfunc TestBindingNilMapOfInts(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, 
func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_map_binding (m MAP(VARCHAR, INTEGER))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExecT(t, \"DROP TABLE IF EXISTS test_map_binding\")\n\t\t}()\n\n\t\tvar m *map[string]int64\n\t\tdbt.mustExecT(t, \"INSERT INTO test_map_binding SELECT ?\", DataTypeNilMap, NilMapTypes{Key: reflect.TypeFor[string](), Value: reflect.TypeFor[int]()})\n\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT * FROM test_map_binding\")\n\t\tdefer rows.Close()\n\n\t\tassertTrueF(t, rows.Next())\n\t\tvar res *map[string]int64\n\t\terr := rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertDeepEqualE(t, res, m)\n\t})\n}\n\nfunc TestBindingMapOfArrays(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.enableStructuredTypesBinding()\n\t\tdbt.mustExec(\"CREATE OR REPLACE TABLE test_map_binding (m MAP(VARCHAR, ARRAY(INTEGER)))\")\n\t\tdefer func() {\n\t\t\tdbt.mustExecT(t, \"DROP TABLE IF EXISTS test_map_binding\")\n\t\t}()\n\n\t\tm := map[string][]int64{\n\t\t\t\"a\": {1, 2},\n\t\t\t\"b\": nil,\n\t\t}\n\t\tdbt.mustExecT(t, \"INSERT INTO test_map_binding SELECT ?\", m)\n\t\trows := dbt.mustQueryContextT(ctx, t, \"SELECT * FROM test_map_binding\")\n\t\tdefer rows.Close()\n\n\t\tassertTrueF(t, rows.Next())\n\t\tvar res map[string][]int64\n\t\terr := rows.Scan(&res)\n\t\tassertNilF(t, err)\n\t\tassertDeepEqualE(t, res, m)\n\t})\n}\n\nfunc TestBindingMapWithNillableValues(t *testing.T) {\n\tctx := WithStructuredTypesEnabled(context.Background())\n\twarsawTz, err := time.LoadLocation(\"Europe/Warsaw\")\n\tassertNilF(t, err)\n\tvar testcases = []struct {\n\t\ttableDefinition string\n\t\tvalues          
[]any{map[string]sql.NullString{\n\t\t\t\t\"a\": {String: \"b\", Valid: true},\n\t\t\t\t\"c\": {},\n\t\t\t}},\n\t\t\texpected: map[string]sql.NullString{\n\t\t\t\t\"a\": {String: \"b\", Valid: true},\n\t\t\t\t\"c\": {},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"INTEGER, VARCHAR\",\n\t\t\tvalues: []any{map[int64]sql.NullString{\n\t\t\t\t1: {String: \"b\", Valid: true},\n\t\t\t\t2: {},\n\t\t\t}},\n\t\t\texpected: map[int64]sql.NullString{\n\t\t\t\t1: {String: \"b\", Valid: true},\n\t\t\t\t2: {},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, BOOLEAN\",\n\t\t\tvalues: []any{map[string]sql.NullBool{\n\t\t\t\t\"a\": {Bool: true, Valid: true},\n\t\t\t\t\"c\": {},\n\t\t\t}},\n\t\t\texpected: map[string]sql.NullBool{\n\t\t\t\t\"a\": {Bool: true, Valid: true},\n\t\t\t\t\"c\": {},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, INTEGER\",\n\t\t\tvalues: []any{map[string]sql.NullInt64{\n\t\t\t\t\"a\": {Int64: 1, Valid: true},\n\t\t\t\t\"b\": {},\n\t\t\t}},\n\t\t\texpected: map[string]sql.NullInt64{\n\t\t\t\t\"a\": {Int64: 1, Valid: true},\n\t\t\t\t\"b\": {},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, DOUBLE\",\n\t\t\tvalues: []any{map[string]sql.NullFloat64{\n\t\t\t\t\"a\": {Float64: 1.1, Valid: true},\n\t\t\t\t\"b\": {},\n\t\t\t}},\n\t\t\texpected: map[string]sql.NullFloat64{\n\t\t\t\t\"a\": {Float64: 1.1, Valid: true},\n\t\t\t\t\"b\": {},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"INTEGER, BINARY\",\n\t\t\tvalues: []any{DataTypeBinary, map[int64][]byte{\n\t\t\t\t1: {'a', 'b'},\n\t\t\t\t2: nil,\n\t\t\t}},\n\t\t\texpected: map[int64][]byte{\n\t\t\t\t1: {'a', 'b'},\n\t\t\t\t2: nil,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, BINARY\",\n\t\t\tvalues: []any{DataTypeBinary, map[string][]byte{\n\t\t\t\t\"a\": {'a', 'b'},\n\t\t\t\t\"b\": nil,\n\t\t\t}},\n\t\t\texpected: map[string][]byte{\n\t\t\t\t\"a\": {'a', 'b'},\n\t\t\t\t\"b\": nil,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, 
DATE\",\n\t\t\tvalues: []any{DataTypeDate, map[string]sql.NullTime{\n\t\t\t\t\"a\": {Time: time.Date(2024, time.June, 25, 0, 0, 0, 0, time.UTC), Valid: true},\n\t\t\t\t\"b\": {},\n\t\t\t}},\n\t\t\texpected: map[string]sql.NullTime{\n\t\t\t\t\"a\": {Time: time.Date(2024, time.June, 25, 0, 0, 0, 0, time.UTC), Valid: true},\n\t\t\t\t\"b\": {},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, TIME\",\n\t\t\tvalues: []any{DataTypeTime, map[string]sql.NullTime{\n\t\t\t\t\"a\": {Time: time.Date(1, time.January, 1, 11, 22, 33, 0, time.UTC), Valid: true},\n\t\t\t\t\"b\": {},\n\t\t\t}},\n\t\t\texpected: map[string]sql.NullTime{\n\t\t\t\t\"a\": {Time: time.Date(1, time.January, 1, 11, 22, 33, 0, time.UTC), Valid: true},\n\t\t\t\t\"b\": {},\n\t\t\t},\n\t\t\tisTimeOnly: true,\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, TIMESTAMPNTZ\",\n\t\t\tvalues: []any{DataTypeTimestampNtz, map[string]sql.NullTime{\n\t\t\t\t\"a\": {Time: time.Date(2024, time.June, 25, 11, 22, 33, 0, time.UTC), Valid: true},\n\t\t\t\t\"b\": {},\n\t\t\t}},\n\t\t\texpected: map[string]sql.NullTime{\n\t\t\t\t\"a\": {Time: time.Date(2024, time.June, 25, 11, 22, 33, 0, time.UTC), Valid: true},\n\t\t\t\t\"b\": {},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, TIMESTAMPTZ\",\n\t\t\tvalues: []any{DataTypeTimestampTz, map[string]sql.NullTime{\n\t\t\t\t\"a\": {Time: time.Date(2024, time.June, 25, 11, 22, 33, 0, warsawTz), Valid: true},\n\t\t\t\t\"b\": {},\n\t\t\t}},\n\t\t\texpected: map[string]sql.NullTime{\n\t\t\t\t\"a\": {Time: time.Date(2024, time.June, 25, 11, 22, 33, 0, warsawTz), Valid: true},\n\t\t\t\t\"b\": {},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\ttableDefinition: \"VARCHAR, TIMESTAMPLTZ\",\n\t\t\tvalues: []any{DataTypeTimestampLtz, map[string]sql.NullTime{\n\t\t\t\t\"a\": {Time: time.Date(2024, time.June, 25, 11, 22, 33, 0, warsawTz), Valid: true},\n\t\t\t\t\"b\": {},\n\t\t\t}},\n\t\t\texpected: map[string]sql.NullTime{\n\t\t\t\t\"a\": {Time: time.Date(2024, time.June, 25, 11, 22, 33, 
0, warsawTz), Valid: true},\n\t\t\t\t\"b\": {},\n\t\t\t},\n\t\t},\n\t}\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tdbt.mustExecT(t, \"ALTER SESSION SET TIMEZONE = 'Europe/Warsaw'\")\n\t\tdbt.enableStructuredTypesBinding()\n\t\tfor _, tc := range testcases {\n\t\t\tt.Run(tc.tableDefinition, func(t *testing.T) {\n\t\t\t\tdbt.mustExecT(t, fmt.Sprintf(\"CREATE OR REPLACE TABLE test_map_binding (m MAP(%v))\", tc.tableDefinition))\n\t\t\t\tdefer func() {\n\t\t\t\t\tdbt.mustExecT(t, \"DROP TABLE IF EXISTS test_map_binding\")\n\t\t\t\t}()\n\n\t\t\t\tdbt.mustExecT(t, \"INSERT INTO test_map_binding SELECT (?)\", tc.values...)\n\n\t\t\t\trows := dbt.mustQueryContextT(WithEmbeddedValuesNullable(ctx), t, \"SELECT * FROM test_map_binding WHERE m = ?\", tc.values...)\n\t\t\t\tdefer rows.Close()\n\n\t\t\t\tassertTrueE(t, rows.Next())\n\t\t\t\tvar res any\n\t\t\t\terr := rows.Scan(&res)\n\t\t\t\tassertNilF(t, err)\n\t\t\t\tif m, ok := tc.expected.(map[string]sql.NullTime); ok {\n\t\t\t\t\tresTimes := res.(map[string]sql.NullTime)\n\t\t\t\t\tfor k, v := range m {\n\t\t\t\t\t\tif tc.isTimeOnly {\n\t\t\t\t\t\t\tassertEqualE(t, resTimes[k].Valid, v.Valid)\n\t\t\t\t\t\t\tassertEqualE(t, resTimes[k].Time.Hour(), v.Time.Hour())\n\t\t\t\t\t\t\tassertEqualE(t, resTimes[k].Time.Minute(), v.Time.Minute())\n\t\t\t\t\t\t\tassertEqualE(t, resTimes[k].Time.Second(), v.Time.Second())\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tassertEqualE(t, resTimes[k].Valid, v.Valid)\n\t\t\t\t\t\t\tif v.Valid {\n\t\t\t\t\t\t\t\tassertTrueE(t, resTimes[k].Time.Equal(v.Time))\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tassertDeepEqualE(t, res, tc.expected)\n\t\t\t\t}\n\t\t\t})\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "telemetry.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n)\n\nconst (\n\ttelemetryPath           = \"/telemetry/send\"\n\tdefaultTelemetryTimeout = 10 * time.Second\n\tdefaultFlushSize        = 100\n)\n\nconst (\n\ttypeKey          = \"type\"\n\tsourceKey        = \"source\"\n\tqueryIDKey       = \"QueryID\"\n\tdriverTypeKey    = \"DriverType\"\n\tdriverVersionKey = \"DriverVersion\"\n\tgolangVersionKey = \"GolangVersion\"\n\tsqlStateKey      = \"SQLState\"\n\treasonKey        = \"reason\"\n\terrorNumberKey   = \"ErrorNumber\"\n\tstacktraceKey    = \"Stacktrace\"\n)\n\nconst (\n\ttelemetrySource      = \"golang_driver\"\n\tsqlException         = \"client_sql_exception\"\n\tconnectionParameters = \"client_connection_parameters\"\n)\n\ntype telemetryData struct {\n\tTimestamp int64             `json:\"timestamp,omitempty\"`\n\tMessage   map[string]string `json:\"message,omitempty\"`\n}\n\ntype snowflakeTelemetry struct {\n\tlogs      []*telemetryData\n\tflushSize int\n\tsr        *snowflakeRestful\n\tmutex     *sync.Mutex\n\tenabled   bool\n}\n\nfunc (st *snowflakeTelemetry) addLog(data *telemetryData) error {\n\tif !st.enabled {\n\t\tlogger.Debug(\"telemetry disabled; not adding log\")\n\t\treturn nil\n\t}\n\tst.mutex.Lock()\n\tst.logs = append(st.logs, data)\n\tshouldFlush := len(st.logs) >= st.flushSize\n\tst.mutex.Unlock()\n\tif shouldFlush {\n\t\tif err := st.sendBatch(); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (st *snowflakeTelemetry) sendBatch() error {\n\tif !st.enabled {\n\t\tlogger.Debug(\"telemetry disabled; not sending log\")\n\t\treturn nil\n\t}\n\ttype telemetry struct {\n\t\tLogs []*telemetryData `json:\"logs\"`\n\t}\n\n\tst.mutex.Lock()\n\tlogsToSend := st.logs\n\tminicoreLoadLogs.mu.Lock()\n\tif mcLogs := minicoreLoadLogs.logs; len(mcLogs) > 0 {\n\t\tlogsToSend = append(logsToSend, &telemetryData{\n\t\t\tTimestamp: 
time.Now().UnixMilli(),\n\t\t\tMessage: map[string]string{\n\t\t\t\t\"minicoreLogs\": strings.Join(mcLogs, \"; \"),\n\t\t\t},\n\t\t})\n\t\tminicoreLoadLogs.logs = make([]string, 0)\n\t}\n\tminicoreLoadLogs.mu.Unlock()\n\tst.logs = make([]*telemetryData, 0)\n\tst.mutex.Unlock()\n\n\tif len(logsToSend) == 0 {\n\t\tlogger.Debug(\"nothing to send to telemetry\")\n\t\treturn nil\n\t}\n\n\ts := &telemetry{logsToSend}\n\tbody, err := json.Marshal(s)\n\tif err != nil {\n\t\treturn err\n\t}\n\tlogger.Debugf(\"sending %v logs to telemetry.\", len(logsToSend))\n\tlogger.Debugf(\"telemetry payload being sent: %v\", string(body))\n\n\theaders := getHeaders()\n\tif token, _, _ := st.sr.TokenAccessor.GetTokens(); token != \"\" {\n\t\theaders[headerAuthorizationKey] = fmt.Sprintf(headerSnowflakeToken, token)\n\t}\n\tfullURL := st.sr.getFullURL(telemetryPath, nil)\n\tresp, err := st.sr.FuncPost(context.Background(), st.sr,\n\t\tfullURL, headers, body,\n\t\tdefaultTelemetryTimeout, defaultTimeProvider, nil)\n\tif err != nil {\n\t\tlogger.Errorf(\"failed to upload metrics to telemetry. err: %v\", err)\n\t\treturn err\n\t}\n\tdefer func() {\n\t\tif err = resp.Body.Close(); err != nil {\n\t\t\tlogger.Errorf(\"failed to close response body for %v. err: %v\", fullURL, err)\n\t\t}\n\t}()\n\tif resp.StatusCode != http.StatusOK {\n\t\terr = fmt.Errorf(\"non-successful response from telemetry server: %v. 
\"+\n\t\t\t\"disabling telemetry\", resp.StatusCode)\n\t\tlogger.Error(err.Error())\n\t\tst.enabled = false\n\t\treturn err\n\t}\n\tvar respd telemetryResponse\n\tif err = json.NewDecoder(resp.Body).Decode(&respd); err != nil {\n\t\tlogger.Errorf(\"cannot decode telemetry response body: %v\", err)\n\t\tst.enabled = false\n\t\treturn err\n\t}\n\tif !respd.Success {\n\t\terr = fmt.Errorf(\"telemetry send failed with error code: %v, message: %v\",\n\t\t\trespd.Code, respd.Message)\n\t\tlogger.Error(err.Error())\n\t\tst.enabled = false\n\t\treturn err\n\t}\n\tlogger.Debug(\"successfully uploaded metrics to telemetry\")\n\treturn nil\n}\n"
  },
  {
    "path": "telemetry_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"math/rand\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n)\n\n// sampleTelemetryData returns the minimal telemetry log entry shared by the tests below.\nfunc sampleTelemetryData() *telemetryData {\n\treturn &telemetryData{\n\t\tMessage: map[string]string{\n\t\t\ttypeKey:    \"client_telemetry_type\",\n\t\t\tqueryIDKey: \"123\",\n\t\t},\n\t\tTimestamp: time.Now().UnixMilli(),\n\t}\n}\n\nfunc TestTelemetryAddLog(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tst := &snowflakeTelemetry{\n\t\t\tsr:        sct.sc.rest,\n\t\t\tmutex:     &sync.Mutex{},\n\t\t\tenabled:   true,\n\t\t\tflushSize: defaultFlushSize,\n\t\t}\n\t\tr := rand.New(rand.NewSource(time.Now().UnixNano()))\n\t\trandNum := r.Int() % 10000\n\t\tfor range randNum {\n\t\t\tassertNilF(t, st.addLog(sampleTelemetryData()))\n\t\t}\n\t\tassertEqualE(t, len(st.logs), randNum%defaultFlushSize, \"length of remaining logs does not match\")\n\t\tassertNilF(t, st.sendBatch())\n\t})\n}\n\nfunc TestTelemetrySQLException(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tsct.sc.telemetry = &snowflakeTelemetry{\n\t\t\tsr:        sct.sc.rest,\n\t\t\tmutex:     &sync.Mutex{},\n\t\t\tenabled:   true,\n\t\t\tflushSize: defaultFlushSize,\n\t\t}\n\t\tsfa := &snowflakeFileTransferAgent{\n\t\t\tctx:         context.Background(),\n\t\t\tsc:          sct.sc,\n\t\t\tcommandType: uploadCommand,\n\t\t\tsrcFiles:    make([]string, 0),\n\t\t\tdata: &execResponseData{\n\t\t\t\tSrcLocations: make([]string, 0),\n\t\t\t},\n\t\t}\n\t\tassertNotNilF(t, sfa.initFileMetadata(), \"this should have thrown an error\")\n\t\tassertEqualE(t, len(sct.sc.telemetry.logs), 1, \"there should be 1 telemetry data in log\")\n\t\tassertNilF(t, sct.sc.telemetry.sendBatch())\n\t\tassertEqualE(t, len(sct.sc.telemetry.logs), 0, \"there should be no telemetry data in log\")\n\t})\n}\n\nfunc funcPostTelemetryRespFail(_ context.Context, _ *snowflakeRestful, _ *url.URL, _ map[string]string, _ []byte, _ time.Duration, _ currentTimeProvider, _ *Config) (*http.Response, error) {\n\treturn nil, errors.New(\"failed to upload metrics to telemetry\")\n}\n\nfunc TestTelemetryError(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tst := &snowflakeTelemetry{\n\t\t\tsr: &snowflakeRestful{\n\t\t\t\tFuncPost:      funcPostTelemetryRespFail,\n\t\t\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t\t\t},\n\t\t\tmutex:     &sync.Mutex{},\n\t\t\tenabled:   true,\n\t\t\tflushSize: defaultFlushSize,\n\t\t}\n\n\t\tassertNilF(t, st.addLog(sampleTelemetryData()))\n\t\tassertNotNilF(t, st.sendBatch(), \"should have failed\")\n\t})\n}\n\nfunc TestTelemetryDisabledOnBadResponse(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tst := &snowflakeTelemetry{\n\t\t\tsr: &snowflakeRestful{\n\t\t\t\tFuncPost:      postTestAppBadGatewayError,\n\t\t\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t\t\t},\n\t\t\tmutex:     &sync.Mutex{},\n\t\t\tenabled:   true,\n\t\t\tflushSize: defaultFlushSize,\n\t\t}\n\n\t\tassertNilF(t, st.addLog(sampleTelemetryData()))\n\t\tassertNotNilF(t, st.sendBatch(), \"should have failed\")\n\t\tassertFalseF(t, st.enabled, \"telemetry should be disabled\")\n\n\t\tst.enabled = true\n\t\tst.sr.FuncPost = postTestQueryNotExecuting\n\t\tassertNilF(t, st.addLog(sampleTelemetryData()))\n\t\tassertNotNilF(t, st.sendBatch(), \"should have failed\")\n\t\tassertFalseF(t, st.enabled, \"telemetry should be disabled\")\n\n\t\tst.enabled = true\n\t\tst.sr.FuncPost = postTestSuccessButInvalidJSON\n\t\tassertNilF(t, st.addLog(sampleTelemetryData()))\n\t\tassertNotNilF(t, st.sendBatch(), \"should have failed\")\n\t\tassertFalseF(t, st.enabled, \"telemetry should be disabled\")\n\t})\n}\n\nfunc TestTelemetryDisabled(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tst := &snowflakeTelemetry{\n\t\t\tsr: &snowflakeRestful{\n\t\t\t\tFuncPost:      postTestAppBadGatewayError,\n\t\t\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t\t\t},\n\t\t\tmutex:     &sync.Mutex{},\n\t\t\tenabled:   false, // disable\n\t\t\tflushSize: defaultFlushSize,\n\t\t}\n\t\tassertNilF(t, st.addLog(sampleTelemetryData()), \"calling addLog should not return an error just because telemetry is disabled\")\n\t\tst.enabled = true\n\t\tassertNilF(t, st.addLog(sampleTelemetryData()))\n\t\tst.enabled = false\n\t\tassertNilF(t, st.sendBatch(), \"calling sendBatch should not return an error just because telemetry is disabled\")\n\t})\n}\n\nfunc TestAddLogError(t *testing.T) {\n\trunSnowflakeConnTest(t, func(sct *SCTest) {\n\t\tst := &snowflakeTelemetry{\n\t\t\tsr: &snowflakeRestful{\n\t\t\t\tFuncPost:      funcPostTelemetryRespFail,\n\t\t\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t\t\t},\n\t\t\tmutex:     &sync.Mutex{},\n\t\t\tenabled:   true,\n\t\t\tflushSize: 1,\n\t\t}\n\n\t\tassertNotNilF(t, st.addLog(sampleTelemetryData()), \"should have failed\")\n\t})\n}\n"
  },
  {
    "path": "test_data/.gitignore",
    "content": "writeonly.csv"
  },
  {
    "path": "test_data/connections.toml",
    "content": "[default]\naccount = 'snowdriverswarsaw.us-west-2.aws'\nuser = 'test_default_user'\npassword = 'test_default_pass'\nwarehouse = 'testw_default'\ndatabase = 'test_default_db'\nschema = 'test_default_go'\nprotocol = 'https'\nport = '300'\n\n[aws-oauth]\naccount = 'snowdriverswarsaw.us-west-2.aws'\nuser = 'test_oauth_user'\npassword = 'test_oauth_pass'\nwarehouse = 'testw_oauth'\ndatabase = 'test_oauth_db'\nschema = 'test_oauth_go'\nprotocol = 'https'\nport = '443'\nauthenticator = 'oauth'\ntestNot = 'problematicParameter'\ntoken = 'token_value'\ndisableOCSPChecks = true\n\n[aws-oauth-file]\naccount = 'snowdriverswarsaw.us-west-2.aws'\nuser = 'test_user'\npassword = 'test_pass'\nwarehouse = 'testw'\ndatabase = 'test_db'\nschema = 'test_go'\nprotocol = 'https'\nport = '443'\nauthenticator = 'oauth'\ntestNot = 'problematicParameter'\ntoken_file_path = '/Users/test/.snowflake/token'\n\n[read-token]\naccount = 'snowdriverswarsaw.us-west-2.aws'\nuser = 'test_default_user'\npassword = 'test_default_pass'\nwarehouse = 'testw_default'\ndatabase = 'test_default_db'\nschema = 'test_default_go'\nprotocol = 'https'\nauthenticator = 'oauth'\ntoken_file_path = './test_data/snowflake/session/token'\ndisable_ocsp_checks = true\n\n[snake-case]\naccount = 'snowdriverswarsaw.us-west-2.aws'\nuser = 'test_default_user'\npassword = 'test_default_pass'\nwarehouse = 'testw_default'\ndatabase = 'test_default_db'\nschema = 'test_default_go'\nprotocol = 'https'\nport = '300'\nocsp_fail_open=true\n"
  },
  {
    "path": "test_data/multistatements.sql",
    "content": "CREATE OR REPLACE TABLE jj_1(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_2(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_3(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_4(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_5(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_6(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_7(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_8(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_9(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_10(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_11(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_12(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_13(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_14(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_15(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_16(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_17(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_18(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_19(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_20(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_21(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_22(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_23(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_24(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_25(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_26(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_27(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_28(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_29(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_30(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_31(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_32(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_33(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_34(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_35(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_36(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_37(i int, v 
varchar(10));\nCREATE OR REPLACE TABLE jj_38(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_39(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_40(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_41(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_42(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_43(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_44(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_45(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_46(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_47(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_48(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_49(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_50(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_51(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_52(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_53(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_54(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_55(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_56(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_57(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_58(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_59(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_60(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_61(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_62(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_63(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_64(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_65(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_66(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_67(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_68(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_69(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_70(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_71(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_72(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_73(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_74(i int, v 
varchar(10));\nCREATE OR REPLACE TABLE jj_75(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_76(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_77(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_78(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_79(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_80(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_81(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_82(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_83(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_84(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_85(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_86(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_87(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_88(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_89(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_90(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_91(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_92(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_93(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_94(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_95(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_96(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_97(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_98(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_99(i int, v varchar(10));\nCREATE OR REPLACE TABLE jj_100(i int, v varchar(10));\n"
  },
  {
    "path": "test_data/multistatements_drop.sql",
    "content": "drop table if exists jj_1;\r\ndrop table if exists jj_2;\r\ndrop table if exists jj_3;\r\ndrop table if exists jj_4;\r\ndrop table if exists jj_5;\r\ndrop table if exists jj_6;\r\ndrop table if exists jj_7;\r\ndrop table if exists jj_8;\r\ndrop table if exists jj_9;\r\ndrop table if exists jj_10;\r\ndrop table if exists jj_11;\r\ndrop table if exists jj_12;\r\ndrop table if exists jj_13;\r\ndrop table if exists jj_14;\r\ndrop table if exists jj_15;\r\ndrop table if exists jj_16;\r\ndrop table if exists jj_17;\r\ndrop table if exists jj_18;\r\ndrop table if exists jj_19;\r\ndrop table if exists jj_20;\r\ndrop table if exists jj_21;\r\ndrop table if exists jj_22;\r\ndrop table if exists jj_23;\r\ndrop table if exists jj_24;\r\ndrop table if exists jj_25;\r\ndrop table if exists jj_26;\r\ndrop table if exists jj_27;\r\ndrop table if exists jj_28;\r\ndrop table if exists jj_29;\r\ndrop table if exists jj_30;\r\ndrop table if exists jj_31;\r\ndrop table if exists jj_32;\r\ndrop table if exists jj_33;\r\ndrop table if exists jj_34;\r\ndrop table if exists jj_35;\r\ndrop table if exists jj_36;\r\ndrop table if exists jj_37;\r\ndrop table if exists jj_38;\r\ndrop table if exists jj_39;\r\ndrop table if exists jj_40;\r\ndrop table if exists jj_41;\r\ndrop table if exists jj_42;\r\ndrop table if exists jj_43;\r\ndrop table if exists jj_44;\r\ndrop table if exists jj_45;\r\ndrop table if exists jj_46;\r\ndrop table if exists jj_47;\r\ndrop table if exists jj_48;\r\ndrop table if exists jj_49;\r\ndrop table if exists jj_50;\r\ndrop table if exists jj_51;\r\ndrop table if exists jj_52;\r\ndrop table if exists jj_53;\r\ndrop table if exists jj_54;\r\ndrop table if exists jj_55;\r\ndrop table if exists jj_56;\r\ndrop table if exists jj_57;\r\ndrop table if exists jj_58;\r\ndrop table if exists jj_59;\r\ndrop table if exists jj_60;\r\ndrop table if exists jj_61;\r\ndrop table if exists jj_62;\r\ndrop table if exists jj_63;\r\ndrop table if exists jj_64;\r\ndrop 
table if exists jj_65;\r\ndrop table if exists jj_66;\r\ndrop table if exists jj_67;\r\ndrop table if exists jj_68;\r\ndrop table if exists jj_69;\r\ndrop table if exists jj_70;\r\ndrop table if exists jj_71;\r\ndrop table if exists jj_72;\r\ndrop table if exists jj_73;\r\ndrop table if exists jj_74;\r\ndrop table if exists jj_75;\r\ndrop table if exists jj_76;\r\ndrop table if exists jj_77;\r\ndrop table if exists jj_78;\r\ndrop table if exists jj_79;\r\ndrop table if exists jj_80;\r\ndrop table if exists jj_81;\r\ndrop table if exists jj_82;\r\ndrop table if exists jj_83;\r\ndrop table if exists jj_84;\r\ndrop table if exists jj_85;\r\ndrop table if exists jj_86;\r\ndrop table if exists jj_87;\r\ndrop table if exists jj_88;\r\ndrop table if exists jj_89;\r\ndrop table if exists jj_90;\r\ndrop table if exists jj_91;\r\ndrop table if exists jj_92;\r\ndrop table if exists jj_93;\r\ndrop table if exists jj_94;\r\ndrop table if exists jj_95;\r\ndrop table if exists jj_96;\r\ndrop table if exists jj_97;\r\ndrop table if exists jj_98;\r\ndrop table if exists jj_99;\r\ndrop table if exists jj_100;\r\n"
  },
  {
    "path": "test_data/orders_100.csv",
    "content": "1|36901|O|173665.47|1996-01-02|5-LOW|Clerk#000000951|0|nstructions sleep furiously among |\n2|78002|O|46929.18|1996-12-01|1-URGENT|Clerk#000000880|0| foxes. pending accounts at the pending, silent asymptot|\n3|123314|F|193846.25|1993-10-14|5-LOW|Clerk#000000955|0|sly final accounts boost. carefully regular ideas cajole carefully. depos|\n4|136777|O|32151.78|1995-10-11|5-LOW|Clerk#000000124|0|sits. slyly regular warthogs cajole. regular, regular theodolites acro|\n5|44485|F|144659.20|1994-07-30|5-LOW|Clerk#000000925|0|quickly. bold deposits sleep slyly. packages use slyly|\n6|55624|F|58749.59|1992-02-21|4-NOT SPECIFIED|Clerk#000000058|0|ggle. special, final requests are against the furiously specia|\n7|39136|O|252004.18|1996-01-10|2-HIGH|Clerk#000000470|0|ly special requests |\n32|130057|O|208660.75|1995-07-16|2-HIGH|Clerk#000000616|0|ise blithely bold, regular requests. quickly unusual dep|\n33|66958|F|163243.98|1993-10-27|3-MEDIUM|Clerk#000000409|0|uriously. furiously final request|\n34|61001|O|58949.67|1998-07-21|3-MEDIUM|Clerk#000000223|0|ly final packages. fluffily final deposits wake blithely ideas. spe|\n35|127588|O|253724.56|1995-10-23|4-NOT SPECIFIED|Clerk#000000259|0|zzle. carefully enticing deposits nag furio|\n36|115252|O|68289.96|1995-11-03|1-URGENT|Clerk#000000358|0| quick packages are blithely. slyly silent accounts wake qu|\n37|86116|F|206680.66|1992-06-03|3-MEDIUM|Clerk#000000456|0|kly regular pinto beans. carefully unusual waters cajole never|\n38|124828|O|82500.05|1996-08-21|4-NOT SPECIFIED|Clerk#000000604|0|haggle blithely. furiously express ideas haggle blithely furiously regular re|\n39|81763|O|341734.47|1996-09-20|3-MEDIUM|Clerk#000000659|0|ole express, ironic requests: ir|\n64|32113|F|39414.99|1994-07-16|3-MEDIUM|Clerk#000000661|0|wake fluffily. 
sometimes ironic pinto beans about the dolphin|\n65|16252|P|110643.60|1995-03-18|1-URGENT|Clerk#000000632|0|ular requests are blithely pending orbits-- even requests against the deposit|\n66|129200|F|103740.67|1994-01-20|5-LOW|Clerk#000000743|0|y pending requests integrate|\n67|56614|O|169405.01|1996-12-19|4-NOT SPECIFIED|Clerk#000000547|0|symptotes haggle slyly around the furiously iron|\n68|28547|O|330793.52|1998-04-18|3-MEDIUM|Clerk#000000440|0| pinto beans sleep carefully. blithely ironic deposits haggle furiously acro|\n69|84487|F|197689.49|1994-06-04|4-NOT SPECIFIED|Clerk#000000330|0| depths atop the slyly thin deposits detect among the furiously silent accou|\n70|64340|F|113534.42|1993-12-18|5-LOW|Clerk#000000322|0| carefully ironic request|\n71|3373|O|276992.74|1998-01-24|4-NOT SPECIFIED|Clerk#000000271|0| express deposits along the blithely regul|\n96|107779|F|68989.90|1994-04-17|2-HIGH|Clerk#000000395|0|oost furiously. pinto|\n97|21061|F|110512.84|1993-01-29|3-MEDIUM|Clerk#000000547|0|hang blithely along the regular accounts. furiously even ideas after the|\n98|104480|F|69168.33|1994-09-25|1-URGENT|Clerk#000000448|0|c asymptotes. quickly regular packages should have to nag re|\n99|88910|F|112126.95|1994-03-13|4-NOT SPECIFIED|Clerk#000000973|0|e carefully ironic packages. pending|\n100|147004|O|187782.63|1998-02-28|4-NOT SPECIFIED|Clerk#000000577|0|heodolites detect slyly alongside of the ent|\n\n"
  },
  {
    "path": "test_data/orders_101.csv",
    "content": "353|1777|F|249710.43|1993-12-31|5-LOW|Clerk#000000449|0| quiet ideas sleep. even instructions cajole slyly. silently spe|\n354|138268|O|217160.72|1996-03-14|2-HIGH|Clerk#000000511|0|ly regular ideas wake across the slyly silent ideas. final deposits eat b|\n355|70007|F|99516.75|1994-06-14|5-LOW|Clerk#000000532|0|s. sometimes regular requests cajole. regular, pending accounts a|\n356|146809|F|209439.04|1994-06-30|4-NOT SPECIFIED|Clerk#000000944|0|as wake along the bold accounts. even, |\n357|60395|O|157411.61|1996-10-09|2-HIGH|Clerk#000000301|0|e blithely about the express, final accounts. quickl|\n358|2290|F|354132.39|1993-09-20|2-HIGH|Clerk#000000392|0|l, silent instructions are slyly. silently even de|\n359|77600|F|239998.53|1994-12-19|3-MEDIUM|Clerk#000000934|0|n dolphins. special courts above the carefully ironic requests use|\n384|113009|F|166753.71|1992-03-03|5-LOW|Clerk#000000206|0|, even accounts use furiously packages. slyly ironic pla|\n385|32947|O|54948.26|1996-03-22|5-LOW|Clerk#000000600|0|hless accounts unwind bold pain|\n386|60110|F|110216.57|1995-01-25|2-HIGH|Clerk#000000648|0| haggle quickly. stealthily bold asymptotes haggle among the furiously even re|\n387|3296|O|204546.39|1997-01-26|4-NOT SPECIFIED|Clerk#000000768|0| are carefully among the quickly even deposits. furiously silent req|\n388|44668|F|198800.71|1992-12-16|4-NOT SPECIFIED|Clerk#000000356|0|ar foxes above the furiously ironic deposits nag slyly final reque|\n389|126973|F|2519.40|1994-02-17|2-HIGH|Clerk#000000062|0|ing to the regular asymptotes. final, pending foxes about the blithely sil|\n390|102563|O|269761.09|1998-04-07|5-LOW|Clerk#000000404|0|xpress asymptotes use among the regular, final pinto b|\n391|110278|F|20890.17|1994-11-17|2-HIGH|Clerk#000000256|0|orges thrash fluffil|\n416|40130|F|105675.20|1993-09-27|5-LOW|Clerk#000000294|0| the accounts. fluffily bold depo|\n417|54583|F|125155.22|1994-02-06|3-MEDIUM|Clerk#000000468|0|ironic, even packages. 
thinly unusual accounts sleep along the slyly unusual |\n418|94834|P|53328.48|1995-04-13|4-NOT SPECIFIED|Clerk#000000643|0|. furiously ironic instruc|\n419|116261|O|165454.42|1996-10-01|3-MEDIUM|Clerk#000000376|0|osits. blithely pending theodolites boost carefully|\n420|90145|O|343254.06|1995-10-31|4-NOT SPECIFIED|Clerk#000000756|0|leep carefully final excuses. fluffily pending requests unwind carefully above|\n421|39149|F|1156.67|1992-02-22|5-LOW|Clerk#000000405|0|egular, even packages according to the final, un|\n422|73075|O|188124.81|1997-05-31|4-NOT SPECIFIED|Clerk#000000049|0|aggle carefully across the accounts. regular accounts eat fluffi|\n423|103396|O|50240.88|1996-06-01|1-URGENT|Clerk#000000674|0|quests. deposits cajole quickly. furiously bold accounts haggle q|\n448|149641|O|165954.35|1995-08-21|3-MEDIUM|Clerk#000000597|0| regular, express foxes use blithely. quic|\n449|95767|O|71120.82|1995-07-20|2-HIGH|Clerk#000000841|0|. furiously regular theodolites affix blithely |\n450|47380|P|228518.02|1995-03-05|4-NOT SPECIFIED|Clerk#000000293|0|d theodolites. boldly bold foxes since the pack|\n451|98758|O|141490.92|1998-05-25|5-LOW|Clerk#000000048|0|nic pinto beans. theodolites poach carefully; |\n452|59560|O|3270.20|1997-10-14|1-URGENT|Clerk#000000498|0|t, unusual instructions above the blithely bold pint|\n453|44030|O|329149.33|1997-05-26|5-LOW|Clerk#000000504|0|ss foxes. furiously regular ideas sleep according to t|\n454|48776|O|36743.83|1995-12-27|5-LOW|Clerk#000000890|0|dolites sleep carefully blithely regular deposits. quickly regul|\n455|12098|O|183606.42|1996-12-04|1-URGENT|Clerk#000000796|0| about the final platelets. dependen|\n480|71383|F|23699.64|1993-05-08|5-LOW|Clerk#000000004|0|ealthy pinto beans. fluffily regular requests along the special sheaves wake |\n481|30352|F|201254.08|1992-10-08|2-HIGH|Clerk#000000230|0|ly final ideas. packages haggle fluffily|\n482|125059|O|182312.78|1996-03-26|1-URGENT|Clerk#000000295|0|ts. 
deposits wake: final acco|\n483|34820|O|70146.28|1995-07-11|2-HIGH|Clerk#000000025|0|cross the carefully final e|\n484|54244|O|327889.57|1997-01-03|3-MEDIUM|Clerk#000000545|0|grouches use. furiously bold accounts maintain. bold, regular deposits|\n485|100561|O|192867.30|1997-03-26|2-HIGH|Clerk#000000105|0| regular ideas nag thinly furiously s|\n486|50861|O|284644.07|1996-03-11|4-NOT SPECIFIED|Clerk#000000803|0|riously dolphins. fluffily ironic requ|\n487|107825|F|90657.45|1992-08-18|1-URGENT|Clerk#000000086|0|ithely unusual courts eat accordi|\n512|63022|P|194834.40|1995-05-20|5-LOW|Clerk#000000814|0|ding requests. carefully express theodolites was quickly. furious|\n513|60569|O|105559.70|1995-05-01|2-HIGH|Clerk#000000522|0|regular packages. pinto beans cajole carefully against the even|\n514|74872|O|154735.68|1996-04-04|2-HIGH|Clerk#000000094|0| cajole furiously. slyly final excuses cajole. slyly special instructions |\n515|141829|F|244660.33|1993-08-29|4-NOT SPECIFIED|Clerk#000000700|0|eposits are furiously furiously silent pinto beans. pending pack|\n516|43903|O|21920.56|1998-04-21|2-HIGH|Clerk#000000305|0|lar, unusual platelets are carefully. even courts sleep bold, final pinto bea|\n517|9220|O|121396.01|1997-04-07|5-LOW|Clerk#000000359|0|slyly pending deposits cajole quickly packages. furiou|\n\n"
  },
  {
    "path": "test_data/put_get_1.txt",
    "content": "1,2014-01-02,2014-01-02 11:30:21,2014-01-02 11:30:22,2014-01-02 11:30:23,2014-01-02T11:30:24-07:00,8.765,9.876\n2,2014-02-02,2014-02-02 11:30:21,2014-02-02 11:30:22,2014-02-02 11:30:23,2014-02-02T11:30:24+02:00,8.764,9.875\n3,2014-03-02,2014-03-02 11:30:21,2014-03-02 11:30:22,2014-03-02 11:30:23,2014-03-02T11:30:24Z,8.763,9.874\n\n"
  },
  {
    "path": "test_data/snowflake/session/token",
    "content": "mock_token123456"
  },
  {
    "path": "test_data/wiremock/mappings/auth/external_browser/parallel_login_first_fails_then_successful_flow.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"External browser parallel login first fails then successful flow\",\n      \"requiredScenarioState\": \"Started\",\n      \"newScenarioState\": \"First request failed\",\n      \"request\": {\n        \"urlPathPattern\": \"/session/authenticator-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          },\n          {\n            \"matchesJsonPath\": {\n              \"expression\": \"$.data.TOKEN\",\n              \"absent\": \"(absent)\"\n            }\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"code\": null,\n          \"message\": \"auth failed\",\n          \"success\": false\n        },\n        \"fixedDelayMilliseconds\": 2000\n      }\n    },\n    {\n      \"scenarioName\": \"External browser parallel login first fails then successful flow\",\n      \"requiredScenarioState\": \"First request failed\",\n      \"newScenarioState\": \"Second request successful\",\n      \"request\": {\n        \"urlPathPattern\": \"/session/authenticator-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          },\n          {\n            \"matchesJsonPath\": {\n              \"expression\": \"$.data.TOKEN\",\n              \"absent\": \"(absent)\"\n            }\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"ssoUrl\": \"http://localhost:{{ jsonPath request.body '$.data.BROWSER_MODE_REDIRECT_PORT' 
}}?token=test-saml-token\",\n            \"proofKey\": \"test-proof-key\"\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        },\n        \"transformers\": [\"response-template\"],\n        \"fixedDelayMilliseconds\": 2000\n      }\n    },\n    {\n      \"scenarioName\": \"External browser parallel login first fails then successful flow\",\n      \"requiredScenarioState\": \"Second request successful\",\n      \"newScenarioState\": \"Login request with ID token required\",\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"TOKEN\": \"test-saml-token\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"TEST_USER\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DB\",\n              \"schemaName\": \"TEST_GO\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            
},\n            \"idToken\": \"test-id-token\",\n            \"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": null,\n            \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    },\n    {\n      \"scenarioName\": \"External browser parallel login first fails then successful flow\",\n      \"requiredScenarioState\": \"Login request with ID token required\",\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"TOKEN\": \"test-id-token\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"TEST_USER\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DB\",\n              \"schemaName\": \"TEST_GO\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            },\n            
\"idToken\": null,\n            \"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": null,\n            \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/external_browser/parallel_login_successful_flow.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"External browser parallel login successful flow\",\n      \"requiredScenarioState\": \"Started\",\n      \"newScenarioState\": \"Login request with SAML token required\",\n      \"request\": {\n        \"urlPathPattern\": \"/session/authenticator-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          },\n          {\n            \"matchesJsonPath\": {\n              \"expression\": \"$.data.TOKEN\",\n              \"absent\": \"(absent)\"\n            }\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"ssoUrl\": \"http://localhost:{{ jsonPath request.body '$.data.BROWSER_MODE_REDIRECT_PORT' }}?token=test-saml-token\",\n            \"proofKey\": \"test-proof-key\"\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        },\n        \"transformers\": [\"response-template\"],\n        \"fixedDelayMilliseconds\": 2000\n      }\n    },\n    {\n      \"scenarioName\": \"External browser parallel login successful flow\",\n      \"requiredScenarioState\": \"Login request with SAML token required\",\n      \"newScenarioState\": \"Login request with ID token required\",\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"TOKEN\": \"test-saml-token\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n      
  \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"TEST_USER\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DB\",\n              \"schemaName\": \"TEST_GO\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            },\n            \"idToken\": \"test-id-token\",\n            \"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": null,\n            \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    },\n    {\n      \"scenarioName\": \"External browser parallel login successful flow\",\n      \"requiredScenarioState\": \"Login request with ID token required\",\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"TOKEN\": \"test-id-token\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": 
{\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"TEST_USER\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DB\",\n              \"schemaName\": \"TEST_GO\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            },\n            \"idToken\": null,\n            \"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": null,\n            \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/external_browser/successful_flow.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/session/authenticator-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          },\n          {\n            \"matchesJsonPath\": {\n              \"expression\": \"$.data.TOKEN\",\n              \"absent\": \"(absent)\"\n            }\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"ssoUrl\": \"http://localhost:{{ jsonPath request.body '$.data.BROWSER_MODE_REDIRECT_PORT' }}?token=test-token\",\n            \"proofKey\": \"test-proof-key\"\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        },\n        \"transformers\": [\"response-template\"],\n        \"fixedDelayMilliseconds\": 2000\n      }\n    },\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"TOKEN\": \"test-token\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"TEST_USER\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            
\"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DB\",\n              \"schemaName\": \"TEST_GO\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            },\n            \"idToken\": \"test-id-token\",\n            \"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": null,\n            \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    },\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"TOKEN\": \"test-id-token\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"TEST_USER\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            
\"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DB\",\n              \"schemaName\": \"TEST_GO\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            },\n            \"idToken\": null,\n            \"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": null,\n            \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/mfa/parallel_login_first_fails_then_successful_flow.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"MFA Authentication Flow\",\n      \"requiredScenarioState\": \"Started\",\n      \"newScenarioState\": \"MFA first attempt failed\",\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"PASSWORD\": \"testPassword\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          },\n          {\n            \"matchesJsonPath\": {\n              \"expression\": \"$.data.TOKEN\",\n              \"absent\": \"(absent)\"\n            }\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"code\": \"394508\",\n          \"data\": {\n            \"authnMethod\": \"USERNAME_PASSWORD\",\n            \"loginName\": \"testUser\",\n            \"nextAction\": \"RETRY_LOGIN\",\n            \"requestId\": \"8239b728-24d5-4d1b-5af6-593402a1cea2\",\n            \"signInOptions\": {}\n          },\n          \"headers\": null,\n          \"message\": \"Failed to authenticate: MFA with TOTP is required. 
To authenticate, provide both your password and a current TOTP passcode.\",\n          \"success\": false\n        },\n        \"fixedDelayMilliseconds\": 2000\n      }\n    },\n    {\n      \"scenarioName\": \"MFA Authentication Flow\",\n      \"requiredScenarioState\": \"MFA first attempt failed\",\n      \"newScenarioState\": \"MFA token required\",\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"PASSWORD\": \"testPassword\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          },\n          {\n            \"matchesJsonPath\": {\n              \"expression\": \"$.data.TOKEN\",\n              \"absent\": \"(absent)\"\n            }\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"TEST_USER\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DB\",\n              \"schemaName\": \"TEST_GO\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            },\n           
 \"idToken\": null,\n            \"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": \"mfa-token\",\n            \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        },\n        \"fixedDelayMilliseconds\": 2000\n      }\n    },\n    {\n      \"scenarioName\": \"MFA Authentication Flow\",\n      \"requiredScenarioState\": \"MFA token required\",\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"PASSWORD\": \"testPassword\",\n                \"TOKEN\": \"mfa-token\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"TEST_USER\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DB\",\n              \"schemaName\": \"TEST_GO\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            },\n         
   \"idToken\": null,\n            \"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": null,\n            \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/mfa/parallel_login_successful_flow.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"MFA Authentication Flow\",\n      \"requiredScenarioState\": \"Started\",\n      \"newScenarioState\": \"MFA token required\",\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"PASSWORD\": \"testPassword\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          },\n          {\n            \"matchesJsonPath\": {\n              \"expression\": \"$.data.TOKEN\",\n              \"absent\": \"(absent)\"\n            }\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"TEST_USER\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DB\",\n              \"schemaName\": \"TEST_GO\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            },\n            \"idToken\": null,\n            \"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": \"mfa-token\",\n           
 \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        },\n        \"fixedDelayMilliseconds\": 2000\n      }\n    },\n    {\n      \"scenarioName\": \"MFA Authentication Flow\",\n      \"requiredScenarioState\": \"MFA token required\",\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"PASSWORD\": \"testPassword\",\n                \"TOKEN\": \"mfa-token\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"TEST_USER\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DB\",\n              \"schemaName\": \"TEST_GO\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            },\n            \"idToken\": null,\n            \"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": null,\n            
\"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/oauth2/authorization_code/error_from_idp.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/oauth/authorize\",\n        \"queryParameters\": {\n          \"response_type\": {\n            \"equalTo\": \"code\"\n          },\n          \"scope\": {\n            \"equalTo\": \"session:role:ANALYST\"\n          },\n          \"code_challenge_method\": {\n            \"equalTo\": \"S256\"\n          },\n          \"redirect_uri\": {\n            \"equalTo\": \"http://localhost:1234/snowflake/oauth-redirect\"\n          },\n          \"code_challenge\": {\n            \"matches\": \".+\"\n          },\n          \"state\": {\n            \"matches\": \"testState|invalidState\"\n          },\n          \"client_id\": {\n            \"equalTo\": \"testClientId\"\n          }\n        },\n        \"method\": \"GET\"\n      },\n      \"response\": {\n        \"status\": 302,\n        \"headers\": {\n          \"Location\": \"http://localhost:1234/snowflake/oauth-redirect?error=some+error&error_description=some+error+desc\"\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/oauth2/authorization_code/invalid_code.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/oauth/authorize\",\n        \"queryParameters\": {\n          \"response_type\": {\n            \"equalTo\": \"code\"\n          },\n          \"scope\": {\n            \"equalTo\": \"session:role:ANALYST\"\n          },\n          \"code_challenge_method\": {\n            \"equalTo\": \"S256\"\n          },\n          \"redirect_uri\": {\n            \"equalTo\": \"http://localhost:1234/snowflake/oauth-redirect\"\n          },\n          \"code_challenge\": {\n            \"matches\": \".+\"\n          },\n          \"state\": {\n            \"matches\": \"testState\"\n          },\n          \"client_id\": {\n            \"equalTo\": \"testClientId\"\n          }\n        },\n        \"method\": \"GET\"\n      },\n      \"response\": {\n        \"status\": 302,\n        \"headers\": {\n          \"Location\": \"http://localhost:1234/snowflake/oauth-redirect?code=testCode&state=testState\"\n        }\n      }\n    },\n    {\n      \"scenarioName\": \"Successful token exchange\",\n      \"request\": {\n        \"urlPathPattern\": \"/oauth/token\",\n        \"method\": \"POST\",\n        \"headers\": {\n          \"Content-Type\": {\n            \"contains\": \"application/x-www-form-urlencoded\"\n          },\n          \"Authorization\": {\n            \"equalTo\": \"Basic dGVzdENsaWVudElkOnRlc3RDbGllbnRTZWNyZXQ=\"\n          }\n        },\n        \"formParameters\": {\n          \"grant_type\": {\n            \"equalTo\": \"authorization_code\"\n          },\n          \"code_verifier\": {\n            \"matches\": \"[a-zA-Z0-9\\\\-_]+\"\n          },\n          \"code\": {\n            \"equalTo\": \"testCode\"\n          },\n          \"redirect_uri\": {\n            \"equalTo\": \"http://localhost:1234/snowflake/oauth-redirect\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 400,\n        \"jsonBody\": {\n          \"error\" : 
\"invalid_grant\",\n          \"error_description\" : \"The authorization code is invalid or has expired.\"\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/oauth2/authorization_code/successful_flow.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/oauth/authorize\",\n        \"queryParameters\": {\n          \"response_type\": {\n            \"equalTo\": \"code\"\n          },\n          \"scope\": {\n            \"equalTo\": \"session:role:ANALYST\"\n          },\n          \"code_challenge_method\": {\n            \"equalTo\": \"S256\"\n          },\n          \"redirect_uri\": {\n            \"matches\": \"http:.+\"\n          },\n          \"code_challenge\": {\n            \"matches\": \"JZpN_-zfNduuWm-zUo-D-m7vMw_pgUGv8wGDGqBR8PM\"\n          },\n          \"state\": {\n            \"matches\": \"testState|invalidState\"\n          },\n          \"client_id\": {\n            \"equalTo\": \"testClientId\"\n          }\n        },\n        \"method\": \"GET\"\n      },\n      \"response\": {\n        \"status\": 302,\n        \"headers\": {\n          \"Location\": \"{{ request.query.redirect_uri }}?code=testCode&state=testState\"\n        },\n        \"transformers\": [\"response-template\"]\n      }\n    },\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/oauth/token\",\n        \"method\": \"POST\",\n        \"headers\": {\n          \"Content-Type\": {\n            \"contains\": \"application/x-www-form-urlencoded\"\n          },\n          \"Authorization\": {\n            \"equalTo\": \"Basic dGVzdENsaWVudElkOnRlc3RDbGllbnRTZWNyZXQ=\"\n          }\n        },\n        \"formParameters\": {\n          \"grant_type\": {\n            \"equalTo\": \"authorization_code\"\n          },\n          \"code_verifier\": {\n            \"matches\": \"testCodeVerifier\"\n          },\n          \"code\": {\n            \"equalTo\": \"testCode\"\n          },\n          \"redirect_uri\": {\n            \"matches\": \"http://(127.0.0.1|localhost):[0-9]+.*\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"access_token\": 
\"access-token-123\",\n          \"token_type\": \"Bearer\",\n          \"username\": \"test-user\",\n          \"scope\": \"refresh_token session:role:ANALYST\",\n          \"expires_in\": 600,\n          \"refresh_token_expires_in\": 86399,\n          \"idpInitiated\": false\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/oauth2/authorization_code/successful_flow_with_offline_access.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/oauth/authorize\",\n        \"queryParameters\": {\n          \"response_type\": {\n            \"equalTo\": \"code\"\n          },\n          \"scope\": {\n            \"equalTo\": \"session:role:ANALYST offline_access\"\n          },\n          \"code_challenge_method\": {\n            \"equalTo\": \"S256\"\n          },\n          \"redirect_uri\": {\n            \"equalTo\": \"http://localhost:1234/snowflake/oauth-redirect\"\n          },\n          \"code_challenge\": {\n            \"matches\": \"JZpN_-zfNduuWm-zUo-D-m7vMw_pgUGv8wGDGqBR8PM\"\n          },\n          \"state\": {\n            \"matches\": \"testState|invalidState\"\n          },\n          \"client_id\": {\n            \"equalTo\": \"testClientId\"\n          }\n        },\n        \"method\": \"GET\"\n      },\n      \"response\": {\n        \"status\": 302,\n        \"headers\": {\n          \"Location\": \"http://localhost:1234/snowflake/oauth-redirect?code=testCode&state=testState\"\n        }\n      }\n    },\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/oauth/token\",\n        \"method\": \"POST\",\n        \"headers\": {\n          \"Content-Type\": {\n            \"contains\": \"application/x-www-form-urlencoded\"\n          },\n          \"Authorization\": {\n            \"equalTo\": \"Basic dGVzdENsaWVudElkOnRlc3RDbGllbnRTZWNyZXQ=\"\n          }\n        },\n        \"formParameters\": {\n          \"grant_type\": {\n            \"equalTo\": \"authorization_code\"\n          },\n          \"code_verifier\": {\n            \"matches\": \"testCodeVerifier\"\n          },\n          \"code\": {\n            \"equalTo\": \"testCode\"\n          },\n          \"redirect_uri\": {\n            \"equalTo\": \"http://localhost:1234/snowflake/oauth-redirect\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          
\"access_token\": \"access-token-123\",\n          \"refresh_token\": \"refresh-token-123\",\n          \"token_type\": \"Bearer\",\n          \"username\": \"test-user\",\n          \"scope\": \"refresh_token session:role:ANALYST\",\n          \"expires_in\": 600,\n          \"refresh_token_expires_in\": 86399,\n          \"idpInitiated\": false\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/oauth2/authorization_code/successful_flow_with_single_use_refresh_token.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/oauth/authorize\",\n        \"queryParameters\": {\n          \"response_type\": {\n            \"equalTo\": \"code\"\n          },\n          \"scope\": {\n            \"equalTo\": \"session:role:ANALYST\"\n          },\n          \"code_challenge_method\": {\n            \"equalTo\": \"S256\"\n          },\n          \"redirect_uri\": {\n            \"equalTo\": \"http://localhost:1234/snowflake/oauth-redirect\"\n          },\n          \"code_challenge\": {\n            \"matches\": \"JZpN_-zfNduuWm-zUo-D-m7vMw_pgUGv8wGDGqBR8PM\"\n          },\n          \"state\": {\n            \"matches\": \"testState|invalidState\"\n          },\n          \"client_id\": {\n            \"equalTo\": \"testClientId\"\n          }\n        },\n        \"method\": \"GET\"\n      },\n      \"response\": {\n        \"status\": 302,\n        \"headers\": {\n          \"Location\": \"http://localhost:1234/snowflake/oauth-redirect?code=testCode&state=testState\"\n        }\n      }\n    },\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/oauth/token\",\n        \"method\": \"POST\",\n        \"headers\": {\n          \"Content-Type\": {\n            \"contains\": \"application/x-www-form-urlencoded\"\n          },\n          \"Authorization\": {\n            \"equalTo\": \"Basic dGVzdENsaWVudElkOnRlc3RDbGllbnRTZWNyZXQ=\"\n          }\n        },\n        \"formParameters\": {\n          \"grant_type\": {\n            \"equalTo\": \"authorization_code\"\n          },\n          \"code_verifier\": {\n            \"matches\": \"testCodeVerifier\"\n          },\n          \"code\": {\n            \"equalTo\": \"testCode\"\n          },\n          \"redirect_uri\": {\n            \"equalTo\": \"http://localhost:1234/snowflake/oauth-redirect\"\n          },\n          \"enable_single_use_refresh_tokens\": {\n            \"equalTo\": \"true\"\n          }\n        }\n      },\n      
\"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"access_token\": \"access-token-123\",\n          \"token_type\": \"Bearer\",\n          \"username\": \"test-user\",\n          \"scope\": \"refresh_token session:role:ANALYST\",\n          \"expires_in\": 600,\n          \"refresh_token_expires_in\": 86399,\n          \"idpInitiated\": false\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/oauth2/client_credentials/invalid_client.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/oauth/token\",\n        \"method\": \"POST\",\n        \"headers\": {\n          \"Content-Type\": {\n            \"contains\": \"application/x-www-form-urlencoded\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 401,\n        \"jsonBody\": {\n          \"error\": \"invalid_client\",\n          \"error_description\": \"The client secret supplied for a confidential client is invalid.\"\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/oauth2/client_credentials/successful_flow.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/oauth/token\",\n        \"method\": \"POST\",\n        \"headers\": {\n          \"Content-Type\": {\n            \"contains\": \"application/x-www-form-urlencoded\"\n          },\n          \"Authorization\": {\n            \"equalTo\": \"Basic dGVzdENsaWVudElkOnRlc3RDbGllbnRTZWNyZXQ=\"\n          }\n        },\n        \"formParameters\": {\n          \"grant_type\": {\n            \"equalTo\": \"client_credentials\"\n          },\n          \"scope\": {\n            \"equalTo\": \"session:role:ANALYST\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"access_token\": \"access-token-123\",\n          \"refresh_token\": \"123\",\n          \"token_type\": \"Bearer\",\n          \"username\": \"user\",\n          \"scope\": \"refresh_token session:role:ANALYST\",\n          \"expires_in\": 600,\n          \"refresh_token_expires_in\": 86399,\n          \"idpInitiated\": false\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/oauth2/login_request.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\": {\n              \"data\": {\n                \"TOKEN\": \"access-token-123\"\n              }\n            },\n            \"ignoreExtraElements\": true\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"headers\": {\n          \"Content-Type\": \"application/json\"\n        },\n        \"jsonBody\": {\n          \"code\": null,\n          \"data\": {\n            \"token\": \"session token\"\n          },\n          \"success\": true\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/oauth2/login_request_with_expired_access_token.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\": {\n              \"data\": {\n                \"TOKEN\": \"expired-token\"\n              }\n            },\n            \"ignoreExtraElements\": true\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"headers\": {\n          \"Content-Type\": \"application/json\"\n        },\n        \"jsonBody\": {\n          \"code\": \"390303\",\n          \"data\": {\n            \"authnMethod\": \"OAUTH\",\n            \"nextAction\": \"RETRY_LOGIN\",\n            \"requestId\": \"89c7289e-b984-4038-565b-dda3d96dcef3\",\n            \"signInOptions\": {}\n          },\n          \"headers\": null,\n          \"message\": \"Invalid OAuth access token. \",\n          \"success\": false\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/oauth2/refresh_token/invalid_refresh_token.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/oauth/token\",\n        \"method\": \"POST\",\n        \"headers\": {\n          \"Content-Type\": {\n            \"contains\": \"application/x-www-form-urlencoded\"\n          },\n          \"Authorization\": {\n            \"equalTo\": \"Basic dGVzdENsaWVudElkOnRlc3RDbGllbnRTZWNyZXQ=\"\n          }\n        },\n        \"formParameters\": {\n          \"scope\": {\n            \"equalTo\": \"session:role:ANALYST offline_access\"\n          },\n          \"grant_type\": {\n            \"equalTo\": \"refresh_token\"\n          },\n          \"refresh_token\": {\n            \"equalTo\": \"expired-refresh-token\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 400,\n        \"jsonBody\": {\n          \"error\" : \"invalid_grant\",\n          \"error_description\" : \"The authorization code is invalid or has expired.\"\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/oauth2/refresh_token/successful_flow.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/oauth/token\",\n        \"method\": \"POST\",\n        \"headers\": {\n          \"Content-Type\": {\n            \"contains\": \"application/x-www-form-urlencoded\"\n          },\n          \"Authorization\": {\n            \"equalTo\": \"Basic dGVzdENsaWVudElkOnRlc3RDbGllbnRTZWNyZXQ=\"\n          }\n        },\n        \"formParameters\": {\n          \"scope\": {\n            \"equalTo\": \"session:role:ANALYST offline_access\"\n          },\n          \"grant_type\": {\n            \"equalTo\": \"refresh_token\"\n          },\n          \"refresh_token\": {\n            \"equalTo\": \"refresh-token-123\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"access_token\": \"access-token-123\",\n          \"refresh_token\": \"refresh-token-123a\",\n          \"token_type\": \"Bearer\",\n          \"username\": \"test-user\",\n          \"scope\": \"session:role:ANALYST offline_access\",\n          \"expires_in\": 600,\n          \"refresh_token_expires_in\": 86399,\n          \"idpInitiated\": false\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/oauth2/refresh_token/successful_flow_without_new_refresh_token.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/oauth/token\",\n        \"method\": \"POST\",\n        \"headers\": {\n          \"Content-Type\": {\n            \"contains\": \"application/x-www-form-urlencoded\"\n          },\n          \"Authorization\": {\n            \"equalTo\": \"Basic dGVzdENsaWVudElkOnRlc3RDbGllbnRTZWNyZXQ=\"\n          }\n        },\n        \"formParameters\": {\n          \"scope\": {\n            \"equalTo\": \"session:role:ANALYST offline_access\"\n          },\n          \"grant_type\": {\n            \"equalTo\": \"refresh_token\"\n          },\n          \"refresh_token\": {\n            \"equalTo\": \"refresh-token-123\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"access_token\": \"access-token-123\",\n          \"token_type\": \"Bearer\",\n          \"username\": \"test-user\",\n          \"scope\": \"session:role:ANALYST offline_access\",\n          \"expires_in\": 600,\n          \"refresh_token_expires_in\": 86399,\n          \"idpInitiated\": false\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/password/invalid_host.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\"\n      },\n      \"response\": {\n        \"status\": 403,\n        \"jsonBody\": {\n          \"data\": null,\n          \"code\": \"390144\",\n          \"message\": \"Invalid account name or host.\",\n          \"success\": false\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/password/invalid_password.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"PASSWORD\": \"INVALID_PASSWORD\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": null,\n          \"code\": \"390100\",\n          \"message\": \"Incorrect username or password was specified.\",\n          \"success\": false\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/password/invalid_user.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"bogus\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": null,\n          \"code\": \"390422\",\n          \"message\": \"Incorrect username or password was specified.\",\n          \"success\": false\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/password/successful_flow.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"PASSWORD\": \"testPassword\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"TEST_USER\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DB\",\n              \"schemaName\": \"TEST_GO\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            },\n            \"idToken\": null,\n            \"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": null,\n            \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/password/successful_flow_with_telemetry.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"PASSWORD\": \"testPassword\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"TEST_USER\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              },\n              {\n                \"name\": \"CLIENT_TELEMETRY_ENABLED\",\n                \"value\": %CLIENT_TELEMETRY_ENABLED%\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DB\",\n              \"schemaName\": \"TEST_GO\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            },\n            \"idToken\": null,\n            \"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": null,\n            \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/pat/invalid_token.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"Successful PAT authentication flow\",\n      \"requiredScenarioState\": \"Started\",\n      \"newScenarioState\": \"Authenticated\",\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"AUTHENTICATOR\": \"PROGRAMMATIC_ACCESS_TOKEN\",\n                \"TOKEN\": \"some PAT\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"nextAction\": \"RETRY_LOGIN\",\n            \"authnMethod\": \"PAT\",\n            \"signInOptions\": {}\n          },\n          \"code\": \"394400\",\n          \"message\": \"Programmatic access token is invalid.\",\n          \"success\": false,\n          \"headers\": null\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/pat/reading_fresh_token.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"Successful PAT authentication flow\",\n      \"requiredScenarioState\": \"Started\",\n      \"newScenarioState\": \"Second authentication\",\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"AUTHENTICATOR\": \"PROGRAMMATIC_ACCESS_TOKEN\",\n                \"TOKEN\": \"some PAT\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          },\n          {\n            \"matchesJsonPath\": {\n              \"expression\": \"$.data.PASSWORD\",\n              \"absent\": \"(absent)\"\n            }\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"OAUTH_TEST_AUTH_CODE\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DHEYMAN\",\n              \"schemaName\": \"TEST_JDBC\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            },\n            \"idToken\": null,\n            
\"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": null,\n            \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    },\n    {\n      \"scenarioName\": \"Successful PAT authentication flow\",\n      \"requiredScenarioState\": \"Second authentication\",\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"AUTHENTICATOR\": \"PROGRAMMATIC_ACCESS_TOKEN\",\n                \"TOKEN\": \"some PAT 2\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          },\n          {\n            \"matchesJsonPath\": {\n              \"expression\": \"$.data.PASSWORD\",\n              \"absent\": \"(absent)\"\n            }\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"OAUTH_TEST_AUTH_CODE\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DHEYMAN\",\n              
\"schemaName\": \"TEST_JDBC\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            },\n            \"idToken\": null,\n            \"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": null,\n            \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/pat/successful_flow.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"Successful PAT authentication flow\",\n      \"requiredScenarioState\": \"Started\",\n      \"newScenarioState\": \"Authenticated\",\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"AUTHENTICATOR\": \"PROGRAMMATIC_ACCESS_TOKEN\",\n                \"TOKEN\": \"some PAT\"\n              }\n            },\n            \"ignoreExtraElements\" : true\n          },\n          {\n            \"matchesJsonPath\": {\n              \"expression\": \"$.data.PASSWORD\",\n              \"absent\": \"(absent)\"\n            }\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"OAUTH_TEST_AUTH_CODE\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DHEYMAN\",\n              \"schemaName\": \"TEST_JDBC\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            },\n            \"idToken\": null,\n            \"idTokenValidityInSeconds\": 0,\n   
         \"responseData\": null,\n            \"mfaToken\": null,\n            \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/azure/http_error.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/metadata/identity/oauth2/token.*\",\n        \"queryParameters\": {\n          \"api-version\": {\n            \"equalTo\": \"2018-02-01\"\n          },\n          \"resource\": {\n            \"equalTo\": \"api://fd3f753b-eed3-462c-b6a7-a4b5bb650aad\"\n          }\n        },\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Metadata\": {\n            \"equalTo\": \"true\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 400\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/azure/missing_issuer_claim.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/metadata/identity/oauth2/token.*\",\n        \"queryParameters\": {\n          \"api-version\": {\n            \"equalTo\": \"2018-02-01\"\n          },\n          \"resource\": {\n            \"equalTo\": \"api://fd3f753b-eed3-462c-b6a7-a4b5bb650aad\"\n          }\n        },\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Metadata\": {\n            \"equalTo\": \"true\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"access_token\": \"eyJ0eXAiOiJhdCtqd3QiLCJhbGciOiJFUzI1NiIsImtpZCI6Ijk0ZGI4N2NiMjdmNjdjZDA1Zjk5OTlkZjMwNjg1NmQ4In0.eyJhdWQiOiJhcGkxIiwic3ViIjoiNzcyMTNFMzAtRThDQi00NTk1LUIxQjYtNUYwNTBFODMwOEZEIiwiZXhwIjoxNzQ0NzE2MDUxLCJpYXQiOjE3NDQ3MTI0NTEsImp0aSI6Ijg3MTMzNzcwMDk0MTZmYmFhNDM0MmFkMjMxZGUwMDBkIn0.xv_rY9IUnnoC0SeBsoXbF2UZo5wmeYNuumLJuTa7cwq0P6OHa2R5DkrHVMu4Zgz3eipQ_O9wln66BQPr_VG1iQ\"\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/azure/missing_sub_claim.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/metadata/identity/oauth2/token.*\",\n        \"queryParameters\": {\n          \"api-version\": {\n            \"equalTo\": \"2018-02-01\"\n          },\n          \"resource\": {\n            \"equalTo\": \"api://fd3f753b-eed3-462c-b6a7-a4b5bb650aad\"\n          }\n        },\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Metadata\": {\n            \"equalTo\": \"true\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"access_token\": \"eyJ0eXAiOiJhdCtqd3QiLCJhbGciOiJFUzI1NiIsImtpZCI6Ijk0ZGI4N2NiMjdmNjdjZDA1Zjk5OTlkZjMwNjg1NmQ4In0.eyJhdWQiOiJhcGkxIiwiaXNzIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvZmExNWQ2OTItZTljNy00NDYwLWE3NDMtMjlmMjk1MjIyMjkvIiwiZXhwIjoxNzQ0NzE2MDUxLCJpYXQiOjE3NDQ3MTI0NTEsImp0aSI6Ijg3MTMzNzcwMDk0MTZmYmFhNDM0MmFkMjMxZGUwMDBkIn0.KfVQlyouRS2EoGZTvzTN77pTviXdyPl27WrC9rPsr9AiTwnsXnOxIj-CDahyeFksWGNuhRcyzN_nI_ewBS7fVw\"\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/azure/non_json_response.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/metadata/identity/oauth2/token.*\",\n        \"queryParameters\": {\n          \"api-version\": {\n            \"equalTo\": \"2018-02-01\"\n          },\n          \"resource\": {\n            \"equalTo\": \"api://fd3f753b-eed3-462c-b6a7-a4b5bb650aad\"\n          }\n        },\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Metadata\": {\n            \"equalTo\": \"true\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"body\": \"not a JSON format\"\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/azure/successful_flow_azure_functions.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/metadata/identity/endpoint/from/env.*\",\n        \"queryParameters\": {\n          \"api-version\": {\n            \"equalTo\": \"2019-08-01\"\n          },\n          \"resource\": {\n            \"equalTo\": \"api://fd3f753b-eed3-462c-b6a7-a4b5bb650aad\"\n          },\n          \"client_id\": {\n            \"equalTo\": \"managed-client-id-from-env\"\n          }\n        },\n        \"method\": \"GET\",\n        \"headers\": {\n          \"X-IDENTITY-HEADER\": {\n            \"equalTo\": \"some-identity-header-from-env\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"access_token\": \"eyJ0eXAiOiJhdCtqd3QiLCJhbGciOiJFUzI1NiIsImtpZCI6Ijk0ZGI4N2NiMjdmNjdjZDA1Zjk5OTlkZjMwNjg1NmQ4In0.eyJhdWQiOiJhcGkxIiwiaXNzIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvZmExNWQ2OTItZTljNy00NDYwLWE3NDMtMjlmMjk1MjIyMjkvIiwic3ViIjoiNzcyMTNFMzAtRThDQi00NTk1LUIxQjYtNUYwNTBFODMwOEZEIiwiZXhwIjoxNzQ0NzE2MDUxLCJpYXQiOjE3NDQ3MTI0NTEsImp0aSI6Ijg3MTMzNzcwMDk0MTZmYmFhNDM0MmFkMjMxZGUwMDBkIn0.C5jTYoybRs5YF5GvPgoDq4WK5U9-gDzh_N3IPaqEBI0IifdYSWpKQ72v3UISnVpp7Fc46C-ZC8kijUGe3IU9zA\"\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/azure/successful_flow_azure_functions_custom_entra_resource.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/metadata/identity/endpoint/from/env.*\",\n        \"queryParameters\": {\n          \"api-version\": {\n            \"equalTo\": \"2019-08-01\"\n          },\n          \"resource\": {\n            \"equalTo\": \"api://1111111-2222-3333-44444-55555555\"\n          },\n          \"client_id\": {\n            \"equalTo\": \"managed-client-id-from-env\"\n          }\n        },\n        \"method\": \"GET\",\n        \"headers\": {\n          \"X-IDENTITY-HEADER\": {\n            \"equalTo\": \"some-identity-header-from-env\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"access_token\": \"eyJ0eXAiOiJhdCtqd3QiLCJhbGciOiJFUzI1NiIsImtpZCI6Ijk0ZGI4N2NiMjdmNjdjZDA1Zjk5OTlkZjMwNjg1NmQ4In0.eyJhdWQiOiJhcGkxIiwiaXNzIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvZmExNWQ2OTItZTljNy00NDYwLWE3NDMtMjlmMjk1MjIyMjkvIiwic3ViIjoiNzcyMTNFMzAtRThDQi00NTk1LUIxQjYtNUYwNTBFODMwOEZEIiwiZXhwIjoxNzQ0NzE2MDUxLCJpYXQiOjE3NDQ3MTI0NTEsImp0aSI6Ijg3MTMzNzcwMDk0MTZmYmFhNDM0MmFkMjMxZGUwMDBkIn0.C5jTYoybRs5YF5GvPgoDq4WK5U9-gDzh_N3IPaqEBI0IifdYSWpKQ72v3UISnVpp7Fc46C-ZC8kijUGe3IU9zA\"\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/azure/successful_flow_azure_functions_no_client_id.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/metadata/identity/endpoint/from/env.*\",\n        \"queryParameters\": {\n          \"api-version\": {\n            \"equalTo\": \"2019-08-01\"\n          },\n          \"resource\": {\n            \"equalTo\": \"api://fd3f753b-eed3-462c-b6a7-a4b5bb650aad\"\n          }\n        },\n        \"method\": \"GET\",\n        \"headers\": {\n          \"X-IDENTITY-HEADER\": {\n            \"equalTo\": \"some-identity-header-from-env\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"access_token\": \"eyJ0eXAiOiJhdCtqd3QiLCJhbGciOiJFUzI1NiIsImtpZCI6Ijk0ZGI4N2NiMjdmNjdjZDA1Zjk5OTlkZjMwNjg1NmQ4In0.eyJhdWQiOiJhcGkxIiwiaXNzIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvZmExNWQ2OTItZTljNy00NDYwLWE3NDMtMjlmMjk1MjIyMjkvIiwic3ViIjoiNzcyMTNFMzAtRThDQi00NTk1LUIxQjYtNUYwNTBFODMwOEZEIiwiZXhwIjoxNzQ0NzE2MDUxLCJpYXQiOjE3NDQ3MTI0NTEsImp0aSI6Ijg3MTMzNzcwMDk0MTZmYmFhNDM0MmFkMjMxZGUwMDBkIn0.C5jTYoybRs5YF5GvPgoDq4WK5U9-gDzh_N3IPaqEBI0IifdYSWpKQ72v3UISnVpp7Fc46C-ZC8kijUGe3IU9zA\"\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/azure/successful_flow_azure_functions_v2_issuer.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/metadata/identity/endpoint/from/env.*\",\n        \"queryParameters\": {\n          \"api-version\": {\n            \"equalTo\": \"2019-08-01\"\n          },\n          \"resource\": {\n            \"equalTo\": \"api://fd3f753b-eed3-462c-b6a7-a4b5bb650aad\"\n          },\n          \"client_id\": {\n            \"equalTo\": \"managed-client-id-from-env\"\n          }\n        },\n        \"method\": \"GET\",\n        \"headers\": {\n          \"X-IDENTITY-HEADER\": {\n            \"equalTo\": \"some-identity-header-from-env\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"access_token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJhdWQiOiJhcGk6Ly9mZDNmNzUzYi1lZWQzLTQ2MmMtYjZhNy1hNGI1YmI2NTBhYWQiLCJleHAiOjE3NDQ3MTYwNTEsImlhdCI6MTc0NDcxMjQ1MSwiaXNzIjoiaHR0cHM6Ly9sb2dpbi5taWNyb3NvZnRvbmxpbmUuY29tL2ZhMTVkNjkyLWU5YzctNDQ2MC1hNzQzLTI5ZjI5NTIyMjI5LyIsImp0aSI6Ijg3MTMzNzcwMDk0MTZmYmFhNDM0MmFkMjMxZGUwMDBkIiwic3ViIjoiNzcyMTNFMzAtRThDQi00NTk1LUIxQjYtNUYwNTBFODMwOEZEIn0.5mAlEPkzHLR7YbllpKgk-8ZEd88XfzA15DUK8u1rLWs\"\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/azure/successful_flow_basic.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/metadata/identity/oauth2/token.*\",\n        \"queryParameters\": {\n          \"api-version\": {\n            \"equalTo\": \"2018-02-01\"\n          },\n          \"resource\": {\n            \"equalTo\": \"api://fd3f753b-eed3-462c-b6a7-a4b5bb650aad\"\n          }\n        },\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Metadata\": {\n            \"equalTo\": \"true\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"access_token\": \"eyJ0eXAiOiJhdCtqd3QiLCJhbGciOiJFUzI1NiIsImtpZCI6Ijk0ZGI4N2NiMjdmNjdjZDA1Zjk5OTlkZjMwNjg1NmQ4In0.eyJhdWQiOiJhcGkxIiwiaXNzIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvZmExNWQ2OTItZTljNy00NDYwLWE3NDMtMjlmMjk1MjIyMjkvIiwic3ViIjoiNzcyMTNFMzAtRThDQi00NTk1LUIxQjYtNUYwNTBFODMwOEZEIiwiZXhwIjoxNzQ0NzE2MDUxLCJpYXQiOjE3NDQ3MTI0NTEsImp0aSI6Ijg3MTMzNzcwMDk0MTZmYmFhNDM0MmFkMjMxZGUwMDBkIn0.C5jTYoybRs5YF5GvPgoDq4WK5U9-gDzh_N3IPaqEBI0IifdYSWpKQ72v3UISnVpp7Fc46C-ZC8kijUGe3IU9zA\"\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/azure/successful_flow_v2_issuer.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/metadata/identity/oauth2/token.*\",\n        \"queryParameters\": {\n          \"api-version\": {\n            \"equalTo\": \"2018-02-01\"\n          },\n          \"resource\": {\n            \"equalTo\": \"api://fd3f753b-eed3-462c-b6a7-a4b5bb650aad\"\n          }\n        },\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Metadata\": {\n            \"equalTo\": \"true\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"access_token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJhdWQiOiJhcGk6Ly9mZDNmNzUzYi1lZWQzLTQ2MmMtYjZhNy1hNGI1YmI2NTBhYWQiLCJleHAiOjE3NDQ3MTYwNTEsImlhdCI6MTc0NDcxMjQ1MSwiaXNzIjoiaHR0cHM6Ly9sb2dpbi5taWNyb3NvZnRvbmxpbmUuY29tL2ZhMTVkNjkyLWU5YzctNDQ2MC1hNzQzLTI5ZjI5NTIyMjI5LyIsImp0aSI6Ijg3MTMzNzcwMDk0MTZmYmFhNDM0MmFkMjMxZGUwMDBkIiwic3ViIjoiNzcyMTNFMzAtRThDQi00NTk1LUIxQjYtNUYwNTBFODMwOEZEIn0.5mAlEPkzHLR7YbllpKgk-8ZEd88XfzA15DUK8u1rLWs\"\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/azure/unparsable_token.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/metadata/identity/oauth2/token.*\",\n        \"queryParameters\": {\n          \"api-version\": {\n            \"equalTo\": \"2018-02-01\"\n          },\n          \"resource\": {\n            \"equalTo\": \"api://fd3f753b-eed3-462c-b6a7-a4b5bb650aad\"\n          }\n        },\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Metadata\": {\n            \"equalTo\": \"true\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"access_token\": \"unparsable.token\"\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/gcp/http_error.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/computeMetadata/v1/instance/service-accounts/default/identity.*\",\n        \"queryParameters\": {\n          \"audience\": {\n            \"equalTo\": \"snowflakecomputing.com\"\n          }\n        },\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Metadata-Flavor\": {\n            \"equalTo\": \"Google\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 400\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/gcp/missing_issuer_claim.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/computeMetadata/v1/instance/service-accounts/default/identity.*\",\n        \"queryParameters\": {\n          \"audience\": {\n            \"equalTo\": \"snowflakecomputing.com\"\n          }\n        },\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Metadata-Flavor\": {\n            \"equalTo\": \"Google\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"body\": \"eyJ0eXAiOiJhdCtqd3QiLCJhbGciOiJFUzI1NiIsImtpZCI6ImU2M2I5NzA1OTRiY2NmZTAxMDlkOTg4OWM2MDk3OWEwIn0.eyJzdWIiOiJzb21lLXN1YmplY3QiLCJpYXQiOjE3NDM3NjEyMTMsImV4cCI6MTc0Mzc2NDgxMywiYXVkIjoid3d3LmV4YW1wbGUuY29tIn0.H6sN6kjA82EuijFcv-yCJTqau5qvVTCsk0ZQ4gvFQMkB7c71XPs4lkwTa7ZlNNlx9e6TpN1CVGnpCIRDDAZaDw\"\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/gcp/missing_sub_claim.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/computeMetadata/v1/instance/service-accounts/default/identity.*\",\n        \"queryParameters\": {\n          \"audience\": {\n            \"equalTo\": \"snowflakecomputing.com\"\n          }\n        },\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Metadata-Flavor\": {\n            \"equalTo\": \"Google\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"body\": \"eyJ0eXAiOiJhdCtqd3QiLCJhbGciOiJFUzI1NiIsImtpZCI6ImU2M2I5NzA1OTRiY2NmZTAxMDlkOTg4OWM2MDk3OWEwIn0.eyJpc3MiOiJodHRwczovL2FjY291bnRzLmdvb2dsZS5jb20iLCJpYXQiOjE3NDM3NjEyMTMsImV4cCI6MTc0Mzc2NDgxMywiYXVkIjoid3d3LmV4YW1wbGUuY29tIn0.w0njdpfWFETVK8Ktq9GdvuKRQJjvhOplcSyvQ_zHHwBUSMapqO1bjEWBx5VhGkdECZIGS1VY7db_IOqT45yOMA\"\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/gcp/successful_flow.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/computeMetadata/v1/instance/service-accounts/default/identity.*\",\n        \"queryParameters\": {\n          \"audience\": {\n            \"equalTo\": \"snowflakecomputing.com\"\n          }\n        },\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Metadata-Flavor\": {\n            \"equalTo\": \"Google\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"body\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwczovL2FjY291bnRzLmdvb2dsZS5jb20iLCJpYXQiOjE3NDM2OTIwMTcsImV4cCI6MTc3NTIyODAxNCwiYXVkIjoid3d3LmV4YW1wbGUuY29tIiwic3ViIjoic29tZS1zdWJqZWN0In0.k7018udXQjw-sgVY8sTLTnNrnJoGwVpjE6HozZN-h0w\"\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/gcp/successful_impersionation_flow.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/computeMetadata/v1/instance/service-accounts/default/token\",\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Metadata-Flavor\": {\n            \"equalTo\": \"Google\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\"access_token\":\"randomToken123\",\"expires_in\":3599,\"token_type\":\"Bearer\"}\n      }\n    },\n    {\n      \"request\": {\n        \"urlPattern\": \"/v1/projects/-/serviceAccounts/targetServiceAccount:generateIdToken\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"matchesJsonPath\": {\n              \"expression\": \"$.delegates\",\n              \"equalToJson\": \"[\\\"projects/-/serviceAccounts/delegate1\\\", \\\"projects/-/serviceAccounts/delegate2\\\"]\"\n            }\n          },\n          {\n            \"matchesJsonPath\": {\n              \"expression\": \"$.audience\",\n              \"equalTo\": \"snowflakecomputing.com\"\n            }\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\"token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwczovL2FjY291bnRzLmdvb2dsZS5jb20iLCJpYXQiOjE3NDM2OTIwMTcsImV4cCI6MTc3NTIyODAxNCwiYXVkIjoid3d3LmV4YW1wbGUuY29tIiwic3ViIjoic29tZS1pbXBlcnNvbmF0ZWQtc3ViamVjdCJ9.5KC0hjxwAheysO-hWCgjBGPUe143-xjytC72epRG8Ks\"}\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/auth/wif/gcp/unparsable_token.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \"/computeMetadata/v1/instance/service-accounts/default/identity.*\",\n        \"queryParameters\": {\n          \"audience\": {\n            \"equalTo\": \"snowflakecomputing.com\"\n          }\n        },\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Metadata-Flavor\": {\n            \"equalTo\": \"Google\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"body\": \"unparsable.token\"\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/close_session.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"Successful close session\",\n      \"request\": {\n        \"urlPathPattern\": \"/session\",\n        \"method\": \"POST\",\n        \"queryParameters\": {\n          \"delete\": {\n            \"equalTo\": \"true\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"code\": null,\n          \"data\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/hang.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"url\": \"/hang\"\n      },\n      \"response\": {\n        \"status\": 200,\n        \"fixedDelayMilliseconds\": 2000\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/minicore/auth/disabled_flow.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"matchesJsonPath\": \"$.data.CLIENT_ENVIRONMENT[?(@.CORE_LOAD_ERROR =~ /.*disabled at compile time.*/)]\"\n          },\n          {\n            \"matchesJsonPath\": {\n              \"expression\": \"$.data.CLIENT_ENVIRONMENT.CORE_VERSION\",\n              \"absent\": \"(absent)\"\n            }\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"TEST_USER\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DB\",\n              \"schemaName\": \"TEST_GO\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            },\n            \"idToken\": null,\n            \"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": null,\n            \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/minicore/auth/successful_flow.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"PASSWORD\": \"testPassword\",\n                \"CLIENT_ENVIRONMENT\": {\n                  \"CORE_VERSION\": \"0.0.1\",\n                  \"CGO_ENABLED\": true,\n                  \"LINKING_MODE\": \"unknown\"\n                }\n              }\n            },\n            \"ignoreExtraElements\" : true\n          },\n          {\n            \"matchesJsonPath\": \"$.data.CLIENT_ENVIRONMENT[?(@.CORE_FILE_NAME =~ /.+/)]\"\n          },\n          {\n            \"matchesJsonPath\": {\n              \"expression\": \"$.data.CLIENT_ENVIRONMENT.CORE_LOAD_ERROR\",\n              \"absent\": \"(absent)\"\n            }\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"TEST_USER\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DB\",\n              \"schemaName\": \"TEST_GO\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": 
\"ANALYST\"\n            },\n            \"idToken\": null,\n            \"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": null,\n            \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/minicore/auth/successful_flow_linux.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\" : {\n              \"data\": {\n                \"LOGIN_NAME\": \"testUser\",\n                \"PASSWORD\": \"testPassword\",\n                \"CLIENT_ENVIRONMENT\": {\n                  \"CORE_VERSION\": \"0.0.1\",\n                  \"CGO_ENABLED\": true,\n                  \"LINKING_MODE\": \"dynamic\"\n                }\n              }\n            },\n            \"ignoreExtraElements\" : true\n          },\n          {\n            \"matchesJsonPath\": \"$.data.CLIENT_ENVIRONMENT[?(@.CORE_FILE_NAME =~ /.+/)]\"\n          },\n          {\n            \"matchesJsonPath\": {\n              \"expression\": \"$.data.CLIENT_ENVIRONMENT.CORE_LOAD_ERROR\",\n              \"absent\": \"(absent)\"\n            }\n          },\n          {\n            \"matchesJsonPath\": \"$.data.CLIENT_ENVIRONMENT[?(@.LIBC_FAMILY =~ /^(glibc|musl)$/)]\"\n          },\n          {\n            \"matchesJsonPath\": \"$.data.CLIENT_ENVIRONMENT[?(@.LIBC_VERSION =~ /\\\\d+\\\\.\\\\d+.*/)]\"\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"masterToken\": \"master token\",\n            \"token\": \"session token\",\n            \"validityInSeconds\": 3600,\n            \"masterValidityInSeconds\": 14400,\n            \"displayUserName\": \"TEST_USER\",\n            \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n            \"firstLogin\": false,\n            \"remMeToken\": null,\n            \"remMeValidityInSeconds\": 0,\n            \"healthCheckInterval\": 45,\n            \"newClientForUpgrade\": \"3.12.3\",\n            \"sessionId\": 1172562260498,\n            \"parameters\": [\n              {\n                \"name\": 
\"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              }\n            ],\n            \"sessionInfo\": {\n              \"databaseName\": \"TEST_DB\",\n              \"schemaName\": \"TEST_GO\",\n              \"warehouseName\": \"TEST_XSMALL\",\n              \"roleName\": \"ANALYST\"\n            },\n            \"idToken\": null,\n            \"idTokenValidityInSeconds\": 0,\n            \"responseData\": null,\n            \"mfaToken\": null,\n            \"mfaTokenValidityInSeconds\": 0\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/ocsp/auth_failure.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/session/v1/login-request.*\",\n        \"method\": \"POST\"\n      },\n      \"response\": {\n        \"status\": 401,\n        \"jsonBody\": {\n          \"data\": null,\n          \"code\": \"390100\",\n          \"message\": \"Authentication failed for OCSP test\",\n          \"success\": false\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/ocsp/malformed.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/\"\n      },\n      \"response\": {\n        \"status\": 200,\n        \"base64Body\": \"AQID\"\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/ocsp/unauthorized.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/\",\n        \"method\": \"POST\"\n      },\n      \"response\": {\n        \"status\": 200,\n        \"base64Body\": \"MAMKAQY=\"\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/platform_detection/aws_ec2_instance_success.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"method\": \"PUT\",\n        \"urlPath\": \"/latest/api/token\"\n      },\n      \"response\": {\n        \"status\": 200,\n        \"body\": \"AQAEAEV4aW1hbGVUb2tlbg==\",\n        \"headers\": {\n          \"Content-Type\": \"text/plain\"\n        }\n      }\n    },\n    {\n      \"request\": {\n        \"method\": \"GET\",\n        \"urlPath\": \"/latest/meta-data/iam/security-credentials/\"\n      },\n      \"response\": {\n        \"status\": 200,\n        \"body\": \"test-role\",\n        \"headers\": {\n          \"Content-Type\": \"text/plain\"\n        }\n      }\n    },\n    {\n      \"request\": {\n        \"method\": \"GET\",\n        \"urlPath\": \"/latest/meta-data/iam/security-credentials/test-role\"\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"Code\": \"Success\",\n          \"LastUpdated\": \"2023-01-01T00:00:00Z\",\n          \"Type\": \"AWS-HMAC\",\n          \"AccessKeyId\": \"AKIAIOSFODNN7EXAMPLE\",\n          \"SecretAccessKey\": \"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\",\n          \"Token\": \"AQoDYXdzEJr...<remainder of security token>\",\n          \"Expiration\": \"2030-01-01T06:00:00Z\"\n        },\n        \"headers\": {\n          \"Content-Type\": \"application/json\"\n        }\n      }\n    },\n    {\n      \"request\": {\n        \"method\": \"GET\",\n        \"urlPath\": \"/latest/dynamic/instance-identity/document\"\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"instanceId\": \"i-1234567890abcdef0\",\n          \"imageId\": \"ami-12345678\",\n          \"availabilityZone\": \"us-east-1a\",\n          \"instanceType\": \"t2.micro\",\n          \"accountId\": \"123456789012\",\n          \"architecture\": \"x86_64\",\n          \"kernelId\": null,\n          \"ramdiskId\": null,\n          \"region\": \"us-east-1\",\n          \"version\": 
\"2017-09-30\",\n          \"privateIp\": \"10.0.0.1\",\n          \"billingProducts\": null,\n          \"marketplaceProductCodes\": null,\n          \"pendingTime\": \"2023-01-01T00:00:00Z\",\n          \"devpayProductCodes\": null\n        },\n        \"headers\": {\n          \"Content-Type\": \"application/json\"\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/platform_detection/aws_identity_success.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"method\": \"PUT\",\n        \"urlPath\": \"/latest/api/token\"\n      },\n      \"response\": {\n        \"status\": 200,\n        \"body\": \"AQAEAEV4aW1hbGVUb2tlbg==\",\n        \"headers\": {\n          \"Content-Type\": \"text/plain\"\n        }\n      }\n    },\n    {\n      \"request\": {\n        \"method\": \"GET\",\n        \"urlPath\": \"/latest/meta-data/iam/security-credentials/\"\n      },\n      \"response\": {\n        \"status\": 200,\n        \"body\": \"test-role\",\n        \"headers\": {\n          \"Content-Type\": \"text/plain\"\n        }\n      }\n    },\n    {\n      \"request\": {\n        \"method\": \"GET\",\n        \"urlPath\": \"/latest/meta-data/iam/security-credentials/test-role\"\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"Code\": \"Success\",\n          \"LastUpdated\": \"2023-01-01T00:00:00Z\",\n          \"Type\": \"AWS-HMAC\",\n          \"AccessKeyId\": \"AKIAIOSFODNN7EXAMPLE\",\n          \"SecretAccessKey\": \"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\",\n          \"Token\": \"AQoDYXdzEJr...<remainder of security token>\",\n          \"Expiration\": \"2030-01-01T06:00:00Z\"\n        },\n        \"headers\": {\n          \"Content-Type\": \"application/json\"\n        }\n      }\n    },\n    {\n      \"request\": {\n        \"method\": \"POST\",\n        \"urlPath\": \"/\",\n        \"bodyPatterns\": [\n          {\n            \"contains\": \"Action=GetCallerIdentity\"\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"body\": \"<?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?>\\n<GetCallerIdentityResponse xmlns=\\\"https://sts.amazonaws.com/doc/2011-06-15/\\\">\\n    <GetCallerIdentityResult>\\n        <Arn>arn:aws:iam::123456789012:user/test-user</Arn>\\n        <UserId>AIDACKCEVSQ6C2EXAMPLE</UserId>\\n        <Account>123456789012</Account>\\n    
</GetCallerIdentityResult>\\n    <ResponseMetadata>\\n        <RequestId>01234567-89ab-cdef-0123-456789abcdef</RequestId>\\n    </ResponseMetadata>\\n</GetCallerIdentityResponse>\",\n        \"headers\": {\n          \"Content-Type\": \"text/xml\"\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/platform_detection/azure_managed_identity_success.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"method\": \"GET\",\n        \"urlPattern\": \"/metadata/identity/oauth2/token\\\\?.*\",\n        \"headers\": {\n          \"Metadata\": {\n            \"equalTo\": \"true\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"headers\": {\n          \"Content-Type\": \"application/json\"\n        },\n        \"jsonBody\": {\n          \"access_token\": \"test-token\",\n          \"token_type\": \"Bearer\",\n          \"expires_in\": 3600\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/platform_detection/azure_vm_success.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"method\": \"GET\",\n        \"url\": \"/metadata/instance?api-version=2019-03-11\",\n        \"headers\": {\n          \"Metadata\": {\n            \"equalTo\": \"true\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"headers\": {\n          \"Content-Type\": \"application/json\"\n        },\n        \"jsonBody\": {\n          \"compute\": {\n            \"vmId\": \"test-vm-id\",\n            \"name\": \"test-vm\"\n          }\n        }\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/platform_detection/gce_identity_success.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"method\": \"GET\",\n        \"url\": \"/computeMetadata/v1/instance/service-accounts/default/email\",\n        \"headers\": {\n          \"Metadata-Flavor\": {\n            \"equalTo\": \"Google\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"headers\": {\n          \"Content-Type\": \"text/plain\"\n        },\n        \"body\": \"test-service-account@test-project.iam.gserviceaccount.com\"\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/platform_detection/gce_vm_success.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"method\": \"GET\",\n        \"url\": \"/\"\n      },\n      \"response\": {\n        \"status\": 200,\n        \"headers\": {\n          \"Metadata-Flavor\": \"Google\",\n          \"Content-Type\": \"text/plain\"\n        },\n        \"body\": \"v1/\"\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/platform_detection/timeout_response.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPattern\": \".*\"\n      },\n      \"response\": {\n        \"status\": 200,\n        \"fixedDelayMilliseconds\": 1000,\n        \"body\": \"timeout\"\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "test_data/wiremock/mappings/query/long_running_query.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"request\": {\n        \"urlPathPattern\": \"/queries/v1/query-request.*\",\n        \"method\": \"POST\",\n        \"headers\": {\n          \"Authorization\": {\n            \"matches\": \".*\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"code\": \"333334\",\n          \"data\": {\n            \"getResultUrl\": \"/queries/01bfd516-0009-ae23-0000-4c390101d1aa/result\",\n            \"progressDesc\": null,\n            \"queryAbortsAfterSecs\": 300,\n            \"queryId\": \"01bfd516-0009-ae23-0000-4c390101d1aa\"\n          },\n          \"message\": \"Asynchronous execution in progress. Use provided query id to perform query monitoring and management.\",\n          \"success\": true\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/query/query_by_id_timeout.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"Query status monitoring - RUNNING\",\n      \"request\": {\n        \"urlPathPattern\": \"/queries.*\",\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Authorization\": {\n            \"matches\": \".*\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"fixedDelayMilliseconds\": 3000\n      }\n    }\n  ]\n} "
  },
  {
    "path": "test_data/wiremock/mappings/query/query_execution.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"SQL Query execution for fetchResultByQueryID\",\n      \"request\": {\n        \"urlPathPattern\": \"/queries/v1/query-request.*\",\n        \"method\": \"POST\",\n        \"headers\": {\n          \"Authorization\": {\n            \"matches\": \".*\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"success\": true,\n          \"data\": {\n            \"queryId\": \"mock-query-id-12345\",\n            \"resultSetMetaData\": {\n              \"columnCount\": 2,\n              \"columns\": [\n                {\"name\": \"MS\", \"type\": \"number\"},\n                {\"name\": \"SUM(C1)\", \"type\": \"number\"}\n              ]\n            },\n            \"rowType\": [\n              {\"name\": \"MS\", \"type\": \"FIXED\", \"length\": 10, \"precision\": 38, \"scale\": 0},\n              {\"name\": \"SUM(C1)\", \"type\": \"FIXED\", \"length\": 10, \"precision\": 38, \"scale\": 0}\n            ],\n            \"rowset\": [[\"1\", \"5050\"], [\"2\", \"5100\"]],\n            \"total\": 2,\n            \"queryResultFormat\": \"json\"\n          }\n        }\n      }\n    },\n    {\n      \"scenarioName\": \"Query result fetching\",\n      \"request\": {\n        \"urlPathPattern\": \"/queries/.*/result.*\",\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Authorization\": {\n            \"matches\": \".*\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"success\": true,\n          \"data\": {\n            \"queryId\": \"mock-query-id-12345\",\n            \"resultSetMetaData\": {\n              \"columnCount\": 2,\n              \"columns\": [\n                {\"name\": \"MS\", \"type\": \"number\"},\n                {\"name\": \"SUM(C1)\", \"type\": \"number\"}\n              ]\n            },\n            \"rowType\": [\n              
{\"name\": \"MS\", \"type\": \"FIXED\", \"length\": 10, \"precision\": 38, \"scale\": 0},\n              {\"name\": \"SUM(C1)\", \"type\": \"FIXED\", \"length\": 10, \"precision\": 38, \"scale\": 0}\n            ],\n            \"rowset\": [[\"1\", \"5050\"], [\"2\", \"5100\"]],\n            \"total\": 2,\n            \"queryResultFormat\": \"json\"\n          }\n        }\n      }\n    }\n  ]\n} "
  },
  {
    "path": "test_data/wiremock/mappings/query/query_monitoring.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"Query status monitoring - SUCCESS\",\n      \"request\": {\n        \"urlPathPattern\": \"/monitoring/queries.*\",\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Authorization\": {\n            \"matches\": \".*\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"success\": true,\n          \"data\": {\n            \"queries\": [\n              {\n                \"id\": \"mock-query-id-12345\",\n                \"status\": \"SUCCESS\",\n                \"errorCode\": \"\",\n                \"errorMessage\": \"\"\n              }\n            ]\n          }\n        }\n      }\n    }\n  ]\n} "
  },
  {
    "path": "test_data/wiremock/mappings/query/query_monitoring_error.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"Query status monitoring - FAILED_WITH_ERROR\",\n      \"request\": {\n        \"urlPathPattern\": \"/monitoring/queries.*\",\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Authorization\": {\n            \"matches\": \".*\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"success\": true,\n          \"data\": {\n            \"queries\": [\n              {\n                \"id\": \"mock-query-id-12345\",\n                \"status\": \"FAILED_WITH_ERROR\",\n                \"errorCode\": \"\",\n                \"errorMessage\": \"\"\n              }\n            ]\n          },\n          \"code\": null,\n          \"message\": null\n        }\n      }\n    }\n  ]\n} "
  },
  {
    "path": "test_data/wiremock/mappings/query/query_monitoring_malformed.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"Query status monitoring - Malformed JSON\",\n      \"request\": {\n        \"urlPathPattern\": \"/monitoring/queries.*\",\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Authorization\": {\n            \"matches\": \".*\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"body\": \"{\\\"malformedJson\\\"}\",\n        \"headers\": {\n          \"Content-Type\": \"application/json\"\n        }\n      }\n    }\n  ]\n} "
  },
  {
    "path": "test_data/wiremock/mappings/query/query_monitoring_running.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"Query status monitoring - RUNNING\",\n      \"request\": {\n        \"urlPathPattern\": \"/monitoring/queries.*\",\n        \"method\": \"GET\",\n        \"headers\": {\n          \"Authorization\": {\n            \"matches\": \".*\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"success\": true,\n          \"data\": {\n            \"queries\": [\n              {\n                \"id\": \"mock-query-id-12345\",\n                \"status\": \"RUNNING\",\n                \"state\": \"FILE_SET_INITIALIZATION\",\n                \"errorCode\": \"\",\n                \"errorMessage\": null\n              }\n            ]\n          },\n          \"code\": null,\n          \"message\": null\n        }\n      }\n    }\n  ]\n} "
  },
  {
    "path": "test_data/wiremock/mappings/retry/redirection_retry_workflow.json",
    "content": "{\n    \"mappings\": [\n        {\n            \"scenarioName\": \"wiremock retry strategy\",\n            \"requiredScenarioState\": \"Started\",\n            \"newScenarioState\": \"Successful login\",\n            \"request\": {\n                \"urlPathPattern\": \"/session/v1/login-request.*\",\n                \"method\": \"POST\",\n                \"bodyPatterns\": [\n                    {\n                        \"equalToJson\": {\n                            \"data\": {\n                                \"LOGIN_NAME\": \"testUser\",\n                                \"PASSWORD\": \"testPassword\"\n                            }\n                        },\n                        \"ignoreExtraElements\": true\n                    }\n                ]\n            },\n            \"response\": {\n                \"status\": 200,\n                \"jsonBody\": {\n                    \"data\": {\n                        \"masterToken\": \"master token\",\n                        \"token\": \"session token\",\n                        \"validityInSeconds\": 3600,\n                        \"masterValidityInSeconds\": 14400,\n                        \"displayUserName\": \"TEST_USER\",\n                        \"serverVersion\": \"8.48.0 b2024121104444034239f05\",\n                        \"firstLogin\": false,\n                        \"remMeToken\": null,\n                        \"remMeValidityInSeconds\": 0,\n                        \"healthCheckInterval\": 45,\n                        \"newClientForUpgrade\": \"3.12.3\",\n                        \"sessionId\": 1172562260498,\n                        \"parameters\": [\n                            {\n                                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                                \"value\": 4\n                            }\n                        ],\n                        \"sessionInfo\": {\n                            \"databaseName\": \"TEST_DB\",\n                     
       \"schemaName\": \"TEST_GO\",\n                            \"warehouseName\": \"TEST_XSMALL\",\n                            \"roleName\": \"ANALYST\"\n                        },\n                        \"idToken\": null,\n                        \"idTokenValidityInSeconds\": 0,\n                        \"responseData\": null,\n                        \"mfaToken\": \"mfa-token\",\n                        \"mfaTokenValidityInSeconds\": 0\n                    },\n                    \"code\": null,\n                    \"message\": null,\n                    \"success\": true\n                },\n                \"fixedDelayMilliseconds\": 2000\n            }\n        },\n        {\n            \"scenarioName\": \"wiremock retry strategy\",\n            \"requiredScenarioState\": \"Successful login\",\n            \"newScenarioState\": \"Query attempt with HTTP 3xx response\",\n            \"request\": {\n                \"urlPathPattern\": \"/queries/v1/query-request.*\",\n                \"method\": \"POST\"\n            },\n            \"response\": {\n                \"status\": 307,\n                \"headers\": {\n                    \"Location\": \"/temp-redirect-1\"\n                }\n            }\n        },\n        {\n            \"scenarioName\": \"wiremock retry strategy\",\n            \"requiredScenarioState\": \"Query attempt with HTTP 3xx response\",\n            \"newScenarioState\": \"3xx redirect followed and times out\",\n            \"request\": {\n                \"urlPathPattern\": \"/temp-redirect-1\",\n                \"method\": \"POST\"\n            },\n            \"response\": {\n                \"fixedDelayMilliseconds\": 5000\n            }\n        },\n        {\n            \"scenarioName\": \"wiremock retry strategy\",\n            \"requiredScenarioState\": \"3xx redirect followed and times out\",\n            \"newScenarioState\": \"Retry attempt successful\",\n            \"request\": {\n                
\"urlPathPattern\": \"/queries/v1/query-request.*\",\n                \"method\": \"POST\"\n            },\n            \"response\": {\n                \"status\": 200,\n                \"headers\": {\n                    \"date\": \"Fri, 31 Oct 2025 06:26:51 GMT\",\n                    \"cache-control\": \"no-cache, no-store\",\n                    \"content-type\": \"application/json\",\n                    \"vary\": \"Accept-Encoding, User-Agent\",\n                    \"server\": \"SF-LB\",\n                    \"x-envoy-upstream-service-time\": \"72\",\n                    \"x-content-type-options\": \"nosniff\",\n                    \"x-xss-protection\": \"1; mode=block\",\n                    \"expect-ct\": \"enforce, max-age=3600\",\n                    \"strict-transport-security\": \"max-age=31536000\",\n                    \"x-snowflake-fe-instance\": \"-\",\n                    \"x-snowflake-fe-config\": \"v20251022.0.0-4d0dc170.1761148450.prod1.1761891997993\",\n                    \"x-frame-options\": \"deny\",\n                    \"x-envoy-attempt-count\": \"1\",\n                    \"transfer-encoding\": \"chunked\"\n                },\n                \"jsonBody\": {\n                    \"data\": {\n                        \"parameters\": [\n                            {\n                                \"name\": \"TIMESTAMP_OUTPUT_FORMAT\",\n                                \"value\": \"YYYY-MM-DD HH24:MI:SS.FF3 TZHTZM\"\n                            },\n                            {\n                                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                                \"value\": 4\n                            },\n                            {\n                                \"name\": \"JS_TREAT_INTEGER_AS_BIGINT\",\n                                \"value\": false\n                            },\n                            {\n                                \"name\": \"TIME_OUTPUT_FORMAT\",\n                                
\"value\": \"HH24:MI:SS\"\n                            },\n                            {\n                                \"name\": \"CLIENT_RESULT_CHUNK_SIZE\",\n                                \"value\": 160\n                            },\n                            {\n                                \"name\": \"TIMESTAMP_TZ_OUTPUT_FORMAT\",\n                                \"value\": \"\"\n                            },\n                            {\n                                \"name\": \"CLIENT_SESSION_KEEP_ALIVE\",\n                                \"value\": false\n                            },\n                            {\n                                \"name\": \"CLIENT_OUT_OF_BAND_TELEMETRY_ENABLED\",\n                                \"value\": false\n                            },\n                            {\n                                \"name\": \"CLIENT_METADATA_USE_SESSION_DATABASE\",\n                                \"value\": false\n                            },\n                            {\n                                \"name\": \"QUERY_CONTEXT_CACHE_SIZE\",\n                                \"value\": 5\n                            },\n                            {\n                                \"name\": \"ENABLE_STAGE_S3_PRIVATELINK_FOR_US_EAST_1\",\n                                \"value\": true\n                            },\n                            {\n                                \"name\": \"TIMESTAMP_NTZ_OUTPUT_FORMAT\",\n                                \"value\": \"YYYY-MM-DD HH24:MI:SS.FF3\"\n                            },\n                            {\n                                \"name\": \"CLIENT_RESULT_PREFETCH_THREADS\",\n                                \"value\": 1\n                            },\n                            {\n                                \"name\": \"CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX\",\n                                \"value\": false\n                            
},\n                            {\n                                \"name\": \"CLIENT_HONOR_CLIENT_TZ_FOR_TIMESTAMP_NTZ\",\n                                \"value\": true\n                            },\n                            {\n                                \"name\": \"CLIENT_MEMORY_LIMIT\",\n                                \"value\": 1536\n                            },\n                            {\n                                \"name\": \"CLIENT_TIMESTAMP_TYPE_MAPPING\",\n                                \"value\": \"TIMESTAMP_NTZ\"\n                            },\n                            {\n                                \"name\": \"TIMEZONE\",\n                                \"value\": \"America/Los_Angeles\"\n                            },\n                            {\n                                \"name\": \"CLIENT_RESULT_PREFETCH_SLOTS\",\n                                \"value\": 2\n                            },\n                            {\n                                \"name\": \"CLIENT_TELEMETRY_ENABLED\",\n                                \"value\": true\n                            },\n                            {\n                                \"name\": \"CLIENT_DISABLE_INCIDENTS\",\n                                \"value\": true\n                            },\n                            {\n                                \"name\": \"CLIENT_USE_V1_QUERY_API\",\n                                \"value\": true\n                            },\n                            {\n                                \"name\": \"CLIENT_RESULT_COLUMN_CASE_INSENSITIVE\",\n                                \"value\": false\n                            },\n                            {\n                                \"name\": \"BINARY_OUTPUT_FORMAT\",\n                                \"value\": \"HEX\"\n                            },\n                            {\n                                \"name\": \"CSV_TIMESTAMP_FORMAT\",\n 
                               \"value\": \"\"\n                            },\n                            {\n                                \"name\": \"CLIENT_ENABLE_LOG_INFO_STATEMENT_PARAMETERS\",\n                                \"value\": false\n                            },\n                            {\n                                \"name\": \"CLIENT_TELEMETRY_SESSIONLESS_ENABLED\",\n                                \"value\": true\n                            },\n                            {\n                                \"name\": \"JS_DRIVER_DISABLE_OCSP_FOR_NON_SF_ENDPOINTS\",\n                                \"value\": false\n                            },\n                            {\n                                \"name\": \"DATE_OUTPUT_FORMAT\",\n                                \"value\": \"YYYY-MM-DD\"\n                            },\n                            {\n                                \"name\": \"CLIENT_STAGE_ARRAY_BINDING_THRESHOLD\",\n                                \"value\": 65280\n                            },\n                            {\n                                \"name\": \"CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY\",\n                                \"value\": 3600\n                            },\n                            {\n                                \"name\": \"AUTOCOMMIT\",\n                                \"value\": true\n                            },\n                            {\n                                \"name\": \"CLIENT_SESSION_CLONE\",\n                                \"value\": false\n                            },\n                            {\n                                \"name\": \"TIMESTAMP_LTZ_OUTPUT_FORMAT\",\n                                \"value\": \"\"\n                            }\n                        ],\n                        \"rowtype\": [\n                            {\n                                \"name\": \"1\",\n                               
 \"database\": \"\",\n                                \"schema\": \"\",\n                                \"table\": \"\",\n                                \"scale\": 0,\n                                \"nullable\": false,\n                                \"byteLength\": null,\n                                \"precision\": 1,\n                                \"length\": null,\n                                \"type\": \"fixed\",\n                                \"collation\": null\n                            }\n                        ],\n                        \"rowset\": [\n                            [\n                                \"1\"\n                            ]\n                        ],\n                        \"total\": 1,\n                        \"returned\": 1,\n                        \"queryId\": \"01c01270-0e12-4b04-0000-53b10b9c95be\",\n                        \"databaseProvider\": null,\n                        \"finalDatabaseName\": \"WIREMOCKTESTDB\",\n                        \"finalSchemaName\": \"TESTSCHEMA\",\n                        \"finalWarehouseName\": \"WIREMOCK_WH\",\n                        \"finalRoleName\": \"SYSADMIN\",\n                        \"numberOfBinds\": 0,\n                        \"arrayBindSupported\": false,\n                        \"statementTypeId\": 4096,\n                        \"version\": 1,\n                        \"sendResultTime\": 1761890916147,\n                        \"queryResultFormat\": \"json\",\n                        \"queryContext\": {\n                            \"entries\": [\n                                {\n                                    \"id\": 0,\n                                    \"timestamp\": 1761890916132138,\n                                    \"priority\": 0,\n                                    \"context\": \"CJLYpAI=\"\n                                }\n                            ]\n                        }\n                    },\n                    
\"code\": null,\n                    \"message\": null,\n                    \"success\": true\n                }\n            }\n        }\n    ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/select1.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"Successful SELECT 1 flow\",\n      \"request\": {\n        \"urlPathPattern\": \"/queries/v1/query-request.*\",\n        \"method\": \"POST\",\n        \"headers\": {\n          \"Authorization\": {\n            \"equalTo\": \"Snowflake Token=\\\"session token\\\"\"\n          }\n        }\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"data\": {\n            \"parameters\": [\n              {\n                \"name\": \"TIMESTAMP_OUTPUT_FORMAT\",\n                \"value\": \"YYYY-MM-DD HH24:MI:SS.FF3 TZHTZM\"\n              },\n              {\n                \"name\": \"CLIENT_PREFETCH_THREADS\",\n                \"value\": 4\n              },\n              {\n                \"name\": \"TIME_OUTPUT_FORMAT\",\n                \"value\": \"HH24:MI:SS\"\n              },\n              {\n                \"name\": \"CLIENT_RESULT_CHUNK_SIZE\",\n                \"value\": 16\n              },\n              {\n                \"name\": \"TIMESTAMP_TZ_OUTPUT_FORMAT\",\n                \"value\": \"\"\n              },\n              {\n                \"name\": \"CLIENT_SESSION_KEEP_ALIVE\",\n                \"value\": false\n              },\n              {\n                \"name\": \"QUERY_CONTEXT_CACHE_SIZE\",\n                \"value\": 5\n              },\n              {\n                \"name\": \"CLIENT_METADATA_USE_SESSION_DATABASE\",\n                \"value\": false\n              },\n              {\n                \"name\": \"CLIENT_OUT_OF_BAND_TELEMETRY_ENABLED\",\n                \"value\": false\n              },\n              {\n                \"name\": \"ENABLE_STAGE_S3_PRIVATELINK_FOR_US_EAST_1\",\n                \"value\": true\n              },\n              {\n                \"name\": \"TIMESTAMP_NTZ_OUTPUT_FORMAT\",\n                \"value\": \"YYYY-MM-DD HH24:MI:SS.FF3\"\n              },\n              {\n      
          \"name\": \"CLIENT_RESULT_PREFETCH_THREADS\",\n                \"value\": 1\n              },\n              {\n                \"name\": \"CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX\",\n                \"value\": false\n              },\n              {\n                \"name\": \"CLIENT_HONOR_CLIENT_TZ_FOR_TIMESTAMP_NTZ\",\n                \"value\": true\n              },\n              {\n                \"name\": \"CLIENT_MEMORY_LIMIT\",\n                \"value\": 1536\n              },\n              {\n                \"name\": \"CLIENT_TIMESTAMP_TYPE_MAPPING\",\n                \"value\": \"TIMESTAMP_LTZ\"\n              },\n              {\n                \"name\": \"TIMEZONE\",\n                \"value\": \"America/Los_Angeles\"\n              },\n              {\n                \"name\": \"SERVICE_NAME\",\n                \"value\": \"\"\n              },\n              {\n                \"name\": \"CLIENT_RESULT_PREFETCH_SLOTS\",\n                \"value\": 2\n              },\n              {\n                \"name\": \"CLIENT_TELEMETRY_ENABLED\",\n                \"value\": true\n              },\n              {\n                \"name\": \"CLIENT_DISABLE_INCIDENTS\",\n                \"value\": true\n              },\n              {\n                \"name\": \"CLIENT_USE_V1_QUERY_API\",\n                \"value\": true\n              },\n              {\n                \"name\": \"CLIENT_RESULT_COLUMN_CASE_INSENSITIVE\",\n                \"value\": false\n              },\n              {\n                \"name\": \"CSV_TIMESTAMP_FORMAT\",\n                \"value\": \"\"\n              },\n              {\n                \"name\": \"BINARY_OUTPUT_FORMAT\",\n                \"value\": \"HEX\"\n              },\n              {\n                \"name\": \"CLIENT_ENABLE_LOG_INFO_STATEMENT_PARAMETERS\",\n                \"value\": false\n              },\n              {\n                \"name\": 
\"CLIENT_TELEMETRY_SESSIONLESS_ENABLED\",\n                \"value\": true\n              },\n              {\n                \"name\": \"DATE_OUTPUT_FORMAT\",\n                \"value\": \"YYYY-MM-DD\"\n              },\n              {\n                \"name\": \"CLIENT_STAGE_ARRAY_BINDING_THRESHOLD\",\n                \"value\": 65280\n              },\n              {\n                \"name\": \"CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY\",\n                \"value\": 3600\n              },\n              {\n                \"name\": \"CLIENT_SESSION_CLONE\",\n                \"value\": false\n              },\n              {\n                \"name\": \"AUTOCOMMIT\",\n                \"value\": true\n              },\n              {\n                \"name\": \"TIMESTAMP_LTZ_OUTPUT_FORMAT\",\n                \"value\": \"\"\n              }\n            ],\n            \"rowtype\": [\n              {\n                \"name\": \"1\",\n                \"database\": \"\",\n                \"schema\": \"\",\n                \"table\": \"\",\n                \"nullable\": false,\n                \"length\": null,\n                \"type\": \"fixed\",\n                \"scale\": 0,\n                \"precision\": 1,\n                \"byteLength\": null,\n                \"collation\": null\n              }\n            ],\n            \"rowset\": [\n              [\n                \"1\"\n              ]\n            ],\n            \"total\": 1,\n            \"returned\": 1,\n            \"queryId\": \"01ba13b4-0104-e9fd-0000-0111029ca00e\",\n            \"databaseProvider\": null,\n            \"finalDatabaseName\": null,\n            \"finalSchemaName\": null,\n            \"finalWarehouseName\": \"TEST_XSMALL\",\n            \"numberOfBinds\": 0,\n            \"arrayBindSupported\": false,\n            \"statementTypeId\": 4096,\n            \"version\": 1,\n            \"sendResultTime\": 1738317395581,\n            \"queryResultFormat\": \"json\",\n  
          \"queryContext\": {\n              \"entries\": [\n                {\n                  \"id\": 0,\n                  \"timestamp\": 1738317395574564,\n                  \"priority\": 0,\n                  \"context\": \"CPbPTg==\"\n                }\n              ]\n            }\n          },\n          \"code\": null,\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/telemetry/custom_telemetry.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"Successful telemetry flow\",\n      \"request\": {\n        \"urlPathPattern\": \"/telemetry/send\",\n        \"method\": \"POST\",\n        \"bodyPatterns\": [\n          {\n            \"equalToJson\": {\n              \"logs\": {\n                \"message\": {\n                  \"test_key\": \"test_value\"\n                }\n              }\n            },\n            \"ignoreExtraElements\": true\n          }\n        ]\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"code\": null,\n          \"data\": \"Log Received\",\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_data/wiremock/mappings/telemetry/telemetry.json",
    "content": "{\n  \"mappings\": [\n    {\n      \"scenarioName\": \"Successful telemetry flow\",\n      \"request\": {\n        \"urlPathPattern\": \"/telemetry/send\",\n        \"method\": \"POST\"\n      },\n      \"response\": {\n        \"status\": 200,\n        \"jsonBody\": {\n          \"code\": null,\n          \"data\": \"Log Received\",\n          \"message\": null,\n          \"success\": true\n        }\n      }\n    }\n  ]\n}"
  },
  {
    "path": "test_utils_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"net/http\"\n\t\"os\"\n\t\"runtime\"\n\t\"strings\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n)\n\ntype countingRoundTripper struct {\n\tdelegate     http.RoundTripper\n\tgetReqCount  map[string]int\n\tpostReqCount map[string]int\n\tmu           sync.Mutex\n}\n\nfunc newCountingRoundTripper(delegate http.RoundTripper) *countingRoundTripper {\n\treturn &countingRoundTripper{\n\t\tdelegate:     delegate,\n\t\tgetReqCount:  make(map[string]int),\n\t\tpostReqCount: make(map[string]int),\n\t}\n}\n\nfunc (crt *countingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {\n\tcrt.mu.Lock()\n\tswitch req.Method {\n\tcase http.MethodGet:\n\t\tcrt.getReqCount[req.URL.String()]++\n\tcase http.MethodPost:\n\t\tcrt.postReqCount[req.URL.String()]++\n\t}\n\tcrt.mu.Unlock()\n\n\treturn crt.delegate.RoundTrip(req)\n}\n\nfunc (crt *countingRoundTripper) reset() {\n\tcrt.getReqCount = make(map[string]int)\n\tcrt.postReqCount = make(map[string]int)\n}\n\nfunc (crt *countingRoundTripper) totalRequestsByPath(urlPath string) int {\n\ttotal := 0\n\tfor url, reqs := range crt.getReqCount {\n\t\tif strings.Contains(url, urlPath) {\n\t\t\ttotal += reqs\n\t\t}\n\t}\n\tfor url, reqs := range crt.postReqCount {\n\t\tif strings.Contains(url, urlPath) {\n\t\t\ttotal += reqs\n\t\t}\n\t}\n\treturn total\n}\n\nfunc (crt *countingRoundTripper) totalRequests() int {\n\ttotal := 0\n\tfor _, reqs := range crt.getReqCount {\n\t\ttotal += reqs\n\t}\n\tfor _, reqs := range crt.postReqCount {\n\t\ttotal += reqs\n\t}\n\treturn total\n}\n\ntype blockingRoundTripper struct {\n\tdelegate         http.RoundTripper\n\tdefaultBlockTime time.Duration\n\tpathBlockTime    map[string]time.Duration\n}\n\nfunc newBlockingRoundTripper(delegate http.RoundTripper, defaultBlockTime time.Duration) *blockingRoundTripper {\n\treturn &blockingRoundTripper{\n\t\tdelegate:         delegate,\n\t\tdefaultBlockTime: defaultBlockTime,\n\t\tpathBlockTime:    
make(map[string]time.Duration),\n\t}\n}\n\nfunc (brt *blockingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {\n\tif blockTime, exists := brt.pathBlockTime[req.URL.Path]; exists {\n\t\ttime.Sleep(blockTime)\n\t} else if brt.defaultBlockTime != 0 {\n\t\ttime.Sleep(brt.defaultBlockTime)\n\t}\n\treturn brt.delegate.RoundTrip(req)\n}\n\nfunc (brt *blockingRoundTripper) setPathBlockTime(path string, blockTime time.Duration) {\n\tbrt.pathBlockTime[path] = blockTime\n}\n\nfunc (brt *blockingRoundTripper) reset() {\n\tbrt.pathBlockTime = make(map[string]time.Duration)\n}\n\nfunc skipOnMissingHome(t *testing.T) {\n\tif (runtime.GOOS == \"linux\" || runtime.GOOS == \"darwin\") && os.Getenv(\"HOME\") == \"\" {\n\t\tt.Skip(\"skipping on missing HOME environment variable\")\n\t}\n}\n"
  },
  {
    "path": "tls_config.go",
    "content": "package gosnowflake\n\nimport (\n\t\"crypto/tls\"\n\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n)\n\n// RegisterTLSConfig registers a custom tls.Config to be used with sql.Open.\n// Use the key as a value in the DSN where tlsConfigName=value.\nfunc RegisterTLSConfig(key string, cfg *tls.Config) error {\n\treturn sfconfig.RegisterTLSConfig(key, cfg)\n}\n\n// DeregisterTLSConfig removes the tls.Config associated with key.\nfunc DeregisterTLSConfig(key string) error {\n\treturn sfconfig.DeregisterTLSConfig(key)\n}\n"
  },
  {
    "path": "tls_config_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"testing\"\n)\n\n// TODO move this test to config package when we have wiremock support in an internal package\nfunc TestShouldSetUpTlsConfig(t *testing.T) {\n\ttlsConfig := wiremockHTTPS.tlsConfig(t)\n\terr := RegisterTLSConfig(\"wiremock\", tlsConfig)\n\tassertNilF(t, err)\n\twiremockHTTPS.registerMappings(t, newWiremockMapping(\"auth/password/successful_flow.json\"))\n\n\tfor _, dbFunc := range []func() *sql.DB{\n\t\tfunc() *sql.DB {\n\t\t\tcfg := wiremockHTTPS.connectionConfig(t)\n\t\t\tcfg.TLSConfigName = \"wiremock\"\n\t\t\tcfg.Transporter = nil\n\t\t\treturn sql.OpenDB(NewConnector(SnowflakeDriver{}, *cfg))\n\t\t},\n\t\tfunc() *sql.DB {\n\t\t\tcfg := wiremockHTTPS.connectionConfig(t)\n\t\t\tcfg.TLSConfigName = \"wiremock\"\n\t\t\tcfg.Transporter = nil\n\t\t\tdsn, err := DSN(cfg)\n\t\t\tassertNilF(t, err)\n\t\t\tdb, err := sql.Open(\"snowflake\", dsn)\n\t\t\tassertNilF(t, err)\n\t\t\treturn db\n\t\t},\n\t} {\n\t\tt.Run(\"\", func(t *testing.T) {\n\t\t\tdb := dbFunc()\n\t\t\tdefer db.Close()\n\t\t\t// mock connection, no need to close\n\t\t\t_, err := db.Conn(context.Background())\n\t\t\tassertNilF(t, err)\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "transaction.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql/driver\"\n\t\"errors\"\n)\n\ntype snowflakeTx struct {\n\tsc  *snowflakeConn\n\tctx context.Context\n}\n\ntype txCommand int\n\nconst (\n\tcommit txCommand = iota\n\trollback\n)\n\nfunc (cmd txCommand) string() (string, error) {\n\tswitch cmd {\n\tcase commit:\n\t\treturn \"COMMIT\", nil\n\tcase rollback:\n\t\treturn \"ROLLBACK\", nil\n\t}\n\treturn \"\", errors.New(\"unsupported transaction command\")\n}\n\nfunc (tx *snowflakeTx) Commit() error {\n\treturn tx.execTxCommand(commit)\n}\n\nfunc (tx *snowflakeTx) Rollback() error {\n\treturn tx.execTxCommand(rollback)\n}\n\nfunc (tx *snowflakeTx) execTxCommand(command txCommand) (err error) {\n\ttxStr, err := command.string()\n\tif err != nil {\n\t\treturn\n\t}\n\tif tx.sc == nil || tx.sc.rest == nil {\n\t\treturn driver.ErrBadConn\n\t}\n\tisInternal := isInternal(tx.ctx)\n\t_, err = tx.sc.exec(tx.ctx, txStr, false /* noResult */, isInternal, false /* describeOnly */, nil)\n\tif err != nil {\n\t\treturn\n\t}\n\ttx.sc = nil\n\treturn\n}\n"
  },
  {
    "path": "transaction_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"errors\"\n\t\"fmt\"\n\terrors2 \"github.com/snowflakedb/gosnowflake/v2/internal/errors\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc TestTransactionOptions(t *testing.T) {\n\tvar tx *sql.Tx\n\tvar err error\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\ttx, err = dbt.conn.BeginTx(context.Background(), &sql.TxOptions{})\n\t\tif err != nil {\n\t\t\tt.Fatal(\"failed to start transaction.\")\n\t\t}\n\t\tif err = tx.Rollback(); err != nil {\n\t\t\tt.Fatal(\"failed to rollback\")\n\t\t}\n\t\tif _, err = dbt.conn.BeginTx(context.Background(), &sql.TxOptions{ReadOnly: true}); err == nil {\n\t\t\tt.Fatal(\"should have failed.\")\n\t\t}\n\t\tif driverErr, ok := err.(*SnowflakeError); !ok || driverErr.Number != ErrNoReadOnlyTransaction {\n\t\t\tt.Fatalf(\"should have returned Snowflake Error: %v\", errors2.ErrMsgNoReadOnlyTransaction)\n\t\t}\n\t\tif _, err = dbt.conn.BeginTx(context.Background(), &sql.TxOptions{Isolation: 100}); err == nil {\n\t\t\tt.Fatal(\"should have failed.\")\n\t\t}\n\t\tif driverErr, ok := err.(*SnowflakeError); !ok || driverErr.Number != ErrNoDefaultTransactionIsolationLevel {\n\t\t\tt.Fatalf(\"should have returned Snowflake Error: %v\", errors2.ErrMsgNoDefaultTransactionIsolationLevel)\n\t\t}\n\t})\n}\n\n// SNOW-823072: Test that transaction uses the context object supplied by BeginTx(), not from the parent connection\nfunc TestTransactionContext(t *testing.T) {\n\tvar tx *sql.Tx\n\tvar err error\n\n\tctx := context.Background()\n\n\trunDBTest(t, func(dbt *DBTest) {\n\t\tpingWithRetry := withRetry(PingFunc, 5, 3*time.Second)\n\n\t\terr = pingWithRetry(context.Background(), dbt.conn)\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\ttx, err = dbt.conn.BeginTx(ctx, nil)\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t_, err = tx.ExecContext(ctx, \"SELECT SYSTEM$WAIT(10, 'SECONDS')\")\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\terr = tx.Commit()\n\t\tif 
err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t})\n}\n\nfunc PingFunc(ctx context.Context, conn *sql.Conn) error {\n\treturn conn.PingContext(ctx)\n}\n\n// Helper function for SNOW-823072 repro\nfunc withRetry(fn func(context.Context, *sql.Conn) error, numAttempts int, timeout time.Duration) func(context.Context, *sql.Conn) error {\n\treturn func(ctx context.Context, db *sql.Conn) error {\n\t\tfor currAttempt := 1; currAttempt <= numAttempts; currAttempt++ {\n\t\t\tctx, cancel := context.WithTimeout(ctx, timeout)\n\t\t\tdefer cancel()\n\t\t\terr := fn(ctx, db)\n\t\t\tif err != nil {\n\t\t\t\tif errors.Is(err, context.DeadlineExceeded) {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\treturn err\n\t\t\t}\n\t\t\treturn nil\n\t\t}\n\t\treturn fmt.Errorf(\"context deadline exceeded, failed after [%d] attempts\", numAttempts)\n\t}\n}\n\nfunc TestTransactionError(t *testing.T) {\n\tsr := &snowflakeRestful{\n\t\tFuncPostQuery: postQueryFail,\n\t}\n\n\ttx := snowflakeTx{\n\t\tsc: &snowflakeConn{\n\t\t\tcfg:  &Config{},\n\t\t\trest: sr,\n\t\t},\n\t\tctx: context.Background(),\n\t}\n\n\t// test for post query error when executing the txCommand\n\terr := tx.execTxCommand(rollback)\n\tassertNotNilF(t, err, \"\")\n\tassertEqualE(t, err.Error(), \"failed to get query response\")\n\n\t// test for invalid txCommand\n\terr = tx.execTxCommand(2)\n\tassertNotNilF(t, err, \"\")\n\tassertEqualE(t, err.Error(), \"unsupported transaction command\")\n\n\t// test for bad connection error when snowflakeConn is nil\n\ttx.sc = nil\n\terr = tx.execTxCommand(rollback)\n\tassertNotNilF(t, err, \"\")\n\tassertEqualE(t, err.Error(), \"driver: bad connection\")\n}\n"
  },
  {
    "path": "transport.go",
    "content": "package gosnowflake\n\nimport (\n\t\"cmp\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"errors\"\n\t\"fmt\"\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"strconv\"\n\t\"time\"\n\n\t\"golang.org/x/net/http/httpproxy\"\n)\n\ntype transportConfigs interface {\n\tforTransportType(transportType transportType) *transportConfig\n}\n\ntype transportType int\n\nconst (\n\ttransportTypeOAuth transportType = iota\n\ttransportTypeCloudProvider\n\ttransportTypeOCSP\n\ttransportTypeCRL\n\ttransportTypeSnowflake\n\ttransportTypeWIF\n)\n\nvar defaultTransportConfigs transportConfigs = newDefaultTransportConfigs()\n\n// transportConfig holds the configuration for creating HTTP transports\ntype transportConfig struct {\n\tMaxIdleConns    int\n\tIdleConnTimeout time.Duration\n\tDialTimeout     time.Duration\n\tKeepAlive       time.Duration\n\tDisableProxy    bool\n}\n\n// TransportFactory handles creation of HTTP transports with different validation modes\ntype transportFactory struct {\n\tconfig    *Config\n\ttelemetry *snowflakeTelemetry\n}\n\nfunc (tf *transportConfig) String() string {\n\treturn fmt.Sprintf(\"{MaxIdleConns: %d, IdleConnTimeout: %s, DialTimeout: %s, KeepAlive: %s}\",\n\t\ttf.MaxIdleConns,\n\t\ttf.IdleConnTimeout,\n\t\ttf.DialTimeout,\n\t\ttf.KeepAlive)\n}\n\n// NewTransportFactory creates a new transport factory\nfunc newTransportFactory(config *Config, telemetry *snowflakeTelemetry) *transportFactory {\n\treturn &transportFactory{config: config, telemetry: telemetry}\n}\n\nfunc (tf *transportFactory) createProxy(transportConfig *transportConfig) func(*http.Request) (*url.URL, error) {\n\tif transportConfig.DisableProxy {\n\t\treturn nil\n\t}\n\tlogger.Debug(\"Initializing proxy configuration\")\n\tif tf.config == nil || tf.config.ProxyHost == \"\" {\n\t\tlogger.Debug(\"Config is empty or ProxyHost is not set. 
Using proxy settings from environment variables.\")\n\t\treturn http.ProxyFromEnvironment\n\t}\n\n\tconnectionProxy := &url.URL{\n\t\tScheme: tf.config.ProxyProtocol,\n\t\tHost:   fmt.Sprintf(\"%s:%d\", tf.config.ProxyHost, tf.config.ProxyPort),\n\t}\n\tif tf.config.ProxyUser != \"\" && tf.config.ProxyPassword != \"\" {\n\t\tconnectionProxy.User = url.UserPassword(tf.config.ProxyUser, tf.config.ProxyPassword)\n\t\tlogger.Infof(\"Connection Proxy is configured: Connection proxy %v: ****@%v NoProxy:%v\", tf.config.ProxyUser, connectionProxy.Host, tf.config.NoProxy)\n\t} else {\n\t\tlogger.Infof(\"Connection Proxy is configured: Connection proxy: %v NoProxy: %v\", connectionProxy.Host, tf.config.NoProxy)\n\t}\n\n\tcfg := httpproxy.Config{\n\t\tHTTPSProxy: connectionProxy.String(),\n\t\tHTTPProxy:  connectionProxy.String(),\n\t\tNoProxy:    tf.config.NoProxy,\n\t}\n\tproxyURLFunc := cfg.ProxyFunc()\n\n\treturn func(req *http.Request) (*url.URL, error) {\n\t\treturn proxyURLFunc(req.URL)\n\t}\n}\n\n// createBaseTransport creates a base HTTP transport with the given configuration\nfunc (tf *transportFactory) createBaseTransport(transportConfig *transportConfig, tlsConfig *tls.Config) *http.Transport {\n\tlogger.Debugf(\"Create a new Base Transport with transportConfig %v\", transportConfig.String())\n\tdialer := &net.Dialer{\n\t\tTimeout:   transportConfig.DialTimeout,\n\t\tKeepAlive: transportConfig.KeepAlive,\n\t}\n\n\tdefaultTransport := http.DefaultTransport.(*http.Transport)\n\treturn &http.Transport{\n\t\tTLSClientConfig:     tlsConfig,\n\t\tMaxIdleConns:        cmp.Or(transportConfig.MaxIdleConns, defaultTransport.MaxIdleConns),\n\t\tMaxIdleConnsPerHost: cmp.Or(transportConfig.MaxIdleConns, defaultTransport.MaxIdleConns),\n\t\tIdleConnTimeout:     cmp.Or(transportConfig.IdleConnTimeout, defaultTransport.IdleConnTimeout),\n\t\tProxy:               tf.createProxy(transportConfig),\n\t\tDialContext:         dialer.DialContext,\n\t}\n}\n\n// createOCSPTransport 
creates a transport with OCSP validation\nfunc (tf *transportFactory) createOCSPTransport(transportConfig *transportConfig) (*http.Transport, error) {\n\t// Chain OCSP verification with custom TLS config\n\tov := newOcspValidator(tf.config)\n\ttlsConfig, ok := sfconfig.GetTLSConfig(tf.config.TLSConfigName)\n\tif ok && tlsConfig != nil {\n\t\ttlsConfig.VerifyPeerCertificate = tf.chainVerificationCallbacks(tlsConfig.VerifyPeerCertificate, ov.verifyPeerCertificateSerial)\n\t} else {\n\t\ttlsConfig = &tls.Config{\n\t\t\tVerifyPeerCertificate: ov.verifyPeerCertificateSerial,\n\t\t}\n\t}\n\treturn tf.createBaseTransport(transportConfig, tlsConfig), nil\n}\n\n// createNoRevocationTransport creates a transport without certificate revocation checking\nfunc (tf *transportFactory) createNoRevocationTransport(transportConfig *transportConfig) http.RoundTripper {\n\tif tf.config != nil && tf.config.Transporter != nil {\n\t\treturn tf.config.Transporter\n\t}\n\treturn tf.createBaseTransport(transportConfig, nil)\n}\n\n// createCRLValidator creates a CRL validator\nfunc (tf *transportFactory) createCRLValidator() (*crlValidator, error) {\n\tallowCertificatesWithoutCrlURL := tf.config.CrlAllowCertificatesWithoutCrlURL == ConfigBoolTrue\n\tclient := &http.Client{\n\t\tTimeout:   cmp.Or(tf.config.CrlHTTPClientTimeout, defaultCrlHTTPClientTimeout),\n\t\tTransport: tf.createNoRevocationTransport(transportConfigFor(transportTypeCRL)),\n\t}\n\treturn newCrlValidator(\n\t\ttf.config.CertRevocationCheckMode,\n\t\tallowCertificatesWithoutCrlURL,\n\t\ttf.config.CrlInMemoryCacheDisabled,\n\t\ttf.config.CrlOnDiskCacheDisabled,\n\t\tcmp.Or(tf.config.CrlDownloadMaxSize, defaultCrlDownloadMaxSize),\n\t\tclient,\n\t\ttf.telemetry,\n\t)\n}\n\n// createTransport is the main entry point for creating transports\nfunc (tf *transportFactory) createTransport(transportConfig *transportConfig) (http.RoundTripper, error) {\n\tif tf.config == nil {\n\t\t// should never happen in production, only in 
tests\n\t\tlogger.Warn(\"createTransport: got nil Config, using default one\")\n\t\treturn tf.createNoRevocationTransport(transportConfig), nil\n\t}\n\n\t// if user configured a custom Transporter, prioritize that\n\tif tf.config.Transporter != nil {\n\t\tlogger.Debug(\"createTransport: using Transporter configured by the user\")\n\t\treturn tf.config.Transporter, nil\n\t}\n\n\t// Validate configuration\n\tif err := tf.validateRevocationConfig(); err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Handle CRL validation path\n\tif tf.config.CertRevocationCheckMode != CertRevocationCheckDisabled {\n\t\tlogger.Debug(\"createTransport: will perform CRL validation\")\n\t\tcrlValidator, err := tf.createCRLValidator()\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tcrlCacheCleaner.startPeriodicCacheCleanup()\n\t\t// Chain CRL verification with custom TLS config\n\t\ttlsConfig, ok := sfconfig.GetTLSConfig(tf.config.TLSConfigName)\n\t\tif ok && tlsConfig != nil {\n\t\t\tcrlVerify := crlValidator.verifyPeerCertificates\n\t\t\ttlsConfig.VerifyPeerCertificate = tf.chainVerificationCallbacks(tlsConfig.VerifyPeerCertificate, crlVerify)\n\t\t} else {\n\t\t\ttlsConfig = &tls.Config{\n\t\t\t\tVerifyPeerCertificate: crlValidator.verifyPeerCertificates,\n\t\t\t}\n\t\t}\n\n\t\treturn tf.createBaseTransport(transportConfig, tlsConfig), nil\n\t}\n\n\t// Handle no revocation checking path\n\tif tf.config.DisableOCSPChecks {\n\t\tlogger.Debug(\"createTransport: skipping OCSP validation\")\n\t\treturn tf.createNoRevocationTransport(transportConfig), nil\n\t}\n\n\tlogger.Debug(\"createTransport: will perform OCSP validation\")\n\treturn tf.createOCSPTransport(transportConfig)\n}\n\n// validateRevocationConfig checks for conflicting revocation settings\nfunc (tf *transportFactory) validateRevocationConfig() error {\n\tif !tf.config.DisableOCSPChecks && tf.config.CertRevocationCheckMode != CertRevocationCheckDisabled {\n\t\treturn errors.New(\"both OCSP and CRL cannot be enabled at the 
same time, please disable one of them\")\n\t}\n\treturn nil\n}\n\n// chainVerificationCallbacks chains a user's custom verification with the provided verification function\nfunc (tf *transportFactory) chainVerificationCallbacks(originalVerificationFunc func([][]byte, [][]*x509.Certificate) error, verificationFunc func([][]byte, [][]*x509.Certificate) error) func([][]byte, [][]*x509.Certificate) error {\n\tif originalVerificationFunc == nil {\n\t\treturn verificationFunc\n\t}\n\n\t// Chain the existing verification with the new one\n\tnewVerify := func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {\n\t\t// Run the user's custom verification first\n\t\tif err := originalVerificationFunc(rawCerts, verifiedChains); err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// Then run the provided verification\n\t\treturn verificationFunc(rawCerts, verifiedChains)\n\t}\n\treturn newVerify\n}\n\ntype defaultTransportConfigsType struct {\n\toauthTransportConfig         *transportConfig\n\tcloudProviderTransportConfig *transportConfig\n\tocspTransportConfig          *transportConfig\n\tcrlTransportConfig           *transportConfig\n\tsnowflakeTransportConfig     *transportConfig\n\twifTransportConfig           *transportConfig\n}\n\nfunc newDefaultTransportConfigs() *defaultTransportConfigsType {\n\treturn &defaultTransportConfigsType{\n\t\toauthTransportConfig: &transportConfig{\n\t\t\tMaxIdleConns:    1,\n\t\t\tIdleConnTimeout: 30 * time.Second,\n\t\t\tDialTimeout:     30 * time.Second,\n\t\t},\n\t\tcloudProviderTransportConfig: &transportConfig{\n\t\t\tMaxIdleConns:    15,\n\t\t\tIdleConnTimeout: 30 * time.Second,\n\t\t\tDialTimeout:     30 * time.Second,\n\t\t},\n\t\tocspTransportConfig: &transportConfig{\n\t\t\tMaxIdleConns:    1,\n\t\t\tIdleConnTimeout: 5 * time.Second,\n\t\t\tDialTimeout:     5 * time.Second,\n\t\t\tKeepAlive:       -1,\n\t\t},\n\t\tcrlTransportConfig: &transportConfig{\n\t\t\tMaxIdleConns:    1,\n\t\t\tIdleConnTimeout: 5 * 
time.Second,\n\t\t\tDialTimeout:     5 * time.Second,\n\t\t\tKeepAlive:       -1,\n\t\t},\n\t\tsnowflakeTransportConfig: &transportConfig{\n\t\t\tMaxIdleConns:    3,\n\t\t\tIdleConnTimeout: 30 * time.Minute,\n\t\t\tDialTimeout:     30 * time.Second,\n\t\t},\n\t\twifTransportConfig: &transportConfig{\n\t\t\tMaxIdleConns:    1,\n\t\t\tIdleConnTimeout: 30 * time.Second,\n\t\t\tDialTimeout:     30 * time.Second,\n\t\t\tDisableProxy:    true,\n\t\t},\n\t}\n}\n\nfunc (dtc *defaultTransportConfigsType) forTransportType(transportType transportType) *transportConfig {\n\tswitch transportType {\n\tcase transportTypeOAuth:\n\t\treturn dtc.oauthTransportConfig\n\tcase transportTypeCloudProvider:\n\t\treturn dtc.cloudProviderTransportConfig\n\tcase transportTypeOCSP:\n\t\treturn dtc.ocspTransportConfig\n\tcase transportTypeCRL:\n\t\treturn dtc.crlTransportConfig\n\tcase transportTypeSnowflake:\n\t\treturn dtc.snowflakeTransportConfig\n\tcase transportTypeWIF:\n\t\treturn dtc.wifTransportConfig\n\t}\n\tpanic(\"unknown transport type: \" + strconv.Itoa(int(transportType)))\n}\n"
  },
  {
    "path": "transport_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"crypto/tls\"\n\t\"net/http\"\n\t\"testing\"\n\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n)\n\nfunc TestTransportFactoryErrorHandling(t *testing.T) {\n\ttlsConfig := &tls.Config{InsecureSkipVerify: true}\n\tassertNilF(t, RegisterTLSConfig(\"TestTransportFactoryErrorHandlingTlsConfig\", tlsConfig))\n\t// Test CreateCustomTLSTransport with conflicting OCSP and CRL settings\n\tconflictingConfig := &Config{\n\t\tDisableOCSPChecks:       false,\n\t\tCertRevocationCheckMode: CertRevocationCheckEnabled,\n\t\tTLSConfigName:           \"TestTransportFactoryErrorHandlingTlsConfig\",\n\t}\n\n\tfactory := newTransportFactory(conflictingConfig, nil)\n\n\ttransport, err := factory.createTransport(transportConfigFor(transportTypeSnowflake))\n\tassertNotNilF(t, err, \"Expected error for conflicting OCSP and CRL configuration\")\n\tassertNilF(t, transport, \"Expected nil transport when error occurs\")\n\texpectedError := \"both OCSP and CRL cannot be enabled at the same time, please disable one of them\"\n\tassertEqualF(t, err.Error(), expectedError, \"Expected specific error message\")\n}\n\nfunc TestCreateStandardTransportErrorHandling(t *testing.T) {\n\t// Test CreateStandardTransport with conflicting settings\n\tconflictingConfig := &Config{\n\t\tDisableOCSPChecks:       false,\n\t\tCertRevocationCheckMode: CertRevocationCheckEnabled,\n\t}\n\n\tfactory := newTransportFactory(conflictingConfig, nil)\n\n\ttransport, err := factory.createTransport(transportConfigFor(transportTypeSnowflake))\n\tassertNotNilF(t, err, \"Expected error for conflicting OCSP and CRL configuration\")\n\tassertNilF(t, transport, \"Expected nil transport when error occurs\")\n}\n\nfunc TestCreateCustomTLSTransportSuccess(t *testing.T) {\n\ttlsConfig := &tls.Config{InsecureSkipVerify: true}\n\tassertNilF(t, RegisterTLSConfig(\"TestCreateCustomTLSTransportSuccessTlsConfig\", tlsConfig))\n\t// Test successful creation with valid 
config\n\tvalidConfig := &Config{\n\t\tDisableOCSPChecks:       true,\n\t\tCertRevocationCheckMode: CertRevocationCheckDisabled,\n\t\tTLSConfigName:           \"TestCreateCustomTLSTransportSuccessTlsConfig\",\n\t}\n\n\tfactory := newTransportFactory(validConfig, nil)\n\n\ttransport, err := factory.createTransport(transportConfigFor(transportTypeSnowflake))\n\tassertNilF(t, err, \"Unexpected error\")\n\tassertNotNilF(t, transport, \"Expected non-nil transport for valid configuration\")\n}\n\nfunc TestCreateStandardTransportSuccess(t *testing.T) {\n\t// Test successful creation with valid config\n\tvalidConfig := &Config{\n\t\tDisableOCSPChecks:       true,\n\t\tCertRevocationCheckMode: CertRevocationCheckDisabled,\n\t}\n\n\tfactory := newTransportFactory(validConfig, nil)\n\n\ttransport, err := factory.createTransport(transportConfigFor(transportTypeSnowflake))\n\tassertNilF(t, err, \"Unexpected error\")\n\tassertNotNilF(t, transport, \"Expected non-nil transport for valid configuration\")\n}\n\nfunc TestDirectTLSConfigUsage(t *testing.T) {\n\t// Test the new direct TLS config approach\n\tcustomTLS := &tls.Config{\n\t\tInsecureSkipVerify: true,\n\t\tServerName:         \"custom.example.com\",\n\t}\n\tassertNilF(t, RegisterTLSConfig(\"TestDirectTLSConfigUsageTlsConfig\", customTLS))\n\n\tconfig := &Config{\n\t\tDisableOCSPChecks:       true,\n\t\tCertRevocationCheckMode: CertRevocationCheckDisabled,\n\t\tTLSConfigName:           \"TestDirectTLSConfigUsageTlsConfig\",\n\t}\n\n\tfactory := newTransportFactory(config, nil)\n\ttransport, err := factory.createTransport(transportConfigFor(transportTypeSnowflake))\n\n\tassertNilF(t, err, \"Unexpected error\")\n\tassertNotNilF(t, transport, \"Expected non-nil transport\")\n}\n\nfunc TestRegisteredTLSConfigUsage(t *testing.T) {\n\t// Test registered TLS config approach through DSN parsing\n\n\t// Clean up any existing registry\n\tsfconfig.ResetTLSConfigRegistry()\n\n\t// Register a custom TLS config\n\tcustomTLS := 
&tls.Config{\n\t\tInsecureSkipVerify: true,\n\t\tServerName:         \"registered.example.com\",\n\t}\n\terr := RegisterTLSConfig(\"test-direct\", customTLS)\n\tassertNilF(t, err, \"Failed to register TLS config\")\n\tdefer func() {\n\t\terr := DeregisterTLSConfig(\"test-direct\")\n\t\tassertNilF(t, err, \"Failed to deregister test TLS config\")\n\t}()\n\n\t// Parse DSN that references the registered config\n\tdsn := \"user:pass@account/db?tls=test-direct&ocspFailOpen=false&disableOCSPChecks=true\"\n\tconfig, err2 := ParseDSN(dsn)\n\tassertNilF(t, err2, \"Failed to parse DSN\")\n\n\tconfig.CertRevocationCheckMode = CertRevocationCheckDisabled\n\n\tfactory := newTransportFactory(config, nil)\n\ttransport, err := factory.createTransport(transportConfigFor(transportTypeSnowflake))\n\n\tassertNilF(t, err, \"Unexpected error\")\n\tassertNotNilF(t, transport, \"Expected non-nil transport\")\n}\n\nfunc TestDirectTLSConfigOnly(t *testing.T) {\n\t// Test that direct TLS config works without any registration\n\n\t// Create a direct TLS config\n\tdirectTLS := &tls.Config{\n\t\tInsecureSkipVerify: true,\n\t\tServerName:         \"direct.example.com\",\n\t}\n\tassertNilF(t, RegisterTLSConfig(\"TestDirectTLSConfigOnlyTlsConfig\", directTLS))\n\n\tconfig := &Config{\n\t\tDisableOCSPChecks:       true,\n\t\tCertRevocationCheckMode: CertRevocationCheckDisabled,\n\t\tTLSConfigName:           \"TestDirectTLSConfigOnlyTlsConfig\",\n\t}\n\n\tfactory := newTransportFactory(config, nil)\n\ttransport, err := factory.createTransport(transportConfigFor(transportTypeSnowflake))\n\n\tassertNilF(t, err, \"Unexpected error\")\n\tassertNotNilF(t, transport, \"Expected non-nil transport\")\n}\n\nfunc TestProxyTransportCreation(t *testing.T) {\n\tproxyTests := []struct {\n\t\tconfig       *Config\n\t\tproxyURL     string\n\t\tdisableProxy bool\n\t}{\n\t\t{\n\t\t\tconfig: &Config{\n\t\t\t\tProxyProtocol: \"http\",\n\t\t\t\tProxyHost:     \"proxy.connection.com\",\n\t\t\t\tProxyPort:     
1234,\n\t\t\t},\n\t\t\tdisableProxy: true,\n\t\t\tproxyURL:     \"\",\n\t\t},\n\t\t{\n\t\t\tconfig: &Config{\n\t\t\t\tProxyProtocol: \"https\",\n\t\t\t\tProxyHost:     \"proxy.connection.com\",\n\t\t\t\tProxyPort:     1234,\n\t\t\t},\n\t\t\tdisableProxy: true,\n\t\t\tproxyURL:     \"\",\n\t\t},\n\t\t{\n\t\t\tconfig: &Config{\n\t\t\t\tProxyProtocol: \"http\",\n\t\t\t\tProxyHost:     \"proxy.connection.com\",\n\t\t\t\tProxyPort:     1234,\n\t\t\t},\n\t\t\tproxyURL: \"http://proxy.connection.com:1234\",\n\t\t},\n\t\t{\n\t\t\tconfig: &Config{\n\t\t\t\tProxyProtocol: \"http\",\n\t\t\t\tProxyHost:     \"proxy.connection.com\",\n\t\t\t\tProxyPort:     1234,\n\t\t\t},\n\t\t\tproxyURL: \"http://proxy.connection.com:1234\",\n\t\t},\n\t\t{\n\t\t\tconfig: &Config{\n\t\t\t\tProxyProtocol: \"https\",\n\t\t\t\tProxyHost:     \"proxy.connection.com\",\n\t\t\t\tProxyPort:     1234,\n\t\t\t},\n\t\t\tproxyURL: \"https://proxy.connection.com:1234\",\n\t\t},\n\t\t{\n\t\t\tconfig: &Config{\n\t\t\t\tProxyProtocol: \"http\",\n\t\t\t\tProxyHost:     \"proxy.connection.com\",\n\t\t\t\tProxyPort:     1234,\n\t\t\t\tNoProxy:       \"*.snowflakecomputing.com,ocsp.testing.com\",\n\t\t\t},\n\t\t\tproxyURL: \"\",\n\t\t},\n\t}\n\n\tfor _, test := range proxyTests {\n\t\tt.Run(test.proxyURL, func(t *testing.T) {\n\t\t\tfactory := newTransportFactory(test.config, nil)\n\t\t\tproxyFunc := factory.createProxy(&transportConfig{DisableProxy: test.disableProxy})\n\n\t\t\tif test.disableProxy {\n\t\t\t\tassertNilF(t, proxyFunc, \"Expected nil proxy function when proxy is disabled\")\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\treq, _ := http.NewRequest(\"GET\", \"https://testing.snowflakecomputing.com\", nil)\n\t\t\tproxyURL, _ := proxyFunc(req)\n\n\t\t\tif test.proxyURL == \"\" {\n\t\t\t\tassertNilF(t, proxyURL, \"Expected nil proxy for https request\")\n\t\t\t} else {\n\t\t\t\tassertEqualF(t, proxyURL.String(), test.proxyURL)\n\t\t\t}\n\n\t\t\treq, _ = http.NewRequest(\"GET\", \"http://ocsp.testing.com\", 
nil)\n\t\t\tproxyURL, _ = proxyFunc(req)\n\n\t\t\tif test.proxyURL == \"\" {\n\t\t\t\tassertNilF(t, proxyURL, \"Expected nil proxy for http request\")\n\t\t\t} else {\n\t\t\t\tassertEqualF(t, proxyURL.String(), test.proxyURL)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc createTestNoRevocationTransport() http.RoundTripper {\n\treturn newTransportFactory(&Config{}, nil).createNoRevocationTransport(defaultTransportConfigs.forTransportType(transportTypeSnowflake))\n}\n"
  },
  {
    "path": "url_util.go",
    "content": "package gosnowflake\n\nimport (\n\t\"net/url\"\n\t\"regexp\"\n)\n\nvar (\n\tmatcher, _ = regexp.Compile(`^http(s?)\\:\\/\\/[0-9a-zA-Z]([-.\\w]*[0-9a-zA-Z@:])*(:(0-9)*)*(\\/?)([a-zA-Z0-9\\-\\.\\?\\,\\&\\(\\)\\/\\\\\\+&%\\$#_=@]*)?$`)\n)\n\nfunc isValidURL(targetURL string) bool {\n\tif !matcher.MatchString(targetURL) {\n\t\tlogger.Infof(\" The provided URL is not a valid URL - \" + targetURL)\n\t\treturn false\n\t}\n\treturn true\n}\n\nfunc urlEncode(targetString string) string {\n\t// We use QueryEscape instead of PathEscape here\n\t// for consistency across Drivers. For example:\n\t// QueryEscape escapes space as \"+\" whereas PE\n\t// it as %20F. PE also does not escape @ or &\n\t// either but QE does.\n\t// The behavior of QE in Golang is more in sync\n\t// with URL encoders in Python and Java hence the choice\n\treturn url.QueryEscape(targetString)\n}\n"
  },
  {
    "path": "util.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql/driver\"\n\t\"fmt\"\n\t\"io\"\n\t\"iter\"\n\t\"maps\"\n\t\"math/rand\"\n\t\"os\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/apache/arrow-go/v18/arrow/memory\"\n\tia \"github.com/snowflakedb/gosnowflake/v2/internal/arrow\"\n\tsfconfig \"github.com/snowflakedb/gosnowflake/v2/internal/config\"\n)\n\n// ContextKey is a type for context keys used in gosnowflake. Using a custom type helps avoid collisions with other context keys.\ntype ContextKey string\n\nconst (\n\tmultiStatementCount    ContextKey = \"MULTI_STATEMENT_COUNT\"\n\tasyncMode              ContextKey = \"ASYNC_MODE_QUERY\"\n\tqueryIDChannel         ContextKey = \"QUERY_ID_CHANNEL\"\n\tsnowflakeRequestIDKey  ContextKey = \"SNOWFLAKE_REQUEST_ID\"\n\tfetchResultByID        ContextKey = \"SF_FETCH_RESULT_BY_ID\"\n\tfilePutStream          ContextKey = \"STREAMING_PUT_FILE\"\n\tfileGetStream          ContextKey = \"STREAMING_GET_FILE\"\n\tfileTransferOptions    ContextKey = \"FILE_TRANSFER_OPTIONS\"\n\tenableDecfloat         ContextKey = \"ENABLE_DECFLOAT\"\n\tarrowAlloc             ContextKey = \"ARROW_ALLOC\"\n\tqueryTag               ContextKey = \"QUERY_TAG\"\n\tenableStructuredTypes  ContextKey = \"ENABLE_STRUCTURED_TYPES\"\n\tembeddedValuesNullable ContextKey = \"EMBEDDED_VALUES_NULLABLE\"\n\tdescribeOnly           ContextKey = \"DESCRIBE_ONLY\"\n\tinternalQuery          ContextKey = \"INTERNAL_QUERY\"\n\tcancelRetry            ContextKey = \"CANCEL_RETRY\"\n\tlogQueryText           ContextKey = \"LOG_QUERY_TEXT\"\n\tlogQueryParameters     ContextKey = \"LOG_QUERY_PARAMETERS\"\n)\n\nvar (\n\tdefaultTimeProvider = &unixTimeProvider{}\n)\n\n// WithMultiStatement returns a context that allows the user to execute the desired number of sql queries in one query\nfunc WithMultiStatement(ctx context.Context, num int) context.Context {\n\treturn context.WithValue(ctx, multiStatementCount, num)\n}\n\n// WithAsyncMode 
returns a context that allows execution of query in async mode\nfunc WithAsyncMode(ctx context.Context) context.Context {\n\treturn context.WithValue(ctx, asyncMode, true)\n}\n\n// WithQueryIDChan returns a context that contains the channel to receive the query ID\nfunc WithQueryIDChan(ctx context.Context, c chan<- string) context.Context {\n\treturn context.WithValue(ctx, queryIDChannel, c)\n}\n\n// WithRequestID returns a new context with the specified snowflake request id\nfunc WithRequestID(ctx context.Context, requestID UUID) context.Context {\n\treturn context.WithValue(ctx, snowflakeRequestIDKey, requestID)\n}\n\n// WithFetchResultByID returns a context that allows retrieving the result by query ID\nfunc WithFetchResultByID(ctx context.Context, queryID string) context.Context {\n\treturn context.WithValue(ctx, fetchResultByID, queryID)\n}\n\n// WithFilePutStream returns a context that contains the address of the file stream to be PUT\nfunc WithFilePutStream(ctx context.Context, reader io.Reader) context.Context {\n\treturn context.WithValue(ctx, filePutStream, reader)\n}\n\n// WithFileGetStream returns a context that contains the address of the file stream to be GET\nfunc WithFileGetStream(ctx context.Context, writer io.Writer) context.Context {\n\treturn context.WithValue(ctx, fileGetStream, writer)\n}\n\n// WithFileTransferOptions returns a context that contains the address of file transfer options\nfunc WithFileTransferOptions(ctx context.Context, options *SnowflakeFileTransferOptions) context.Context {\n\treturn context.WithValue(ctx, fileTransferOptions, options)\n}\n\n// WithDescribeOnly returns a context that enables a describe only query\nfunc WithDescribeOnly(ctx context.Context) context.Context {\n\treturn context.WithValue(ctx, describeOnly, true)\n}\n\n// WithHigherPrecision returns a context that enables higher precision by\n// returning a *big.Int or *big.Float variable when querying rows for column\n// types with numbers that don't fit into 
its native Golang counterpart\n// When used in combination with arrowbatches.WithBatches, original BigDecimal in arrow batches will be preserved.\nfunc WithHigherPrecision(ctx context.Context) context.Context {\n\treturn ia.WithHigherPrecision(ctx)\n}\n\n// WithDecfloatMappingEnabled returns a context that enables native support for DECFLOAT.\n// Without this context, DECFLOAT columns are returned as strings.\n// With this context enabled, DECFLOAT columns are returned as *big.Float or float64 (depending on HigherPrecision setting).\n// Keep in mind that both float64 and *big.Float are not able to precisely represent some DECFLOAT values.\n// If precision is important, you have to use string representation and use your own library to parse it.\nfunc WithDecfloatMappingEnabled(ctx context.Context) context.Context {\n\treturn context.WithValue(ctx, enableDecfloat, true)\n}\n\n// WithArrowAllocator returns a context embedding the provided allocator\n// which will be utilized by chunk downloaders when constructing Arrow\n// objects.\nfunc WithArrowAllocator(ctx context.Context, pool memory.Allocator) context.Context {\n\treturn context.WithValue(ctx, arrowAlloc, pool)\n}\n\n// WithQueryTag returns a context that will set the given tag as the QUERY_TAG\n// parameter on any queries that are run\nfunc WithQueryTag(ctx context.Context, tag string) context.Context {\n\treturn context.WithValue(ctx, queryTag, tag)\n}\n\n// WithStructuredTypesEnabled changes how structured types are returned.\n// Without this context structured types are returned as strings.\n// With this context enabled, structured types are returned as native Go types.\nfunc WithStructuredTypesEnabled(ctx context.Context) context.Context {\n\treturn context.WithValue(ctx, enableStructuredTypes, true)\n}\n\n// WithEmbeddedValuesNullable changes how complex structures are returned.\n// Instead of simple values (like string) sql.NullXXX wrappers (like sql.NullString) are used.\n// It applies to map values and 
arrays.\nfunc WithEmbeddedValuesNullable(ctx context.Context) context.Context {\n\treturn context.WithValue(ctx, embeddedValuesNullable, true)\n}\n\n// WithInternal sets the internal query flag.\nfunc WithInternal(ctx context.Context) context.Context {\n\treturn context.WithValue(ctx, internalQuery, true)\n}\n\n// WithLogQueryText enables logging of the query text.\nfunc WithLogQueryText(ctx context.Context) context.Context {\n\treturn context.WithValue(ctx, logQueryText, true)\n}\n\n// WithLogQueryParameters enables logging of the query parameters.\nfunc WithLogQueryParameters(ctx context.Context) context.Context {\n\treturn context.WithValue(ctx, logQueryParameters, true)\n}\n\n// Get the request ID from the context if specified, otherwise generate one\nfunc getOrGenerateRequestIDFromContext(ctx context.Context) UUID {\n\trequestID, ok := ctx.Value(snowflakeRequestIDKey).(UUID)\n\tif ok && requestID != nilUUID {\n\t\treturn requestID\n\t}\n\treturn NewUUID()\n}\n\n// integer min\nfunc intMin(a, b int) int {\n\tif a < b {\n\t\treturn a\n\t}\n\treturn b\n}\n\n// integer max\nfunc intMax(a, b int) int {\n\tif a > b {\n\t\treturn a\n\t}\n\treturn b\n}\n\nfunc int64Max(a, b int64) int64 {\n\tif a > b {\n\t\treturn a\n\t}\n\treturn b\n}\n\nfunc getMin(arr []int) int {\n\tif len(arr) == 0 {\n\t\treturn -1\n\t}\n\tmin := arr[0]\n\tfor _, v := range arr {\n\t\tif v <= min {\n\t\t\tmin = v\n\t\t}\n\t}\n\treturn min\n}\n\n// time.Duration max\nfunc durationMax(d1, d2 time.Duration) time.Duration {\n\tif d1-d2 > 0 {\n\t\treturn d1\n\t}\n\treturn d2\n}\n\n// time.Duration min\nfunc durationMin(d1, d2 time.Duration) time.Duration {\n\tif d1-d2 < 0 {\n\t\treturn d1\n\t}\n\treturn d2\n}\n\n// toNamedValues converts a slice of driver.Value to a slice of driver.NamedValue for Go 1.8 SQL package\nfunc toNamedValues(values []driver.Value) []driver.NamedValue {\n\tnamedValues := make([]driver.NamedValue, len(values))\n\tfor idx, value := range values {\n\t\tnamedValues[idx] = 
driver.NamedValue{Name: \"\", Ordinal: idx + 1, Value: value}\n\t}\n\treturn namedValues\n}\n\n// TokenAccessor manages the session token and master token\ntype TokenAccessor = sfconfig.TokenAccessor\n\ntype simpleTokenAccessor struct {\n\ttoken        string\n\tmasterToken  string\n\tsessionID    int64\n\taccessorLock sync.Mutex   // Used to implement accessor's Lock and Unlock\n\ttokenLock    sync.RWMutex // Used to synchronize SetTokens and GetTokens\n}\n\nfunc getSimpleTokenAccessor() TokenAccessor {\n\treturn &simpleTokenAccessor{sessionID: -1}\n}\n\nfunc (sta *simpleTokenAccessor) Lock() error {\n\tsta.accessorLock.Lock()\n\treturn nil\n}\n\nfunc (sta *simpleTokenAccessor) Unlock() {\n\tsta.accessorLock.Unlock()\n}\n\nfunc (sta *simpleTokenAccessor) GetTokens() (token string, masterToken string, sessionID int64) {\n\tsta.tokenLock.RLock()\n\tdefer sta.tokenLock.RUnlock()\n\treturn sta.token, sta.masterToken, sta.sessionID\n}\n\nfunc (sta *simpleTokenAccessor) SetTokens(token string, masterToken string, sessionID int64) {\n\tsta.tokenLock.Lock()\n\tdefer sta.tokenLock.Unlock()\n\tsta.token = token\n\tsta.masterToken = masterToken\n\tsta.sessionID = sessionID\n}\n\nfunc safeGetTokens(sr *snowflakeRestful) (token string, masterToken string, sessionID int64) {\n\tif sr == nil || sr.TokenAccessor == nil {\n\t\tlogger.Error(\"safeGetTokens: could not get tokens as TokenAccessor was nil\")\n\t\treturn \"\", \"\", 0\n\t}\n\treturn sr.TokenAccessor.GetTokens()\n}\n\nfunc escapeForCSV(value string) string {\n\tif value == \"\" {\n\t\treturn \"\\\"\\\"\"\n\t}\n\tif strings.Contains(value, \"\\\"\") || strings.Contains(value, \"\\n\") ||\n\t\tstrings.Contains(value, \",\") || strings.Contains(value, \"\\\\\") {\n\t\treturn \"\\\"\" + strings.ReplaceAll(value, \"\\\"\", \"\\\"\\\"\") + \"\\\"\"\n\t}\n\treturn value\n}\n\n// GetFromEnv is used to get the value of an environment variable from the system\nfunc GetFromEnv(name string, failOnMissing bool) (string, error) 
{\n\tif value := os.Getenv(name); value != \"\" {\n\t\treturn value, nil\n\t}\n\tif failOnMissing {\n\t\treturn \"\", fmt.Errorf(\"%v environment variable is not set\", name)\n\t}\n\treturn \"\", nil\n}\n\ntype currentTimeProvider interface {\n\tcurrentTime() int64\n}\n\ntype unixTimeProvider struct {\n}\n\nfunc (utp *unixTimeProvider) currentTime() int64 {\n\treturn time.Now().UnixMilli()\n}\n\ntype syncParams struct {\n\tmu     sync.Mutex\n\tparams map[string]*string\n}\n\nfunc newSyncParams(params map[string]*string) syncParams {\n\tcopied := make(map[string]*string)\n\tif params != nil {\n\t\tmaps.Copy(copied, params)\n\t}\n\treturn syncParams{params: copied}\n}\n\nfunc (sp *syncParams) get(key string) (*string, bool) {\n\tsp.mu.Lock()\n\tdefer sp.mu.Unlock()\n\tif sp.params == nil {\n\t\treturn nil, false\n\t}\n\tv, ok := sp.params[key]\n\treturn v, ok\n}\n\nfunc (sp *syncParams) set(key string, value *string) {\n\tsp.mu.Lock()\n\tdefer sp.mu.Unlock()\n\tif sp.params == nil {\n\t\tsp.params = make(map[string]*string)\n\t}\n\tsp.params[key] = value\n}\n\n// All returns an iterator over all params, holding the lock for the\n// duration of iteration. Callers use: for k, v := range sp.All() { ... }\nfunc (sp *syncParams) All() iter.Seq2[string, string] {\n\treturn func(yield func(string, string) bool) {\n\t\tsp.mu.Lock()\n\t\tdefer sp.mu.Unlock()\n\t\tfor k, v := range sp.params {\n\t\t\tif !yield(k, *v) {\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc chooseRandomFromRange(min float64, max float64) float64 {\n\treturn rand.Float64()*(max-min) + min\n}\n\nfunc withLowerKeys[T any](in map[string]T) map[string]T {\n\tout := make(map[string]T)\n\tfor k, v := range in {\n\t\tout[strings.ToLower(k)] = v\n\t}\n\treturn out\n}\n\nfunc findByPrefix(in []string, prefix string) int {\n\tfor i, v := range in {\n\t\tif strings.HasPrefix(v, prefix) {\n\t\t\treturn i\n\t\t}\n\t}\n\treturn -1\n}\n"
  },
  {
    "path": "util_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"context\"\n\t\"database/sql/driver\"\n\t\"fmt\"\n\t\"maps\"\n\t\"math/rand\"\n\t\"os\"\n\t\"runtime\"\n\t\"strconv\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n)\n\ntype tcIntMinMax struct {\n\tv1  int\n\tv2  int\n\tout int\n}\n\ntype tcUUID struct {\n\tuuid string\n}\n\ntype constTypeProvider struct {\n\tconstTime int64\n}\n\ntype tcSafeGetTokens struct {\n\tname              string\n\tsr                *snowflakeRestful\n\texpectedSessionID int64\n}\n\nfunc (ctp *constTypeProvider) currentTime() int64 {\n\treturn ctp.constTime\n}\n\nfunc constTimeProvider(constTime int64) *constTypeProvider {\n\treturn &constTypeProvider{constTime: constTime}\n}\n\nfunc TestSimpleTokenAccessor(t *testing.T) {\n\taccessor := getSimpleTokenAccessor()\n\ttoken, masterToken, sessionID := accessor.GetTokens()\n\tif token != \"\" {\n\t\tt.Errorf(\"unexpected token %v\", token)\n\t}\n\tif masterToken != \"\" {\n\t\tt.Errorf(\"unexpected master token %v\", masterToken)\n\t}\n\tif sessionID != -1 {\n\t\tt.Errorf(\"unexpected session id %v\", sessionID)\n\t}\n\n\texpectedToken, expectedMasterToken, expectedSessionID := \"token123\", \"master123\", int64(123)\n\taccessor.SetTokens(expectedToken, expectedMasterToken, expectedSessionID)\n\ttoken, masterToken, sessionID = accessor.GetTokens()\n\tif token != expectedToken {\n\t\tt.Errorf(\"unexpected token %v\", token)\n\t}\n\tif masterToken != expectedMasterToken {\n\t\tt.Errorf(\"unexpected master token %v\", masterToken)\n\t}\n\tif sessionID != expectedSessionID {\n\t\tt.Errorf(\"unexpected session id %v\", sessionID)\n\t}\n}\n\nfunc TestSimpleTokenAccessorGetTokensSynchronization(t *testing.T) {\n\taccessor := getSimpleTokenAccessor()\n\tvar wg sync.WaitGroup\n\tfailed := false\n\tfor range 1000 {\n\t\twg.Add(1)\n\t\tgo func() {\n\t\t\t// set a random session and token\n\t\t\tsession := rand.Int63()\n\t\t\tsessionStr := strconv.FormatInt(session, 
10)\n\t\t\taccessor.SetTokens(\"t\"+sessionStr, \"m\"+sessionStr, session)\n\n\t\t\t// read back session and token and verify that invariant still holds\n\t\t\ttoken, masterToken, session := accessor.GetTokens()\n\t\t\tsessionStr = strconv.FormatInt(session, 10)\n\t\t\tif \"t\"+sessionStr != token || \"m\"+sessionStr != masterToken {\n\t\t\t\tfailed = true\n\t\t\t}\n\t\t\twg.Done()\n\t\t}()\n\t}\n\t// wait for all competing goroutines to finish setting and getting tokens\n\twg.Wait()\n\tif failed {\n\t\tt.Fail()\n\t}\n}\n\nfunc TestSafeGetTokens(t *testing.T) {\n\ttestcases := []tcSafeGetTokens{\n\t\t{\n\t\t\tname: \"with simple token accessor\",\n\t\t\tsr: &snowflakeRestful{\n\t\t\t\tFuncPostQuery: postQueryTest,\n\t\t\t\tTokenAccessor: getSimpleTokenAccessor(),\n\t\t\t},\n\t\t\texpectedSessionID: -1,\n\t\t},\n\t\t{\n\t\t\tname: \"without token accessor\",\n\t\t\tsr: &snowflakeRestful{\n\t\t\t\tFuncPostQuery: postQueryTest,\n\t\t\t},\n\t\t\texpectedSessionID: 0,\n\t\t},\n\t}\n\n\tfor _, test := range testcases {\n\t\tt.Run(fmt.Sprintf(\"%v\", test.name), func(t *testing.T) {\n\t\t\t_, _, sessionID := safeGetTokens(test.sr)\n\t\t\tassertEqualE(t, sessionID, test.expectedSessionID, \"expected sessionId to be %v, was %v\",\n\t\t\t\tfmt.Sprintf(\"%d\", test.expectedSessionID),\n\t\t\t\tfmt.Sprintf(\"%d\", sessionID))\n\t\t})\n\t}\n}\n\nfunc TestGetRequestIDFromContext(t *testing.T) {\n\texpectedRequestID := NewUUID()\n\tctx := WithRequestID(context.Background(), expectedRequestID)\n\trequestID := getOrGenerateRequestIDFromContext(ctx)\n\tif requestID != expectedRequestID {\n\t\tt.Errorf(\"unexpected request id: %v, expected: %v\", requestID, expectedRequestID)\n\t}\n\tctx = WithRequestID(context.Background(), nilUUID)\n\trequestID = getOrGenerateRequestIDFromContext(ctx)\n\tif requestID == nilUUID {\n\t\tt.Errorf(\"unexpected request id, should not be nil\")\n\t}\n}\n\nfunc TestGenerateRequestID(t *testing.T) {\n\tfirstRequestID := 
getOrGenerateRequestIDFromContext(context.Background())\n\totherRequestID := getOrGenerateRequestIDFromContext(context.Background())\n\tif firstRequestID == otherRequestID {\n\t\tt.Errorf(\"request id should not be the same\")\n\t}\n}\n\nfunc TestIntMin(t *testing.T) {\n\ttestcases := []tcIntMinMax{\n\t\t{1, 3, 1},\n\t\t{5, 100, 5},\n\t\t{321, 3, 3},\n\t\t{123, 123, 123},\n\t}\n\tfor _, test := range testcases {\n\t\tt.Run(fmt.Sprintf(\"%v_%v_%v\", test.v1, test.v2, test.out), func(t *testing.T) {\n\t\t\ta := intMin(test.v1, test.v2)\n\t\t\tif test.out != a {\n\t\t\t\tt.Errorf(\"failed int min. v1: %v, v2: %v, expected: %v, got: %v\", test.v1, test.v2, test.out, a)\n\t\t\t}\n\t\t})\n\t}\n}\nfunc TestIntMax(t *testing.T) {\n\ttestcases := []tcIntMinMax{\n\t\t{1, 3, 3},\n\t\t{5, 100, 100},\n\t\t{321, 3, 321},\n\t\t{123, 123, 123},\n\t}\n\tfor _, test := range testcases {\n\t\tt.Run(fmt.Sprintf(\"%v_%v_%v\", test.v1, test.v2, test.out), func(t *testing.T) {\n\t\t\ta := intMax(test.v1, test.v2)\n\t\t\tif test.out != a {\n\t\t\t\tt.Errorf(\"failed int max. v1: %v, v2: %v, expected: %v, got: %v\", test.v1, test.v2, test.out, a)\n\t\t\t}\n\t\t})\n\t}\n}\n\ntype tcDurationMinMax struct {\n\tv1  time.Duration\n\tv2  time.Duration\n\tout time.Duration\n}\n\nfunc TestDurationMin(t *testing.T) {\n\ttestcases := []tcDurationMinMax{\n\t\t{1 * time.Second, 3 * time.Second, 1 * time.Second},\n\t\t{5 * time.Second, 100 * time.Second, 5 * time.Second},\n\t\t{321 * time.Second, 3 * time.Second, 3 * time.Second},\n\t\t{123 * time.Second, 123 * time.Second, 123 * time.Second},\n\t}\n\tfor _, test := range testcases {\n\t\tt.Run(fmt.Sprintf(\"%v_%v_%v\", test.v1, test.v2, test.out), func(t *testing.T) {\n\t\t\ta := durationMin(test.v1, test.v2)\n\t\t\tif test.out != a {\n\t\t\t\tt.Errorf(\"failed duration min. 
v1: %v, v2: %v, expected: %v, got: %v\", test.v1, test.v2, test.out, a)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestDurationMax(t *testing.T) {\n\ttestcases := []tcDurationMinMax{\n\t\t{1 * time.Second, 3 * time.Second, 3 * time.Second},\n\t\t{5 * time.Second, 100 * time.Second, 100 * time.Second},\n\t\t{321 * time.Second, 3 * time.Second, 321 * time.Second},\n\t\t{123 * time.Second, 123 * time.Second, 123 * time.Second},\n\t}\n\tfor _, test := range testcases {\n\t\tt.Run(fmt.Sprintf(\"%v_%v_%v\", test.v1, test.v2, test.out), func(t *testing.T) {\n\t\t\ta := durationMax(test.v1, test.v2)\n\t\t\tif test.out != a {\n\t\t\t\tt.Errorf(\"failed duration max. v1: %v, v2: %v, expected: %v, got: %v\", test.v1, test.v2, test.out, a)\n\t\t\t}\n\t\t})\n\t}\n}\n\ntype tcNamedValues struct {\n\tvalues []driver.Value\n\tout    []driver.NamedValue\n}\n\nfunc compareNamedValues(v1 []driver.NamedValue, v2 []driver.NamedValue) bool {\n\tif v1 == nil && v2 == nil {\n\t\treturn true\n\t}\n\tif v1 == nil || v2 == nil {\n\t\treturn false\n\t}\n\tif len(v1) != len(v2) {\n\t\treturn false\n\t}\n\tfor i := range v1 {\n\t\tif v1[i] != v2[i] {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n\nfunc TestToNamedValues(t *testing.T) {\n\ttestcases := []tcNamedValues{\n\t\t{\n\t\t\tvalues: []driver.Value{},\n\t\t\tout:    []driver.NamedValue{},\n\t\t},\n\t\t{\n\t\t\tvalues: []driver.Value{1},\n\t\t\tout:    []driver.NamedValue{{Name: \"\", Ordinal: 1, Value: 1}},\n\t\t},\n\t\t{\n\t\t\tvalues: []driver.Value{1, \"test1\", 9.876, nil},\n\t\t\tout: []driver.NamedValue{\n\t\t\t\t{Name: \"\", Ordinal: 1, Value: 1},\n\t\t\t\t{Name: \"\", Ordinal: 2, Value: \"test1\"},\n\t\t\t\t{Name: \"\", Ordinal: 3, Value: 9.876},\n\t\t\t\t{Name: \"\", Ordinal: 4, Value: nil}},\n\t\t},\n\t}\n\tfor _, test := range testcases {\n\t\tt.Run(\"\", func(t *testing.T) {\n\t\t\ta := toNamedValues(test.values)\n\n\t\t\tif !compareNamedValues(test.out, a) {\n\t\t\t\tt.Errorf(\"failed toNamedValues. 
values: %v, expected: %v, got: %v\", test.values, test.out, a)\n\t\t\t}\n\t\t})\n\t}\n}\n\ntype tcIntArrayMin struct {\n\tin  []int\n\tout int\n}\n\nfunc TestGetMin(t *testing.T) {\n\ttestcases := []tcIntArrayMin{\n\t\t{[]int{1, 2, 3, 4, 5}, 1},\n\t\t{[]int{10, 25, 15, 5, 20}, 5},\n\t\t{[]int{15, 12, 9, 6, 3}, 3},\n\t\t{[]int{123, 123, 123, 123, 123}, 123},\n\t\t{[]int{}, -1},\n\t}\n\tfor _, test := range testcases {\n\t\tt.Run(fmt.Sprintf(\"%v\", test.out), func(t *testing.T) {\n\t\t\ta := getMin(test.in)\n\t\t\tif test.out != a {\n\t\t\t\tt.Errorf(\"failed get min. in: %v, expected: %v, got: %v\", test.in, test.out, a)\n\t\t\t}\n\t\t})\n\t}\n}\n\ntype tcURLList struct {\n\tin  string\n\tout bool\n}\n\nfunc TestValidURL(t *testing.T) {\n\ttestcases := []tcURLList{\n\t\t{\"https://ssoTestURL.okta.com\", true},\n\t\t{\"https://ssoTestURL.okta.com:8080\", true},\n\t\t{\"https://ssoTestURL.okta.com/testpathvalue\", true},\n\t\t{\"-a calculator\", false},\n\t\t{\"This is a random test\", false},\n\t\t{\"file://TestForFile\", false},\n\t}\n\tfor _, test := range testcases {\n\t\tt.Run(test.in, func(t *testing.T) {\n\t\t\tresult := isValidURL(test.in)\n\t\t\tif test.out != result {\n\t\t\t\tt.Errorf(\"Failed to validate URL, input: %v, expected: %v, got: %v\", test.in, test.out, result)\n\t\t\t}\n\t\t})\n\t}\n}\n\ntype tcEncodeList struct {\n\tin  string\n\tout string\n}\n\nfunc TestEncodeURL(t *testing.T) {\n\ttestcases := []tcEncodeList{\n\t\t{\"Hello @World\", \"Hello+%40World\"},\n\t\t{\"Test//String\", \"Test%2F%2FString\"},\n\t}\n\n\tfor _, test := range testcases {\n\t\tt.Run(test.in, func(t *testing.T) {\n\t\t\tresult := urlEncode(test.in)\n\t\t\tif test.out != result {\n\t\t\t\tt.Errorf(\"Failed to encode string, input %v, expected: %v, got: %v\", test.in, test.out, result)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestParseUUID(t *testing.T) {\n\ttestcases := 
[]tcUUID{\n\t\t{\"6ba7b812-9dad-11d1-80b4-00c04fd430c8\"},\n\t\t{\"00302010-0504-0706-0809-0a0b0c0d0e0f\"},\n\t}\n\n\tfor _, test := range testcases {\n\t\tt.Run(test.uuid, func(t *testing.T) {\n\t\t\trequestID := ParseUUID(test.uuid)\n\t\t\tif requestID.String() != test.uuid {\n\t\t\t\tt.Fatalf(\"failed to parse uuid\")\n\t\t\t}\n\t\t})\n\t}\n}\n\ntype tcEscapeCsv struct {\n\tin  string\n\tout string\n}\n\nfunc TestEscapeForCSV(t *testing.T) {\n\ttestcases := []tcEscapeCsv{\n\t\t{\"\", \"\\\"\\\"\"},\n\t\t{\"\\n\", \"\\\"\\n\\\"\"},\n\t\t{\"test\\\\\", \"\\\"test\\\\\\\"\"},\n\t}\n\n\tfor _, test := range testcases {\n\t\tt.Run(test.out, func(t *testing.T) {\n\t\t\tresult := escapeForCSV(test.in)\n\t\t\tif test.out != result {\n\t\t\t\tt.Errorf(\"Failed to escape string, input %v, expected: %v, got: %v\", test.in, test.out, result)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestGetFromEnv(t *testing.T) {\n\tos.Setenv(\"SF_TEST\", \"test\")\n\tdefer os.Unsetenv(\"SF_TEST\")\n\tresult, err := GetFromEnv(\"SF_TEST\", true)\n\n\tif err != nil {\n\t\tt.Error(\"failed to read SF_TEST environment variable\")\n\t}\n\tif result != \"test\" {\n\t\tt.Errorf(\"incorrect value read for SF_TEST. 
Expected: test, read %v\", result)\n\t}\n}\n\nfunc TestGetFromEnvFailOnMissing(t *testing.T) {\n\t_, err := GetFromEnv(\"SF_TEST_MISSING\", true)\n\tif err == nil {\n\t\tt.Error(\"should report an error when the env variable is missing\")\n\t}\n}\n\nfunc skipOnJenkins(t *testing.T, message string) {\n\tif os.Getenv(\"JENKINS_HOME\") != \"\" {\n\t\tt.Skip(\"Skipping test on Jenkins: \" + message)\n\t}\n}\n\nfunc skipAuthTests(t *testing.T, message string) {\n\tif os.Getenv(\"RUN_AUTH_TESTS\") != \"true\" {\n\t\tt.Skip(\"Set 'RUN_AUTH_TESTS' to 'true' to run this test: \" + message)\n\t}\n}\n\nfunc skipOnMac(t *testing.T, reason string) {\n\tif runtime.GOOS == \"darwin\" && runningOnGithubAction() {\n\t\tt.Skip(\"skipped on Mac: \" + reason)\n\t}\n}\n\nfunc skipOnWindows(t *testing.T, reason string) {\n\tif runtime.GOOS == \"windows\" {\n\t\tt.Skip(\"skipped on Windows: \" + reason)\n\t}\n}\n\nfunc randomString(n int) string {\n\tr := rand.New(rand.NewSource(time.Now().UnixNano()))\n\talpha := []rune(\"abcdefghijklmnopqrstuvwxyz\")\n\tb := make([]rune, n)\n\tfor i := range b {\n\t\tb[i] = alpha[r.Intn(len(alpha))]\n\t}\n\treturn string(b)\n}\n\nfunc TestWithLowerKeys(t *testing.T) {\n\tm := make(map[string]string)\n\tm[\"abc\"] = \"def\"\n\tm[\"GHI\"] = \"KLM\"\n\tlowerM := withLowerKeys(m)\n\tassertEqualE(t, lowerM[\"abc\"], \"def\")\n\tassertEqualE(t, lowerM[\"ghi\"], \"KLM\")\n}\n\nfunc TestFindByPrefix(t *testing.T) {\n\tnonEmpty := []string{\"aaa\", \"bbb\", \"ccc\"}\n\tassertEqualE(t, findByPrefix(nonEmpty, \"a\"), 0)\n\tassertEqualE(t, findByPrefix(nonEmpty, \"aa\"), 0)\n\tassertEqualE(t, findByPrefix(nonEmpty, \"aaa\"), 0)\n\tassertEqualE(t, findByPrefix(nonEmpty, \"bb\"), 1)\n\tassertEqualE(t, findByPrefix(nonEmpty, \"ccc\"), 2)\n\tassertEqualE(t, findByPrefix(nonEmpty, \"dd\"), -1)\n\tassertEqualE(t, findByPrefix([]string{}, \"dd\"), -1)\n}\n\nfunc TestInternal(t *testing.T) {\n\tctx := context.Background()\n\tassertFalseE(t, isInternal(ctx))\n\tctx = 
WithInternal(ctx)\n\tassertTrueE(t, isInternal(ctx))\n}\n\ntype envOverride struct {\n\tenvName  string\n\toldValue string\n}\n\nfunc (e *envOverride) rollback() {\n\tif e.oldValue != \"\" {\n\t\tos.Setenv(e.envName, e.oldValue)\n\t} else {\n\t\tos.Unsetenv(e.envName)\n\t}\n}\n\nfunc overrideEnv(env string, value string) envOverride {\n\toldValue := os.Getenv(env)\n\tos.Setenv(env, value)\n\treturn envOverride{env, oldValue}\n}\n\nfunc TestSyncParamsAll(t *testing.T) {\n\tt.Run(\"nil map constructor\", func(t *testing.T) {\n\t\tassertEqualE(t, len(syncParams{}.params), 0)\n\t})\n\n\tt.Run(\"original map is left intact\", func(t *testing.T) {\n\t\tm := make(map[string]*string)\n\t\ta := \"a\"\n\t\tm[\"a\"] = &a\n\t\tsp := newSyncParams(m)\n\t\tb := \"b\"\n\t\tsp.set(\"a\", &b)\n\t\tassertEqualE(t, *m[\"a\"], \"a\")\n\t})\n\n\tt.Run(\"nil map yields nothing\", func(t *testing.T) {\n\t\tvar sp syncParams\n\t\tcount := 0\n\t\tfor range sp.All() {\n\t\t\tcount++\n\t\t}\n\t\tassertEqualE(t, count, 0)\n\t})\n\n\tt.Run(\"empty map yields nothing\", func(t *testing.T) {\n\t\tsp := newSyncParams(map[string]*string{})\n\t\tcount := 0\n\t\tfor range sp.All() {\n\t\t\tcount++\n\t\t}\n\t\tassertEqualE(t, count, 0)\n\t})\n\n\tt.Run(\"iterates all entries\", func(t *testing.T) {\n\t\ta, b := \"1\", \"2\"\n\t\tsp := newSyncParams(map[string]*string{\"a\": &a, \"b\": &b})\n\t\tgot := maps.Collect(sp.All())\n\t\tassertEqualE(t, len(got), 2)\n\t\tassertEqualE(t, got[\"a\"], \"1\")\n\t\tassertEqualE(t, got[\"b\"], \"2\")\n\t})\n\n\tt.Run(\"break stops early\", func(t *testing.T) {\n\t\ta, b, c := \"1\", \"2\", \"3\"\n\t\tsp := newSyncParams(map[string]*string{\"a\": &a, \"b\": &b, \"c\": &c})\n\t\tcount := 0\n\t\tfor range sp.All() {\n\t\t\tcount++\n\t\t\tbreak\n\t\t}\n\t\tassertEqualE(t, count, 1)\n\t})\n\n\t// This test verifies there's no data race — All() holds the mutex during iteration\n\t// while set() also acquires it, so they must serialize correctly. 
Running this test\n\t// under -race would catch it if the locking were missing or broken.\n\tt.Run(\"concurrent iteration and mutation\", func(t *testing.T) {\n\t\tsp := newSyncParams(map[string]*string{})\n\t\tvar wg sync.WaitGroup\n\t\twg.Add(2)\n\t\tgo func() {\n\t\t\tdefer wg.Done()\n\t\t\tfor i := range 100 {\n\t\t\t\tv := strconv.Itoa(i)\n\t\t\t\tsp.set(v, &v)\n\t\t\t}\n\t\t}()\n\t\tgo func() {\n\t\t\tdefer wg.Done()\n\t\t\tfor range 100 {\n\t\t\t\tfor range sp.All() {\n\t\t\t\t}\n\t\t\t}\n\t\t}()\n\t\twg.Wait()\n\t})\n}\n"
  },
  {
    "path": "uuid.go",
    "content": "package gosnowflake\n\nimport (\n\t\"crypto/rand\"\n\t\"fmt\"\n\t\"strconv\"\n)\n\nconst rfc4122 = 0x40\n\n// UUID is a RFC4122 compliant uuid type\ntype UUID [16]byte\n\nvar nilUUID UUID\n\n// NewUUID creates a new snowflake UUID\nfunc NewUUID() UUID {\n\tvar u UUID\n\t_, err := rand.Read(u[:])\n\tif err != nil {\n\t\tlogger.Warnf(\"error while reading random bytes to UUID. %v\", err)\n\t}\n\tu[8] = (u[8] | rfc4122) & 0x7F\n\n\tvar version byte = 4\n\tu[6] = (u[6] & 0xF) | (version << 4)\n\treturn u\n}\n\nfunc getChar(str string) byte {\n\ti, _ := strconv.ParseUint(str, 16, 8)\n\treturn byte(i)\n}\n\n// ParseUUID parses a string of xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx into its UUID form\nfunc ParseUUID(str string) UUID {\n\treturn UUID{\n\t\tgetChar(str[0:2]), getChar(str[2:4]), getChar(str[4:6]), getChar(str[6:8]),\n\t\tgetChar(str[9:11]), getChar(str[11:13]),\n\t\tgetChar(str[14:16]), getChar(str[16:18]),\n\t\tgetChar(str[19:21]), getChar(str[21:23]),\n\t\tgetChar(str[24:26]), getChar(str[26:28]), getChar(str[28:30]), getChar(str[30:32]), getChar(str[32:34]), getChar(str[34:36]),\n\t}\n}\n\nfunc (u UUID) String() string {\n\treturn fmt.Sprintf(\"%x-%x-%x-%x-%x\", u[0:4], u[4:6], u[6:8], u[8:10], u[10:])\n}\n"
  },
  {
    "path": "value_awaiter.go",
    "content": "package gosnowflake\n\nimport (\n\t\"bytes\"\n\t\"runtime\"\n\t\"strconv\"\n\t\"sync\"\n)\n\ntype valueAwaiterType struct {\n\tlockKey lockKeyType\n\tworking bool\n\tcond    *sync.Cond\n\tmu      sync.Mutex\n\th       *valueAwaitHolderType\n}\n\nfunc newValueAwaiter(lockKey lockKeyType, h *valueAwaitHolderType) *valueAwaiterType {\n\tret := &valueAwaiterType{\n\t\tlockKey: lockKey,\n\t\th:       h,\n\t}\n\tret.cond = sync.NewCond(&ret.mu)\n\treturn ret\n}\n\nfunc awaitValue[T any](valueAwaiter *valueAwaiterType, runFunc func() (T, error), acceptFunc func(t T, err error) bool, defaultFactoryFunc func() T) (T, error) {\n\tlogger.Tracef(\"awaitValue[%v] entered awaitValue for %s\", goroutineID(), valueAwaiter.lockKey.lockID())\n\tvalueAwaiter.mu.Lock()\n\tvalue, err := runFunc()\n\n\t// check if the value is already ready\n\tif acceptFunc(value, err) {\n\t\tlogger.Tracef(\"awaitValue[%v] value was ready\", goroutineID())\n\t\tvalueAwaiter.mu.Unlock()\n\t\treturn value, err\n\t}\n\n\t// value is not ready, check if no other thread is working\n\tif !valueAwaiter.working {\n\t\tlogger.Tracef(\"awaitValue[%v] start working\", goroutineID())\n\t\tvalueAwaiter.working = true\n\t\tvalueAwaiter.mu.Unlock()\n\t\t// continue working only in this thread\n\t\treturn defaultFactoryFunc(), nil\n\t}\n\n\t// Check again if the value is ready after each wakeup.\n\t// If one thread is woken up and the value is still not ready, it should return default and continue working on this.\n\t// If the value is ready, all threads should be woken up and return the value.\n\tret, err := runFunc()\n\tfor !acceptFunc(ret, err) {\n\t\tlogger.Tracef(\"awaitValue[%v] waiting for value\", goroutineID())\n\t\tvalueAwaiter.cond.Wait()\n\t\tlogger.Tracef(\"awaitValue[%v] woke up\", goroutineID())\n\t\tret, err = runFunc()\n\t\tif !acceptFunc(ret, err) && !valueAwaiter.working {\n\t\t\tlogger.Tracef(\"awaitValue[%v] start working after wait\", goroutineID())\n\t\t\tvalueAwaiter.working = 
true\n\t\t\tvalueAwaiter.mu.Unlock()\n\t\t\treturn defaultFactoryFunc(), nil\n\t\t}\n\t}\n\n\t// Value is ready - all threads should return the value.\n\tlogger.Tracef(\"awaitValue[%v] value was ready after wait\", goroutineID())\n\tvalueAwaiter.mu.Unlock()\n\treturn ret, err\n}\n\nfunc (v *valueAwaiterType) done() {\n\tlogger.Tracef(\"valueAwaiter[%v] done working for %s, resuming all threads\", goroutineID(), v.lockKey.lockID())\n\tv.mu.Lock()\n\tdefer v.mu.Unlock()\n\tv.working = false\n\tv.cond.Broadcast()\n\tv.h.remove(v)\n}\n\nfunc (v *valueAwaiterType) resumeOne() {\n\tlogger.Tracef(\"valueAwaiter[%v] done working for %s, resuming one thread\", goroutineID(), v.lockKey.lockID())\n\tv.mu.Lock()\n\tdefer v.mu.Unlock()\n\tv.working = false\n\tv.cond.Signal()\n}\n\ntype valueAwaitHolderType struct {\n\tmu      sync.Mutex\n\tholders map[string]*valueAwaiterType\n}\n\nvar valueAwaitHolder = newValueAwaitHolder()\n\nfunc newValueAwaitHolder() *valueAwaitHolderType {\n\treturn &valueAwaitHolderType{\n\t\tholders: make(map[string]*valueAwaiterType),\n\t}\n}\n\nfunc (h *valueAwaitHolderType) get(lockKey lockKeyType) *valueAwaiterType {\n\tlockID := lockKey.lockID()\n\th.mu.Lock()\n\tdefer h.mu.Unlock()\n\tholder, ok := h.holders[lockID]\n\tif !ok {\n\t\tholder = newValueAwaiter(lockKey, h)\n\t\th.holders[lockID] = holder\n\t}\n\treturn holder\n}\n\nfunc (h *valueAwaitHolderType) remove(v *valueAwaiterType) {\n\th.mu.Lock()\n\tdefer h.mu.Unlock()\n\tdelete(h.holders, v.lockKey.lockID())\n}\n\nfunc goroutineID() int {\n\tbuf := make([]byte, 32)\n\tn := runtime.Stack(buf, false)\n\tbuf = buf[:n]\n\t// goroutine 1 [running]: ...\n\n\tbuf, ok := bytes.CutPrefix(buf, []byte(\"goroutine \"))\n\tif !ok {\n\t\treturn -1\n\t}\n\n\tbefore, _, ok := bytes.Cut(buf, []byte{' '})\n\tif !ok {\n\t\treturn -2\n\t}\n\n\tgoid, err := strconv.Atoi(string(before))\n\tif err != nil {\n\t\tlogger.Tracef(\"goroutineID err: %v\", err)\n\t\treturn -3\n\t}\n\treturn goid\n}\n"
  },
  {
    "path": "version.go",
    "content": "package gosnowflake\n\n// SnowflakeGoDriverVersion is the version of Go Snowflake Driver.\nconst SnowflakeGoDriverVersion = \"2.0.1\"\n"
  },
  {
    "path": "wiremock_test.go",
    "content": "package gosnowflake\n\nimport (\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"database/sql\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"os\"\n\t\"strconv\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n)\n\nvar wiremock = newWiremock()\nvar wiremockHTTPS = newWiremockHTTPS()\n\ntype wiremockClient struct {\n\tprotocol  string\n\thost      string\n\tport      int\n\tadminPort int\n\tclient    http.Client\n}\n\ntype wiremockClientHTTPS struct {\n\twiremockClient\n}\n\nfunc newWiremock() *wiremockClient {\n\twmHost := os.Getenv(\"WIREMOCK_HOST\")\n\tif wmHost == \"\" {\n\t\twmHost = \"127.0.0.1\"\n\t}\n\twmPortStr := os.Getenv(\"WIREMOCK_PORT\")\n\tif wmPortStr == \"\" {\n\t\twmPortStr = \"14355\"\n\t}\n\twmPort, err := strconv.Atoi(wmPortStr)\n\tif err != nil {\n\t\tpanic(fmt.Sprintf(\"WIREMOCK_PORT is not a number: %v\", wmPortStr))\n\t}\n\treturn &wiremockClient{\n\t\tprotocol:  \"http\",\n\t\thost:      wmHost,\n\t\tport:      wmPort,\n\t\tadminPort: wmPort,\n\t}\n}\n\nfunc newWiremockHTTPS() *wiremockClientHTTPS {\n\twmHost := os.Getenv(\"WIREMOCK_HOST_HTTPS\")\n\tif wmHost == \"\" {\n\t\twmHost = \"127.0.0.1\"\n\t}\n\twmPortStr := os.Getenv(\"WIREMOCK_PORT_HTTPS\")\n\tif wmPortStr == \"\" {\n\t\twmPortStr = \"13567\"\n\t}\n\twmPort, err := strconv.Atoi(wmPortStr)\n\tif err != nil {\n\t\tpanic(fmt.Sprintf(\"WIREMOCK_PORT is not a number: %v\", wmPortStr))\n\t}\n\twmAdminPortStr := os.Getenv(\"WIREMOCK_PORT\")\n\tif wmAdminPortStr == \"\" {\n\t\twmAdminPortStr = \"14355\"\n\t}\n\twmAdminPort, err := strconv.Atoi(wmAdminPortStr)\n\tif err != nil {\n\t\tpanic(fmt.Sprintf(\"WIREMOCK_PORT is not a number: %v\", wmPortStr))\n\t}\n\treturn &wiremockClientHTTPS{\n\t\twiremockClient: wiremockClient{\n\t\t\tprotocol:  \"https\",\n\t\t\thost:      wmHost,\n\t\t\tport:      wmPort,\n\t\t\tadminPort: wmAdminPort,\n\t\t},\n\t}\n}\n\nfunc (wm *wiremockClient) openDb(t *testing.T) *sql.DB {\n\tcfg := wm.connectionConfig()\n\tconnector := NewConnector(SnowflakeDriver{}, 
*cfg)\n\treturn sql.OpenDB(connector)\n}\n\nfunc (wm *wiremockClient) connectionConfig() *Config {\n\tcfg := &Config{\n\t\tAccount:               \"testAccount\",\n\t\tUser:                  \"testUser\",\n\t\tPassword:              \"testPassword\",\n\t\tHost:                  wm.host,\n\t\tPort:                  wm.port,\n\t\tProtocol:              wm.protocol,\n\t\tLoginTimeout:          time.Duration(30) * time.Second,\n\t\tRequestTimeout:        time.Duration(30) * time.Second,\n\t\tMaxRetryCount:         3,\n\t\tOauthClientID:         \"testClientId\",\n\t\tOauthClientSecret:     \"testClientSecret\",\n\t\tOauthAuthorizationURL: wm.baseURL() + \"/oauth/authorize\",\n\t\tOauthTokenRequestURL:  wm.baseURL() + \"/oauth/token\",\n\t}\n\treturn cfg\n}\n\nfunc (wm *wiremockClientHTTPS) connectionConfig(t *testing.T) *Config {\n\tcfg := wm.wiremockClient.connectionConfig()\n\tcfg.Transporter = &http.Transport{\n\t\tTLSClientConfig: wm.tlsConfig(t),\n\t}\n\treturn cfg\n}\n\nfunc (wm *wiremockClientHTTPS) certPool(t *testing.T) *x509.CertPool {\n\ttestCertPool := x509.NewCertPool()\n\tcaBytes, err := os.ReadFile(\"ci/scripts/ca.der\")\n\tassertNilF(t, err)\n\tcertificate, err := x509.ParseCertificate(caBytes)\n\tassertNilF(t, err)\n\ttestCertPool.AddCert(certificate)\n\treturn testCertPool\n}\n\nfunc (wm *wiremockClientHTTPS) ocspTransporter(t *testing.T, delegate http.RoundTripper) http.RoundTripper {\n\tif delegate == nil {\n\t\tdelegate = http.DefaultTransport\n\t}\n\tcfg := wm.connectionConfig(t)\n\tcfg.Transporter = delegate\n\tov := newOcspValidator(cfg)\n\treturn &http.Transport{\n\t\tTLSClientConfig: &tls.Config{\n\t\t\tRootCAs:               wiremockHTTPS.certPool(t),\n\t\t\tVerifyPeerCertificate: ov.verifyPeerCertificateSerial,\n\t\t},\n\t\tDisableKeepAlives: true,\n\t}\n}\n\nfunc (wm *wiremockClientHTTPS) tlsConfig(t *testing.T) *tls.Config {\n\treturn &tls.Config{\n\t\tRootCAs: wm.certPool(t),\n\t}\n}\n\ntype wiremockMapping struct {\n\tfilePath 
string\n\tparams   map[string]string\n}\n\nfunc newWiremockMapping(filePath string) wiremockMapping {\n\treturn wiremockMapping{filePath: filePath}\n}\n\ntype disableEnrichingWithTelemetry struct{}\n\nfunc (wm *wiremockClient) registerMappings(t *testing.T, args ...any) {\n\tskipOnJenkins(t, \"wiremock does not work on Jenkins\")\n\n\tenrichWithTelemetry := true\n\tvar mappings []wiremockMapping\n\tfor _, arg := range args {\n\t\tswitch v := arg.(type) {\n\t\tcase wiremockMapping:\n\t\t\tmappings = append(mappings, v)\n\t\tcase []wiremockMapping:\n\t\t\tmappings = append(mappings, v...)\n\t\tcase disableEnrichingWithTelemetry:\n\t\t\tenrichWithTelemetry = false\n\t\tdefault:\n\t\t\tt.Fatalf(\"unsupported argument type: %T\", v)\n\t\t}\n\t}\n\tallMappings := mappings\n\tif enrichWithTelemetry {\n\t\tallMappings = append(allMappings, newWiremockMapping(\"telemetry/telemetry.json\"))\n\t}\n\tfor _, mapping := range allMappings {\n\t\tf, err := os.Open(\"test_data/wiremock/mappings/\" + mapping.filePath)\n\t\tassertNilF(t, err)\n\t\tdefer f.Close()\n\t\tmappingBodyBytes, err := io.ReadAll(f)\n\t\tassertNilF(t, err)\n\t\tmappingBody := string(mappingBodyBytes)\n\t\tfor key, val := range mapping.params {\n\t\t\tmappingBody = strings.Replace(mappingBody, key, val, 1)\n\t\t}\n\t\tresp, err := wm.client.Post(fmt.Sprintf(\"%v/import\", wm.mappingsURL()), \"application/json\", strings.NewReader(mappingBody))\n\t\tassertNilF(t, err)\n\t\tif resp.StatusCode != http.StatusOK {\n\t\t\trespBody, err := io.ReadAll(resp.Body)\n\t\t\tassertNilF(t, err)\n\t\t\tt.Fatalf(\"cannot create mapping. 
status=%v body=\\n%v\", resp.StatusCode, string(respBody))\n\t\t}\n\t}\n\tt.Cleanup(func() {\n\t\treq, err := http.NewRequest(\"DELETE\", wm.mappingsURL(), nil)\n\t\tassertNilF(t, err)\n\t\t_, err = wm.client.Do(req)\n\t\tassertNilE(t, err)\n\n\t\treq, err = http.NewRequest(\"POST\", fmt.Sprintf(\"%v/reset\", wm.scenariosURL()), nil)\n\t\tassertNilF(t, err)\n\t\t_, err = wm.client.Do(req)\n\t\tassertNilE(t, err)\n\t})\n}\n\nfunc (wm *wiremockClient) mappingsURL() string {\n\treturn fmt.Sprintf(\"http://%v:%v/__admin/mappings\", wm.host, wm.adminPort)\n}\n\nfunc (wm *wiremockClient) scenariosURL() string {\n\treturn fmt.Sprintf(\"http://%v:%v/__admin/scenarios\", wm.host, wm.adminPort)\n}\n\nfunc (wm *wiremockClient) baseURL() string {\n\treturn fmt.Sprintf(\"%v://%v:%v\", wm.protocol, wm.host, wm.port)\n}\n\nfunc TestQueryViaHttps(t *testing.T) {\n\twiremockHTTPS.registerMappings(t,\n\t\twiremockMapping{filePath: \"auth/password/successful_flow.json\"},\n\t\twiremockMapping{filePath: \"select1.json\", params: map[string]string{\n\t\t\t\"%AUTHORIZATION_HEADER%\": \"session token\",\n\t\t}},\n\t)\n\tcfg := wiremockHTTPS.connectionConfig(t)\n\ttestCertPool := x509.NewCertPool()\n\tcaBytes, err := os.ReadFile(\"ci/scripts/ca.der\")\n\tassertNilF(t, err)\n\tcertificate, err := x509.ParseCertificate(caBytes)\n\tassertNilF(t, err)\n\ttestCertPool.AddCert(certificate)\n\tcfg.Transporter = &http.Transport{\n\t\tTLSClientConfig: &tls.Config{\n\t\t\tRootCAs: testCertPool,\n\t\t},\n\t}\n\tconnector := NewConnector(SnowflakeDriver{}, *cfg)\n\tdb := sql.OpenDB(connector)\n\trows, err := db.Query(\"SELECT 1\")\n\tassertNilF(t, err)\n\tdefer rows.Close()\n\tvar v int\n\tassertTrueF(t, rows.Next())\n\tassertNilF(t, rows.Scan(&v))\n\tassertEqualE(t, v, 1)\n}\n"
  }
]