[
  {
    "path": ".bito/guidelines/domain-invariants.txt",
    "content": "Critical domain invariants for contentful-export:\n\n1. types.d.ts and lib/usageParams.js must stay in sync with the actual Options interface.\n   Any new option added to parseOptions.js must be reflected in both files.\n\n2. dist/ is a build artifact compiled from lib/ via Babel. Never edit dist/ files directly.\n   The build command is: npm run build (which runs clean + tsc check + babel compile).\n\n3. package.json version is always \"0.0.0-determined-by-semantic-release\". Never set a\n   version number manually. semantic-release handles all versioning based on conventional\n   commit messages.\n\n4. Webhooks and roles can only be exported from the master environment. The code in\n   get-space-data.js explicitly skips these when environmentId !== 'master'. This is a\n   Contentful API constraint, not a bug.\n\n5. When deliveryToken is provided and includeDrafts is false, entries and assets are\n   fetched from the CDA (published-only). Tags are CMA-only and will not be exported\n   via CDA. Do not change this dual-client logic without understanding the implications\n   for downstream consumers.\n\n6. Asset downloads and editor interface fetching both use concurrency of 6 via\n   Bluebird Promise.map. Increasing concurrency risks hitting Contentful API rate limits.\n\n7. The contentOnly flag is a convenience shorthand that sets skipRoles, skipContentModel,\n   and skipWebhooks to true. Do not duplicate this logic elsewhere.\n\n8. bfj (Big-Friendly JSON) is used for writing export files to handle large spaces\n   without exhausting memory. Do not replace with JSON.stringify for the export write path.\n\n9. Embargoed assets (URLs where the subdomain matches {images|assets|downloads|videos}.secure.*)\n   require JWT-signed URLs via the asset_keys API. The signing cache key is\n   host:spaceId:environmentId with a 6-hour expiry window. Changes to this logic are\n   security-sensitive.\n\n10. The pretest script runs lint + build + clean test artifacts. Running npm test always\n    triggers a full build. For faster iteration during development, use npm run test:unit\n    directly (after an initial build).\n"
  },
  {
    "path": ".bito/guidelines/repo-truth-and-boundaries.txt",
    "content": "Use the repository's written documentation as review context and check whether\nthe change matches the documented intent.\n\n- Start from README.md, ARCHITECTURE.md, AGENTS.md, CONTRIBUTING.md, and docs/ADRs/\n  for architectural context.\n- Check whether code, tests, and documentation all tell the same story. Flag\n  mismatches between implementation and the documented architecture or ADRs.\n- Treat AGENTS.md as the authoritative guide for sharp edges and invariants. If a\n  change violates an invariant documented there, flag it.\n- If CI or another required check already enforces a merge rule, do not ask for\n  duplicate PR template sections or manual checklists.\n- Ask for an ADR update when a change is architecture-significant (new module, new\n  dependency, new persistence strategy, new integration).\n- Distinguish the current public API surface from internal implementation details.\n  Public type/interface changes in types.d.ts require extra scrutiny.\n- The standalone CLI (bin/contentful-export) prints a redirection notice pointing\n  users to contentful-cli. Changes should focus on the library API, not CLI enhancements.\n"
  },
  {
    "path": ".bito/guidelines/review-posture.txt",
    "content": "Review this pull request like the tech lead of the contentful-export project.\n\n- Prefer a few high-signal findings to a long list of minor or style-only comments.\n- Prefer behavior, contract, runtime, and documentation issues over process-only\n  suggestions. Do not ask for duplicate PR template sections, checklists, or manual\n  validation acknowledgements when CI or required checks already enforce that policy.\n- Keep feedback actionable: explain why it matters, how it would surface in practice,\n  and the clearest next step.\n- If a concern is only a risk or assumption rather than a confirmed bug, say that\n  clearly and explain what evidence would confirm it.\n- If you find no issues, say so explicitly and call out any residual uncertainty\n  that still deserves human attention.\n\nKey areas to focus on for this repo:\n- Public API surface changes (Options interface, return types) -- check that types.d.ts\n  is updated to match any changes in lib/.\n- Backward compatibility for npm consumers -- this is a library, not just a CLI.\n- Pagination correctness in get-space-data.js -- off-by-one errors or missed edge cases\n  can silently drop content during export.\n- Embargoed asset handling -- JWT signing, cache invalidation, and error handling are\n  security-sensitive.\n- contentful-batch-libs compatibility -- ensure imported utilities match the expected API.\n"
  },
  {
    "path": ".bito.yaml",
    "content": "suggestion_mode: comprehensive\npost_description: true\npost_changelist: true\nexclude_files: 'package-lock.json'\nexclude_draft_pr: false\nsecret_scanner_feedback: true\nlinters_feedback: true\nrepo_level_guidelines_enabled: true\nsequence_diagram_enabled: true\ncustom_guidelines:\n  general:\n    - name: 'Review Posture'\n      path: './.bito/guidelines/review-posture.txt'\n    - name: 'Repo Truth And Alignment'\n      path: './.bito/guidelines/repo-truth-and-boundaries.txt'\n    - name: 'Domain Invariants'\n      path: './.bito/guidelines/domain-invariants.txt'\n"
  },
  {
    "path": ".contentful/vault-secrets.yaml",
    "content": "version: 1\nservices:\n  github-action:\n    policies:\n      - dependabot\n      - semantic-release\n      - packages-read\n"
  },
  {
    "path": ".eslintrc",
    "content": "{\n  \"extends\": [\n    \"standard\",\n    \"eslint:recommended\",\n    \"plugin:@typescript-eslint/eslint-recommended\",\n    \"plugin:@typescript-eslint/recommended\"\n  ],\n  \"plugins\": [\n    \"standard\",\n    \"promise\",\n    \"jest\"\n  ],\n  \"env\": {\n    \"jest/globals\": true\n  }\n}\n"
  },
  {
    "path": ".github/CODEOWNERS",
    "content": "* @contentful/team-developer-experience\n\npackage.json\npackage-lock.json\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.md",
    "content": "---\nname: Bug Report\nabout: Create a report to help us improve\ntitle: '[BUG] '\nlabels: bug\nassignees: ''\n---\n\n## Bug Description\n\nA clear and concise description of what the bug is.\n\n## Steps to Reproduce\n\n1. Go to '...'\n2. Execute '...'\n3. See error\n\n## Expected Behavior\n\nA clear and concise description of what you expected to happen.\n\n## Actual Behavior\n\nA clear and concise description of what actually happened.\n\n## Code Sample\n\n```javascript\n// Minimal code to reproduce the issue\n```\n\n## Environment\n\n- OS: [e.g. macOS 13.0, Windows 11, Ubuntu 22.04]\n- Package Version: [e.g. 1.2.3]\n- Node Version: [e.g. 18.0.0]\n- Package Manager: [e.g. npm 9.0.0, yarn 1.22.0]\n\n## Error Messages/Logs\n\n```\nPaste any error messages or relevant logs here\n```\n\n## Screenshots\n\nIf applicable, add screenshots to help explain your problem.\n\n## Additional Context\n\nAdd any other context about the problem here.\n\n## Possible Solution\n\nIf you have suggestions on how to fix the bug, please describe them here.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/config.yml",
    "content": "blank_issues_enabled: true\ncontact_links:\n  - name: Contentful Support & Help Center\n    url: https://support.contentful.com/hc/en-us/\n    about: Submit a support ticket or browse the help center.\n  - name: Contentful Developer Portal\n    url: https://www.contentful.com/developers/\n    about: Browse the developer portal for documentation, tutorials, and more.\n  - name: Contentful Community Discord\n    url: https://www.contentful.com/discord/\n    about: Get peer support on the Contentful Community Discord.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "content": "---\nname: Feature Request\nabout: Suggest an idea for this project\ntitle: '[FEATURE] '\nlabels: enhancement\nassignees: ''\n---\n\n## Feature Description\n\nA clear and concise description of the feature you'd like to see.\n\n## Problem Statement\n\nIs your feature request related to a problem? Please describe.\nExample: I'm always frustrated when [...]\n\n## Proposed Solution\n\nA clear and concise description of what you want to happen.\n\n## Use Case\n\nDescribe the use case for this feature. How would you use it?\n\n```javascript\n// Example of how the feature would be used\nconst example = new Feature({\n  option: 'value',\n});\n```\n\n## Alternatives Considered\n\nA clear and concise description of any alternative solutions or features you've considered.\n\n## Benefits\n\nWhat are the benefits of implementing this feature?\n\n- Benefit 1\n- Benefit 2\n- Benefit 3\n\n## Potential Drawbacks\n\nAre there any potential drawbacks or challenges with this feature?\n\n## Additional Context\n\nAdd any other context, screenshots, or examples about the feature request here.\n\n## Implementation Suggestions\n\nIf you have ideas about how this could be implemented, please share them here.\n\n## Priority\n\nHow important is this feature to you?\n\n- [ ] Critical - Blocking my usage\n- [ ] High - Important for my use case\n- [ ] Medium - Would be nice to have\n- [ ] Low - Just a suggestion\n\n## Willingness to Contribute\n\n- [ ] I'd be willing to submit a PR for this feature\n- [ ] I can help test this feature\n- [ ] I can help with documentation\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE.md",
    "content": "<!--\nThank you for opening a pull request.\n\nPlease fill in as much of the template below as you're able. Feel free to remove\nany section you want to skip.\n-->\n\n## Summary\n\n<!-- Give a short summary what your PR is introducing/fixing. -->\n\n## Description\n\n<!-- Describe your changes in detail -->\n\n## Motivation and Context\n\n<!--\nWhy is this change required? What problem does it solve?\nIf it fixes an open issue, please link to the issue here.\n-->\n\n## PR Checklist\n\n- [ ] I have read the `CONTRIBUTING.md` file\n- [ ] All commits follow [conventional commits](https://www.conventionalcommits.org/en/v1.0.0/)\n- [ ] Documentation is updated (if necessary)\n- [ ] PR doesn't contain any sensitive information\n- [ ] There are no breaking changes\n"
  },
  {
    "path": ".github/dependabot.yml",
    "content": "version: 2\n\nupdates:\n  - package-ecosystem: npm\n    directory: \"/\"\n    schedule:\n      interval: daily\n      time: \"00:00\"\n      timezone: UTC\n    open-pull-requests-limit: 10\n    ignore:\n    - dependency-name: husky\n      versions:\n        - \">=5.0.0\"\n    - dependency-name: figures # Pure ESM module. Remove when supporting ESM\n      versions:\n        - \">=4.0.0\"\n    - dependency-name: bfj\n      versions:\n        - \">=9.0.0\"\n    - dependency-name: semantic-release\n      versions:\n        - \">=23.0.0\"\n    commit-message:\n      prefix: build\n      include: scope\n    groups:\n      production-dependencies:\n        applies-to: version-updates\n        dependency-type: production\n        update-types:\n          - minor\n          - patch\n        patterns:\n          - '*'\n      dev-dependencies:\n        applies-to: version-updates\n        dependency-type: development\n        update-types:\n          - minor\n          - patch\n        patterns:\n          - '*'\n\n    cooldown:\n      default-days: 15"
  },
  {
    "path": ".github/workflows/build.yaml",
    "content": "name: Build\n\non:\n  workflow_call:\n\njobs:\n  build:\n    runs-on: ubuntu-latest\n\n    permissions:\n      contents: read\n\n    steps:\n      - name: Checkout code\n        uses: actions/checkout@v5\n\n      - name: Setup Node.js\n        uses: actions/setup-node@v6\n        with:\n          node-version: '24'\n          cache: 'npm'\n\n      - name: Install dependencies\n        run: npm ci\n\n      - name: Build\n        run: npm run build\n\n      - name: Check build artifacts\n        run: ls -la dist/\n\n      - name: Save Build folders\n        uses: actions/cache/save@v4\n        with:\n          path: |\n            dist\n          key: build-cache-${{ github.run_id }}-${{ github.run_attempt }}\n"
  },
  {
    "path": ".github/workflows/check.yaml",
    "content": "name: Run Checks\n\non:\n  workflow_call:\n    secrets:\n      MANAGEMENT_TOKEN:\n        required: true\n      DELIVERY_TOKEN:\n        required: true\n      EXPORT_SPACE_ID:\n        required: true\n      EXPORT_SPACE_ID_EMBARGOED_ASSETS:\n        required: true\n\njobs:\n  check:\n    runs-on: ubuntu-latest\n\n    permissions:\n      contents: read\n\n    steps:\n      - name: Checkout code\n        uses: actions/checkout@v5\n\n      - name: Setup Node.js\n        uses: actions/setup-node@v6\n        with:\n          node-version: '24'\n          cache: 'npm'\n\n      - name: Install dependencies\n        run: npm ci\n\n      - name: Restore the build folders\n        uses: actions/cache/restore@v4\n        with:\n          path: |\n            dist\n          key: build-cache-${{ github.run_id }}-${{ github.run_attempt }}\n          fail-on-cache-miss: true\n\n      # Prior CI pipeline did not lint or format, skipping for now\n      # - name: Run linter\n      #   run: npm run lint\n\n      # - name: Check formatting\n      #   run: npm run format:check\n\n      - name: Run unit tests\n        run: npm run test:unit\n\n      - name: Run integration tests\n        run: npm run test:integration\n        env:\n          MANAGEMENT_TOKEN: ${{ secrets.MANAGEMENT_TOKEN }}\n          DELIVERY_TOKEN: ${{ secrets.DELIVERY_TOKEN }}\n          EXPORT_SPACE_ID: ${{ secrets.EXPORT_SPACE_ID }}\n          EXPORT_SPACE_ID_EMBARGOED_ASSETS: ${{ secrets.EXPORT_SPACE_ID_EMBARGOED_ASSETS }}"
  },
  {
    "path": ".github/workflows/codeql.yaml",
    "content": "---\nname: \"CodeQL Scan for GitHub Actions Workflows\"\n\non:\n  push:\n    branches: [main]\n    paths: [\".github/workflows/**\"]\n\njobs:\n  analyze:\n    name: Analyze GitHub Actions workflows\n    runs-on: ubuntu-latest\n    permissions:\n      actions: read\n      contents: read\n      security-events: write\n\n    steps:\n      - uses: actions/checkout@v5\n\n      - name: Initialize CodeQL\n        uses: github/codeql-action/init@v4\n        with:\n          languages: actions\n\n      - name: Run CodeQL Analysis\n        uses: github/codeql-action/analyze@v4\n        with:\n          category: actions\n"
  },
  {
    "path": ".github/workflows/dependabot-approve-and-request-merge.yaml",
    "content": "name: \"dependabot approve-and-request-merge\"\n\non: pull_request_target\n\njobs:\n  worker:\n    permissions:\n      contents: write\n      id-token: write\n    runs-on: ubuntu-latest\n    if: github.event.pull_request.user.login == 'dependabot[bot]' && github.repository == github.event.pull_request.head.repo.full_name\n    steps:\n      - uses: contentful/github-auto-merge@v1\n        with:\n          VAULT_URL: ${{ secrets.VAULT_URL }}\n"
  },
  {
    "path": ".github/workflows/main.yaml",
    "content": "name: CI\npermissions:\n  contents: read\n\non:\n  push:\n    branches: ['**']\n  pull_request:\n    branches: ['**']\n\njobs:\n  build:\n    uses: ./.github/workflows/build.yaml\n\n  check:\n    needs: build\n    uses: ./.github/workflows/check.yaml\n    secrets: inherit\n\n  release:\n    if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/beta')\n    needs: [build, check]\n    permissions:\n      contents: write\n      id-token: write\n      actions: read\n    uses: ./.github/workflows/release.yaml\n    secrets:\n      VAULT_URL: ${{ secrets.VAULT_URL }}\n\n"
  },
  {
    "path": ".github/workflows/release.yaml",
    "content": "name: Release\n\non:\n  workflow_call:\n    secrets:\n      VAULT_URL:\n        required: true\n\njobs:\n  release:\n    runs-on: ubuntu-latest\n\n    permissions:\n      contents: write\n      id-token: write\n      actions: read\n\n    steps:\n      - name: 'Retrieve Secrets from Vault'\n        id: vault\n        uses: hashicorp/vault-action@v3.4.0\n        with:\n          url: ${{ secrets.VAULT_URL }}\n          role: ${{ github.event.repository.name }}-github-action\n          method: jwt\n          path: github-actions\n          exportEnv: false\n          secrets: |\n            github/token/${{ github.event.repository.name }}-semantic-release token | GITHUB_TOKEN ;\n\n      - name: Get Automation Bot User ID\n        id: get-user-id\n        run: echo \"user-id=$(gh api \"/users/contentful-automation[bot]\" --jq .id)\" >> \"$GITHUB_OUTPUT\"\n        env:\n          GITHUB_TOKEN: ${{ steps.vault.outputs.GITHUB_TOKEN }}\n\n      - name: Setting up Git User Credentials\n        run: |\n          git config --global user.name 'contentful-automation[bot]'\n          git config --global user.email '${{ steps.get-user-id.outputs.user-id }}+contentful-automation[bot]@users.noreply.github.com'\n\n      - name: Checkout code\n        uses: actions/checkout@v5\n        with:\n          fetch-depth: 0\n\n      - name: Setup Node.js\n        uses: actions/setup-node@v6\n        with:\n          node-version: '24'\n          cache: 'npm'\n\n      - name: Install latest npm\n        run: npm install -g npm@latest\n\n      - name: Install dependencies\n        run: npm ci\n\n      - name: Restore the build folders\n        uses: actions/cache/restore@v4\n        with:\n          path: |\n            dist\n          key: build-cache-${{ github.run_id }}-${{ github.run_attempt }}\n          fail-on-cache-miss: true\n\n      - name: Run Release\n        run: |\n          echo \"Starting Semantic Release Process\"\n          echo \"npm version: $(npm -v)\"\n          npm run semantic-release\n        env:\n          GITHUB_TOKEN: ${{ steps.vault.outputs.GITHUB_TOKEN }}\n\n      - name: Get latest release tag\n        id: get-tag\n        run: |\n          TAG=$(gh api repos/${{ github.repository }}/releases/latest --jq .tag_name)\n          echo \"tag=$TAG\" >> $GITHUB_OUTPUT\n        env:\n          GITHUB_TOKEN: ${{ steps.vault.outputs.GITHUB_TOKEN }}\n\n      - name: Summary\n        run: |\n          echo \"## Release Summary\" >> $GITHUB_STEP_SUMMARY\n          echo \"\" >> $GITHUB_STEP_SUMMARY\n          echo \"- **Version**: ${{ steps.get-tag.outputs.tag }}\" >> $GITHUB_STEP_SUMMARY\n          echo \"- **GitHub Release**: https://github.com/${{ github.repository }}/releases/tag/${{ steps.get-tag.outputs.tag }}\" >> $GITHUB_STEP_SUMMARY\n"
  },
  {
    "path": ".gitignore",
    "content": "contentful-export-*\n\ndist\ngh-pages\n\n# Docker\n*.dockerfile\n*.dockerignore\n\n# Node package managers\nyarn.lock\n\n# Created by https://www.gitignore.io/api/vim,code,linux,macos,windows,sublimetext,node\n\n### Code ###\n# Visual Studio Code - https://code.visualstudio.com/\n.settings/\n.vscode/\njsconfig.json\n\n# Export config\nconfig.json\n\n### Linux ###\n*~\n\n# temporary files which can be created if a process still has a handle open of a deleted file\n.fuse_hidden*\n\n# KDE directory preferences\n.directory\n\n# Linux trash folder which might appear on any partition or disk\n.Trash-*\n\n# .nfs files are created when an open file is removed but is still being accessed\n.nfs*\n\n### macOS ###\n*.DS_Store\n.AppleDouble\n.LSOverride\n\n# Icon must end with two \\r\nIcon\n\n# Thumbnails\n._*\n\n# Files that might appear in the root of a volume\n.DocumentRevisions-V100\n.fseventsd\n.Spotlight-V100\n.TemporaryItems\n.Trashes\n.VolumeIcon.icns\n.com.apple.timemachine.donotpresent\n\n# Directories potentially created on remote AFP share\n.AppleDB\n.AppleDesktop\nNetwork Trash Folder\nTemporary Items\n.apdisk\n\n### Node ###\n# Logs\nlogs\n*.log\nnpm-debug.log*\nyarn-debug.log*\nyarn-error.log*\n\n# Runtime data\npids\n*.pid\n*.seed\n*.pid.lock\n\n# Directory for instrumented libs generated by jscoverage/JSCover\nlib-cov\n\n# Coverage directory used by tools like istanbul\ncoverage\n\n# nyc test coverage\n.nyc_output\n\n# Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files)\n.grunt\n\n# Bower dependency directory (https://bower.io/)\nbower_components\n\n# node-waf configuration\n.lock-wscript\n\n# Compiled binary addons (http://nodejs.org/api/addons.html)\nbuild/Release\n\n# Dependency directories\nnode_modules/\njspm_packages/\n\n# Typescript v1 declaration files\ntypings/\n\n# Optional npm cache directory\n.npm\n\n# Optional eslint cache\n.eslintcache\n\n# Optional REPL history\n.node_repl_history\n\n# Output of 'npm pack'\n*.tgz\n\n# Yarn Integrity file\n.yarn-integrity\n\n# dotenv environment variables file\n.env\n.envrc\n\n### SublimeText ###\n# cache files for sublime text\n*.tmlanguage.cache\n*.tmPreferences.cache\n*.stTheme.cache\n\n# workspace files are user-specific\n*.sublime-workspace\n\n# project files should be checked into the repository, unless a significant\n# proportion of contributors will probably not be using SublimeText\n# *.sublime-project\n\n# sftp configuration file\nsftp-config.json\n\n# Package control specific files\nPackage Control.last-run\nPackage Control.ca-list\nPackage Control.ca-bundle\nPackage Control.system-ca-bundle\nPackage Control.cache/\nPackage Control.ca-certs/\nPackage Control.merged-ca-bundle\nPackage Control.user-ca-bundle\noscrypto-ca-bundle.crt\nbh_unicode_properties.cache\n\n# Sublime-github package stores a github token in this file\n# https://packagecontrol.io/packages/sublime-github\nGitHub.sublime-settings\n\n### Vim ###\n# swap\n.sw[a-p]\n.*.sw[a-p]\n# session\nSession.vim\n# temporary\n.netrwhist\n# auto-generated tag files\ntags\n\n### Windows ###\n# Windows thumbnail cache files\nThumbs.db\nehthumbs.db\nehthumbs_vista.db\n\n# Folder config file\nDesktop.ini\n\n# Recycle Bin used on file shares\n$RECYCLE.BIN/\n\n# Windows Installer files\n*.cab\n*.msi\n*.msm\n*.msp\n\n# Windows shortcuts\n*.lnk\n\n.idea\n.tool-versions\n\n# End of https://www.gitignore.io/api/vim,code,linux,macos,windows,sublimetext,node\n"
  },
  {
    "path": ".npmrc",
    "content": "ignore-scripts=true\n"
  },
  {
    "path": ".nvmrc",
    "content": "24\n"
  },
  {
    "path": "AGENTS.md",
    "content": "<!-- Generated by seed-golden-context | Last updated: 2026-05-04 -->\n# Agent Guide\n\nRead this file first. It tells you where to find context in this repo.\n\n## Quick Reference\n\n| What you need | Where to look |\n|---|---|\n| How this repo is structured | [ARCHITECTURE.md](./ARCHITECTURE.md) |\n| How to build/test/run | [CONTRIBUTING.md](./CONTRIBUTING.md) |\n| Why decisions were made | [docs/ADRs/](./docs/ADRs/) |\n| What this repo does | [README.md](./README.md) |\n| PR review rules | [.bito/guidelines/](./.bito/guidelines/) |\n| Active specs/work | [docs/specs/](./docs/specs/) |\n\n## Sharp Edges & Invariants\n\n- **Never edit `dist/` directly.** It is a Babel build artifact compiled from `lib/`. Always edit source in `lib/` and run `npm run build`.\n- **`types.d.ts` is hand-maintained.** It is NOT auto-generated. When changing the public API (options, return type), you must manually update `types.d.ts` to match.\n- **`usageParams.js` and `types.d.ts` must stay in sync.** CLI options (yargs definitions in `lib/usageParams.js`) and the TypeScript `Options` interface (`types.d.ts`) define the same option set. Changes to one must be reflected in the other.\n- **Webhooks and roles are master-only.** The code explicitly skips webhook and role export when `environmentId !== 'master'`. Do not change this -- it reflects a Contentful API constraint.\n- **The standalone CLI redirects to `contentful-cli`.** `bin/contentful-export` prints a notice that the CLI has moved to `contentful-cli`, then runs the export. Do not add new CLI-only features here.\n- **Integration tests require real Contentful spaces.** They are not mocked. CI provides the required secrets via environment variables. Do not commit tokens.\n- **`package.json` version is `0.0.0-determined-by-semantic-release`.** Never set a version manually. `semantic-release` handles all versioning.\n- **Babel target (Node 12) is lower than `engines.node` (>=22).** This is a known inconsistency in `babel.config.json`. The low target is harmless but confusing.\n- **`contentOnly` flag is a shorthand.** When set, it internally enables `skipRoles`, `skipContentModel`, and `skipWebhooks`. Do not duplicate this logic.\n- **Asset downloads use concurrency of 6.** Both `download-assets.js` and `get-space-data.js` (editor interfaces) use Bluebird `Promise.map` with `{ concurrency: 6 }`. Be careful about changing this -- it affects API rate limiting.\n\n## High-Traffic Areas\n\nThese paths are the most critical and frequently exercised — changes here carry outsized risk:\n\n| Path | Why it's sensitive | What to watch |\n|---|---|---|\n| `lib/tasks/get-space-data.js` | Core export logic; every export invokes it | Pagination ordering (`sys.createdAt,sys.id`) is load-bearing for deterministic exports. Changing page size or concurrency affects API rate limits. |\n| `lib/parseOptions.js` | Validates and merges all user input | Adding/removing options here cascades to `usageParams.js`, `types.d.ts`, and all downstream consumers. |\n| `lib/index.js` | Listr task orchestration | Task ordering matters — e.g., `get-space-data` must complete before `download-assets` can reference fetched asset URLs. |\n| `lib/tasks/download-assets.js` | Network-heavy, concurrency-limited | Changing concurrency (currently 6) can trigger CMA rate limiting or exhaust memory on large spaces. |\n\n## Key Conventions\n\n- **Commit format:** Conventional Commits enforced by Commitizen + Husky pre-commit hook\n- **Branch strategy:** `main` (stable releases) + `beta` (pre-releases), feature branches, squash merge\n- **Test location:** `test/unit/` mirrors `lib/` structure; `test/integration/` for end-to-end\n- **Module system:** ES modules in source, compiled to CJS via Babel for npm distribution\n- **Build before test:** `npm test` runs `pretest` which includes lint + build\n\n## Integration Points\n\n**Upstream (this repo consumes):**\n- Contentful Management API (`api.contentful.com`) -- all entity CRUD\n- Contentful Delivery API (`cdn.contentful.com`) -- published-only content\n- `contentful-batch-libs` -- shared utilities for export/import tools\n\n**Downstream (consumes this repo):**\n- `contentful-cli` -- wraps this library as `contentful space export`\n- `contentful-mcp-server` -- uses this library for space-to-space migration\n- Direct npm consumers\n\n## Build & Quality\n\n```bash\n# Quick verification loop\nnpm install && npm run build && npm run test:unit && npm run lint\n```\n"
  },
  {
    "path": "ARCHITECTURE.md",
    "content": "<!-- Generated by seed-golden-context | Last updated: 2026-05-04 -->\n# Architecture\n\n## Overview\n\n`contentful-export` is a Node.js library and CLI tool that exports the full content model, content, and assets from a Contentful space (or environment) to a JSON file. It orchestrates paginated reads against the Contentful Management API (CMA) and optionally the Content Delivery API (CDA), downloads asset files, and writes the aggregated result to disk. It is the export half of the Contentful export/import toolchain.\n\n## System Context\n\n```mermaid\ngraph TD\n    CLI[\"contentful-cli<br/>(space export command)\"] --> LIB[\"contentful-export<br/>(this repo)\"]\n    MCP[\"contentful-mcp-server<br/>(space-to-space migration)\"] --> LIB\n    USER[\"Direct library consumers<br/>(npm package)\"] --> LIB\n    LIB --> CMA[\"Contentful Management API<br/>(api.contentful.com)\"]\n    LIB --> CDA[\"Contentful Delivery API<br/>(cdn.contentful.com)\"]\n    LIB --> FS[\"Local filesystem<br/>(JSON export file + asset downloads)\"]\n```\n\n## Internal Structure\n\n| Directory / File | Purpose |\n|---|---|\n| `lib/index.js` | Main entry point. Orchestrates the export pipeline using Listr tasks: init client, fetch space data, download assets, write export file. |\n| `lib/parseOptions.js` | Merges defaults, config file, and user-supplied params. Validates required fields (`spaceId`, `managementToken`). Processes proxy settings, query strings, and file paths. |\n| `lib/tasks/init-client.js` | Creates CMA and/or CDA client instances using `contentful-management` and `contentful` SDKs. |\n| `lib/tasks/get-space-data.js` | Paginated fetching of all entity types (content types, entries, assets, locales, tags, webhooks, roles, editor interfaces). Handles draft/archived filtering and tag stripping. |\n| `lib/tasks/download-assets.js` | Downloads asset binary files to disk with concurrency of 6. Handles embargoed (signed URL) assets. |\n| `lib/usageParams.js` | Yargs CLI argument definitions. Consumed by the `bin/contentful-export` CLI entry point. |\n| `lib/utils/embargoedAssets.js` | JWT-based URL signing for embargoed (secure) assets. Caches asset keys per space/environment. |\n| `lib/utils/headers.js` | Parses custom HTTP header strings (`-H \"Key: Value\"`) into an object for API requests. |\n| `bin/contentful-export` | CLI entry point. Requires the built `dist/` output, prints a redirection notice pointing users to `contentful-cli`, then runs the export. |\n| `types.d.ts` | Hand-maintained TypeScript type declarations for the public API (`Options` interface and default export). |\n| `dist/` | Babel-compiled output (CJS). Generated by `npm run build`, not checked into git. |\n\n## Data Flow\n\n1. **Option parsing** -- User passes options (programmatic or CLI). `parseOptions` merges defaults, config file, and params. Validates required fields.\n2. **Client initialization** -- Creates a CMA client. If `deliveryToken` is provided (and `includeDrafts` is false), also creates a CDA client for fetching published-only entries/assets.\n3. **Paginated fetching** -- `get-space-data.js` connects to the space/environment, then fetches each entity type in sequence using `pagedGet` (page size = `maxAllowedLimit`, default 1000, ordered by `sys.createdAt,sys.id`). Entries and assets are post-filtered for drafts/archived status and optionally have tags stripped.\n4. **Asset download** (optional) -- If `downloadAssets` is true, asset files are streamed to disk under `exportDir/<host>/<path>`. Embargoed assets are signed via the asset_keys API with a 6-hour expiry window before download.\n5. **JSON export** -- The aggregated data object is written to disk using `bfj` (Big-Friendly JSON) for streaming large JSON writes without exhausting memory.\n6. **Summary** -- A table of exported entity counts is printed, along with duration and file path.\n\n## Domain Concepts\n\n| Concept | Description |\n|---|---|\n| **Space** | Top-level container in Contentful. Export targets one space at a time. |\n| **Environment** | A branch of a space's content. Defaults to `master`. Webhooks and roles can only be exported from the `master` environment. |\n| **Content Model** | The set of content types, locales, and editor interfaces that define the structure of content. |\n| **Embargoed Assets** | Assets hosted on `*.secure.*` domains that require JWT-signed URLs for access. The tool creates short-lived signed URLs via the `asset_keys` API. |\n| **Draft / Archived** | Entries and assets can be in draft (no `publishedVersion`) or archived (`archivedVersion` set) states. By default, only published items are exported. |\n| **CDA vs CMA export** | CMA returns all versions (latest, including unpublished changes). CDA returns only the published version. Providing a `deliveryToken` switches entry/asset fetching to CDA. Tags are CMA-only and will not be exported via CDA. |\n\n## Key Dependencies\n\n| Dependency | Why it's here |\n|---|---|\n| `contentful-management` (v12) | CMA client for fetching space data. v12 requires Node >=22. |\n| `contentful` (v11) | CDA client, used when `deliveryToken` is provided for published-only export. |\n| `contentful-batch-libs` (v11) | Shared utility library for Contentful export/import tools: logging, error handling, task wrapping, proxy utilities, sequence headers. |\n| `bfj` (v9) | Big-Friendly JSON -- streaming JSON serializer for writing large export files without memory exhaustion. |\n| `listr` | Task runner that provides structured progress output (spinner or verbose renderer for CI). |\n| `bluebird` | Promise library used for `Promise.map` with concurrency control (pagination, asset downloads). |\n| `yargs` (v18) | CLI argument parsing. |\n| `axios` (v1) | HTTP client for downloading asset files. |\n| `jsonwebtoken` | JWT signing for embargoed asset URL generation. |\n| `date-fns` | Date formatting and duration calculation for export file naming and summary. |\n\n## Configuration\n\n| Variable / Flag | Purpose | Default |\n|---|---|---|\n| `spaceId` | Space to export (required) | -- |\n| `managementToken` | CMA API token (required) | -- |\n| `environmentId` | Environment within the space | `master` |\n| `deliveryToken` | CDA token; switches entry/asset fetching to published-only | -- |\n| `exportDir` | Directory for output files | `process.cwd()` |\n| `saveFile` | Whether to write JSON to disk | `true` |\n| `maxAllowedLimit` | Items per API page request | `1000` |\n| `downloadAssets` | Download asset binary files to disk | `false` |\n| `includeDrafts` | Include draft entries/assets | `false` |\n| `includeArchived` | Include archived entries/assets | `false` |\n| `contentOnly` | Only export entries and assets (sets `skipRoles`, `skipContentModel`, `skipWebhooks` to true) | `false` |\n| `skipContentModel` / `skipContent` / `skipRoles` / `skipWebhooks` / `skipTags` / `skipEditorInterfaces` | Granular skip flags | all `false` |\n| `stripTags` | Remove tags from exported entries/assets | `false` |\n| `host` | CMA API host | `api.contentful.com` |\n| `hostDelivery` | CDA host | `cdn.contentful.com` |\n| `proxy` / `rawProxy` | HTTP proxy configuration | -- / `false` |\n| `useVerboseRenderer` | Line-by-line output instead of spinner (useful for CI) | `false` |\n| `config` | Path to JSON config file with all options | -- |\n\n### CI Environment Variables\n\nThese are used in the GitHub Actions check workflow for integration tests:\n\n| Variable | Purpose |\n|---|---|\n| `MANAGEMENT_TOKEN` | CMA token for test space |\n| `DELIVERY_TOKEN` | CDA token for test space |\n| `EXPORT_SPACE_ID` | Space ID for integration tests |\n| `EXPORT_SPACE_ID_EMBARGOED_ASSETS` | Space ID for embargoed asset tests |\n\n## Operational Knowledge\n\n### Deployment\n\nThis is an npm library, not a deployed service. Releases happen automatically via `semantic-release` when commits are pushed to `main` (stable) or `beta` (prerelease) branches. The release workflow retrieves credentials from HashiCorp Vault (visible in `.github/workflows/release.yaml`).\n\n- **Rollback:** Unpublish or publish a patched version to npm. There is no service to roll back.\n- **Beta channel:** Pushing to the `beta` branch publishes a prerelease version on the `beta` npm dist-tag.\n\n### Failure Modes\n\n| Failure | Cause | Mitigation |\n|---|---|---|\n| `400 - Response size too big` | Contentful API response size limits exceeded | Reduce `maxAllowedLimit` (e.g., to 50) |\n| Integration test failures in CI | Missing or expired test space credentials (secrets) | Ensure `MANAGEMENT_TOKEN`, `DELIVERY_TOKEN`, `EXPORT_SPACE_ID`, `EXPORT_SPACE_ID_EMBARGOED_ASSETS` secrets are valid |\n| Embargoed asset download failure | Asset key creation fails or JWT signing error | Check that the space has embargoed assets enabled and the management token has permissions |\n| `ContentfulMultiError` | Aggregated errors during export (partial failure) | Check the error log file at the path printed in output |\n\n### Dependency Failure Behavior\n\nThis library depends on the Contentful Management and Delivery APIs at runtime. If those APIs are unavailable:\n\n| Scenario | Behavior |\n|---|---|\n| CMA unreachable (network failure, DNS, timeout) | Export fails immediately with an Axios network error. No partial output is written. |\n| CMA returns 5xx errors | The SDK retries with exponential backoff (built into `contentful-management`). After retries are exhausted, the export fails with the error aggregated into `ContentfulMultiError`. |\n| CDA unreachable (when `deliveryToken` is provided) | Same as CMA — network error or retry exhaustion leads to export failure. 
|\n| Rate-limited (429) | The SDK handles 429 responses with automatic retry after the `X-Contentful-RateLimit-Reset` header delay. Large spaces may see slow exports but will eventually complete unless the rate limit is persistently exceeded. |\n| Asset CDN unreachable (during `downloadAssets`) | Individual asset downloads fail after Axios timeout. The export completes but reports failed asset downloads in the error log. |\n\nThere is no partial-export resume capability — a failed export must be retried from scratch.\n\n\n## Integration Points\n\n### Upstream (this repo consumes)\n\n- **Contentful Management API** (`api.contentful.com`) -- Primary data source for all entity types\n- **Contentful Delivery API** (`cdn.contentful.com`) -- Optional, for published-only entry/asset export\n- **Contentful Asset Keys API** -- For signing embargoed asset download URLs\n- **contentful-batch-libs** -- Shared logging, error handling, and utility functions\n\n### Downstream (consumes this repo)\n\n- **contentful-cli** (`contentful space export` command) -- Primary CLI consumer; the standalone CLI in this repo redirects users here\n- **contentful-mcp-server** -- Uses this library for space-to-space migration export step\n- **Direct npm consumers** -- Anyone importing `contentful-export` as a library\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "<!-- Generated by seed-golden-context | Last updated: 2026-05-04 -->\n# Contributing\n\n## Prerequisites\n\n| Tool | Version | Notes |\n|---|---|---|\n| Node.js | >=22 (see `.nvmrc` for exact: 24) | Use `nvm use` to switch automatically |\n| npm | Bundled with Node | Lockfile is `package-lock.json` |\n\nNo additional tokens or Docker setup is needed for local development. Integration tests require Contentful API tokens (provided via environment variables in CI).\n\n## Getting Started\n\n```bash\n# Clone and install\ngit clone git@github.com:contentful/contentful-export.git\ncd contentful-export\nnpm install                     # source: package-lock.json\n\n# Build (clean, type-check, then babel compile)\nnpm run build                   # source: package.json -> scripts.build\n\n# Run tests (lint + build + unit + integration)\nnpm test                        # source: package.json -> scripts.test\n```\n\n**Note:** `npm test` runs `pretest` first (lint + build + clean test artifacts), then unit tests with coverage, then integration tests. 
Integration tests require environment variables -- see the CI section below.\n\n## Development Workflow\n\n```bash\n# Watch mode for incremental builds\nnpm run build:watch             # source: package.json -> scripts.build:watch\n\n# Run only unit tests\nnpm run test:unit               # source: package.json -> scripts.test:unit\n\n# Run unit tests in watch mode\nnpm run test:unit:watch         # source: package.json -> scripts.test:unit:watch\n\n# Run only integration tests (requires env vars)\nnpm run test:integration        # source: package.json -> scripts.test:integration\n```\n\n## Commands\n\n### Build\n\n| Command | What it does | Source |\n|---|---|---|\n| `npm run build` | Clean dist/, type-check with tsc, compile with Babel to dist/ | `package.json` -> `scripts.build` |\n| `npm run build:watch` | Babel compile in watch mode | `package.json` -> `scripts.build:watch` |\n| `npm run clean` | Remove dist/ and coverage/ | `package.json` -> `scripts.clean` |\n| `npm run check` | TypeScript type checking (no emit) | `package.json` -> `scripts.check` |\n\n### Test\n\n| Command | What it does | Source |\n|---|---|---|\n| `npm test` | Lint + build + unit tests + integration tests | `package.json` -> `scripts.test` |\n| `npm run test:unit` | Jest unit tests with coverage | `package.json` -> `scripts.test:unit` |\n| `npm run test:unit:watch` | Unit tests in watch mode | `package.json` -> `scripts.test:unit:watch` |\n| `npm run test:unit:debug` | Unit tests with Node inspector for debugging | `package.json` -> `scripts.test:unit:debug` |\n| `npm run test:integration` | Jest integration tests (requires env vars) | `package.json` -> `scripts.test:integration` |\n| `npm run test:integration:watch` | Integration tests in watch mode | `package.json` -> `scripts.test:integration:watch` |\n| `npm run test:integration:debug` | Integration tests with Node inspector | `package.json` -> `scripts.test:integration:debug` |\n\n### Lint\n\n| Command | What it does | Source 
|\n|---|---|---|\n| `npm run lint` | ESLint on lib/, bin/, and types.d.ts | `package.json` -> `scripts.lint` |\n| `npm run lint:fix` | ESLint with auto-fix | `package.json` -> `scripts.lint:fix` |\n\n### Release\n\n| Command | What it does | Source |\n|---|---|---|\n| `npm run semantic-release` | Run semantic-release (CI only) | `package.json` -> `scripts.semantic-release` |\n\n## Testing\n\n- **Framework:** Jest (v29)\n- **Config:** Inline in `package.json` under the `jest` key\n- **Unit tests:** `test/unit/` -- mirrors `lib/` structure\n- **Integration tests:** `test/integration/` -- runs against a real Contentful space (requires environment variables)\n- **Run all:** `npm test`\n- **Run single:** `npx jest --testPathPattern=test/unit/tasks/init-client`\n- **Coverage:** Collected from `lib/**/*.js`, excludes `usageParams.js`\n\nIntegration tests require these environment variables:\n- `MANAGEMENT_TOKEN` -- CMA API token\n- `DELIVERY_TOKEN` -- CDA API token\n- `EXPORT_SPACE_ID` -- Space ID for standard tests\n- `EXPORT_SPACE_ID_EMBARGOED_ASSETS` -- Space ID for embargoed asset tests\n\n## Code Style & Conventions\n\n- **Linting:** ESLint with `standard` + `@typescript-eslint` rules (config: `.eslintrc`)\n- **Module system:** Source is ES modules (import/export), compiled to CJS via Babel for distribution\n- **Type checking:** TypeScript (`tsc --noEmit`) via `tsconfig.json` -- checks `.js` files with `allowJs` and `checkJs` enabled. Strict mode is off.\n- **No Prettier:** This repo does not use Prettier. Follow the existing code style and ESLint rules.\n- **Note:** The CI pipeline currently has linting commented out in the check workflow (see `.github/workflows/check.yaml`). 
Linting still runs locally via `npm test` (which calls `pretest`).\n\n## Commit Convention\n\nThis repo uses [Conventional Commits](https://www.conventionalcommits.org/) via [Commitizen](https://github.com/commitizen/cz-cli) with `cz-conventional-changelog`:\n\n```\ntype(scope): description\n```\n\nValid types: `feat`, `fix`, `chore`, `docs`, `refactor`, `test`, `perf`, `ci`, `build`, `revert`\n\nExamples:\n```\nfeat: add support for exporting taxonomies\nfix: handle embargoed asset download timeout\nbuild(deps): bump contentful-management to v12\nchore(ci): update Node version in workflow\n```\n\n`semantic-release` uses `@semantic-release/commit-analyzer` to determine version bumps:\n- `feat:` -> minor version bump\n- `fix:` -> patch version bump\n- `build(deps):` -> patch version bump (custom rule)\n- `feat!:` or `fix!:` or `BREAKING CHANGE:` in body -> major version bump\n\nGit hooks are configured via npm lifecycle scripts: `precommit` runs `npm run lint` and `prepush` runs `npm run test`. Husky v4 is a devDependency but hook configuration relies on npm script naming conventions.\n\n## Branch Strategy\n\n- `main` -- Production. Merges trigger a stable npm release via semantic-release.\n- `beta` -- Pre-release channel. Merges trigger a beta npm release (`npm install contentful-export@beta`).\n- Feature branches -- No enforced naming pattern. Use descriptive names (e.g., `feat/add-taxonomy-export`, `fix/embargoed-download`).\n\n## Release Process\n\nFully automated via `semantic-release` on GitHub Actions:\n\n1. Push or merge to `main` or `beta`\n2. CI runs build + check jobs\n3. If checks pass, the release job runs `semantic-release`\n4. `semantic-release` analyzes commit messages, determines version, publishes to npm, creates a GitHub release\n\nRelease credentials (GitHub token) are retrieved from HashiCorp Vault during CI. 
The npm publish mechanism is handled by semantic-release.\n\n## Pull Requests\n\n- No enforced PR title format\n- Required checks: Build job + Check job (unit tests + integration tests)\n- Dependabot PRs are auto-approved and auto-merged via the `dependabot-approve-and-request-merge` workflow\n- CI runs on all branches (push and PR events)\n\n## CI/CD\n\n| Job | Trigger | What it does |\n|---|---|---|\n| `build` (Build) | Push to any branch, PR to any branch | Checkout, setup Node 24, npm ci, babel build, cache dist/ | \n| `check` (Run Checks) | After build succeeds | Restore build cache, run unit tests, run integration tests (with secrets) |\n| `release` (Release) | Push to `main` or `beta`, after build + check pass | Retrieve Vault secrets, run `semantic-release` to publish to npm + create GitHub release |\n| `codeql` (CodeQL Scan) | Push to `main` changing `.github/workflows/` | Static analysis of GitHub Actions workflows |\n| `dependabot-approve-and-request-merge` | `pull_request_target` from dependabot | Auto-approve and request merge for Dependabot PRs |\n\n## Adding a New Component\n\n| What you're adding | Copy this as a template | Keep in sync |\n|---|---|---|\n| New entity type in export | Any entity block in `lib/tasks/get-space-data.js` (e.g., tags ~line 63) | `parseOptions.js` (default + `contentOnly`), `usageParams.js`, `types.d.ts` |\n| New top-level task | `lib/tasks/download-assets.js` (options closure pattern) or `get-space-data.js` (sub-list pattern) | Wire into `lib/index.js` tasks array |\n| New utility | `lib/utils/headers.js` | — |\n\nTests mirror `lib/` structure: `test/unit/tasks/<name>.test.js` or `test/unit/utils/<name>.test.js`.\n\n## File-Level Guidance\n\n| Path | Notes |\n|---|---|\n| `dist/` | Generated by Babel build. Never edit directly. Not committed to git. |\n| `types.d.ts` | Hand-maintained TypeScript declarations for the public API. Update when changing the `Options` interface or export signature. 
|\n| `bin/contentful-export` | CLI entry point. Requires compiled `dist/` output. Prints a redirection notice pointing to `contentful-cli`, then runs the export. |\n| `package-lock.json` | Lockfile. Do not edit manually. Regenerate with `npm install`. |\n| `.npmrc` | Contains `ignore-scripts=true` for security. |\n| `example-config.json` | Example configuration file shipped with the package. Keep in sync with supported options. |\n| `lib/usageParams.js` | CLI argument definitions. Keep in sync with `types.d.ts` when adding/removing options. |\n"
  },
  {
    "path": "LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2016 Contentful\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# Contentful export tool\n\n[![CI](https://github.com/contentful/contentful-export/actions/workflows/main.yaml/badge.svg)](https://github.com/contentful/contentful-export/actions/workflows/main.yaml)\n[![npm](https://img.shields.io/npm/v/contentful-export.svg)](https://www.npmjs.com/package/contentful-export) [![semantic-release](https://img.shields.io/badge/%20%20%F0%9F%93%A6%F0%9F%9A%80-semantic--release-e10079.svg)](https://github.com/semantic-release/semantic-release)\n\n[Contentful](https://www.contentful.com) provides a content infrastructure for digital teams to power content in websites, apps, and devices. Unlike a CMS, Contentful was built to integrate with the modern software stack. It offers a central hub for structured content, powerful management and delivery APIs, and a customizable web app that enable developers and content creators to ship digital products faster.\n\nThis is a library that helps you backup your Content Model, Content and Assets or move them to a new Contentful space. _It will support Roles & Permissions in a future version._\n\nTo import your exported data, please refer to the [contentful-import](https://github.com/contentful/contentful-import) repository.\n\n## :exclamation: Usage as CLI\n\n> We moved the CLI version of this tool into our [Contentful CLI](https://github.com/contentful/contentful-cli). 
This allows our users to install and use a single CLI tool to get the full Contentful experience.\n>\n> Please have a look at the [Contentful CLI export command documentation](https://github.com/contentful/contentful-cli/tree/master/docs/space/export) to learn more about how to use this as a command line tool.\n\n## :cloud: Pre-requisites & Installation\n\n### Pre-requisites\n\n- Node.js >= 22\n\n### :cloud: Installation\n\n```bash\nnpm install contentful-export\n```\n\n## :hand: Usage\n\n### CommonJS\n\n```javascript\nconst contentfulExport = require('contentful-export')\n\nconst options = {\n  spaceId: '<space_id>',\n  managementToken: '<content_management_api_key>',\n  ...\n}\n\ncontentfulExport(options)\n  .then((result) => {\n    console.log('Your space data:', result)\n  })\n  .catch((err) => {\n    console.log('Oh no! Some errors occurred!', err)\n  })\n```\n\n### ESM\n\n```javascript\nimport contentfulExport from 'contentful-export'\n\nconst options = {\n  spaceId: '<space_id>',\n  managementToken: '<content_management_api_key>',\n  ...\n}\n\n// contentfulExport returns a Promise so you can use async/await, etc.\nawait contentfulExport(options)\n```\n\n### Querying\n\nTo scope your export, you can pass query parameters. 
All search parameters of our API are supported as documented in our [API documentation](https://www.contentful.com/developers/docs/references/content-delivery-api/#/reference/search-parameters).\n\n```javascript\nconst contentfulExport = require('contentful-export')\n\nconst options = {\n  spaceId: '<space_id>',\n  managementToken: '<content_management_api_key>',\n  queryEntries: ['content_type=<content_type_id>']\n}\n\ncontentfulExport(options)\n...\n```\n\nThe export tool also supports multiple inline queries.\n\n```javascript\nconst contentfulExport = require('contentful-export')\n\nconst options = {\n  spaceId: '<space_id>',\n  managementToken: '<content_management_api_key>',\n  queryEntries: [\n    'content_type=<content_type_id>',\n    'sys.id=<entry_id>'\n  ]\n}\n\ncontentfulExport(options)\n...\n```\n\n`queryAssets` uses the same syntax as `queryEntries`.\n\n### Export an environment\n\n```javascript\nconst contentfulExport = require('contentful-export')\n\nconst options = {\n  spaceId: '<space_id>',\n  managementToken: '<content_management_api_key>',\n  environmentId: '<environment_id>'\n}\n\ncontentfulExport(options)\n...\n```\n\n## :gear: Configuration options\n\n### Basics\n\n#### `spaceId` [string] [required]\n\nID of the space with source data\n\n#### `environmentId` [string] [default: 'master']\n\nID of the environment in the source space\n\n#### `managementToken` [string] [required]\n\nContentful management API token for the space to be exported\n\n#### `deliveryToken` [string]\n\nContentful Content Delivery API (CDA) token for the space to be exported.\n\nProviding `deliveryToken` will fetch entries and assets from the\nContentful Delivery API instead of the Contentful Management API.\nThis is useful if you only want the latest _published_ versions,\nas the Management API always returns the latest version of every item,\nincluding unpublished changes. 
So if you want to make sure you only get the latest\npublished changes, provide the `deliveryToken`.\n\nTo clarify: while the Contentful Management API always returns the latest version (50 in this example):\n\n```\n  \"createdAt\": \"2020-01-06T12:00:00.000Z\",\n  \"updatedAt\": \"2020-04-07T11:00:00.000Z\",\n  \"publishedVersion\": 23,\n  \"publishedAt\": \"2020-04-05T14:00:00.000Z\",\n  \"publishedCounter\": 1,\n  \"version\": 50,\n```\n\nthe Content Delivery API would return the `publishedVersion` (23). CDA responses don't include\na version number.\n\nNote: Tags are only available on the Contentful Management API, so they will not be exported if you provide a Contentful Delivery Token. Tags are a newer feature that not all users have access to.\n\n### Output\n\n#### `exportDir` [string] [default: current process working directory]\n\nDefines the path for storing the export JSON file\n\n#### `saveFile` [boolean] [default: true]\n\nSave the export as a JSON file\n\n#### `contentFile` [string]\n\nThe filename for the exported data\n\n### Filtering\n\n#### `includeDrafts` [boolean] [default: false]\n\nInclude drafts in the exported entries.\n\nThe `deliveryToken` option is ignored\nwhen `includeDrafts` is set to `true`,\nsince drafts are not available through the\nContent Delivery API.\n\n#### `includeArchived` [boolean] [default: false]\n\nInclude archived entries in the exported entries\n\n#### `skipContentModel` [boolean] [default: false]\n\nSkip exporting content models\n\n#### `skipEditorInterfaces` [boolean] [default: false]\n\nSkip exporting editor interfaces\n\n#### `skipContent` [boolean] [default: false]\n\nSkip exporting assets and entries\n\n#### `skipRoles` [boolean] [default: false]\n\nSkip exporting roles and permissions\n\n#### `skipTags` [boolean] [default: false]\n\nSkip exporting tags\n\n#### `skipWebhooks` [boolean] [default: false]\n\nSkip exporting webhooks\n\n#### `stripTags` [boolean] [default: 
false]\n\nUntag assets and entries\n\n#### `contentOnly` [boolean] [default: false]\n\nOnly export entries and assets\n\n#### `queryEntries` [array]\n\nOnly export entries that match these queries\n\n#### `queryAssets` [array]\n\nOnly export assets that match these queries\n\n#### `downloadAssets` [boolean]\n\nDownload actual asset files\n\n### Connection\n\n#### `host` [string] [default: 'api.contentful.com']\n\nThe Management API host\n\n#### `hostDelivery` [string] [default: 'cdn.contentful.com']\n\nThe Delivery API host\n\n#### `proxy` [string]\n\nProxy configuration in HTTP auth format: `host:port` or `user:password@host:port`\n\n#### `rawProxy` [boolean]\n\nPass proxy config to Axios instead of creating a custom httpsAgent\n\n#### `maxAllowedLimit` [number] [default: 1000]\n\nThe number of items per page per request\n\n#### `headers` [object]\n\nAdditional headers to attach to the requests.\n\n### Other\n\n#### `errorLogFile` [string]\n\nFull path to the error log file\n\n#### `useVerboseRenderer` [boolean] [default: false]\n\nDisplay progress in new lines instead of displaying a busy spinner and the status in the same line. Useful for CI.\n\n## :rescue_worker_helmet: Troubleshooting\n\n### Proxy\n\nUnable to connect to Contentful through your proxy? Try to set the `rawProxy` option to `true`.\n\n```javascript\ncontentfulExport({\n  proxy: 'https://cat:dog@example.com:1234',\n  rawProxy: true,\n  ...\n})\n```\n\n### Error: 400 - Bad Request - Response size too big.\n\nContentful response sizes are limited (find more info in our [technical limit docs](https://www.contentful.com/developers/docs/technical-limits/)). 
To resolve this issue, limit the number of entities received in a single request by setting the [`maxAllowedLimit`](#maxallowedlimit-number-default-1000) option:\n\n```javascript\ncontentfulExport({\n  maxAllowedLimit: 50,\n  ...\n})\n```\n\n### Embargoed Assets\n\nIf a space is configured to use the [embargoed assets feature](https://www.contentful.com/help/media/embargoed-assets/), certain options will need to be set to use the export/import tooling. When exporting content, the `downloadAssets` option must be set to `true`. This will download the asset files to your local machine. Then, when importing content ([using `contentful-import`](https://github.com/contentful/contentful-import)), the `uploadAssets` option must be set to `true` and the `assetsDirectory` must be set to the directory that contains all of the exported asset folders.\n\n```javascript\nconst contentfulExport = require('contentful-export')\n\nconst options = {\n  spaceId: '<space_id>',\n  managementToken: '<content_management_api_key>',\n  downloadAssets: true\n}\n\ncontentfulExport(options)\n```\n\n## :card_file_box: Exported data structure\n\nThis is an overview of the exported data:\n\n```json\n{\n  \"contentTypes\": [],\n  \"entries\": [],\n  \"assets\": [],\n  \"locales\": [],\n  \"tags\": [],\n  \"webhooks\": [],\n  \"roles\": [],\n  \"editorInterfaces\": []\n}\n```\n\n_Note:_ The tags feature is not available to all users. If you do not have access to this feature, the tags array will always be empty.\n\n## :warning: Limitations\n\n- This tool currently does **not** support the export of space memberships.\n- Exported webhooks with credentials will be exported as normal webhooks. 
Credentials should be added manually afterwards.\n- If you have custom UI extensions, you need to reinstall them manually in the new space.\n\n## :memo: Changelog\n\nRead the [releases](https://github.com/contentful/contentful-export/releases) page for more information.\n\n## :scroll: License\n\nThis project is licensed under the MIT license.\n\n## For AI Agents\n\n<!-- Generated by seed-golden-context | Last updated: 2026-05-04 -->\nIf you are an AI coding agent working in this repository, read [AGENTS.md](./AGENTS.md) first. It tells you where to find architectural context, development setup, decision records, and repo-specific rules.\n"
  },
  {
    "path": "babel.config.json",
    "content": "{\n  \"presets\": [\n    [\n      \"@babel/preset-env\",\n      {\n        \"targets\": {\n          \"node\": \"12\"\n        }\n      }\n    ]\n  ],\n  \"plugins\": [\n    \"@babel/plugin-proposal-object-rest-spread\",\n    \"add-module-exports\"\n  ]\n}\n"
  },
  {
    "path": "bin/contentful-export",
    "content": "#!/usr/bin/env node\n\n// eslint-disable-next-line\nconst runContentfulExport = require('../dist/index')\n// eslint-disable-next-line\nconst usageParams = require('../dist/usageParams')\n\nconsole.log('We moved the CLI version of this tool into our Contentful CLI.\\nThis allows our users to use and install only one single CLI tool to get the full Contentful experience.\\nFor more info please visit https://github.com/contentful/contentful-cli/tree/master/docs/space/export')\n\nrunContentfulExport(usageParams)\n  .then(() => {\n    process.exit(0)\n  })\n  .catch((err) => {\n    if (err.name !== 'ContentfulMultiError') {\n      console.error(err)\n    }\n    process.exit(1)\n  })\n"
  },
  {
    "path": "catalog-info.yaml",
    "content": "apiVersion: backstage.io/v1alpha1\nkind: Component\nmetadata:\n  name: contentful-export\n  description: |\n    This tool allows you to export a Contentful space to a JSON dump.\n  annotations:\n    circleci.com/project-slug: github/contentful/contentful-export\n    github.com/project-slug: contentful/contentful-export\n    contentful.com/ci-alert-slack: prd-ecosystem-dx-bots\n    contentful.com/service-tier: \"4\"\n  tags:\n    - tier-4\nspec:\n  type: cli\n  lifecycle: production\n  owner: group:team-developer-experience\n"
  },
  {
    "path": "docs/ADRs/2016-06-20-babel-cjs-build-pipeline.md",
    "content": "<!-- Generated by seed-golden-context | Last updated: 2026-05-04 -->\n# Use Babel to Transpile ES2015+ to CommonJS\n\n## Status\n\nAccepted\n\n## Context\n\nThe project was created in June 2016, when ES module support in Node.js was not yet available. The codebase is written using ES2015+ syntax (import/export, arrow functions, object spread) but needed to produce CommonJS output compatible with the Node.js ecosystem and npm consumers.\n\n## Decision\n\nUse Babel (`@babel/preset-env` targeting Node 12, with `@babel/plugin-proposal-object-rest-spread` and `babel-plugin-add-module-exports`) to transpile `lib/` source to `dist/` as CommonJS modules. The `main` field in `package.json` points to `dist/index.js`.\n\nThe `add-module-exports` Babel plugin ensures that `module.exports = exports.default` is added, allowing CommonJS consumers to `require('contentful-export')` without `.default`.\n\n## Consequences\n\n- Source files use ES module syntax (`import`/`export`), but the published package is CJS-only.\n- The `dist/` directory is a build artifact and must be compiled before testing or publishing.\n- Babel config has not been updated to target modern Node versions (still targets Node 12 in `babel.config.json` despite `engines.node >= 22` in `package.json`). This is a known inconsistency -- the low target is harmless since all output runs on Node 22+.\n- A future migration to native ESM or dual CJS/ESM publishing would require updating the build pipeline.\n"
  },
  {
    "path": "docs/ADRs/2016-06-20-semantic-release.md",
    "content": "<!-- Generated by seed-golden-context | Last updated: 2026-05-04 -->\n# Use semantic-release for Automated Versioning and Publishing\n\n## Status\n\nAccepted\n\n## Context\n\nThe project needed an automated release process that derives version numbers from commit messages and publishes to npm without manual intervention. Conventional Commits were already adopted as the commit message standard.\n\nContext not found for the original decision rationale -- likely a default/inherited choice from Contentful's ecosystem tooling practices. The project has used semantic-release since its early days.\n\n## Decision\n\nUse `semantic-release` with the following plugins:\n1. `@semantic-release/commit-analyzer` -- Determines version bump from commit messages. Custom rule: `build(deps)` commits trigger a patch release.\n2. `@semantic-release/release-notes-generator` -- Generates changelog from commits.\n3. `@semantic-release/npm` -- Publishes to npm.\n4. `@semantic-release/github` -- Creates GitHub releases.\n\nTwo release branches are configured:\n- `main` -- Stable releases\n- `beta` -- Pre-release channel with `beta` dist-tag\n\n## Consequences\n\n- Version numbers are never manually set (`\"version\": \"0.0.0-determined-by-semantic-release\"` in `package.json`).\n- Every merge to `main` or `beta` that includes a `feat:`, `fix:`, or `build(deps):` commit triggers a release.\n- Contributors must follow conventional commit format for their changes to appear in releases.\n- Commitizen (`cz-conventional-changelog`) is configured to guide commit message formatting.\n"
  },
  {
    "path": "docs/ADRs/2024-12-02-typescript-check-only.md",
    "content": "<!-- Generated by seed-golden-context | Last updated: 2026-05-04 -->\n# Add TypeScript Type Checking (No Emit)\n\n## Status\n\nAccepted\n\n## Context\n\nThe codebase is written in JavaScript but lacked type safety. TypeScript's `checkJs` feature allows type-checking existing `.js` files without rewriting them in `.ts`. A `types.d.ts` file was already maintained for the public API surface.\n\nSource: git commit `61c42cd` (2024-12-02) -- \"chore: use tsconfig for type checking\"\n\n## Decision\n\nAdd a `tsconfig.json` with `\"noEmit\": true`, `\"allowJs\": true`, and `\"checkJs\": true` to enable TypeScript type-checking on the existing JavaScript source without changing the Babel build pipeline. The `npm run check` script runs `tsc`, and `npm run build` calls `check` before the Babel compilation step.\n\nStrict mode is deliberately left off (`\"strict\": false`) to avoid a large-scale type fix effort on the legacy codebase.\n\n## Consequences\n\n- Type errors are caught at build time without requiring a full TypeScript migration.\n- The Babel build pipeline remains the sole compiler for producing `dist/` output.\n- `types.d.ts` continues to be hand-maintained separately from the source.\n- Future contributors can incrementally add JSDoc type annotations to improve type coverage.\n"
  },
  {
    "path": "docs/ADRs/2025-11-14-circleci-to-github-actions.md",
    "content": "<!-- Generated by seed-golden-context | Last updated: 2026-05-04 -->\n# Migrate CI/CD from CircleCI to GitHub Actions\n\n## Status\n\nAccepted\n\n## Context\n\nThe project originally used CircleCI for CI/CD. As part of a coordinated initiative (DX-541), publishing pipelines were migrated to GitHub Actions for consistency, improved secret management via HashiCorp Vault, and reduced tooling fragmentation.\n\nSource: git commit `00b792d` (2025-11-14) -- \"chore: migrate publishing pipeline to GHA from CircleCI [DX-541]\"\n\n## Decision\n\nReplace the CircleCI pipeline with GitHub Actions workflows:\n- `main.yaml` -- Orchestrates build, check, and release jobs\n- `build.yaml` -- Installs dependencies, compiles with Babel, caches `dist/`\n- `check.yaml` -- Runs unit and integration tests using cached build artifacts\n- `release.yaml` -- Retrieves credentials from HashiCorp Vault and runs `semantic-release`\n- `codeql.yaml` -- CodeQL scanning for GitHub Actions workflows\n- `dependabot-approve-and-request-merge.yaml` -- Auto-approves Dependabot PRs\n\nSecrets (GitHub token) are managed via Vault rather than repository-level GitHub secrets.\n\n## Consequences\n\n- CI/CD is fully GitHub-native, consistent with other Contentful ecosystem repositories.\n- Release credentials are fetched at runtime from Vault, reducing secret sprawl.\n- Build artifacts are cached between jobs using `actions/cache`.\n- The `catalog-info.yaml` annotation still references the old CircleCI slug (`circleci.com/project-slug`), which is a stale reference.\n"
  },
  {
    "path": "docs/ADRs/2026-04-09-cma-v12-node-22-minimum.md",
    "content": "<!-- Generated by seed-golden-context | Last updated: 2026-05-04 -->\n# Update to CMA.js v12 and Drop Node <22 Support\n\n## Status\n\nAccepted\n\n## Context\n\nThe `contentful-management` SDK released v12 with breaking changes that required Node.js 22 or later. This was a coordinated update across multiple Contentful ecosystem repositories (contentful-export, contentful-import, contentful-cli, field-editors).\n\nSource: git commit `6b9c061` (2026-04-09) -- \"fix!: update to CMA.js v12 and drop Node <22 support [DX-781]\"\n\n## Decision\n\nUpgrade `contentful-management` to v12, bump `contentful-batch-libs` to v11 (compatible with CMA v12), and set `engines.node` to `>=22`. The `.nvmrc` was updated to `24` to match the CI environment. This was a breaking change for consumers running older Node.js versions.\n\n## Consequences\n\n- Node.js 22 is the minimum supported version for this package.\n- Consumers on Node <22 must stay on the previous major version.\n- CI workflows use Node 24 (matching `.nvmrc`).\n- The `contentful-batch-libs` v11 update was a necessary companion change.\n- This aligns the package with the rest of the Contentful ecosystem's Node.js support matrix.\n"
  },
  {
    "path": "docs/ADRs/README.md",
    "content": "<!-- Generated by seed-golden-context | Last updated: 2026-05-04 -->\n# Architecture Decision Records\n\n| ADR | Date | Status | Title |\n|---|---|---|---|\n| [001](./2016-06-20-babel-cjs-build-pipeline.md) | 2016-06-20 | Accepted | Use Babel to transpile ES2015+ to CommonJS |\n| [002](./2016-06-20-semantic-release.md) | 2016-06-20 | Accepted | Use semantic-release for automated versioning and publishing |\n| [003](./2024-12-02-typescript-check-only.md) | 2024-12-02 | Accepted | Add TypeScript type checking (no emit) |\n| [004](./2025-11-14-circleci-to-github-actions.md) | 2025-11-14 | Accepted | Migrate CI/CD from CircleCI to GitHub Actions |\n| [005](./2026-04-09-cma-v12-node-22-minimum.md) | 2026-04-09 | Accepted | Update to CMA.js v12 and drop Node <22 support |\n"
  },
  {
    "path": "docs/specs/.gitkeep",
    "content": ""
  },
  {
    "path": "docs/specs/README.md",
    "content": "# Specs\n\nImplementation-level specs for active work in this repo. Format: `YYYY-MM-<spec_name>.md`.\nSpecs here represent current intent -- archive or remove when work ships.\n"
  },
  {
    "path": "example-config.json",
    "content": "{\n  \"spaceId\": \"source space id\",\n  \"environmentId\": \"master\",\n  \"managementToken\": \"destination space management token\",\n  \"deliveryToken\": \"token to export both entries and assets from the Contentful Delivery API\",\n  \"exportDir\": \"/path/to/export/directory\",\n  \"saveFile\": true,\n  \"contentFile\": \"export.json\",\n  \"includeDrafts\": false,\n  \"includeArchived\": false,\n  \"skipContentModel\": false,\n  \"skipEditorInterfaces\": false,\n  \"skipContent\": false,\n  \"skipRoles\": false,\n  \"skipWebhooks\": false,\n  \"contentOnly\": false,\n  \"queryEntries\": [\n    \"content_type=<content_type_id>\",\n    \"sys.id=<entry_id>\",\n    \"limit=1000\"\n  ],\n  \"queryAssets\": [\n    \"fields.title=Example\"\n  ],\n  \"downloadAssets\": false,\n  \"host\": \"api.contentful.com\",\n  \"proxy\": \"https://user:password@host:port\",\n  \"rawProxy\": false,\n  \"maxAllowedLimit\": 1000,\n  \"errorLogFile\": \"/path/to/error.log\",\n  \"useVerboseRenderer\": false\n}"
  },
  {
    "path": "example-config.test.json",
    "content": "{\n  \"spaceId\": \"source space id\",\n  \"managementToken\": \"destination space management token\"\n}\n"
  },
  {
    "path": "lib/index.js",
    "content": "import { access } from 'fs'\n\nimport bfj from 'bfj'\nimport Promise from 'bluebird'\nimport Table from 'cli-table3'\nimport Listr from 'listr'\nimport UpdateRenderer from 'listr-update-renderer'\nimport VerboseRenderer from 'listr-verbose-renderer'\nimport startCase from 'lodash.startcase'\nimport mkdirp from 'mkdirp'\nimport { differenceInSeconds } from 'date-fns/differenceInSeconds'\nimport { formatDistance } from 'date-fns/formatDistance'\n\nimport {\n  setupLogging,\n  displayErrorLog,\n  wrapTask,\n  writeErrorLogFile\n} from 'contentful-batch-libs'\n\nimport downloadAssets from './tasks/download-assets'\nimport getSpaceData from './tasks/get-space-data'\nimport initClient from './tasks/init-client'\n\nimport parseOptions from './parseOptions'\n\nconst accessP = Promise.promisify(access)\n\nconst tableOptions = {\n  // remove ANSI color codes for better CI/CD compatibility\n  style: { head: [], border: [] }\n}\n\nfunction createListrOptions (options) {\n  if (options.useVerboseRenderer) {\n    return {\n      renderer: VerboseRenderer\n    }\n  }\n  return {\n    renderer: UpdateRenderer,\n    collapse: false\n  }\n}\n\nexport default function runContentfulExport (params) {\n  const log = []\n  const options = parseOptions(params)\n\n  const listrOptions = createListrOptions(options)\n\n  // Setup custom error listener to store errors for later\n  setupLogging(log)\n\n  const tasks = new Listr(\n    [\n      {\n        title: 'Initialize client',\n        task: wrapTask((ctx) => {\n          try {\n            // CMA client\n            ctx.client = initClient(options)\n            if (options.deliveryToken && !options.includeDrafts) {\n              // CDA client for fetching only public entries\n              ctx.cdaClient = initClient(options, true)\n            }\n            return Promise.resolve()\n          } catch (err) {\n            return Promise.reject(err)\n          }\n        })\n      },\n      {\n        title: 'Fetching data 
from space',\n        task: (ctx) => {\n          return getSpaceData({\n            client: ctx.client,\n            cdaClient: ctx.cdaClient,\n            spaceId: options.spaceId,\n            environmentId: options.environmentId,\n            maxAllowedLimit: options.maxAllowedLimit,\n            includeDrafts: options.includeDrafts,\n            includeArchived: options.includeArchived,\n            skipContentModel: options.skipContentModel,\n            skipEditorInterfaces: options.skipEditorInterfaces,\n            skipContent: options.skipContent,\n            skipWebhooks: options.skipWebhooks,\n            skipRoles: options.skipRoles,\n            skipTags: options.skipTags,\n            stripTags: options.stripTags,\n            listrOptions,\n            queryEntries: options.queryEntries,\n            queryAssets: options.queryAssets\n          })\n        }\n      },\n      {\n        title: 'Download assets',\n        task: wrapTask(downloadAssets(options)),\n        skip: (ctx) =>\n          !options.downloadAssets ||\n          !Object.prototype.hasOwnProperty.call(ctx.data, 'assets')\n      },\n      {\n        title: 'Write export log file',\n        task: () => {\n          return new Listr([\n            {\n              title: 'Lookup directory to store the logs',\n              task: (ctx) => {\n                return accessP(options.exportDir)\n                  .then(() => {\n                    ctx.logDirectoryExists = true\n                  })\n                  .catch(() => {\n                    ctx.logDirectoryExists = false\n                  })\n              }\n            },\n            {\n              title: 'Create log directory',\n              task: () => {\n                return mkdirp(options.exportDir)\n              },\n              skip: (ctx) => !ctx.logDirectoryExists\n            },\n            {\n              title: 'Writing data to file',\n              task: (ctx) => {\n                return 
bfj.write(options.logFilePath, ctx.data, {\n                  circular: 'ignore',\n                  space: 2\n                })\n              }\n            }\n          ])\n        },\n        skip: () => !options.saveFile\n      }\n    ],\n    listrOptions\n  )\n\n  return tasks\n    .run({\n      data: {}\n    })\n    .then((ctx) => {\n      const resultTypes = Object.keys(ctx.data)\n      if (resultTypes.length) {\n        const resultTable = new Table(tableOptions)\n\n        resultTable.push([{ colSpan: 2, content: 'Exported entities' }])\n\n        resultTypes.forEach((type) => {\n          resultTable.push([startCase(type), ctx.data[type].length])\n        })\n\n        console.log(resultTable.toString())\n      } else {\n        console.log('No data was exported')\n      }\n\n      if ('assetDownloads' in ctx) {\n        const downloadsTable = new Table(tableOptions)\n        downloadsTable.push([\n          { colSpan: 2, content: 'Asset file download results' }\n        ])\n        downloadsTable.push(['Successful', ctx.assetDownloads.successCount])\n        downloadsTable.push(['Warnings ', ctx.assetDownloads.warningCount])\n        downloadsTable.push(['Errors ', ctx.assetDownloads.errorCount])\n        console.log(downloadsTable.toString())\n      }\n\n      const endTime = new Date()\n      const durationHuman = formatDistance(endTime, options.startTime)\n      const durationSeconds = differenceInSeconds(endTime, options.startTime)\n\n      console.log(`The export took ${durationHuman} (${durationSeconds}s)`)\n      if (options.saveFile) {\n        console.log(\n          `\\nStored space data to json file at: ${options.logFilePath}`\n        )\n      }\n      return ctx.data\n    })\n    .catch((err) => {\n      log.push({\n        ts: new Date().toJSON(),\n        level: 'error',\n        error: err\n      })\n    })\n    .then((data) => {\n      // @todo this should live in batch libs\n      const errorLog = log.filter(\n        (logMessage) 
=>\n          logMessage.level !== 'info' && logMessage.level !== 'warning'\n      )\n      const displayLog = log.filter(\n        (logMessage) => logMessage.level !== 'info'\n      )\n      displayErrorLog(displayLog)\n\n      if (errorLog.length) {\n        return writeErrorLogFile(options.errorLogFile, errorLog).then(() => {\n          const multiError = new Error('Errors occurred')\n          multiError.name = 'ContentfulMultiError'\n          Object.assign(multiError, { errors: errorLog })\n          throw multiError\n        })\n      }\n\n      console.log('The export was successful.')\n\n      return data\n    })\n}\n"
  },
  {
    "path": "lib/parseOptions.js",
    "content": "import { addSequenceHeader, agentFromProxy, proxyStringToObject } from 'contentful-batch-libs'\nimport { format } from 'date-fns/format'\nimport { resolve } from 'path'\nimport qs from 'querystring'\n\nimport { version } from '../package.json'\nimport { getHeadersConfig } from './utils/headers'\n\nexport default function parseOptions (params) {\n  const defaultOptions = {\n    environmentId: 'master',\n    exportDir: process.cwd(),\n    includeDrafts: false,\n    includeArchived: false,\n    skipRoles: false,\n    skipContentModel: false,\n    skipEditorInterfaces: false,\n    skipContent: false,\n    skipWebhooks: false,\n    skipTags: false,\n    stripTags: false,\n    maxAllowedLimit: 1000,\n    saveFile: true,\n    useVerboseRenderer: false,\n    rawProxy: false\n  }\n\n  const configFile = params.config\n    // eslint-disable-next-line @typescript-eslint/no-require-imports\n    ? require(resolve(process.cwd(), params.config))\n    : {}\n\n  const options = {\n    ...defaultOptions,\n    ...configFile,\n    ...params,\n    headers: addSequenceHeader(params.headers || getHeadersConfig(params.header))\n  }\n\n  // Validation\n  if (!options.spaceId) {\n    throw new Error('The `spaceId` option is required.')\n  }\n\n  if (!options.managementToken) {\n    throw new Error('The `managementToken` option is required.')\n  }\n\n  options.startTime = new Date()\n  options.contentFile = options.contentFile || `contentful-export-${options.spaceId}-${options.environmentId}-${format(options.startTime, \"yyyy-MM-dd'T'HH-mm-ss\")}.json`\n\n  options.logFilePath = resolve(options.exportDir, options.contentFile)\n\n  if (!options.errorLogFile) {\n    options.errorLogFile = resolve(options.exportDir, `contentful-export-error-log-${options.spaceId}-${options.environmentId}-${format(options.startTime, \"yyyy-MM-dd'T'HH-mm-ss\")}.json`)\n  } else {\n    options.errorLogFile = resolve(process.cwd(), options.errorLogFile)\n  }\n\n  // Further processing\n  
options.accessToken = options.managementToken\n\n  if (options.proxy) {\n    if (typeof options.proxy === 'string') {\n      const proxySimpleExp = /.+:\\d+/\n      const proxyAuthExp = /.+:.+@.+:\\d+/\n      if (!(proxySimpleExp.test(options.proxy) || proxyAuthExp.test(options.proxy))) {\n        throw new Error('Please provide the proxy config in the following format:\\nhost:port or user:password@host:port')\n      }\n      options.proxy = proxyStringToObject(options.proxy)\n    }\n\n    if (!options.rawProxy) {\n      options.httpsAgent = agentFromProxy(options.proxy)\n      delete options.proxy\n    }\n  }\n\n  if (options.queryEntries && options.queryEntries.length > 0) {\n    const querystr = options.queryEntries.join('&')\n    options.queryEntries = qs.parse(querystr)\n  }\n\n  if (options.queryAssets && options.queryAssets.length > 0) {\n    const querystr = options.queryAssets.join('&')\n    options.queryAssets = qs.parse(querystr)\n  }\n\n  if (options.contentOnly) {\n    options.skipRoles = true\n    options.skipContentModel = true\n    options.skipWebhooks = true\n  }\n\n  options.application = options.managementApplication || `contentful.export/${version}`\n  options.feature = options.managementFeature || 'library-export'\n  return options\n}\n"
  },
  {
    "path": "lib/tasks/download-assets.js",
    "content": "import Promise from 'bluebird'\nimport { getEntityName } from 'contentful-batch-libs'\nimport figures from 'figures'\nimport { createWriteStream, promises as fs } from 'fs'\nimport path from 'path'\nimport { pipeline } from 'stream'\nimport { promisify } from 'util'\nimport { calculateExpiryTimestamp, isEmbargoedAsset, signUrl } from '../utils/embargoedAssets'\nimport axios from 'axios'\n\nconst streamPipeline = promisify(pipeline)\n\n/**\n * @param {Object} options - The options for downloading the asset.\n * @param {string} options.url - The URL of the asset to download.\n * @param {string} options.directory - The directory where the asset should be saved.\n * @param {import('axios').AxiosInstance} options.httpClient - The HTTP client to use for downloading the asset.\n */\nasync function downloadAsset ({ url, directory, httpClient }) {\n// handle urls without protocol\n  if (url.startsWith('//')) {\n    url = 'https:' + url\n  }\n\n  // build local file path from the url for the download\n  const parsedUrl = new URL(url)\n  const decodedPathname = decodeURIComponent(parsedUrl.pathname)\n  const localFile = path.join(directory, parsedUrl.host, decodedPathname)\n\n  // ensure directory exists and create file stream\n  await fs.mkdir(path.dirname(localFile), { recursive: true })\n  const file = createWriteStream(localFile)\n\n  try {\n    // download asset\n    const assetRequest = await httpClient.get(url, {\n      responseType: 'stream',\n      transformResponse: [(data) => data]\n    })\n\n    // Wait for stream to be consumed before returning local file\n    await streamPipeline(assetRequest.data, file)\n    return localFile\n  } catch (e) {\n    /**\n     * @type {import('axios').AxiosError}\n     */\n    const axiosError = e\n    throw new Error(`error response status: ${axiosError.response.status}`)\n  }\n}\n\nexport default function downloadAssets (options) {\n  return (ctx, task) => {\n    let successCount = 0\n    let warningCount = 0\n    
let errorCount = 0\n\n    const httpClient = axios.create({\n      headers: options.headers,\n      timeout: options.timeout,\n      httpAgent: options.httpAgent,\n      httpsAgent: options.httpsAgent,\n      proxy: options.proxy\n    })\n\n    return Promise.map(ctx.data.assets, (asset) => {\n      const entityName = getEntityName(asset)\n      if (!asset.fields.file) {\n        task.output = `${figures.warning} asset ${entityName} has no file(s)`\n        warningCount++\n        return\n      }\n      const locales = Object.keys(asset.fields.file)\n      return Promise.mapSeries(locales, (locale) => {\n        const url = asset.fields.file[locale].url\n        if (!url) {\n          task.output = `${figures.cross} asset '${entityName}' doesn't contain a URL in path asset.fields.file[${locale}].url`\n          errorCount++\n\n          return Promise.resolve()\n        }\n\n        let startingPromise = Promise.resolve({ url, directory: options.exportDir, httpClient })\n\n        if (isEmbargoedAsset(url)) {\n          const { host, accessToken, spaceId, environmentId } = options\n          const expiresAtMs = calculateExpiryTimestamp()\n\n          startingPromise = signUrl(host, accessToken, spaceId, environmentId, url, expiresAtMs, httpClient)\n            .then((signedUrl) => ({ url: signedUrl, directory: options.exportDir, httpClient }))\n        }\n\n        return startingPromise\n          .then(downloadAsset)\n          .then(() => {\n            task.output = `${figures.tick} downloaded ${entityName} (${url})`\n            successCount++\n          })\n          .catch((error) => {\n            task.output = `${figures.cross} error downloading ${url}: ${error.message}`\n            errorCount++\n          })\n      })\n    }, {\n      concurrency: 6\n    })\n      .then(() => {\n        ctx.assetDownloads = {\n          successCount,\n          warningCount,\n          errorCount\n        }\n      })\n  }\n}\n"
  },
  {
    "path": "lib/tasks/get-space-data.js",
    "content": "import Promise from 'bluebird'\nimport { logEmitter, wrapTask } from 'contentful-batch-libs'\nimport Listr from 'listr'\nimport verboseRenderer from 'listr-verbose-renderer'\n\nconst MAX_ALLOWED_LIMIT = 1000\nlet pageLimit = MAX_ALLOWED_LIMIT\n\n/**\n * Gets all the content from a space via the management API. This includes\n * content in draft state.\n */\nexport default function getFullSourceSpace ({\n  client,\n  cdaClient,\n  spaceId,\n  environmentId = 'master',\n  skipContentModel,\n  skipContent,\n  skipWebhooks,\n  skipRoles,\n  skipEditorInterfaces,\n  skipTags,\n  stripTags,\n  includeDrafts,\n  includeArchived,\n  maxAllowedLimit,\n  listrOptions,\n  queryEntries,\n  queryAssets\n}) {\n  pageLimit = maxAllowedLimit || MAX_ALLOWED_LIMIT\n  listrOptions = listrOptions || {\n    renderer: verboseRenderer\n  }\n\n  return new Listr([\n    {\n      title: 'Connecting to space',\n      task: wrapTask((ctx) => {\n        return client.getSpace(spaceId)\n          .then((space) => {\n            ctx.space = space\n            return space.getEnvironment(environmentId)\n          })\n          .then((environment) => {\n            ctx.environment = environment\n          })\n      })\n    },\n    {\n      title: 'Fetching content types data',\n      task: wrapTask((ctx) => {\n        return pagedGet({ source: ctx.environment, method: 'getContentTypes' })\n          .then(extractItems)\n          .then((items) => {\n            ctx.data.contentTypes = items\n          })\n      }),\n      skip: () => skipContentModel\n    },\n    {\n      title: 'Fetching tags data',\n      task: wrapTask((ctx) => {\n        return pagedGet({ source: ctx.environment, method: 'getTags' })\n          .then(extractItems)\n          .then((items) => {\n            ctx.data.tags = items\n          })\n          .catch(() => {\n            ctx.data.tags = []\n          })\n      }),\n      skip: () => skipTags\n    },\n    {\n      title: 'Fetching editor interfaces 
data',\n      task: wrapTask((ctx) => {\n        return getEditorInterfaces(ctx.data.contentTypes)\n          .then((editorInterfaces) => {\n            ctx.data.editorInterfaces = editorInterfaces.filter((editorInterface) => {\n              return editorInterface !== null\n            })\n          })\n      }),\n      skip: (ctx) => skipContentModel || skipEditorInterfaces || (ctx.data.contentTypes.length === 0 && 'Skipped since no content types downloaded')\n    },\n    {\n      title: 'Fetching content entries data',\n      task: wrapTask((ctx) => {\n        const source = cdaClient?.withAllLocales || ctx.environment\n        if (cdaClient) {\n          // let's not fetch children when using Content Delivery API\n          queryEntries = queryEntries || {}\n          queryEntries.include = 0\n        }\n        return pagedGet({ source, method: 'getEntries', query: queryEntries })\n          .then(extractItems)\n          .then((items) => filterDrafts(items, includeDrafts, cdaClient))\n          .then((items) => filterArchived(items, includeArchived))\n          .then((items) => removeTags(items, stripTags))\n          .then((items) => {\n            ctx.data.entries = items\n          })\n      }),\n      skip: () => skipContent\n    },\n    {\n      title: 'Fetching assets data',\n      task: wrapTask((ctx) => {\n        const source = cdaClient?.withAllLocales || ctx.environment\n        queryAssets = queryAssets || {}\n        return pagedGet({ source, method: 'getAssets', query: queryAssets })\n          .then(extractItems)\n          .then((items) => filterDrafts(items, includeDrafts, cdaClient))\n          .then((items) => filterArchived(items, includeArchived))\n          .then((items) => removeTags(items, stripTags))\n          .then((items) => {\n            ctx.data.assets = items\n          })\n      }),\n      skip: () => skipContent\n    },\n    {\n      title: 'Fetching locales data',\n      task: wrapTask((ctx) => {\n        return pagedGet({ 
source: ctx.environment, method: 'getLocales' })\n          .then(extractItems)\n          .then((items) => {\n            ctx.data.locales = items\n          })\n      }),\n      skip: () => skipContentModel\n    },\n    {\n      title: 'Fetching webhooks data',\n      task: wrapTask((ctx) => {\n        return pagedGet({ source: ctx.space, method: 'getWebhooks' })\n          .then(extractItems)\n          .then((items) => {\n            ctx.data.webhooks = items\n          })\n      }),\n      skip: () => skipWebhooks || (environmentId !== 'master' && 'Webhooks can only be exported from master environment')\n    },\n    {\n      title: 'Fetching roles data',\n      task: wrapTask((ctx) => {\n        return pagedGet({ source: ctx.space, method: 'getRoles' })\n          .then(extractItems)\n          .then((items) => {\n            ctx.data.roles = items\n          })\n      }),\n      skip: () => skipRoles || (environmentId !== 'master' && 'Roles can only be exported from master environment')\n    }\n  ], listrOptions)\n}\n\nfunction getEditorInterfaces (contentTypes) {\n  return Promise.map(contentTypes, (contentType) => {\n    return contentType.getEditorInterface()\n      .then((editorInterface) => {\n        logEmitter.emit('info', `Fetched editor interface for ${contentType.name}`)\n        return editorInterface\n      })\n      .catch(() => {\n        // Old content types may not have an editor interface; that's handled at a later stage\n        // and should not stop the data fetching process\n        logEmitter.emit('warning', `No editor interface found for ${contentType.name}`)\n        return Promise.resolve(null)\n      })\n  }, {\n    concurrency: 6\n  })\n}\n\n/**\n * Gets all the existing entities based on pagination parameters.\n * The first call will have no aggregated response. 
Subsequent calls will\n * concatenate the new responses to the original one.\n */\nfunction pagedGet ({ source, method, skip = 0, aggregatedResponse = null, query = null }) {\n  const userQueryLimit = query && query.limit\n  const fetchedTotal = aggregatedResponse && aggregatedResponse.items.length\n  const limit = userQueryLimit ? Math.min(pageLimit, userQueryLimit - fetchedTotal) : pageLimit\n\n  const requestQuery = Object.assign({},\n    {\n      skip,\n      order: 'sys.createdAt,sys.id'\n    },\n    query,\n    {\n      limit\n    }\n  )\n\n  return source[method](requestQuery)\n    .then((response) => {\n      if (!aggregatedResponse) {\n        aggregatedResponse = response\n      } else {\n        aggregatedResponse.items = aggregatedResponse.items.concat(response.items)\n      }\n\n      const totalItemsLength = aggregatedResponse.items.length\n      const total = response.total\n\n      logPagingStatus(response, requestQuery, userQueryLimit)\n\n      const gotAllQueryLimitedItems = userQueryLimit && totalItemsLength >= userQueryLimit\n      const gotAllItems = totalItemsLength >= total\n      const gotNoItems = totalItemsLength <= 0\n      if (gotAllQueryLimitedItems || gotAllItems || gotNoItems) {\n        return aggregatedResponse\n      }\n      return pagedGet({ source, method, skip: skip + response.items.length, aggregatedResponse, query })\n    })\n}\n\nfunction logPagingStatus (response, requestQuery, userLimit) {\n  const { total, limit, items } = response\n  const pagedItemsLength = items.length\n\n  // sometimes our pageLimit or queryLimit of 1000 is overridden by the API (like in locales)\n  const imposedLimit = limit || requestQuery.limit\n  const limitedTotal = userLimit ? 
Math.min(userLimit, total) : total\n  const page = Math.ceil(requestQuery.skip / imposedLimit) + 1\n  const pages = Math.ceil(limitedTotal / imposedLimit)\n  logEmitter.emit('info', `Fetched ${pagedItemsLength} of ${total} items (Page ${page}/${pages})`)\n}\n\nfunction extractItems (response) {\n  return response.items\n}\n\nfunction filterDrafts (items, includeDrafts, cdaClient) {\n  // CDA filters drafts based on host, no need to do filtering here\n  return (includeDrafts || cdaClient) ? items : items.filter((item) => !!item.sys.publishedVersion || !!item.sys.archivedVersion)\n}\n\nfunction filterArchived (items, includeArchived) {\n  return includeArchived ? items : items.filter((item) => !item.sys.archivedVersion)\n}\n\nfunction removeTags (items, stripTags) {\n  if (stripTags) {\n    items.forEach(item => {\n      if (item.metadata?.tags) {\n        item.metadata.tags = []\n      }\n    })\n  }\n  return items\n}\n"
  },
  {
    "path": "lib/tasks/init-client.js",
    "content": "import { createClient as createCdaClient } from 'contentful'\nimport { logEmitter } from 'contentful-batch-libs'\nimport { createClient as createCmaClient } from 'contentful-management'\n\nfunction logHandler (level, data) {\n  logEmitter.emit(level, data)\n}\n\nexport default function initClient (opts, useCda = false) {\n  const defaultOpts = {\n    timeout: 10000,\n    logHandler\n  }\n  const config = {\n    ...defaultOpts,\n    ...opts\n  }\n  if (useCda) {\n    const cdaConfig = {\n      ...config,\n      space: config.spaceId,\n      accessToken: config.deliveryToken,\n      environment: config.environmentId,\n      host: config.hostDelivery\n    }\n    return createCdaClient(cdaConfig).withoutLinkResolution\n  }\n  return createCmaClient(config, { type: 'legacy' })\n}\n"
  },
  {
    "path": "lib/usageParams.js",
    "content": "import yargs from 'yargs'\nimport packageFile from '../package.json'\n\nexport default yargs\n  .version(packageFile.version || 'Version only available on installed package')\n  .usage('Usage: $0 [options]')\n  .option('space-id', {\n    describe: 'ID of Space with source data',\n    type: 'string',\n    demand: true\n  })\n  .option('environment-id', {\n    describe: 'ID of Environment with source data',\n    type: 'string',\n    default: 'master'\n  })\n  .option('management-token', {\n    describe: 'Contentful management API token for the space to be exported',\n    type: 'string',\n    demand: true\n  })\n  .option('delivery-token', {\n    describe: 'Contentful Content Delivery API token for the space to be exported',\n    type: 'string'\n  })\n  .option('export-dir', {\n    describe: 'Defines the path for storing the export json file (default path is the current directory)',\n    type: 'string'\n  })\n  .option('include-drafts', {\n    describe: 'Include drafts in the exported entries',\n    type: 'boolean',\n    default: false\n  })\n  .option('include-archived', {\n    describe: 'Include archived entries in the exported entries',\n    type: 'boolean',\n    default: false\n  })\n  .option('skip-content-model', {\n    describe: 'Skip exporting content models',\n    type: 'boolean',\n    default: false\n  })\n  .option('skip-content', {\n    describe: 'Skip exporting assets and entries',\n    type: 'boolean',\n    default: false\n  })\n  .option('skip-roles', {\n    describe: 'Skip exporting roles and permissions',\n    type: 'boolean',\n    default: false\n  })\n  .options('skip-tags', {\n    describe: 'Skip exporting tags',\n    type: 'boolean',\n    default: false\n  })\n  .option('skip-webhooks', {\n    describe: 'Skip exporting webhooks',\n    type: 'boolean',\n    default: false\n  })\n  .options('strip-tags', {\n    describe: 'Untag assets and entries',\n    type: 'boolean',\n    default: false\n  })\n  .option('content-only', {\n    
describe: 'Only export entries and assets',\n    type: 'boolean',\n    default: false\n  })\n  .option('download-assets', {\n    describe: 'With this flag asset files will also be downloaded',\n    type: 'boolean'\n  })\n  .option('max-allowed-limit', {\n    describe: 'How many items per page per request',\n    type: 'number',\n    default: 1000\n  })\n  .option('host', {\n    describe: 'Management API host',\n    type: 'string',\n    default: 'api.contentful.com'\n  })\n  .option('host-delivery', {\n    describe: 'Delivery API host',\n    type: 'string',\n    default: 'cdn.contentful.com'\n  })\n  .option('proxy', {\n    describe: 'Proxy configuration in HTTP auth format: [http|https]://host:port or [http|https]://user:password@host:port',\n    type: 'string'\n  })\n  .option('raw-proxy', {\n    describe: 'Pass proxy config to Axios instead of creating a custom httpsAgent',\n    type: 'boolean',\n    default: false\n  })\n  .option('error-log-file', {\n    describe: 'Full path to the error log file',\n    type: 'string'\n  })\n  .option('query-entries', {\n    describe: 'Exports only entries that match these queries',\n    type: 'array'\n  })\n  .option('query-assets', {\n    describe: 'Exports only assets that match these queries',\n    type: 'array'\n  })\n  .option('content-file', {\n    describe: 'The filename for the exported data',\n    type: 'string'\n  })\n  .option('save-file', {\n    describe: 'Save the export as a json file',\n    type: 'boolean',\n    default: true\n  })\n  .option('use-verbose-renderer', {\n    describe: 'Display progress in new lines instead of displaying a busy spinner and the status in the same line. Useful for CI.',\n    type: 'boolean',\n    default: false\n  })\n  .option('header', {\n    alias: 'H',\n    type: 'string',\n    describe: 'Pass an additional HTTP Header'\n  })\n  .config('config', 'An optional configuration JSON file containing all the options for a single run')\n  .argv\n"
  },
  {
    "path": "lib/utils/embargoedAssets.js",
    "content": "import jwt from 'jsonwebtoken'\n\nconst SIX_HOURS_IN_MS = 6 * 60 * 60 * 1000\nconst assetKeyCache = new Map()\n\n/**\n * @param {string} host - The Contentful API host.\n * @param {string} accessToken - The access token for the Contentful API.\n * @param {string} spaceId - The ID of the Contentful space.\n * @param {string} environmentId - The ID of the Contentful environment.\n * @param {number} expiresAtMs - The expiration time in milliseconds.\n * @param {import('axios').AxiosInstance} httpClient - The HTTP client to use for requests.\n */\nfunction createAssetKey (host, accessToken, spaceId, environmentId, expiresAtMs, httpClient) {\n  return httpClient(`https://${host}/spaces/${spaceId}/environments/${environmentId}/asset_keys`, {\n    method: 'POST',\n    data: JSON.stringify({\n      expiresAt: Math.floor(expiresAtMs / 1000) // in seconds\n    }),\n    headers: {\n      Authorization: `Bearer ${accessToken}`,\n      'Content-Type': 'application/json'\n    }\n  })\n}\n\nexport const shouldCreateNewCacheItem = (cacheItem, currentExpiresAtMs) =>\n  !cacheItem || currentExpiresAtMs - cacheItem.expiresAtMs > SIX_HOURS_IN_MS\n\nasync function createCachedAssetKey (host, accessToken, spaceId, environmentId, minExpiresAtMs, httpClient) {\n  const cacheKey = `${host}:${spaceId}:${environmentId}`\n  let cacheItem = assetKeyCache.get(cacheKey)\n\n  if (shouldCreateNewCacheItem(cacheItem, minExpiresAtMs)) {\n    const expiresAtMs = calculateExpiryTimestamp()\n\n    if (minExpiresAtMs > expiresAtMs) {\n      throw new Error(`Cannot fetch an asset key so far in the future: ${minExpiresAtMs} > ${expiresAtMs}`)\n    }\n\n    try {\n      const assetKeyResponse = await createAssetKey(host, accessToken, spaceId, environmentId, expiresAtMs, httpClient)\n      cacheItem = { expiresAtMs, result: assetKeyResponse.data }\n      assetKeyCache.set(cacheKey, cacheItem)\n    } catch (err) {\n      // If we encounter an error, make sure to clear the cache item if this 
is the most recent fetch.\n      const curCacheItem = assetKeyCache.get(cacheKey)\n      if (curCacheItem === cacheItem) {\n        assetKeyCache.delete(cacheKey)\n      }\n\n      return Promise.reject(err)\n    }\n  }\n\n  return cacheItem.result\n}\n\nfunction generateSignedToken (secret, urlWithoutQueryParams, expiresAtMs) {\n  // Convert expiresAtMs to seconds, if defined\n  const exp = expiresAtMs ? Math.floor(expiresAtMs / 1000) : undefined\n  return jwt.sign({\n    sub: urlWithoutQueryParams,\n    exp\n  }, secret, { algorithm: 'HS256' })\n}\n\nfunction generateSignedUrl (policy, secret, url, expiresAtMs) {\n  const parsedUrl = new URL(url)\n\n  const urlWithoutQueryParams = parsedUrl.origin + parsedUrl.pathname\n  const token = generateSignedToken(secret, urlWithoutQueryParams, expiresAtMs)\n\n  parsedUrl.searchParams.set('token', token)\n  parsedUrl.searchParams.set('policy', policy)\n\n  return parsedUrl.toString()\n}\n\nexport function isEmbargoedAsset (url) {\n  const pattern = /((images)|(assets)|(downloads)|(videos))\\.secure\\./\n  return pattern.test(url)\n}\n\nexport function calculateExpiryTimestamp () {\n  return Date.now() + SIX_HOURS_IN_MS\n}\n\n/**\n * @param {string} host - The Contentful API host.\n * @param {string} accessToken - The access token for the Contentful API.\n * @param {string} spaceId - The ID of the Contentful space.\n * @param {string} environmentId - The ID of the Contentful environment.\n * @param {string} url - The URL to be signed.\n * @param {number} expiresAtMs - The expiration time in milliseconds.\n * @param {import('axios').AxiosInstance} httpClient - The HTTP client to use for requests.\n */\nexport function signUrl (host, accessToken, spaceId, environmentId, url, expiresAtMs, httpClient) {\n  // handle urls without protocol\n  if (url.startsWith('//')) {\n    url = 'https:' + url\n  }\n\n  return createCachedAssetKey(host, accessToken, spaceId, environmentId, expiresAtMs, httpClient)\n    .then(({ policy, secret 
}) => generateSignedUrl(policy, secret, url, expiresAtMs))\n}\n"
  },
  {
    "path": "lib/utils/headers.js",
    "content": "/**\n * Turn header option into an object. Invalid header values\n * are ignored.\n *\n * @example\n * getHeadersConfig('Accept: Any')\n * // -> {Accept: 'Any'}\n *\n * @example\n * getHeadersConfig(['Accept: Any', 'X-Version: 1'])\n * // -> {Accept: 'Any', 'X-Version': '1'}\n *\n * @param value {string|string[]}\n */\nexport function getHeadersConfig (value) {\n  if (!value) {\n    return {}\n  }\n\n  const values = Array.isArray(value) ? value : [value]\n\n  return values.reduce((headers, value) => {\n    value = value.trim()\n\n    const separatorIndex = value.indexOf(':')\n\n    // Invalid header format\n    if (separatorIndex === -1) {\n      return headers\n    }\n\n    const headerKey = value.slice(0, separatorIndex).trim()\n    const headerValue = value.slice(separatorIndex + 1).trim()\n\n    return {\n      ...headers,\n      [headerKey]: headerValue\n    }\n  }, {})\n}\n"
  },
  {
    "path": "package.json",
    "content": "{\n  \"name\": \"contentful-export\",\n  \"version\": \"0.0.0-determined-by-semantic-release\",\n  \"description\": \"this tool allows you to export a space to a JSON dump\",\n  \"main\": \"dist/index.js\",\n  \"types\": \"types.d.ts\",\n  \"engines\": {\n    \"node\": \">=22\"\n  },\n  \"bin\": {\n    \"contentful-export\": \"./bin/contentful-export\"\n  },\n  \"scripts\": {\n    \"build\": \"npm run clean && npm run check && babel lib --out-dir dist\",\n    \"build:watch\": \"babel lib --out-dir dist --watch\",\n    \"check\": \"tsc\",\n    \"clean\": \"rimraf dist && rimraf coverage\",\n    \"lint\": \"eslint lib bin/* types.d.ts\",\n    \"lint:fix\": \"npm run lint -- --fix\",\n    \"pretest\": \"npm run lint && npm run build && rimraf ./test/integration/tmp\",\n    \"test\": \"npm run test:unit && npm run test:integration\",\n    \"test:unit\": \"jest --testPathPattern=test/unit --coverage\",\n    \"test:unit:debug\": \"node --inspect-brk ./node_modules/.bin/jest --runInBand --watch --testPathPattern=test/unit\",\n    \"test:unit:watch\": \"npm run test:unit -- --watch\",\n    \"test:integration\": \"jest --testPathPattern=test/integration\",\n    \"test:integration:debug\": \"node --inspect-brk ./node_modules/.bin/jest --runInBand --watch --testPathPattern=test/integration\",\n    \"test:integration:watch\": \"npm run test:integration -- --watch\",\n    \"semantic-release\": \"semantic-release\",\n    \"prepublishOnly\": \"npm run build\",\n    \"postpublish\": \"npm run clean\",\n    \"precommit\": \"npm run lint\",\n    \"prepush\": \"npm run test\"\n  },\n  \"repository\": {\n    \"type\": \"git\",\n    \"url\": \"https://github.com/contentful/contentful-export.git\"\n  },\n  \"keywords\": [\n    \"contentful\",\n    \"contentful-export\"\n  ],\n  \"author\": \"Contentful <opensource@contentful.com>\",\n  \"license\": \"MIT\",\n  \"bugs\": {\n    \"url\": \"https://github.com/contentful/contentful-export/issues\"\n  },\n  \"dependencies\": 
{\n    \"axios\": \"^1.13.5\",\n    \"bfj\": \"^9.1.3\",\n    \"bluebird\": \"^3.3.3\",\n    \"cli-table3\": \"^0.6.0\",\n    \"contentful\": \"^11.5.10\",\n    \"contentful-batch-libs\": \"^11.0.0\",\n    \"contentful-management\": \"^12.2.0\",\n    \"date-fns\": \"^4.1.0\",\n    \"figures\": \"^3.2.0\",\n    \"jsonwebtoken\": \"^9.0.0\",\n    \"listr\": \"^0.14.1\",\n    \"listr-update-renderer\": \"^0.5.0\",\n    \"listr-verbose-renderer\": \"^0.6.0\",\n    \"lodash.startcase\": \"^4.4.0\",\n    \"mkdirp\": \"^2.0.0\",\n    \"yargs\": \"^18.0.0\"\n  },\n  \"devDependencies\": {\n    \"@babel/cli\": \"^7.0.0\",\n    \"@babel/core\": \"^7.0.0\",\n    \"@babel/plugin-proposal-object-rest-spread\": \"^7.0.0\",\n    \"@babel/preset-env\": \"^7.0.0\",\n    \"@babel/template\": \"^7.0.0\",\n    \"@babel/types\": \"^7.0.0\",\n    \"@types/jest\": \"^29.0.0\",\n    \"@typescript-eslint/eslint-plugin\": \"^8.2.0\",\n    \"babel-jest\": \"^30.0.0\",\n    \"babel-plugin-add-module-exports\": \"^1.0.2\",\n    \"cz-conventional-changelog\": \"^3.3.0\",\n    \"eslint\": \"^8.27.0\",\n    \"eslint-config-standard\": \"^17.0.0\",\n    \"eslint-plugin-import\": \"^2.12.0\",\n    \"eslint-plugin-jest\": \"^29.0.1\",\n    \"eslint-plugin-node\": \"^11.1.0\",\n    \"eslint-plugin-promise\": \"^6.0.0\",\n    \"eslint-plugin-standard\": \"^5.0.0\",\n    \"https-proxy-agent\": \"^7.0.0\",\n    \"husky\": \"^4.3.8\",\n    \"jest\": \"^29.0.0\",\n    \"nixt\": \"^0.5.0\",\n    \"nock\": \"^15.0.0\",\n    \"opener\": \"^1.4.1\",\n    \"rimraf\": \"^4.0.7\",\n    \"semantic-release\": \"^25.0.2\"\n  },\n  \"files\": [\n    \"bin\",\n    \"dist\",\n    \"example-config.json\",\n    \"index.js\",\n    \"types.d.ts\"\n  ],\n  \"config\": {\n    \"commitizen\": {\n      \"path\": \"./node_modules/cz-conventional-changelog\"\n    }\n  },\n  \"release\": {\n    \"branches\": [\n      \"main\",\n      {\n        \"name\": \"beta\",\n        \"channel\": \"beta\",\n        \"prerelease\": true\n   
   }\n    ],\n    \"plugins\": [\n      [\n        \"@semantic-release/commit-analyzer\",\n        {\n          \"releaseRules\": [\n            {\n              \"type\": \"build\",\n              \"scope\": \"deps\",\n              \"release\": \"patch\"\n            }\n          ]\n        }\n      ],\n      \"@semantic-release/release-notes-generator\",\n      \"@semantic-release/npm\",\n      \"@semantic-release/github\"\n    ]\n  },\n  \"jest\": {\n    \"testEnvironment\": \"node\",\n    \"collectCoverageFrom\": [\n      \"lib/**/*.js\"\n    ],\n    \"coveragePathIgnorePatterns\": [\n      \"usageParams.js\"\n    ]\n  },\n  \"overrides\": {\n    \"cross-spawn\": \"^7.0.6\"\n  }\n}\n"
  },
  {
    "path": "test/integration/export-lib.test.js",
    "content": "import { join } from 'path'\nimport fs from 'fs'\n\nimport mkdirp from 'mkdirp'\nimport rimraf from 'rimraf'\n\nimport runContentfulExport from '../../dist/index'\n\nconst fsPromises = fs.promises\n\njest.setTimeout(15000)\n\nconst tmpFolder = join(__dirname, 'tmp-lib')\nconst spaceId = process.env.EXPORT_SPACE_ID\nconst managementToken = process.env.MANAGEMENT_TOKEN\nconst deliveryToken = process.env.DELIVERY_TOKEN\n\nconst spaceIdEmbargoedAssets = process.env.EXPORT_SPACE_ID_EMBARGOED_ASSETS\n\nbeforeAll(() => {\n  mkdirp.sync(tmpFolder)\n})\n\nafterAll(() => {\n  rimraf.sync(tmpFolder)\n})\n\ntest('It should export space when used as a library', () => {\n  return runContentfulExport({ spaceId, managementToken, saveFile: false, exportDir: tmpFolder })\n    .catch((multierror) => {\n      const errors = multierror.errors.filter((error) => Object.prototype.hasOwnProperty.call(error, 'error'))\n      expect(errors).toHaveLength(0)\n    })\n    .then((content) => {\n      expect(content).toBeTruthy()\n      expect(content.contentTypes).toHaveLength(2)\n      expect(content.editorInterfaces).toHaveLength(2)\n      expect(content.entries).toHaveLength(4)\n      expect(content.assets).toHaveLength(4)\n      expect(content.locales).toHaveLength(1)\n      expect(content.tags).toHaveLength(4)\n      expect(content.webhooks).toHaveLength(0)\n      expect(content.roles).toHaveLength(7)\n      // make sure entries are delivered from CMA\n      expect(content.entries[0].sys).toHaveProperty('publishedVersion')\n    })\n})\n\ntest('It should export environment when used as a library', () => {\n  return runContentfulExport({ spaceId, environmentId: 'staging', managementToken, saveFile: false, exportDir: tmpFolder })\n    .catch((multierror) => {\n      const errors = multierror.errors.filter((error) => Object.prototype.hasOwnProperty.call(error, 'error'))\n      expect(errors).toHaveLength(0)\n    })\n    .then((content) => {\n      expect(content).toBeTruthy()\n  
    expect(content.contentTypes).toHaveLength(2)\n      expect(content.editorInterfaces).toHaveLength(2)\n      expect(content.entries).toHaveLength(4)\n      expect(content.assets).toHaveLength(4)\n      expect(content.locales).toHaveLength(1)\n      expect(content.tags).toHaveLength(2)\n      expect(content).not.toHaveProperty('webhooks')\n      expect(content).not.toHaveProperty('roles')\n    })\n})\n\ntest('It should export space when used as a library, with deliveryToken', () => {\n  return runContentfulExport({ spaceId, managementToken, deliveryToken, saveFile: false, exportDir: tmpFolder })\n    .catch((multierror) => {\n      const errors = multierror.errors.filter((error) => Object.prototype.hasOwnProperty.call(error, 'error'))\n      expect(errors).toHaveLength(0)\n    })\n    .then((content) => {\n      expect(content).toBeTruthy()\n      expect(content.contentTypes).toHaveLength(2)\n      expect(content.editorInterfaces).toHaveLength(2)\n      expect(content.entries).toHaveLength(4)\n      expect(content.assets).toHaveLength(4)\n      expect(content.locales).toHaveLength(1)\n      expect(content.tags).toHaveLength(4)\n      expect(content.webhooks).toHaveLength(0)\n      expect(content.roles).toHaveLength(7)\n    })\n})\n\ntest('It should export embargoed assets space when used as a library', () => {\n  return runContentfulExport({\n    spaceId: spaceIdEmbargoedAssets,\n    managementToken,\n    saveFile: true,\n    downloadAssets: true,\n    exportDir: tmpFolder,\n    host: 'api.contentful.com'\n  })\n    .catch((multierror) => {\n      const errors = multierror.errors.filter((error) => Object.prototype.hasOwnProperty.call(error, 'error'))\n      expect(errors).toHaveLength(0)\n    })\n    .then(async (content) => {\n      expect(content.assets).toHaveLength(1)\n\n      // This code ensures that the protected/embargoed asset has actually been downloaded\n      const files = await fsPromises.readdir(tmpFolder, { withFileTypes: true })\n      const 
directories = files.filter(f => f.isDirectory())\n      expect(directories).toHaveLength(1)\n    })\n})\n"
  },
  {
    "path": "test/unit/index.test.js",
    "content": "import { resolve } from 'path'\n\nimport bfj from 'bfj'\nimport fs from 'fs'\nimport mkdirp from 'mkdirp'\n\nimport {\n  setupLogging,\n  displayErrorLog,\n  writeErrorLogFile\n} from 'contentful-batch-libs'\n\nimport { mockDownloadAssets } from './mocks/download-assets'\nimport { mockGetSpaceData } from './mocks/get-space-data'\n\nimport downloadAssets from '../../lib/tasks/download-assets'\nimport getSpaceData from '../../lib/tasks/get-space-data'\nimport initClient from '../../lib/tasks/init-client'\nimport runContentfulExport from '../../lib/index'\n\njest.spyOn(global.console, 'log')\njest.mock('../../lib/tasks/init-client')\njest.mock('../../lib/tasks/download-assets', () => jest.fn(() => mockDownloadAssets))\njest.mock('../../lib/tasks/get-space-data', () => jest.fn(mockGetSpaceData))\n\njest.mock('fs', () => ({ access: jest.fn((path, cb) => cb()) }))\njest.mock('mkdirp', () => jest.fn())\njest.mock('bfj', () => ({ write: jest.fn().mockResolvedValue() }))\njest.mock('contentful-batch-libs/dist/logging', () => ({\n  setupLogging: jest.fn(),\n  displayErrorLog: jest.fn(),\n  logToTaskOutput: () => jest.fn(),\n  writeErrorLogFile: jest.fn((destination, errorLog) => {\n    const multiError = new Error('Errors occurred')\n    multiError.name = 'ContentfulMultiError'\n    multiError.errors = errorLog\n    throw multiError\n  })\n}))\n\nafterEach(() => {\n  initClient.mockClear()\n  getSpaceData.mockClear()\n  setupLogging.mockClear()\n  displayErrorLog.mockClear()\n  fs.access.mockClear()\n  mkdirp.mockClear()\n  bfj.write.mockClear()\n  writeErrorLogFile.mockClear()\n  downloadAssets.mockClear()\n  global.console.log.mockClear()\n})\n\ntest('Runs Contentful Export with default config', async () => {\n  await runContentfulExport({\n    errorLogFile: 'errorlogfile',\n    spaceId: 'someSpaceId',\n    managementToken: 'someManagementToken'\n  })\n\n  expect(initClient.mock.calls).toHaveLength(1)\n  expect(getSpaceData.mock.calls).toHaveLength(1)\n  
expect(setupLogging.mock.calls).toHaveLength(1)\n  expect(downloadAssets.mock.calls).toHaveLength(1)\n  expect(displayErrorLog.mock.calls).toHaveLength(1)\n  expect(fs.access.mock.calls).toHaveLength(1)\n  expect(mkdirp.mock.calls).toHaveLength(1)\n  expect(bfj.write.mock.calls).toHaveLength(1)\n  expect(writeErrorLogFile.mock.calls).toHaveLength(0)\n  const exportedTable = global.console.log.mock.calls.find((call) => call[0].match(/Exported entities/))\n  expect(exportedTable).not.toBeUndefined()\n  expect(exportedTable[0]).toMatch(/Exported entities/)\n  expect(exportedTable[0]).toMatch(/Content Types.+0/)\n  expect(exportedTable[0]).toMatch(/Entries.+0/)\n  expect(exportedTable[0]).toMatch(/Assets.+2/)\n  expect(exportedTable[0]).toMatch(/Locales.+0/)\n  const assetsTable = global.console.log.mock.calls.find((call) => call[0].match(/Asset file download results/))\n  expect(assetsTable).toBeUndefined()\n})\n\ntest('Runs Contentful Export and downloads assets', async () => {\n  await runContentfulExport({\n    errorLogFile: 'errorlogfile',\n    spaceId: 'someSpaceId',\n    managementToken: 'someManagementToken',\n    downloadAssets: true\n  })\n\n  expect(initClient.mock.calls).toHaveLength(1)\n  expect(getSpaceData.mock.calls).toHaveLength(1)\n  expect(setupLogging.mock.calls).toHaveLength(1)\n  expect(downloadAssets.mock.calls).toHaveLength(1)\n  expect(displayErrorLog.mock.calls).toHaveLength(1)\n  expect(fs.access.mock.calls).toHaveLength(1)\n  expect(mkdirp.mock.calls).toHaveLength(1)\n  expect(bfj.write.mock.calls).toHaveLength(1)\n  expect(writeErrorLogFile.mock.calls).toHaveLength(0)\n  const exportedTable = global.console.log.mock.calls.find((call) => call[0].match(/Exported entities/))\n  expect(exportedTable).not.toBeUndefined()\n  expect(exportedTable[0]).toMatch(/Exported entities/)\n  expect(exportedTable[0]).toMatch(/Content Types.+0/)\n  expect(exportedTable[0]).toMatch(/Entries.+0/)\n  expect(exportedTable[0]).toMatch(/Assets.+2/)\n  
expect(exportedTable[0]).toMatch(/Locales.+0/)\n  const assetsTable = global.console.log.mock.calls.find((call) => call[0].match(/Asset file download results/))\n  expect(assetsTable).not.toBeUndefined()\n  expect(assetsTable[0]).toMatch(/Asset file download results/)\n  expect(assetsTable[0]).toMatch(/Successful.+3/)\n  expect(assetsTable[0]).toMatch(/Warnings.+2/)\n  expect(assetsTable[0]).toMatch(/Errors.+1/)\n})\n\ntest('Creates a valid and correct opts object', async () => {\n  const errorLogFile = 'errorlogfile'\n  const { default: exampleConfig } = await import('../../example-config.test.json')\n\n  await runContentfulExport({\n    errorLogFile,\n    config: resolve(__dirname, '..', '..', 'example-config.test.json')\n  })\n\n  expect(initClient.mock.calls[0][0].skipContentModel).toBeFalsy()\n  expect(initClient.mock.calls[0][0].skipEditorInterfaces).toBeFalsy()\n  expect(initClient.mock.calls[0][0].skipTags).toBeFalsy()\n  expect(initClient.mock.calls[0][0].stripTags).toBeFalsy()\n  expect(initClient.mock.calls[0][0].errorLogFile).toBe(resolve(process.cwd(), errorLogFile))\n  expect(initClient.mock.calls[0][0].spaceId).toBe(exampleConfig.spaceId)\n  expect(initClient.mock.calls).toHaveLength(1)\n  expect(getSpaceData.mock.calls).toHaveLength(1)\n  expect(setupLogging.mock.calls).toHaveLength(1)\n  expect(downloadAssets.mock.calls).toHaveLength(1)\n  expect(displayErrorLog.mock.calls).toHaveLength(1)\n  expect(fs.access.mock.calls).toHaveLength(1)\n  expect(mkdirp.mock.calls).toHaveLength(1)\n  expect(bfj.write.mock.calls).toHaveLength(1)\n  expect(writeErrorLogFile.mock.calls).toHaveLength(0)\n  const exportedTable = global.console.log.mock.calls.find((call) => call[0].match(/Exported entities/))\n  expect(exportedTable).not.toBeUndefined()\n  expect(exportedTable[0]).toMatch(/Exported entities/)\n  expect(exportedTable[0]).toMatch(/Content Types.+0/)\n  expect(exportedTable[0]).toMatch(/Entries.+0/)\n  expect(exportedTable[0]).toMatch(/Assets.+2/)\n  
expect(exportedTable[0]).toMatch(/Locales.+0/)\n})\n\ntest('Run Contentful export fails due to rejection', async () => {\n  const rejectError = new Error()\n  rejectError.request = { uri: 'erroruri' }\n  getSpaceData.mockImplementation(() => Promise.reject(rejectError))\n\n  await expect(runContentfulExport({\n    errorLogFile: 'errorlogfile',\n    spaceId: 'someSpaceId',\n    managementToken: 'someManagementToken'\n  })).rejects.toThrow()\n\n  expect(initClient.mock.calls).toHaveLength(1)\n  expect(getSpaceData.mock.calls).toHaveLength(1)\n  expect(setupLogging.mock.calls).toHaveLength(1)\n  expect(downloadAssets.mock.calls).toHaveLength(1)\n  expect(displayErrorLog.mock.calls).toHaveLength(1)\n  expect(fs.access.mock.calls).toHaveLength(0)\n  expect(mkdirp.mock.calls).toHaveLength(0)\n  expect(bfj.write.mock.calls).toHaveLength(0)\n  expect(writeErrorLogFile.mock.calls).toHaveLength(1)\n})\n"
  },
  {
    "path": "test/unit/mocks/download-assets.js",
    "content": "export const mockDownloadAssets = async (ctx) => {\n  ctx.assetDownloads = {\n    successCount: 3,\n    warningCount: 2,\n    errorCount: 1\n  }\n}\n"
  },
  {
    "path": "test/unit/mocks/get-space-data.js",
    "content": "import Listr from 'listr'\n\nexport const mockGetSpaceData = () => {\n  return new Listr([\n    {\n      title: 'mocked get full source space',\n      task: (ctx) => {\n        ctx.data = {\n          contentTypes: [],\n          entries: [],\n          assets: [\n            {\n              sys: { id: 'someValidAsset' },\n              fields: {\n                file: {\n                  'en-US': {\n                    url: '//images.contentful.com/kq9lln4hyr8s/2MTd2wBirYikEYkIIc0YSw/7aa4c06f3054996e45bb3f13964cb254/rocka-nutrition.png'\n                  }\n                }\n              }\n            },\n            {\n              sys: { id: 'someBrokenAsset' },\n              fields: {}\n            }\n          ],\n          locales: []\n        }\n      }\n    }\n  ])\n}\n"
  },
  {
    "path": "test/unit/parseOptions.test.js",
    "content": "import { HttpsProxyAgent } from 'https-proxy-agent'\nimport { basename, isAbsolute, resolve, sep } from 'path'\nimport parseOptions from '../../lib/parseOptions'\n\nconst spaceId = 'foo'\nconst managementToken = 'someManagementToken'\nconst basePath = resolve(__dirname, '..', '..')\n\nconst toBeAbsolutePathWithPattern = (received, pattern) => {\n  const escapedPattern = [basename(basePath), pattern].join(`\\\\${sep}`)\n\n  return (!isAbsolute(received) || !RegExp(`/${escapedPattern}$/`).test(received))\n}\n\ntest('parseOptions sets requires spaceId', () => {\n  expect(\n    () => parseOptions({})\n  ).toThrow('The `spaceId` option is required.')\n})\n\ntest('parseOptions sets requires managementToken', () => {\n  expect(\n    () => parseOptions({\n      spaceId: 'someSpaceId'\n    })\n  ).toThrow('The `managementToken` option is required.')\n})\n\ntest('parseOptions sets correct default options', async () => {\n  const { default: packageJson } = await import(resolve(basePath, 'package.json'))\n  const version = packageJson.version\n\n  const options = parseOptions({ spaceId, managementToken })\n\n  const contentFileNamePattern = `contentful-export-${spaceId}-master-[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}-[0-9]{2}-[0-9]{2}\\\\.json`\n  const errorFileNamePattern = `contentful-export-error-log-${spaceId}-master-[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}-[0-9]{2}-[0-9]{2}\\\\.json`\n\n  expect(options.contentFile).toMatch(new RegExp(`^${contentFileNamePattern}$`))\n  expect(toBeAbsolutePathWithPattern(options.errorLogFile, errorFileNamePattern)).toBe(true)\n  expect(options.exportDir).toBe(basePath)\n  expect(options.includeDrafts).toBe(false)\n  expect(options.includeArchived).toBe(false)\n  expect(toBeAbsolutePathWithPattern(options.logFilePath, contentFileNamePattern)).toBe(true)\n  expect(options.application).toBe(`contentful.export/${version}`)\n  expect(options.feature).toBe('library-export')\n  expect(options.accessToken).toBe(managementToken)\n  
expect(options.maxAllowedLimit).toBe(1000)\n  expect(options.saveFile).toBe(true)\n  expect(options.skipContent).toBe(false)\n  expect(options.skipContentModel).toBe(false)\n  expect(options.skipEditorInterfaces).toBe(false)\n  expect(options.skipRoles).toBe(false)\n  expect(options.skipWebhooks).toBe(false)\n  expect(options.skipTags).toBe(false)\n  expect(options.stripTags).toBe(false)\n  expect(options.spaceId).toBe(spaceId)\n  expect(options.startTime).toBeInstanceOf(Date)\n  expect(options.useVerboseRenderer).toBe(false)\n  expect(options.deliveryToken).toBeUndefined()\n})\n\ntest('parseOption accepts config file', async () => {\n  const configFileName = 'example-config.test.json'\n  const { default: config } = await import(resolve(basePath, configFileName))\n\n  const options = parseOptions({ config: configFileName })\n  Object.keys(config).forEach((key) => {\n    expect(options[key]).toBe(config[key])\n  })\n})\n\ntest('parseOption overwrites errorLogFile', () => {\n  const errorLogFile = 'error.log'\n  const options = parseOptions({\n    spaceId,\n    managementToken,\n    errorLogFile\n  })\n  expect(options.errorLogFile).toBe(resolve(basePath, errorLogFile))\n})\n\ntest('parseOption throws with invalid proxy', () => {\n  expect(() => parseOptions({\n    spaceId,\n    managementToken,\n    proxy: 'invalid'\n  })).toThrow('Please provide the proxy config in the following format:\\nhost:port or user:password@host:port')\n})\n\ntest('parseOption accepts proxy config as string', () => {\n  const options = parseOptions({\n    spaceId,\n    managementToken,\n    proxy: 'localhost:1234'\n  })\n  expect(options).not.toHaveProperty('proxy')\n  expect(options.httpsAgent).toBeInstanceOf(HttpsProxyAgent)\n})\n\ntest('parseOption accepts proxy config as object', () => {\n  const options = parseOptions({\n    spaceId,\n    managementToken,\n    proxy: {\n      host: 'localhost',\n      port: 1234,\n      user: 'foo',\n      password: 'bar'\n    }\n  })\n  
expect(options).not.toHaveProperty('proxy')\n  expect(options.httpsAgent).toBeInstanceOf(HttpsProxyAgent)\n})\n\ntest('parseOptions parses queryEntries option', () => {\n  const options = parseOptions({\n    spaceId,\n    managementToken,\n    queryEntries: [\n      'someParam=someValue',\n      'someOtherParam=someOtherValue'\n    ]\n  })\n  expect(options.queryEntries).toMatchObject({\n    someParam: 'someValue',\n    someOtherParam: 'someOtherValue'\n  })\n})\n\ntest('parseOptions parses queryAssets option', () => {\n  const options = parseOptions({\n    spaceId,\n    managementToken,\n    queryAssets: [\n      'someParam=someValue',\n      'someOtherParam=someOtherValue'\n    ]\n  })\n  expect(options.queryAssets).toMatchObject({\n    someParam: 'someValue',\n    someOtherParam: 'someOtherValue'\n  })\n})\n\ntest('parseOptions sets correct options given contentOnly', () => {\n  const options = parseOptions({\n    spaceId,\n    managementToken,\n    contentOnly: true\n  })\n  expect(options.skipRoles).toBe(true)\n  expect(options.skipContentModel).toBe(true)\n  expect(options.skipWebhooks).toBe(true)\n})\n\ntest('parseOptions accepts custom application & feature', () => {\n  const managementApplication = 'managementApplicationMock'\n  const managementFeature = 'managementFeatureMock'\n\n  const options = parseOptions({\n    spaceId,\n    managementToken,\n    managementApplication,\n    managementFeature\n  })\n\n  expect(options.application).toBe(managementApplication)\n  expect(options.feature).toBe(managementFeature)\n})\n\ntest('parseOption parses deliveryToken option', () => {\n  const options = parseOptions({\n    spaceId,\n    managementToken,\n    deliveryToken: 'testDeliveryToken'\n  })\n  expect(options.accessToken).toBe(managementToken)\n  expect(options.spaceId).toBe(spaceId)\n  expect(options.deliveryToken).toBe('testDeliveryToken')\n})\n\ntest('parseOption parses headers option', () => {\n  const options = parseOptions({\n    spaceId,\n    
managementToken,\n    headers: {\n      header1: '1',\n      header2: '2'\n    }\n  })\n  expect(options.headers).toEqual({\n    header1: '1',\n    header2: '2',\n    'CF-Sequence': expect.any(String)\n  })\n})\n\ntest('parses params.header if provided', function () {\n  const config = parseOptions({\n    spaceId,\n    managementToken,\n    header: ['Accept   : application/json ', ' X-Header: 1']\n  })\n  expect(config.headers).toEqual({ Accept: 'application/json', 'X-Header': '1', 'CF-Sequence': expect.any(String) })\n})\n"
  },
  {
    "path": "test/unit/tasks/download-assets.test.js",
    "content": "import { promises as fs, rmSync } from 'fs'\nimport { tmpdir } from 'os'\nimport { resolve } from 'path'\n\nimport nock from 'nock'\n\nimport downloadAssets from '../../../lib/tasks/download-assets'\n\nconst tmpDirectory = resolve(tmpdir(), 'contentful-import-test')\n\nconst BASE_PATH = '//images.contentful.com'\nconst BASE_PATH_SECURE = '//images.secure.contentful.com'\nconst ASSET_PATH = '/kq9lln4hyr8s/2MTd2wBirYikEYkIIc0YSw/7aa4c06f3054996e45bb3f13964cb254'\nconst EXISTING_ASSET_FILENAME = 'rocka-nutrition.png'\nconst EXISTING_ASSET_URL = `${ASSET_PATH}/${EXISTING_ASSET_FILENAME}`\nconst EMBARGOED_ASSET_FILENAME = 'space-dog.png'\nconst EMBARGOED_ASSET_URL = `${ASSET_PATH}/${EMBARGOED_ASSET_FILENAME}`\nconst NON_EXISTING_URL = '/does-not-exist.png'\nconst UNICODE_SHORT_FILENAME = '测试文件.jpg'\nconst UNICODE_SHORT_URL = `${ASSET_PATH}/${encodeURIComponent(UNICODE_SHORT_FILENAME)}`\nconst UNICODE_LONG_FILENAME = `${'测试文件'.repeat(10)}.jpg`\nconst UNICODE_LONG_URL = `${ASSET_PATH}/${encodeURIComponent(UNICODE_LONG_FILENAME)}`\nconst DIFFERENT_FILENAME = 'different filename.jpg'\nconst UPLOAD_URL = '//file-stack-url-do-not-use-me.png'\n\nconst API_HOST = 'api.contentful.com'\nconst SPACE_ID = 'kq9lln4hyr8s'\nconst ACCESS_TOKEN = 'abc'\nconst ENVIRONMENT_ID = 'master'\nconst POLICY = 'eyJhbG.eyJMDIyfQ.SflKx5c'\nconst SECRET = 's3cr3t'\n\nlet taskProxy\nlet output\n\nnock(`https:${BASE_PATH}`)\n  .get(EXISTING_ASSET_URL)\n  .times(8)\n  .reply(200)\n\nnock(`https:${BASE_PATH}`)\n  .get(NON_EXISTING_URL)\n  .reply(404)\n\nnock(`https:${BASE_PATH}`)\n  .get(UNICODE_SHORT_URL)\n  .times(2)\n  .reply(200)\n\nnock(`https:${BASE_PATH}`)\n  .get(UNICODE_LONG_URL)\n  .times(2)\n  .reply(200)\n\n// Mock downloading assets using signed URLs\nnock(`https:${BASE_PATH_SECURE}`)\n  .get(EMBARGOED_ASSET_URL)\n  .query({ policy: POLICY, token: /.+/i })\n  .times(2)\n  .reply(200)\n\n// Mock asset-key creation for embargoed assets\nnock(`https://${API_HOST}`)\n  
.post(`/spaces/${SPACE_ID}/environments/${ENVIRONMENT_ID}/asset_keys`, {\n    expiresAt: /.+/i\n  })\n  .times(1)\n  .reply(200, { policy: POLICY, secret: SECRET })\n\nfunction getAssets ({ existing = 0, nonExisting = 0, missingUrl = 0, embargoed = 0, unicodeShort = 0, unicodeLong = 0, differentFilename = 0 } = {}) {\n  const existingUrl = `${BASE_PATH}${EXISTING_ASSET_URL}`\n  const embargoedUrl = `${BASE_PATH_SECURE}${EMBARGOED_ASSET_URL}`\n  const nonExistingUrl = `${BASE_PATH}${NON_EXISTING_URL}`\n  const unicodeShortUrl = `${BASE_PATH}${UNICODE_SHORT_URL}`\n  const unicodeLongUrl = `${BASE_PATH}${UNICODE_LONG_URL}`\n  const assets = []\n  for (let i = 0; i < nonExisting; i++) {\n    assets.push({\n      sys: {\n        id: `Non existing asset ${i}`\n      },\n      fields: {\n        file: {\n          'en-US': {\n            url: nonExistingUrl,\n            fileName: NON_EXISTING_URL,\n            upload: UPLOAD_URL\n          },\n          'de-DE': {\n            url: nonExistingUrl,\n            fileName: NON_EXISTING_URL,\n            upload: UPLOAD_URL\n          }\n        }\n      }\n    })\n  }\n  for (let i = 0; i < existing; i++) {\n    assets.push({\n      sys: {\n        id: `existing asset ${i}`\n      },\n      fields: {\n        file: {\n          'en-US': {\n            url: existingUrl,\n            fileName: EXISTING_ASSET_FILENAME,\n            upload: UPLOAD_URL\n          },\n          'de-DE': {\n            url: existingUrl,\n            fileName: EXISTING_ASSET_FILENAME,\n            upload: UPLOAD_URL\n          }\n        }\n      }\n    })\n  }\n  for (let i = 0; i < embargoed; i++) {\n    assets.push({\n      sys: {\n        id: `embargoed asset ${i}`\n      },\n      fields: {\n        file: {\n          'en-US': {\n            url: embargoedUrl,\n            fileName: EMBARGOED_ASSET_FILENAME,\n            upload: UPLOAD_URL\n          },\n          'de-DE': {\n            url: embargoedUrl,\n            fileName: 
EMBARGOED_ASSET_FILENAME,\n            upload: UPLOAD_URL\n          }\n        }\n      }\n    })\n  }\n  for (let i = 0; i < missingUrl; i++) {\n    assets.push({\n      sys: {\n        id: `missing file url ${i}`\n      },\n      fields: {\n        file: {\n          'en-US': {\n            upload: UPLOAD_URL,\n            fileName: DIFFERENT_FILENAME\n          },\n          'de-DE': {\n            upload: UPLOAD_URL,\n            fileName: DIFFERENT_FILENAME\n          }\n        }\n      }\n    })\n  }\n  for (let i = 0; i < unicodeShort; i++) {\n    assets.push({\n      sys: {\n        id: `unicode short asset ${i}`\n      },\n      fields: {\n        file: {\n          'en-US': {\n            url: unicodeShortUrl,\n            fileName: UNICODE_SHORT_FILENAME,\n            upload: UPLOAD_URL\n          },\n          'de-DE': {\n            url: unicodeShortUrl,\n            fileName: UNICODE_SHORT_FILENAME,\n            upload: UPLOAD_URL\n          }\n        }\n      }\n    })\n  }\n  for (let i = 0; i < unicodeLong; i++) {\n    assets.push({\n      sys: {\n        id: `unicode long asset ${i}`\n      },\n      fields: {\n        file: {\n          'en-US': {\n            url: unicodeLongUrl,\n            fileName: UNICODE_LONG_FILENAME,\n            upload: UPLOAD_URL\n          },\n          'de-DE': {\n            url: unicodeLongUrl,\n            fileName: UNICODE_LONG_FILENAME,\n            upload: UPLOAD_URL\n          }\n        }\n      }\n    })\n  }\n  for (let i = 0; i < differentFilename; i++) {\n    assets.push({\n      sys: {\n        id: `different filename asset ${i}`\n      },\n      fields: {\n        file: {\n          'en-US': {\n            url: existingUrl,\n            fileName: DIFFERENT_FILENAME,\n            upload: UPLOAD_URL\n          },\n          'de-DE': {\n            url: existingUrl,\n            fileName: DIFFERENT_FILENAME,\n            upload: UPLOAD_URL\n          }\n        }\n      }\n    })\n  }\n  return 
assets\n}\n\nbeforeEach(() => {\n  output = jest.fn()\n  taskProxy = new Proxy({}, {\n    set: (obj, prop, value) => {\n      if (prop === 'output') {\n        output(value)\n        return true\n      }\n      throw new Error(`It should not set task property ${String(prop)} (value: ${value})`)\n    }\n  })\n})\nbeforeAll(async () => {\n  await fs.mkdir(tmpDirectory, { recursive: true })\n})\n\nafterAll(() => {\n  // Couldn't get `fs.promises.rm` to work without permissions issues\n  rmSync(tmpDirectory, { recursive: true, force: true })\n\n  if (!nock.isDone()) {\n    throw new Error(`pending mocks: ${nock.pendingMocks().join(', ')}`)\n  }\n\n  nock.cleanAll()\n  nock.restore()\n})\n\ntest('Downloads assets and properly counts failed attempts', () => {\n  const task = downloadAssets({\n    exportDir: tmpDirectory\n  })\n  const ctx = {\n    data: {\n      assets: [\n        ...getAssets({ existing: 1, nonExisting: 1 }),\n        {\n          sys: {\n            id: 'corrupt asset [warning]'\n          },\n          fields: {}\n        }\n      ]\n    }\n  }\n\n  return task(ctx, taskProxy)\n    .then(() => {\n      expect(ctx.assetDownloads).toEqual({\n        successCount: 2,\n        warningCount: 1,\n        errorCount: 2\n      })\n      expect(output.mock.calls).toHaveLength(5)\n    })\n})\n\ntest('Downloads embargoed assets', () => {\n  const task = downloadAssets({\n    exportDir: tmpDirectory,\n    host: API_HOST,\n    accessToken: ACCESS_TOKEN,\n    spaceId: SPACE_ID,\n    environmentId: ENVIRONMENT_ID\n  })\n  const ctx = {\n    data: {\n      assets: [\n        ...getAssets({ embargoed: 1 })\n      ]\n    }\n  }\n\n  return task(ctx, taskProxy)\n    .then(() => {\n      expect(ctx.assetDownloads).toEqual({\n        successCount: 2,\n        warningCount: 0,\n        errorCount: 0\n      })\n      expect(output.mock.calls).toHaveLength(2)\n    })\n})\n\ntest('it doesn\\'t use fileStack url as fallback for the file url and throws a warning output', () 
=> {\n  const task = downloadAssets({\n    exportDir: tmpDirectory\n  })\n  const ctx = {\n    data: {\n      assets: [\n        ...getAssets({ existing: 2, missingUrl: 1 })\n      ]\n    }\n  }\n\n  return task(ctx, taskProxy)\n    .then(() => {\n      expect(ctx.assetDownloads).toEqual({\n        successCount: 4,\n        warningCount: 0,\n        errorCount: 2\n      })\n      expect(output.mock.calls).toHaveLength(6)\n\n      const missingUrlsOutputCount = output.mock.calls.filter(call =>\n        call[0]?.endsWith('asset.fields.file[en-US].url') ||\n          call[0]?.endsWith('asset.fields.file[de-DE].url'))\n\n      expect(missingUrlsOutputCount).toHaveLength(2)\n    })\n})\n\ntest('Downloads assets with short Unicode filenames', () => {\n  const task = downloadAssets({\n    exportDir: tmpDirectory\n  })\n  const ctx = {\n    data: {\n      assets: [\n        ...getAssets({ unicodeShort: 1 })\n      ]\n    }\n  }\n\n  return task(ctx, taskProxy)\n    .then(() => {\n      expect(ctx.assetDownloads).toEqual({\n        successCount: 2,\n        warningCount: 0,\n        errorCount: 0\n      })\n      expect(output.mock.calls).toHaveLength(2)\n\n      const unicodeShortAsset = ctx.data.assets.find(asset => asset.sys.id === 'unicode short asset 0')\n      expect(unicodeShortAsset.fields.file['en-US'].fileName).toBe(UNICODE_SHORT_FILENAME)\n      expect(unicodeShortAsset.fields.file['de-DE'].fileName).toBe(UNICODE_SHORT_FILENAME)\n    })\n})\n\ntest('Downloads assets with long Unicode filenames', () => {\n  const task = downloadAssets({\n    exportDir: tmpDirectory\n  })\n  const ctx = {\n    data: {\n      assets: [\n        ...getAssets({ unicodeLong: 1 })\n      ]\n    }\n  }\n\n  return task(ctx, taskProxy)\n    .then(() => {\n      expect(ctx.assetDownloads).toEqual({\n        successCount: 2,\n        warningCount: 0,\n        errorCount: 0\n      })\n      expect(output.mock.calls).toHaveLength(2)\n\n      const unicodeLongAsset = ctx.data.assets.find(asset 
=> asset.sys.id === 'unicode long asset 0')\n      expect(unicodeLongAsset.fields.file['en-US'].fileName).toBe(UNICODE_LONG_FILENAME)\n      expect(unicodeLongAsset.fields.file['de-DE'].fileName).toBe(UNICODE_LONG_FILENAME)\n    })\n})\n\ntest('Downloads assets with different filename than URL path', () => {\n  const task = downloadAssets({\n    exportDir: tmpDirectory\n  })\n  const ctx = {\n    data: {\n      assets: [\n        ...getAssets({ differentFilename: 1 })\n      ]\n    }\n  }\n\n  return task(ctx, taskProxy)\n    .then(() => {\n      expect(ctx.assetDownloads).toEqual({\n        successCount: 2,\n        warningCount: 0,\n        errorCount: 0\n      })\n      expect(output.mock.calls).toHaveLength(2)\n\n      const differentFilenameAsset = ctx.data.assets.find(asset => asset.sys.id === 'different filename asset 0')\n      expect(differentFilenameAsset.fields.file['en-US'].fileName).toBe(DIFFERENT_FILENAME)\n      expect(differentFilenameAsset.fields.file['en-US'].url).toBe(`${BASE_PATH}${EXISTING_ASSET_URL}`)\n      expect(differentFilenameAsset.fields.file['de-DE'].fileName).toBe(DIFFERENT_FILENAME)\n      expect(differentFilenameAsset.fields.file['de-DE'].url).toBe(`${BASE_PATH}${EXISTING_ASSET_URL}`)\n    })\n})\n"
  },
  {
    "path": "test/unit/tasks/get-space-data.test.js",
    "content": "import getSpaceData from '../../../lib/tasks/get-space-data'\n\nconst maxAllowedLimit = 100\nconst resultItemCount = 420\n\nfunction pagedResult (query, maxItems, mock = {}) {\n  const { skip, limit } = query\n  const cnt = maxItems - skip > limit ? limit : maxItems - skip\n  return {\n    items: Array.from({ length: cnt}, (n) => {\n      const id = n * skip + 1\n      return Object.assign({ sys: { id }}, mock)\n    }),\n    total: maxItems\n  }\n}\n\nfunction pagedContentResult (query, maxItems, mock = {}) {\n  const result = pagedResult(query, maxItems, mock)\n  result.items.map((item, index) => {\n    item.sys.publishedVersion = index % 2\n    return item\n  })\n  return result\n}\n\nconst mockSpace = {}\n\nconst mockEnvironment = {}\n\nconst mockClient = {}\n\nconst getEditorInterface = jest.fn()\n\nconst mockAsset = { metadata: { tags: [{}] } }\n\nconst mockEntry = { metadata: { tags: [{}] } }\n\nfunction setupMocks () {\n  mockClient.getSpace = jest.fn(() => Promise.resolve(mockSpace))\n  mockSpace.getEnvironment = jest.fn(() => Promise.resolve(mockEnvironment))\n  mockEnvironment.getContentTypes = jest.fn((query) => {\n    return Promise.resolve(pagedResult(query, resultItemCount, {\n      getEditorInterface\n    }))\n  })\n  mockEnvironment.getEntries = jest.fn((query) => {\n    return Promise.resolve(pagedContentResult(query, resultItemCount, mockEntry))\n  })\n  mockEnvironment.getAssets = jest.fn((query) => {\n    return Promise.resolve(pagedContentResult(query, resultItemCount, mockAsset))\n  })\n  mockEnvironment.getLocales = jest.fn((query) => {\n    return Promise.resolve(pagedResult(query, resultItemCount))\n  })\n  mockEnvironment.getTags = jest.fn((query) => {\n    return Promise.resolve(pagedResult(query, resultItemCount))\n  })\n  mockSpace.getWebhooks = jest.fn((query) => {\n    return Promise.resolve(pagedResult(query, resultItemCount))\n  })\n  mockSpace.getRoles = jest.fn((query) => {\n    return 
Promise.resolve(pagedResult(query, resultItemCount))\n  })\n  getEditorInterface.mockImplementation(() => Promise.resolve({}))\n}\n\nbeforeEach(setupMocks)\n\nafterEach(() => {\n  mockClient.getSpace.mockClear()\n  mockEnvironment.getContentTypes.mockClear()\n  mockEnvironment.getEntries.mockClear()\n  mockEnvironment.getAssets.mockClear()\n  mockEnvironment.getLocales.mockClear()\n  mockEnvironment.getTags.mockClear()\n  mockSpace.getWebhooks.mockClear()\n  mockSpace.getRoles.mockClear()\n  getEditorInterface.mockClear()\n})\n\ntest('Gets whole destination content', () => {\n  return getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    maxAllowedLimit\n  })\n    .run({\n      data: {}\n    })\n    .then((response) => {\n      expect(mockClient.getSpace.mock.calls).toHaveLength(1)\n      expect(mockSpace.getEnvironment.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getContentTypes.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getEntries.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getAssets.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getLocales.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getTags.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getWebhooks.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getRoles.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(getEditorInterface.mock.calls).toHaveLength(resultItemCount)\n      expect(response.data.contentTypes).toHaveLength(resultItemCount)\n      expect(response.data.entries).toHaveLength(resultItemCount / 2)\n      expect(response.data.assets).toHaveLength(resultItemCount / 2)\n      
expect(response.data.locales).toHaveLength(resultItemCount)\n      expect(response.data.tags).toHaveLength(resultItemCount)\n      expect(response.data.webhooks).toHaveLength(resultItemCount)\n      expect(response.data.roles).toHaveLength(resultItemCount)\n      expect(response.data.editorInterfaces).toHaveLength(resultItemCount)\n    })\n})\n\ntest('Gets whole destination content without content model', () => {\n  return getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    maxAllowedLimit,\n    skipContentModel: true\n  })\n    .run({\n      data: {}\n    })\n    .then((response) => {\n      expect(mockClient.getSpace.mock.calls).toHaveLength(1)\n      expect(mockSpace.getEnvironment.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getContentTypes.mock.calls).toHaveLength(0)\n      expect(mockEnvironment.getEntries.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getAssets.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getLocales.mock.calls).toHaveLength(0)\n      expect(mockEnvironment.getTags.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getWebhooks.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getRoles.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(getEditorInterface.mock.calls).toHaveLength(0)\n      expect(response.data.contentTypes).toBeUndefined()\n      expect(response.data.entries).toHaveLength(resultItemCount / 2)\n      expect(response.data.assets).toHaveLength(resultItemCount / 2)\n      expect(response.data.locales).toBeUndefined()\n      expect(response.data.tags).toHaveLength(resultItemCount)\n      expect(response.data.webhooks).toHaveLength(resultItemCount)\n      expect(response.data.roles).toHaveLength(resultItemCount)\n      expect(response.data.editorInterfaces).toBeUndefined()\n 
   })\n})\n\ntest('Gets whole destination content without content', () => {\n  return getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    maxAllowedLimit,\n    skipContent: true\n  })\n    .run({\n      data: {}\n    })\n    .then((response) => {\n      expect(mockClient.getSpace.mock.calls).toHaveLength(1)\n      expect(mockSpace.getEnvironment.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getContentTypes.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getEntries.mock.calls).toHaveLength(0)\n      expect(mockEnvironment.getAssets.mock.calls).toHaveLength(0)\n      expect(mockEnvironment.getLocales.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getTags.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getWebhooks.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getRoles.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(getEditorInterface.mock.calls).toHaveLength(resultItemCount)\n      expect(response.data.contentTypes).toHaveLength(resultItemCount)\n      expect(response.data.entries).toBeUndefined()\n      expect(response.data.assets).toBeUndefined()\n      expect(response.data.locales).toHaveLength(resultItemCount)\n      expect(response.data.tags).toHaveLength(resultItemCount)\n      expect(response.data.webhooks).toHaveLength(resultItemCount)\n      expect(response.data.roles).toHaveLength(resultItemCount)\n      expect(response.data.editorInterfaces).toHaveLength(resultItemCount)\n    })\n})\n\ntest('Gets whole destination content without webhooks', () => {\n  return getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    maxAllowedLimit,\n    skipWebhooks: true\n  })\n    .run({\n      data: {}\n    })\n    .then((response) => {\n      
expect(mockClient.getSpace.mock.calls).toHaveLength(1)\n      expect(mockSpace.getEnvironment.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getContentTypes.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getEntries.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getAssets.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getLocales.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getTags.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getWebhooks.mock.calls).toHaveLength(0)\n      expect(mockSpace.getRoles.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(getEditorInterface.mock.calls).toHaveLength(resultItemCount)\n      expect(response.data.contentTypes).toHaveLength(resultItemCount)\n      expect(response.data.entries).toHaveLength(resultItemCount / 2)\n      expect(response.data.assets).toHaveLength(resultItemCount / 2)\n      expect(response.data.locales).toHaveLength(resultItemCount)\n      expect(response.data.tags).toHaveLength(resultItemCount)\n      expect(response.data.webhooks).toBeUndefined()\n      expect(response.data.roles).toHaveLength(resultItemCount)\n      expect(response.data.editorInterfaces).toHaveLength(resultItemCount)\n    })\n})\n\ntest('Gets whole destination content without roles', () => {\n  return getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    maxAllowedLimit,\n    skipRoles: true\n  })\n    .run({\n      data: {}\n    })\n    .then((response) => {\n      expect(mockClient.getSpace.mock.calls).toHaveLength(1)\n      expect(mockSpace.getEnvironment.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getContentTypes.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      
expect(mockEnvironment.getEntries.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getAssets.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getLocales.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getTags.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getWebhooks.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getRoles.mock.calls).toHaveLength(0)\n      expect(getEditorInterface.mock.calls).toHaveLength(resultItemCount)\n      expect(response.data.contentTypes).toHaveLength(resultItemCount)\n      expect(response.data.entries).toHaveLength(resultItemCount / 2)\n      expect(response.data.assets).toHaveLength(resultItemCount / 2)\n      expect(response.data.locales).toHaveLength(resultItemCount)\n      expect(response.data.tags).toHaveLength(resultItemCount)\n      expect(response.data.webhooks).toHaveLength(resultItemCount)\n      expect(response.data.roles).toBeUndefined()\n      expect(response.data.editorInterfaces).toHaveLength(resultItemCount)\n    })\n})\n\ntest('Gets whole destination content without editor interfaces', () => {\n  return getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    maxAllowedLimit,\n    skipEditorInterfaces: true\n  })\n    .run({\n      data: {}\n    })\n    .then((response) => {\n      expect(mockClient.getSpace.mock.calls).toHaveLength(1)\n      expect(mockSpace.getEnvironment.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getContentTypes.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getEntries.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getAssets.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      
expect(mockEnvironment.getLocales.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getTags.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getWebhooks.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getRoles.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(getEditorInterface.mock.calls).toHaveLength(0)\n      expect(response.data.contentTypes).toHaveLength(resultItemCount)\n      expect(response.data.entries).toHaveLength(resultItemCount / 2)\n      expect(response.data.assets).toHaveLength(resultItemCount / 2)\n      expect(response.data.locales).toHaveLength(resultItemCount)\n      expect(response.data.tags).toHaveLength(resultItemCount)\n      expect(response.data.webhooks).toHaveLength(resultItemCount)\n      expect(response.data.roles).toHaveLength(resultItemCount)\n      expect(response.data.editorInterfaces).toBeUndefined()\n    })\n})\n\ntest('Gets whole destination content without tags', () => {\n  return getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    maxAllowedLimit,\n    skipTags: true\n  })\n    .run({\n      data: {}\n    })\n    .then((response) => {\n      expect(mockClient.getSpace.mock.calls).toHaveLength(1)\n      expect(mockSpace.getEnvironment.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getContentTypes.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getEntries.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getAssets.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getLocales.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getTags.mock.calls).toHaveLength(0)\n      
expect(mockSpace.getWebhooks.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getRoles.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(getEditorInterface.mock.calls).toHaveLength(resultItemCount)\n      expect(response.data.contentTypes).toHaveLength(resultItemCount)\n      expect(response.data.entries).toHaveLength(resultItemCount / 2)\n      expect(response.data.assets).toHaveLength(resultItemCount / 2)\n      expect(response.data.locales).toHaveLength(resultItemCount)\n      expect(response.data.tags).toBeUndefined()\n      expect(response.data.webhooks).toHaveLength(resultItemCount)\n      expect(response.data.roles).toHaveLength(resultItemCount)\n      expect(response.data.editorInterfaces).toHaveLength(resultItemCount)\n    })\n})\n\ntest('Gets whole destination content with drafts', () => {\n  return getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    maxAllowedLimit,\n    includeDrafts: true\n  })\n    .run({\n      data: {}\n    })\n    .then((response) => {\n      expect(mockClient.getSpace.mock.calls).toHaveLength(1)\n      expect(mockSpace.getEnvironment.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getContentTypes.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getEntries.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getAssets.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getLocales.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getTags.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getWebhooks.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getRoles.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      
expect(getEditorInterface.mock.calls).toHaveLength(resultItemCount)\n      expect(response.data.contentTypes).toHaveLength(resultItemCount)\n      expect(response.data.entries).toHaveLength(resultItemCount)\n      expect(response.data.assets).toHaveLength(resultItemCount)\n      expect(response.data.locales).toHaveLength(resultItemCount)\n      expect(response.data.tags).toHaveLength(resultItemCount)\n      expect(response.data.webhooks).toHaveLength(resultItemCount)\n      expect(response.data.roles).toHaveLength(resultItemCount)\n      expect(response.data.editorInterfaces).toHaveLength(resultItemCount)\n    })\n})\n\ntest('Gets whole destination content with archived entries', () => {\n  return getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    maxAllowedLimit,\n    includeDrafts: true,\n    includeArchived: true\n  })\n    .run({\n      data: {}\n    })\n    .then((response) => {\n      expect(mockClient.getSpace.mock.calls).toHaveLength(1)\n      expect(mockSpace.getEnvironment.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getContentTypes.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getEntries.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getAssets.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getLocales.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getTags.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getWebhooks.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getRoles.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(getEditorInterface.mock.calls).toHaveLength(resultItemCount)\n      expect(response.data.contentTypes).toHaveLength(resultItemCount)\n      
expect(response.data.entries).toHaveLength(resultItemCount)\n      expect(response.data.assets).toHaveLength(resultItemCount)\n      expect(response.data.locales).toHaveLength(resultItemCount)\n      expect(response.data.tags).toHaveLength(resultItemCount)\n      expect(response.data.webhooks).toHaveLength(resultItemCount)\n      expect(response.data.roles).toHaveLength(resultItemCount)\n      expect(response.data.editorInterfaces).toHaveLength(resultItemCount)\n    })\n})\n\ntest('Skips webhooks & roles for non-master environments', () => {\n  return getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    environmentId: 'staging',\n    maxAllowedLimit,\n    includeDrafts: true\n  })\n    .run({\n      data: {}\n    })\n    .then((response) => {\n      expect(mockClient.getSpace.mock.calls).toHaveLength(1)\n      expect(mockSpace.getEnvironment.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getContentTypes.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getEntries.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getAssets.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getLocales.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockEnvironment.getTags.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(mockSpace.getWebhooks.mock.calls).toHaveLength(0)\n      expect(mockSpace.getRoles.mock.calls).toHaveLength(0)\n      expect(getEditorInterface.mock.calls).toHaveLength(resultItemCount)\n      expect(response.data.contentTypes).toHaveLength(resultItemCount)\n      expect(response.data.entries).toHaveLength(resultItemCount)\n      expect(response.data.assets).toHaveLength(resultItemCount)\n      expect(response.data.locales).toHaveLength(resultItemCount)\n      expect(response.data.tags).toHaveLength(resultItemCount)\n   
   expect(response.data).not.toHaveProperty('webhooks')\n      expect(response.data).not.toHaveProperty('roles')\n      expect(response.data.editorInterfaces).toHaveLength(resultItemCount)\n    })\n})\n\ntest('Gets whole destination content and detects missing editor interfaces', () => {\n  getEditorInterface.mockImplementation(() => Promise.reject(new Error('No editor interface found')))\n\n  return getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    maxAllowedLimit,\n    skipContent: true,\n    skipWebhooks: true,\n    skipRoles: true\n  })\n    .run({\n      data: {}\n    })\n    .then((response) => {\n      expect(mockClient.getSpace.mock.calls).toHaveLength(1)\n      expect(mockSpace.getEnvironment.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getContentTypes.mock.calls).toHaveLength(Math.ceil(resultItemCount / maxAllowedLimit))\n      expect(getEditorInterface.mock.calls).toHaveLength(resultItemCount)\n      expect(response.data.contentTypes).toHaveLength(resultItemCount)\n      expect(response.data.editorInterfaces).toHaveLength(0)\n    })\n})\n\ntest('Skips editor interfaces since no content types are found', () => {\n  mockEnvironment.getContentTypes.mockImplementation(() => Promise.resolve({\n    items: [],\n    total: 0\n  }))\n\n  return getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    maxAllowedLimit,\n    skipContent: true,\n    skipWebhooks: true,\n    skipRoles: true\n  })\n    .run({\n      data: {}\n    })\n    .then((response) => {\n      expect(mockClient.getSpace.mock.calls).toHaveLength(1)\n      expect(mockSpace.getEnvironment.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getContentTypes.mock.calls).toHaveLength(1)\n      expect(getEditorInterface.mock.calls).toHaveLength(0)\n      expect(response.data.contentTypes).toHaveLength(0)\n      expect(response.data.editorInterfaces).toBeUndefined()\n    })\n})\n\ntest('Loads 1000 items per page by default', () => {\n  return 
getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    skipContent: true,\n    skipWebhooks: true,\n    skipRoles: true\n  })\n    .run({\n      data: {}\n    })\n    .then((response) => {\n      expect(mockClient.getSpace.mock.calls).toHaveLength(1)\n      expect(mockSpace.getEnvironment.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getContentTypes.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getContentTypes.mock.calls[0][0].limit).toBe(1000)\n      expect(getEditorInterface.mock.calls).toHaveLength(resultItemCount)\n      expect(response.data.contentTypes).toHaveLength(resultItemCount)\n      expect(response.data.editorInterfaces).toHaveLength(resultItemCount)\n    })\n})\n\ntest('Query entry/asset respect limit query param', () => {\n  // overwrite the getAssets mock so maxItems is larger than default page size in pagedGet (get-space-data.js)\n  mockEnvironment.getAssets = jest.fn((query) => {\n    return Promise.resolve(pagedContentResult(query, 2000, mockEntry))\n  })\n  return getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    skipContentModel: true,\n    skipWebhooks: true,\n    skipRoles: true,\n    includeDrafts: true,\n    queryEntries: { limit: 20 }, // test limit < pageSize\n    queryAssets: { limit: 1001 } // test limit > pageSize\n  })\n    .run({\n      data: {}\n    })\n    .then((response) => {\n      expect(mockClient.getSpace.mock.calls).toHaveLength(1)\n      expect(mockSpace.getEnvironment.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getEntries.mock.calls[0][0].limit).toBe(20)\n      expect(mockEnvironment.getAssets.mock.calls[0][0].limit).toBe(1000) // assets should be called 2x\n      expect(mockEnvironment.getAssets.mock.calls[1][0].limit).toBe(1) // because it has to fetch the final item in the second page\n      expect(response.data.assets).toHaveLength(1001)\n      expect(response.data.entries).toHaveLength(20)\n    })\n})\n\ntest('only skips fetched items', () => {\n 
 // overwrite getLocales so it returns two pages of 10 and 7 items\n  mockEnvironment.getLocales = jest.fn()\n    .mockResolvedValueOnce({\n      items: Array.from({ length: 10 }, (_, n) => {\n        const id = n + 1\n        return { sys: { id } }\n      }),\n      total: 20\n    })\n    .mockResolvedValueOnce({\n      items: Array.from({ length: 7 }, (_, n) => {\n        const id = n + 11\n        return { sys: { id } }\n      }),\n      total: 17\n    })\n  return getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    skipContent: true,\n    skipWebhooks: true,\n    skipRoles: true\n  })\n    .run({\n      data: {}\n    })\n    .then(() => {\n      expect(mockClient.getSpace.mock.calls).toHaveLength(1)\n      expect(mockSpace.getEnvironment.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getLocales.mock.calls).toHaveLength(2)\n      expect(mockEnvironment.getLocales.mock.calls[0][0].limit).toBe(1000)\n      expect(mockEnvironment.getLocales.mock.calls[0][0].skip).toBe(0)\n      expect(mockEnvironment.getLocales.mock.calls[1][0].limit).toBe(1000)\n      expect(mockEnvironment.getLocales.mock.calls[1][0].skip).toBe(10)\n    })\n})\n\ntest('halts fetching when no items in page', () => {\n  // overwrite getLocales so it returns an empty page\n  mockEnvironment.getLocales = jest.fn()\n    .mockResolvedValueOnce({\n      items: [],\n      total: 20\n    })\n  return getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    skipContent: true,\n    skipWebhooks: true,\n    skipRoles: true\n  })\n    .run({\n      data: {}\n    })\n    .then(() => {\n      expect(mockClient.getSpace.mock.calls).toHaveLength(1)\n      expect(mockSpace.getEnvironment.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getLocales.mock.calls).toHaveLength(1)\n      expect(mockEnvironment.getLocales.mock.calls[0][0].limit).toBe(1000)\n      expect(mockEnvironment.getLocales.mock.calls[0][0].skip).toBe(0)\n    })\n})\n\ntest('Strips 
tags from entries and assets', () => {\n  return getSpaceData({\n    client: mockClient,\n    spaceId: 'spaceid',\n    maxAllowedLimit,\n    stripTags: true\n  })\n    .run({\n      data: {}\n    })\n    .then((response) => {\n      expect(response.data.entries).toHaveLength(resultItemCount / 2)\n      const hasAssetsWithTags = response.data.assets.some(asset => asset.metadata?.tags?.length > 0)\n      expect(hasAssetsWithTags).toBe(false)\n      const hasEntryWithTags = response.data.entries.some(entry => entry.metadata?.tags?.length > 0)\n      expect(hasEntryWithTags).toBe(false)\n    })\n})\n"
  },
  {
    "path": "test/unit/tasks/init-client.test.js",
    "content": "import initClient from '../../../lib/tasks/init-client'\n\nimport contentfulManagement from 'contentful-management'\nimport contentful from 'contentful'\nimport { logEmitter } from 'contentful-batch-libs'\n\njest.mock('contentful-management', () => {\n  return {\n    createClient: jest.fn(() => 'cmaClient')\n  }\n})\n\njest.mock('contentful', () => {\n  return {\n    createClient: jest.fn(() => 'cdaClient')\n  }\n})\n\njest.mock('contentful-batch-libs', () => {\n  return {\n    logEmitter: {\n      emit: jest.fn()\n    }\n  }\n})\n\ntest('does create clients and passes custom logHandler', () => {\n  const opts = {\n    httpAgent: 'httpAgent',\n    httpsAgent: 'httpsAgent',\n    application: 'application',\n    headers: 'headers',\n    host: 'host',\n    insecure: 'insecure',\n    integration: 'integration',\n    port: 'port',\n    proxy: 'proxy',\n    accessToken: 'accessToken',\n    spaceId: 'spaceId'\n  }\n\n  initClient(opts)\n\n  expect(contentfulManagement.createClient.mock.calls[0][0]).toMatchObject({\n    accessToken: opts.accessToken,\n    host: opts.host,\n    port: opts.port,\n    headers: opts.headers,\n    insecure: opts.insecure,\n    proxy: opts.proxy,\n    httpAgent: opts.httpAgent,\n    httpsAgent: opts.httpsAgent,\n    application: opts.application,\n    integration: opts.integration\n  })\n  expect(contentfulManagement.createClient.mock.calls[0][0]).toHaveProperty('logHandler')\n  expect(contentfulManagement.createClient.mock.calls[0][0].timeout).toEqual(10000)\n  expect(contentfulManagement.createClient.mock.calls).toHaveLength(1)\n  expect(contentful.createClient.mock.calls).toHaveLength(0)\n\n  // Call passed log handler\n  contentfulManagement.createClient.mock.calls[0][0].logHandler('level', 'logMessage')\n\n  expect(logEmitter.emit.mock.calls[0][0]).toBe('level')\n  expect(logEmitter.emit.mock.calls[0][1]).toBe('logMessage')\n})\n\ntest('does create both clients when deliveryToken is set', () => {\n  const opts = {\n    
httpAgent: 'httpAgent',\n    httpsAgent: 'httpsAgent',\n    application: 'application',\n    headers: 'headers',\n    host: 'host',\n    insecure: 'insecure',\n    integration: 'integration',\n    port: 'port',\n    proxy: 'proxy',\n    accessToken: 'accessToken',\n    spaceId: 'spaceId',\n    deliveryToken: 'deliveryToken',\n    hostDelivery: 'hostDelivery'\n  }\n\n  initClient(opts, true)\n\n  expect(contentfulManagement.createClient.mock.calls[0][0]).toMatchObject({\n    accessToken: opts.accessToken,\n    host: opts.host,\n    port: opts.port,\n    headers: opts.headers,\n    insecure: opts.insecure,\n    proxy: opts.proxy,\n    httpAgent: opts.httpAgent,\n    httpsAgent: opts.httpsAgent,\n    application: opts.application,\n    integration: opts.integration\n  })\n  expect(contentful.createClient.mock.calls[0][0]).toMatchObject({\n    space: opts.spaceId,\n    accessToken: opts.deliveryToken,\n    host: opts.hostDelivery,\n    port: opts.port,\n    headers: opts.headers,\n    insecure: opts.insecure,\n    proxy: opts.proxy,\n    httpAgent: opts.httpAgent,\n    httpsAgent: opts.httpsAgent,\n    application: opts.application,\n    integration: opts.integration\n  })\n  expect(contentfulManagement.createClient.mock.calls).toHaveLength(1)\n  expect(contentful.createClient.mock.calls).toHaveLength(1)\n})\n"
  },
  {
    "path": "test/unit/utils/embargoedAssets.test.js",
    "content": "import { shouldCreateNewCacheItem } from '../../../lib/utils/embargoedAssets'\nconst SIX_HOURS_IN_MS = 6 * 60 * 60 * 1000\n\ntest('only returns true for expiry time difference greater than 6 hours', () => {\n  expect(shouldCreateNewCacheItem({ expiresAtMs: 1 }, SIX_HOURS_IN_MS - 2)).toBe(false)\n  expect(shouldCreateNewCacheItem({ expiresAtMs: 1 }, SIX_HOURS_IN_MS + 2)).toBe(true)\n})\n"
  },
  {
    "path": "test/unit/utils/headers.test.js",
    "content": "import { getHeadersConfig } from '../../../lib/utils/headers'\n\ntest('getHeadersConfig returns empty object when value is undefined', () => {\n  expect(getHeadersConfig(undefined)).toEqual({})\n})\n\ntest('getHeadersConfig accepts single or multiple values', () => {\n  expect(getHeadersConfig('Accept: Any')).toEqual({ Accept: 'Any' })\n  expect(getHeadersConfig(['Accept: Any', 'X-Version: 1'])).toEqual({\n    Accept: 'Any',\n    'X-Version': '1'\n  })\n})\n\ntest('getHeadersConfig ignores invalid headers', () => {\n  expect(\n    getHeadersConfig(['Accept: Any', 'X-Version: 1', 'invalid'])\n  ).toEqual({\n    Accept: 'Any',\n    'X-Version': '1'\n  })\n})\n\ntest('getHeadersConfig trims spacing around keys & values', () => {\n  expect(\n    getHeadersConfig([\n      '  Accept:   Any   ',\n      '   X-Version   :1 ',\n      'invalid'\n    ])\n  ).toEqual({\n    Accept: 'Any',\n    'X-Version': '1'\n  })\n})\n"
  },
  {
    "path": "tsconfig.json",
    "content": "{\n  \"compilerOptions\": {\n    \"target\": \"ESNext\",\n    \"module\": \"commonjs\",\n    \"esModuleInterop\": true,\n    \"forceConsistentCasingInFileNames\": false,\n    \"strict\": false,\n    \"skipLibCheck\": true,\n    \"allowJs\": true,\n    \"checkJs\": true,\n    \"resolveJsonModule\": true,\n    \"noEmit\": true\n  },\n  \"include\": [\"./lib\", \"./bin\"]\n}\n"
  },
  {
    "path": "types.d.ts",
    "content": "export interface Options {\n  managementToken: string;\n  spaceId: string;\n  contentFile?: string;\n  contentOnly?: boolean;\n  deliveryToken?: string;\n  downloadAssets?: boolean;\n  environmentId?: string;\n  errorLogFile?: string;\n  exportDir?: string;\n  headers?: string[];\n  host?: string;\n  includeArchived?: boolean;\n  includeDrafts?: boolean;\n  limit?: number;\n  managementApplication?: string;\n  managementFeature?: string;\n  maxAllowedLimit?: number;\n  proxy?: string;\n  queryEntries?: string[];\n  queryAssets?: string[];\n  rawProxy?: boolean;\n  saveFile?: boolean;\n  skipContent?: boolean;\n  skipContentModel?: boolean;\n  skipEditorInterfaces?: boolean;\n  skipRoles?: boolean;\n  skipWebhooks?: boolean;\n  skipTags?: boolean;\n  useVerboseRenderer?: boolean;\n}\n\ntype ContentfulExportField = 'contentTypes' | 'entries' | 'assets' | 'locales' | 'tags' | 'webhooks' | 'roles' | 'editorInterfaces';\n\ndeclare const runContentfulExport: (params: Options) => Promise<Record<ContentfulExportField, unknown[]>>\nexport default runContentfulExport\n"
  }
]