[
  {
    "path": ".github/FUNDING.yml",
    "content": "# These are supported funding model platforms\n\ngithub: [DahnM20]\npatreon: # Replace with a single Patreon username\nopen_collective: # Replace with a single Open Collective username\nko_fi: # Replace with a single Ko-fi username\ntidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel\ncommunity_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry\nliberapay: # Replace with a single Liberapay username\nissuehunt: # Replace with a single IssueHunt username\notechie: # Replace with a single Otechie username\nlfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry\ncustom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']\n"
  },
  {
    "path": ".github/workflows/main.yml",
    "content": "name: Docker Compose Build | Healthcheck | Tests\n\non:\n  push:\n    branches:\n      - main\n      - develop\n      - develop-features-0.8.1\n\njobs:\n  build_and_test:\n    runs-on: ubuntu-latest\n\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v2\n\n      - name: Move to docker directory and run docker compose\n        run: |\n          cd docker\n          docker compose -f docker-compose.it.yml up -d\n\n      - name: Run healthcheck script\n        run: |\n          cd docker\n          chmod +x healthcheck.sh\n          ./healthcheck.sh http://localhost:5000/healthcheck\n\n      - name: Print Docker logs\n        if: failure()\n        run: |\n          cd docker\n          docker compose logs\n\n      - name: Run UI unit tests\n        run: |\n          cd packages/ui\n          npm i\n          npm run test\n\n      - name: Run Python unit tests\n        run: |\n          docker exec ai-flow-backend python -m unittest discover -s tests/unit -p '*test_*.py'\n\n      - name: Run integration tests\n        run: |\n          cd integration_tests\n          npm i\n          npm run test\n      \n      - name: Print Docker logs\n        if: failure()\n        run: |\n          cd docker\n          docker compose logs"
  },
  {
    "path": ".gitignore",
    "content": "packages/backend/.env\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2023 Dahn\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "<p align=\"center\">\n  <img src=\"assets/header.png\" alt=\"AI-Flow Logo\" />\n</p>\n\n<p align=\"center\">\n  <em>Open-source tool to seamlessly connect multiple AI model APIs into repeatable workflows.</em>\n</p>\n\n<p align=\"center\">\n  <a href=\"https://docs.ai-flow.net/?ref=github\"><img src=\"https://img.shields.io/badge/lang-English-blue.svg\" alt=\"English\"></a>\n  <a href=\"https://docs.ai-flow.net/?ref=github\"><img src=\"https://img.shields.io/badge/lang-French-blue.svg\" alt=\"French\"></a>\n  <img src=\"https://img.shields.io/badge/License-MIT-yellow.svg\">\n  <img src=\"https://img.shields.io/github/v/release/DahnM20/ai-flow\">\n  <a href=\"https://twitter.com/DahnM20\"><img src=\"https://img.shields.io/twitter/follow/AI-Flow?style=social\" alt=\"Follow on Twitter\"></a>\n</p>\n\n<p align=\"center\">\n  <a href=\"https://ai-flow.net/?ref=github\">🔗 Website</a> • \n  <a href=\"https://docs.ai-flow.net/?ref=github\">📚 Documentation</a>\n</p>\n\n---\n\n<div align=\"center\">\n  🎉🚀 Latest Release: v0.11.3 🚀🎉\n\n  <br>\n  Nodes Updated : Web search can be enabled on GPT node, Claude 4 available\n  <br>\n  UI : Node Search Bar, Shortcut for Popular Replicate Models\n  \n  <br>\n  New Models available : Flux Kontext, Veo 3, Lyria 2, Imagen 4 available through the Replicate Node\n</div>\n\n---\n\n![AI-Flow Intro](assets/flow-example-3.png)\n\n## Overview\n\n**AI-Flow** is an open-source, user-friendly UI that lets you visually design, manage, and monitor AI-driven workflows by seamlessly connecting multiple AI model APIs (e.g., OpenAI, StabilityAI, Replicate, Claude, Deepseek).\n\n## Features\n\n- **Visual Workflow Builder:** Drag-and-drop interface for crafting AI workflows.\n- **Real-Time Monitoring:** Watch your workflow execute and track results.\n- **Parallel Processing:** Nodes run in parallel whenever possible.\n- **Model Management:** Easily organize and manage diverse AI models.\n- **Import/Export:** Share or back up your workflows effortlessly.\n\n## Supported Models\n\n- **Replicate:** All models available through the Replicate API (FLUX.1, FLUX.1 Kontext, Imagen 4, Veo 3, Lyria 2, and many more)\n- **OpenAI:** GPT-4o, GPT-4.1, TTS, o1, o3, o4.\n- **StabilityAI:** Stable Diffusion 3.5, SDXL, Stable Video Diffusion, plus additional tools.\n- **Others:** Claude, Deepseek, OpenRouter.\n\n![Scenario Example](assets/flow-example-2.png)\n\n## Open Source vs. Cloud\n\n**AI-Flow** is fully open source and available under the MIT License, empowering you to build and run your AI workflows on your personal machine.\n\nFor those seeking enhanced functionality and a polished experience, **AI-Flow Pro** on our cloud platform ([app.ai-flow.net](https://ai-flow.net/?ref=github)) offers advanced features, including:\n\n- **Subflows & Loops:** Create complex, nested workflows and iterate tasks effortlessly.\n- **API-Triggered Flows:** Initiate workflows via API calls for seamless automation.\n- **Integrated Services:** Connect with external services such as Google Search, Airtable, Zapier, and Make.\n- **Simplified Interface:** Transform workflows into streamlined tools with an intuitive UI.\n\n![Pro VS Open Source](assets/comparison-pro-vs-opensource-v2.png)\n\nThe cloud version builds upon the foundation of the open-source project, giving you more power and flexibility while still letting you use your own API keys.\n\n## Installation\n\n> **Note:** To unlock full functionality, AI-Flow requires S3-compatible storage (with proper CORS settings) to host resources. 
Without it, features like File Upload or nodes that rely on external providers (e.g., StabilityAI) may not work as expected. Also, set `REPLICATE_API_KEY` in the App Parameters or in your environment to use the Replicate node.\n\n### Method 1: Using the Executable (Windows Only)\n\n> **Note:** This method is only available for Windows users.\n\n1. Download the latest Windows version of AI-Flow from the official releases page: [AI-Flow Releases](https://ai-flow.net/release/)\n2. Once downloaded, run the `.exe` file.\n\nThis will start a local server and open AI-Flow in a standalone window, giving you direct access to its user interface without needing to install anything else.\n\n### Method 2 : Docker Installation\n\n1. **Prepare Docker Compose:**\n\n   - Navigate to the `docker` directory:\n     ```bash\n     cd docker\n     ```\n\n2. **Launch with Docker Compose:**\n   ```bash\n   docker-compose up -d\n   ```\n3. **Access the Application:**\n   - Open [http://localhost:80](http://localhost:80) in your browser.\n   - To stop, run:\n     ```bash\n     docker-compose stop\n     ```\n\n### Method 3 : Local Installation\n\n1. **Clone the Repository:**\n\n   ```bash\n   git clone https://github.com/DahnM20/ai-flow.git\n   cd ai-flow\n   ```\n\n2. **UI Setup:**\n\n   ```bash\n   cd packages/ui\n   npm install\n   ```\n\n3. **Backend Setup:**\n\n   ```bash\n   cd ../backend\n   poetry install\n   ```\n\n   - **Windows Users:**\n     ```bash\n     poetry shell\n     pip install -r requirements_windows.txt\n     ```\n\n4. **Run the Application:**\n   - Start the backend:\n     ```bash\n     poetry run python server.py\n     ```\n   - In a new terminal, start the UI:\n     ```bash\n     cd packages/ui\n     npm start\n     ```\n   - Open your browser and navigate to [http://localhost:3000](http://localhost:3000).\n\n## Contributing\n\nWe welcome contributions! If you encounter issues or have feature ideas, please [open an issue](https://github.com/DahnM20/ai-flow/issues) or submit a pull request.\n\n## License\n\nThis project is released under the [MIT License](LICENSE).\n"
  },
  {
    "path": "bin/generate_python_classes_from_ts.sh",
    "content": "npm i -g typescript-json-schema \ntypescript-json-schema \"../packages/ui/src/nodes-configuration/types.ts\" \"*\" --out \"schema.json\"\nmv schema.json ../packages/backend/app/processors/components/\ncd ../packages/backend/app/processors/components/\npoetry run datamodel-codegen --input schema.json --input-file-type jsonschema --output model.py --output-model-type pydantic_v2.BaseModel --enum-field-as-literal all\nrm schema.json\necho \"model.py generated\""
  },
  {
    "path": "docker/README.md",
    "content": "## 🐳 Docker\n\n### Docker Compose\n\n1. Go to the docker directory: `cd ./docker`\n2. Update the .yml if needed for the PORTS\n3. Launch `docker-compose up` or `docker-compose up -d`\n4. Open your browser and navigate to `http://localhost:3000`\n5. Use `docker-compose stop` when you want to stop the app. "
  },
  {
    "path": "docker/docker-compose.it.yml",
    "content": "services:\n  backend:\n    container_name: ai-flow-backend\n    build:\n      context: ../packages/backend/\n      dockerfile: Dockerfile\n    ports:\n      - 5000:5000\n    environment:\n      - HOST=0.0.0.0\n      - PORT=5000\n      - DEPLOYMENT_ENV=LOCAL\n      - LOCAL_STORAGE_FOLDER_NAME=local_storage\n      - USE_MOCK=true\n    volumes:\n      - ./ai-flow-backend-storage:/app/local_storage\n\n  frontend:\n    container_name: ai-flow-frontend\n    build:\n      context: ../packages/ui/\n      dockerfile: Dockerfile\n    ports:\n      - 80:80\n    environment:\n      - VITE_APP_WS_HOST=localhost\n      - VITE_APP_WS_PORT=5000\n"
  },
  {
    "path": "docker/docker-compose.yml",
    "content": "services:\n  backend:\n    container_name: ai-flow-backend\n    build:\n      context: ../packages/backend/\n      dockerfile: Dockerfile\n    ports:\n      - 5001:5000\n    environment:\n      - HOST=0.0.0.0\n      - PORT=5000\n      - DEPLOYMENT_ENV=LOCAL\n      - REPLICATE_API_KEY=sample\n      - LOCAL_STORAGE_FOLDER_NAME=local_storage\n    volumes:\n      - ./ai-flow-backend-storage:/app/local_storage\n\n  frontend:\n    container_name: ai-flow-frontend\n    build:\n      context: ../packages/ui/\n      dockerfile: Dockerfile\n      args:\n        VITE_APP_WS_HOST: localhost\n        VITE_APP_WS_PORT: 5001\n        VITE_APP_API_REST_PORT: 5001\n    ports:\n      - 80:80\n"
  },
  {
    "path": "docker/healthcheck.sh",
    "content": "#!/bin/bash\n\nif [ \"$#\" -ne 1 ]; then\n    echo \"Usage: $0 <URL>\"\n    exit 1\nfi\n\nURL=\"$1\"\nINTERVAL=5\nMAX_ATTEMPTS=20 \n\nattempt=0\nwhile [ $attempt -lt $MAX_ATTEMPTS ]; do\n  attempt=$(( $attempt + 1 ))\n  \n  curl --fail --silent $URL && echo \"Service is up!\" && exit 0\n  \n  echo \"Service not ready yet. Waiting for $INTERVAL seconds. Attempt $attempt of $MAX_ATTEMPTS.\"\n  sleep $INTERVAL\ndone\n\necho \"Service did not become ready after $MAX_ATTEMPTS attempts.\"\nexit 1"
  },
  {
    "path": "integration_tests/.gitignore",
    "content": "# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.\n\n# dependencies\n/node_modules\n/.pnp\n.pnp.js\n\n# testing\n/coverage\n\n# production\n/dist\n\n# misc\n.DS_Store\n.env.local\n.env.development.local\n.env.test.local\n.env.production.local\n\nnpm-debug.log*\nyarn-debug.log*\nyarn-error.log*\n"
  },
  {
    "path": "integration_tests/package.json",
    "content": "{\n  \"name\": \"integration_tests\",\n  \"version\": \"1.0.0\",\n  \"description\": \"\",\n  \"main\": \"dist/index.js\",\n  \"scripts\": {\n    \"test\": \"mocha dist/tests/**/*Test.js\",\n    \"build\": \"tsc\",\n    \"pretest\": \"npm run build\"\n  },\n  \"author\": \"\",\n  \"license\": \"ISC\",\n  \"dependencies\": {\n    \"axios\": \"^1.5.0\",\n    \"chai\": \"^4.3.8\",\n    \"mocha\": \"^10.2.0\",\n    \"socket.io-client\": \"^4.7.2\"\n  },\n  \"devDependencies\": {\n    \"@types/chai\": \"^4.3.6\",\n    \"@types/minimist\": \"^1.2.2\",\n    \"@types/mocha\": \"^10.0.1\",\n    \"@types/node\": \"^20.5.9\",\n    \"@types/normalize-package-data\": \"^2.4.1\",\n    \"@types/socket.io-client\": \"^3.0.0\",\n    \"typescript\": \"^5.2.2\"\n  }\n}"
  },
  {
    "path": "integration_tests/tests/nodeProcessingOrder/nodeErrorTest.ts",
    "content": "import { expect } from \"chai\";\nimport {\n  disconnectSocket,\n  getSocket,\n  setupSocket,\n} from \"../../utils/testHooks\";\nimport {\n  createRequestData,\n  flowWithFourParallelNodeStep,\n  flowWithoutLinks,\n  sequentialFlow,\n} from \"../../utils/requestDatas\";\n\ndescribe(\"Node errors test\", function () {\n  this.timeout(30000);\n\n  beforeEach(function (done) {\n    setupSocket(done);\n  });\n\n  afterEach(function () {\n    disconnectSocket();\n  });\n\n  it(\"Error in sequential flow should stop the flow\", function (done) {\n    const socket = getSocket();\n\n    const flowNodesWithOneError = structuredClone(sequentialFlow);\n    flowNodesWithOneError[1] = {\n      ...flowNodesWithOneError[1],\n      raiseError: true,\n    };\n\n    socket.emit(\"process_file\", createRequestData(flowNodesWithOneError));\n\n    let errorReceived = false;\n    let progressCount = 0;\n    const progressBeforeError = 1;\n\n    socket.on(\"error\", (error) => {\n      errorReceived = true;\n      expect(progressCount).to.equal(progressBeforeError);\n    });\n\n    socket.on(\"progress\", (progress) => {\n      if (errorReceived) {\n        done(new Error(\"Received progress after error\"));\n      }\n      progressCount++;\n      if (progressCount > progressBeforeError) {\n        done(new Error(`Too many nodes sent progress`));\n      }\n    });\n\n    setTimeout(() => {\n      if (!errorReceived) {\n        socket.disconnect();\n        done(new Error(\"No error received within the expected time\"));\n      } else {\n        done();\n      }\n    }, 10000);\n  });\n\n  it(\"Error in flow without link should run the others nodes\", function (done) {\n    const socket = getSocket();\n\n    const flow = structuredClone(sequentialFlow);\n    flow[1] = {\n      ...flow[1],\n      raiseError: true,\n    };\n\n    socket.emit(\"process_file\", createRequestData(flow));\n\n    let errorReceived = false;\n    let progressCount = 0;\n    const maxProgressBeforeError = 2;\n\n    socket.on(\"error\", (error) => {\n      errorReceived = true;\n      expect(progressCount).to.be.at.most(maxProgressBeforeError);\n    });\n\n    socket.on(\"progress\", (progress) => {\n      progressCount++;\n      if (progressCount > maxProgressBeforeError) {\n        done(new Error(`Too many nodes sent progress`));\n      }\n    });\n\n    setTimeout(() => {\n      if (errorReceived && progressCount === 1) {\n        socket.disconnect();\n        done();\n      } else {\n        done(new Error(\"No error received within the expected time\"));\n      }\n    }, 2000);\n  });\n\n  it(\"Error in flow with 4 parallel node should return result for all of them except the one in error\", function (done) {\n    const socket = getSocket();\n\n    const flow = structuredClone(flowWithFourParallelNodeStep);\n\n    flow[4] = {\n      ...flow[4],\n      raiseError: true,\n    };\n\n    flow[1] = {\n      ...flow[1],\n      sleepDuration: 2,\n    }; //One node with high delay to be sure he is processed during the error raise\n\n    socket.emit(\"process_file\", createRequestData(flow));\n\n    let errorReceived = false;\n    let progressCount = 0;\n    const maxProgressBeforeError = 4;\n\n    socket.on(\"error\", (error) => {\n      errorReceived = true;\n      expect(progressCount).to.be.at.most(maxProgressBeforeError);\n    });\n\n    socket.on(\"progress\", (progress) => {\n      progressCount++;\n      if (progressCount > maxProgressBeforeError) {\n        done(new Error(`Too many nodes sent progress`));\n      }\n    
});\n\n    socket.on(\"run_end\", (end) => {\n      if (progressCount !== maxProgressBeforeError) {\n        done(\n          new Error(`Not all nodes were processed before end of execution.`)\n        );\n      } else {\n        done();\n      }\n    });\n  });\n});\n"
  },
  {
    "path": "integration_tests/tests/nodeProcessingOrder/nodeParallelExecutionDurationTest.ts",
    "content": "import { expect } from \"chai\";\nimport {\n  disconnectSocket,\n  getSocket,\n  setupSocket,\n} from \"../../utils/testHooks\";\nimport {\n  createRequestData,\n  flowWithFourParallelNodeStep,\n} from \"../../utils/requestDatas\";\n\ndescribe(\"Node errors test\", function () {\n  this.timeout(15000);\n\n  beforeEach(function (done) {\n    setupSocket(done);\n  });\n\n  afterEach(function () {\n    disconnectSocket();\n  });\n\n  it(\"4 parallel node with 2s sleep each should not compound time\", function (done) {\n    const socket = getSocket();\n\n    const flow = structuredClone(flowWithFourParallelNodeStep);\n\n    flow[1] = {\n      ...flow[1],\n      sleepDuration: 2,\n    };\n\n    flow[2] = {\n      ...flow[2],\n      sleepDuration: 2,\n    };\n\n    flow[3] = {\n      ...flow[3],\n      sleepDuration: 2,\n    };\n\n    flow[4] = {\n      ...flow[4],\n      sleepDuration: 2,\n    };\n\n    const maxDurationMsExpected = 5000;\n    const timeStart = Date.now();\n    socket.emit(\"process_file\", createRequestData(flow));\n\n    let progressCount = 0;\n    const maxProgress = 5;\n\n    socket.on(\"progress\", (progress) => {\n      progressCount++;\n      if (progressCount > maxProgress) {\n        done(new Error(`Too many nodes sent progress`));\n      }\n    });\n\n    socket.on(\"run_end\", (end) => {\n      if (progressCount !== maxProgress) {\n        done(\n          new Error(`Not all nodes were processed before end of execution.`)\n        );\n      } else {\n        const timeEnd = Date.now();\n        const duration = timeEnd - timeStart;\n        expect(duration).to.be.lessThan(maxDurationMsExpected);\n        done();\n      }\n    });\n  });\n});\n"
  },
  {
    "path": "integration_tests/tests/nodeProcessingOrder/nodeWithChildrenTest.ts",
    "content": "import { expect } from \"chai\";\nimport { disconnectSocket, getSocket, setupSocket } from \"../../utils/testHooks\";\nimport { createRequestData } from \"../../utils/requestDatas\";\n\ndescribe('node with children test', function () {\n    this.timeout(15000);\n\n    beforeEach(function (done) {\n        setupSocket(done);\n    });\n\n    afterEach(function () {\n        disconnectSocket();\n    });\n\n    const flowNodeWithChildren = [\n        {\n            inputs: [],\n            name: \"1#llm-prompt\",\n            processorType: \"llm-prompt\",\n        },\n        {\n            inputs: [\n                {\n                    \"inputNode\": \"1#llm-prompt\",\n                    \"inputNodeOutputKey\": 0\n                }\n            ],\n            name: \"2#llm-prompt\",\n            processorType: \"llm-prompt\",\n        },\n        {\n            inputs: [\n                {\n                    \"inputNode\": \"1#llm-prompt\",\n                    \"inputNodeOutputKey\": 0\n                }\n            ],\n            name: \"3#stable-diffusion-stabilityai-prompt\",\n            processorType: \"stable-diffusion-stabilityai-prompt\",\n        }\n    ];\n\n    it('process_file should process the parent first, then its children', function (done) {\n        const socket = getSocket();\n        socket.emit('process_file', createRequestData(flowNodeWithChildren));\n\n        let processedNodes: string[] = [];\n\n        socket.on('progress', (data) => {\n            processedNodes.push(data.instanceName);\n\n            if (processedNodes.length === flowNodeWithChildren.length) {\n                try {\n                    //First one needs to be the parent \n                    expect(processedNodes[0]).to.equal(flowNodeWithChildren[0].name);\n\n                    expect(processedNodes).to.includes(flowNodeWithChildren[1].name);\n                    expect(processedNodes).to.includes(flowNodeWithChildren[2].name);\n                    done();\n                } catch (error) {\n                    done(error);\n                }\n            }\n        });\n\n        socket.on('error', (error) => {\n            done(new Error(`Error event received: ${JSON.stringify(error)}`));\n        });\n    });\n\n});"
  },
  {
    "path": "integration_tests/tests/nodeProcessingOrder/nodeWithMultipleParentsTest.ts",
    "content": "import { expect } from \"chai\";\nimport { disconnectSocket, getSocket, setupSocket } from \"../../utils/testHooks\";\nimport { createRequestData } from \"../../utils/requestDatas\";\n\ndescribe('node with multiple parent test', function () {\n    this.timeout(15000);\n\n    beforeEach(function (done) {\n        setupSocket(done);\n    });\n\n    afterEach(function () {\n        disconnectSocket();\n    });\n\n    const flowWithNodesWithMultipleParents = [\n        {\n            inputs: [],\n            name: \"1#llm-prompt\",\n            processorType: \"llm-prompt\",\n        },\n        {\n            inputs: [],\n            name: \"2#llm-prompt\",\n            processorType: \"llm-prompt\",\n        },\n        {\n            inputs: [\n                {\n                    \"inputNode\": \"1#llm-prompt\",\n                    \"inputNodeOutputKey\": 0\n                },\n                {\n                    \"inputNode\": \"2#llm-prompt\",\n                    \"inputNodeOutputKey\": 0\n                }\n            ],\n            name: \"3#stable-diffusion-stabilityai-prompt\",\n            processorType: \"stable-diffusion-stabilityai-prompt\",\n        }\n    ];\n\n    it('process_file should process both parents before the child', function (done) {\n        const socket = getSocket();\n        socket.emit('process_file', createRequestData(flowWithNodesWithMultipleParents));\n\n        let processedNodes: string[] = [];\n\n        socket.on('progress', (data) => {\n            processedNodes.push(data.instanceName);\n\n            if (processedNodes.length === flowWithNodesWithMultipleParents.length) {\n                try {\n                    // Check if both parents are processed before the child\n                    expect(processedNodes.includes(\"1#llm-prompt\")).to.be.true;\n                    expect(processedNodes.includes(\"2#llm-prompt\")).to.be.true;\n                    expect(processedNodes.indexOf(\"3#stable-diffusion-stabilityai-prompt\")).to.be.greaterThan(processedNodes.indexOf(\"1#llm-prompt\"));\n                    expect(processedNodes.indexOf(\"3#stable-diffusion-stabilityai-prompt\")).to.be.greaterThan(processedNodes.indexOf(\"2#llm-prompt\"));\n                    done();\n                } catch (error) {\n                    done(error);\n                }\n            }\n\n        });\n\n        socket.on('error', (error) => {\n            done(new Error(`Error event received: ${JSON.stringify(error)}`));\n        });\n    });\n\n});"
  },
  {
    "path": "integration_tests/tests/nodeProcessingOrder/nodesWithoutLinkTest.ts",
    "content": "import { expect } from \"chai\";\nimport { disconnectSocket, getSocket, setupSocket } from \"../../utils/testHooks\";\nimport { createRequestData } from \"../../utils/requestDatas\";\n\ndescribe('node without link test', function () {\n    this.timeout(15000);\n\n    beforeEach(function (done) {\n        setupSocket(done);\n    });\n\n    afterEach(function () {\n        disconnectSocket();\n    });\n\n    const flowWithNodesWithoutLink = [\n        {\n            inputs: [],\n            name: \"1#llm-prompt\",\n            processorType: \"llm-prompt\",\n        },\n        {\n            inputs: [],\n            name: \"2#llm-prompt\",\n            processorType: \"llm-prompt\",\n        },\n        {\n            inputs: [],\n            name: \"3#stable-diffusion-stabilityai-prompt\",\n            processorType: \"stable-diffusion-stabilityai-prompt\",\n        }\n    ];\n\n    it('process_file should process all the nodes', function (done) {\n        const socket = getSocket();\n        socket.emit('process_file', createRequestData(flowWithNodesWithoutLink));\n\n        let processedNodes: string[] = [];\n\n        socket.on('progress', (data) => {\n            processedNodes.push(data.instanceName);\n\n            if (processedNodes.length === flowWithNodesWithoutLink.length) {\n                try {\n                    expect(processedNodes).to.includes(flowWithNodesWithoutLink[0].name);\n                    expect(processedNodes).to.includes(flowWithNodesWithoutLink[1].name);\n                    expect(processedNodes).to.includes(flowWithNodesWithoutLink[2].name);\n                    done();\n                } catch (error) {\n                    done(error);\n                }\n            }\n        });\n\n        socket.on('error', (error) => {\n            done(new Error(`Error event received: ${JSON.stringify(error)}`));\n        });\n    });\n\n});\n\n"
  },
  {
    "path": "integration_tests/tests/nodeProcessingOrder/singleNodeTest.ts",
    "content": "import { expect } from \"chai\";\nimport { Socket, io } from \"socket.io-client\";\nimport { createRequestData } from \"../../utils/requestDatas\";\n\ndescribe('single node test', function () {\n    this.timeout(5000);\n\n    let socket: Socket;\n\n    beforeEach(function (done) {\n        socket = io('http://localhost:5000');\n\n        socket.on('connect', function () {\n            done();\n        });\n\n        socket.on('connect_error', function (error) {\n            done(error);\n        });\n    });\n\n    afterEach(function () {\n        socket.disconnect();\n    });\n\n    const flowWithSingleNode = [\n        {\n            inputs: [],\n            name: \"1#llm-prompt\",\n            processorType: \"llm-prompt\",\n        }\n    ];\n\n    it('process_file should trigger one progress event', function (done) {\n        socket.emit('process_file', createRequestData(flowWithSingleNode));\n\n        socket.once('progress', (data) => {\n            expect(data).to.have.property('instanceName').to.equal(flowWithSingleNode[0].name);\n            done();\n        });\n\n        socket.once('error', (error) => {\n            done(new Error(`Error event received: ${JSON.stringify(error)}`));\n        });\n    });\n\n});"
  },
  {
    "path": "integration_tests/tests/socketEvents/processFileEventTest.ts",
    "content": "import { io, Socket } from \"socket.io-client\";\nimport { expect } from 'chai';\nimport { basicJsonFlow, getBasicProcessFileData, getJsonFlowWithMissingInputTextProcessFileData } from '../../utils/requestDatas';\n\ndescribe('process_file event tests', function () {\n    this.timeout(5000);\n\n    let socket: Socket;\n\n    beforeEach(function (done) {\n        socket = io('http://localhost:5000');\n\n        socket.on('connect', function () {\n            done();\n        });\n\n        socket.on('connect_error', function (error) {\n            done(error);\n        });\n    });\n\n    afterEach(function () {\n        socket.disconnect();\n    });\n\n    it('process_file should trigger run_end event', function (done) {\n        const processFileData = getBasicProcessFileData();\n\n        socket.emit('process_file', processFileData);\n\n        socket.once('run_end', (data) => {\n            expect(data).to.have.property('output');\n            done();\n        });\n\n        socket.once('error', (error) => {\n            done(new Error(`Error event received: ${JSON.stringify(error)}`));\n        });\n    });\n\n    it('process_file should trigger progress event', function (done) {\n        const processFileData = getBasicProcessFileData();\n\n        socket.emit('process_file', processFileData);\n\n        socket.once('progress', (data) => {\n            expect(data).to.have.property('output').to.equal(basicJsonFlow[0].inputText);\n            expect(data).to.have.property('instanceName').to.equal(basicJsonFlow[0].name);\n            done();\n        });\n\n        socket.once('error', (error) => {\n            done(new Error(`Error event received: ${JSON.stringify(error)}`));\n        });\n    });\n\n    it('process_file should trigger current_node_running event', function (done) {\n        const processFileData = getBasicProcessFileData();\n\n        socket.emit('process_file', processFileData);\n\n        socket.once('current_node_running', (data) => {\n            expect(data).to.have.property('instanceName').to.equal(basicJsonFlow[0].name);\n            done();\n        });\n\n        socket.once('error', (error) => {\n            done(new Error(`Error event received: ${JSON.stringify(error)}`));\n        });\n    });\n\n    it('process_file with missing input should trigger error event', function (done) {\n        const processFileData = getJsonFlowWithMissingInputTextProcessFileData();\n\n        socket.emit('process_file', processFileData);\n        socket.once('error', (error) => {\n            done();\n        });\n    });\n});"
  },
  {
    "path": "integration_tests/tests/socketEvents/runNodeEventTest.ts",
    "content": "import { io, Socket } from \"socket.io-client\";\nimport { expect } from 'chai';\nimport { basicJsonFlow, getBasicRunNodeData, getJsonFlowWithMissingInputTextProcessFileData } from '../../utils/requestDatas';\n\ndescribe('run_node event tests', function () {\n    this.timeout(5000);\n\n    let socket: Socket;\n\n    beforeEach(function (done) {\n        socket = io('http://localhost:5000');\n\n        socket.on('connect', function () {\n            done();\n        });\n\n        socket.on('connect_error', function (error) {\n            done(error);\n        });\n    });\n\n    afterEach(function () {\n        socket.disconnect();\n    });\n\n    it('run_node should trigger progress event', function (done) {\n        const runNodeData = getBasicRunNodeData();\n\n        socket.emit('run_node', runNodeData);\n\n        socket.once('progress', (data) => {\n            expect(data).to.have.property('output').to.equal(basicJsonFlow[0].inputText);\n            expect(data).to.have.property('instanceName').to.equal(basicJsonFlow[0].name);\n            done();\n        });\n\n        socket.once('error', (error) => {\n            done(new Error(`Error event received: ${JSON.stringify(error)}`));\n        });\n    });\n\n    it('run_node should trigger current_node_running event', function (done) {\n        const runNodeData = getBasicRunNodeData();\n\n        socket.emit('run_node', runNodeData);\n\n        socket.once('current_node_running', (data) => {\n            expect(data).to.have.property('instanceName').to.equal(basicJsonFlow[0].name);\n            done();\n        });\n\n        socket.once('error', (error) => {\n            done(new Error(`Error event received: ${JSON.stringify(error)}`));\n        });\n    });\n\n    it('run_node with missing input should trigger error event', function (done) {\n        const processFileData = getJsonFlowWithMissingInputTextProcessFileData();\n\n        socket.emit('run_node', processFileData);\n        socket.once('error', (error) => {\n            done();\n        });\n    });\n});"
  },
  {
    "path": "integration_tests/tests/socketEvents/socketConnectionTest.ts",
    "content": "import { io, Socket } from \"socket.io-client\";\nimport { expect } from 'chai';\n\ndescribe('Socket.IO connection tests', function () {\n\n    let socket: Socket;\n\n    beforeEach(function (done: Mocha.Done): void {\n        socket = io('http://localhost:5000');\n\n        socket.on('connect', function (): void {\n            done();\n        });\n\n        socket.on('connect_error', function (error: any): void {\n            done(error);\n        });\n    });\n\n    afterEach(function (): void {\n        socket.disconnect();\n    });\n\n    it('should be connected to the server', function (done: Mocha.Done): void {\n        expect(socket.connected).to.be.true;\n        done();\n    });\n\n    it('should disconnect', function (done: Mocha.Done): void {\n        socket.disconnect();\n        expect(socket.connected).to.be.false;\n        done();\n    });\n});"
  },
  {
    "path": "integration_tests/tsconfig.json",
    "content": "{\n    \"compilerOptions\": {\n        \"target\": \"ES6\",\n        \"module\": \"commonjs\",\n        \"outDir\": \"./dist\",\n        \"rootDir\": \"./\",\n        \"strict\": true\n    }\n}"
  },
  {
    "path": "integration_tests/utils/requestDatas.ts",
    "content": "type ProcessFileData = {\n  jsonFile: string;\n  parameters: Record<string, string>;\n};\n\ntype RunNodeData = {\n  jsonFile: string;\n  parameters: Record<string, string>;\n  nodeName: string;\n};\n\nexport type Node = {\n  inputs: {\n    inputName?: string;\n    inputNode: string;\n    inputNodeOutputKey: number;\n  }[];\n  name: string;\n  processorType: string;\n  [key: string]: any;\n};\n\nconst basicJsonFlow: Node[] = [\n  {\n    inputs: [],\n    name: \"kbk1proh1#input-text\",\n    processorType: \"input-text\",\n    inputText: \"Hello World\",\n    x: 1,\n    y: 1,\n  },\n];\n\nconst jsonFlowWithMissingInputText: Node[] = [\n  {\n    inputs: [],\n    name: \"kbk1proh1#input-text\",\n    processorType: \"input-text\",\n    x: 1,\n    y: 1,\n  },\n];\n\nexport const flowWithOneNonFreeNode: Node[] = [\n  {\n    inputs: [],\n    name: \"1#stable-diffusion-stabilityai-prompt\",\n    processorType: \"stable-diffusion-stabilityai-prompt\",\n  },\n];\n\nexport const sequentialFlow: Node[] = [\n  {\n    inputs: [],\n    name: \"1#llm-prompt\",\n    processorType: \"llm-prompt\",\n    raiseError: false,\n  },\n  {\n    inputs: [\n      {\n        inputNode: \"1#llm-prompt\",\n        inputNodeOutputKey: 0,\n      },\n    ],\n    name: \"2#llm-prompt\",\n    processorType: \"llm-prompt\",\n    raiseError: false,\n  },\n  {\n    inputs: [\n      {\n        inputNode: \"2#llm-prompt\",\n        inputNodeOutputKey: 0,\n      },\n    ],\n    name: \"3#stable-diffusion-stabilityai-prompt\",\n    processorType: \"stable-diffusion-stabilityai-prompt\",\n    raiseError: false,\n  },\n];\n\nexport const flowWithoutLinks: Node[] = [\n  {\n    inputs: [],\n    name: \"1#llm-prompt\",\n    processorType: \"llm-prompt\",\n    model: \"gpt-4\",\n    prompt: \"hi\",\n    raiseError: false,\n  },\n  {\n    inputs: [],\n    name: \"2#llm-prompt\",\n    processorType: \"llm-prompt\",\n    model: \"gpt-4\",\n    prompt: \"hi\",\n    raiseError: false,\n  },\n  {\n    inputs: [],\n    name: \"3#stable-diffusion-stabilityai-prompt\",\n    processorType: \"stable-diffusion-stabilityai-prompt\",\n    raiseError: false,\n  },\n];\n\nexport const flowFreeNodesWithoutLink: Node[] = [\n  {\n    inputs: [],\n    name: \"1#input-text\",\n    processorType: \"input-text\",\n    inputText: \"fake\",\n  },\n  {\n    inputs: [],\n    name: \"2#input-text\",\n    processorType: \"input-text\",\n    inputText: \"fake\",\n  },\n  {\n    inputs: [],\n    name: \"3#input-text\",\n    processorType: \"input-text\",\n    inputText: \"fake\",\n  },\n];\n\nexport const flowWithFourParallelNodeStep: Node[] = [\n  {\n    inputs: [],\n    name: \"1#llm-prompt\",\n    processorType: \"llm-prompt\",\n    raiseError: false,\n    sleepDuration: undefined,\n  },\n  {\n    inputs: [\n      {\n        inputNode: \"1#llm-prompt\",\n        inputNodeOutputKey: 0,\n      },\n    ],\n    name: \"2#llm-prompt\",\n    processorType: \"llm-prompt\",\n    raiseError: false,\n    sleepDuration: undefined,\n  },\n  {\n    inputs: [\n      {\n        inputNode: \"1#llm-prompt\",\n        inputNodeOutputKey: 0,\n      },\n    ],\n    name: \"3#llm-prompt\",\n    processorType: \"llm-prompt\",\n    raiseError: false,\n    sleepDuration: undefined,\n  },\n  {\n    inputs: [\n      {\n        inputNode: \"1#llm-prompt\",\n        inputNodeOutputKey: 0,\n      },\n    ],\n    name: \"4#llm-prompt\",\n    processorType: \"llm-prompt\",\n    raiseError: false,\n    sleepDuration: undefined,\n  },\n  {\n    inputs: [\n      {\n        inputNode: 
\"1#llm-prompt\",\n        inputNodeOutputKey: 0,\n      },\n    ],\n    name: \"5#llm-prompt\",\n    processorType: \"llm-prompt\",\n    raiseError: false,\n    sleepDuration: undefined,\n  },\n];\n\nfunction getBasicProcessFileData(): ProcessFileData {\n  return {\n    jsonFile: JSON.stringify(basicJsonFlow),\n    parameters: {\n      openaiApiKey: \"apiKey\",\n    },\n  };\n}\n\nfunction getBasicRunNodeData(): RunNodeData {\n  return {\n    jsonFile: JSON.stringify(basicJsonFlow),\n    nodeName: basicJsonFlow[0].name,\n    parameters: {\n      openaiApiKey: \"apiKey\",\n    },\n  };\n}\n\nfunction getJsonFlowWithMissingInputTextProcessFileData(): ProcessFileData {\n  return {\n    jsonFile: JSON.stringify(jsonFlowWithMissingInputText),\n    parameters: {\n      openaiApiKey: \"apiKey\",\n    },\n  };\n}\n\nfunction createRequestData(flow: any): ProcessFileData {\n  return {\n    jsonFile: JSON.stringify(flow),\n    parameters: {\n      openaiApiKey: \"apiKey\",\n    },\n  };\n}\nexport {\n  basicJsonFlow,\n  jsonFlowWithMissingInputText,\n  getBasicProcessFileData,\n  getBasicRunNodeData,\n  getJsonFlowWithMissingInputTextProcessFileData,\n  createRequestData,\n};\n"
  },
  {
    "path": "integration_tests/utils/testHooks.ts",
    "content": "import { Socket, io } from \"socket.io-client\";\n\nlet socket: Socket;\n\nexport const setupSocket = (done: any) => {\n    socket = io('http://localhost:5000');\n\n    socket.on('connect', function () {\n        done();\n    });\n\n    socket.on('connect_error', function (error) {\n        done(error);\n    });\n};\n\nexport const disconnectSocket = () => {\n    socket.disconnect();\n};\n\nexport const getSocket = () => socket;\n"
  },
  {
    "path": "packages/backend/.gitignore",
    "content": "# Fichiers générés par l'environnement de développement\n__pycache__/\n*.py[cod]\n\n# Fichiers générés par l'IDE\n.idea/\n.vscode/\n\n# Fichiers de logs\n*.log\n\n# Fichiers d'env\n*.env\n\n# Fichiers de build\nbuild\ndist\nserver.spec\n\n# Local storage\nlocal_storage/"
  },
  {
    "path": "packages/backend/Dockerfile",
    "content": "FROM python:3.9\n\n# Default values\nENV HOST=0.0.0.0\nENV PORT=5000\n\n\nWORKDIR /app\n\n# System dependencies\nRUN apt-get update && apt-get install -y \\\n    build-essential \\\n    libpq-dev \\\n    python3-dev \\\n    libssl-dev \\\n    libffi-dev \\\n    libmagic-dev \\\n    && apt-get clean \\\n    && rm -rf /var/lib/apt/lists/*\n\n\n# Playwright\nARG PLAYWRIGHT_VERSION=1.39\nENV PLAYWRIGHT_BROWSERS_PATH=/ms-playwright\n\nRUN pip install playwright==$PLAYWRIGHT_VERSION && \\\n    playwright install chromium && \\\n    playwright install-deps chromium\n\n# Poetry & Dependencies\nRUN pip install --upgrade poetry \\\n    && poetry config virtualenvs.create false\n\nCOPY poetry.lock pyproject.toml /app/\n\nRUN poetry install --no-interaction --no-root\n\n# The rest of the app\nCOPY app /app/app/\nCOPY resources /app/resources\nCOPY tests/ /app/tests/\nCOPY server.py README.md /app/\nCOPY config.yaml /app/\n\nEXPOSE 5000\n\nCMD [\"poetry\", \"run\", \"python\", \"server.py\"]"
  },
  {
    "path": "packages/backend/README.md",
    "content": ""
  },
  {
    "path": "packages/backend/app/env_config.py",
    "content": "import os\nimport sys\nfrom typing import List, Optional\n\nENV_LOCAL = \"LOCAL\"\nENV_CLOUD = \"CLOUD\"\nCURRENT_ENV = os.environ.get(\"DEPLOYMENT_ENV\", ENV_LOCAL)\n\n\nCURRENT_DIR = os.path.dirname(os.path.abspath(__file__))\nBACKEND_DIR = os.path.dirname(CURRENT_DIR)\nLOCAL_STORAGE_DIR = os.path.join(\n    BACKEND_DIR, os.getenv(\"LOCAL_STORAGE_FOLDER_NAME\", \"local_storage\")\n)\n\n\ndef get_static_folder() -> str:\n    if getattr(sys, \"frozen\", False):\n        base_path = sys._MEIPASS\n        build_dir = os.path.join(base_path, \"build\")\n    else:\n        base_path = os.path.dirname(os.path.abspath(__file__))\n        build_dir = os.path.join(base_path, \"..\", \"..\", \"ui\", \"build\")\n    return build_dir\n\n\ndef is_cloud_env() -> bool:\n    return CURRENT_ENV == ENV_CLOUD\n\n\ndef is_local_environment() -> bool:\n    return CURRENT_ENV == ENV_LOCAL\n\n\ndef is_mock_env() -> bool:\n    return os.getenv(\"USE_MOCK\") == \"true\"\n\n\ndef is_server_static_files_enabled() -> bool:\n    return os.getenv(\"SERVE_STATIC_FILES\") == \"true\"\n\n\ndef get_local_storage_folder_path() -> str:\n    return LOCAL_STORAGE_DIR\n\n\ndef get_flask_secret_key() -> Optional[str]:\n    return os.getenv(\"FLASK_SECRET_KEY\")\n\n\ndef get_replicate_api_key() -> Optional[str]:\n    return os.getenv(\"REPLICATE_API_KEY\")\n\n\ndef get_background_task_max_workers() -> int:\n    return int(os.getenv(\"BACKGROUND_TASK_MAX_WORKERS\", \"2\"))\n\n\ndef use_async_browser() -> bool:\n    return os.getenv(\"USE_ASYNC_BROWSER\") == \"true\"\n\n\ndef get_browser_tab_max_usage() -> int:\n    return int(os.getenv(\"BROWSER_TAB_MAX_USAGE\", \"100\"))\n\n\ndef get_browser_tab_pool_size() -> int:\n    return int(os.getenv(\"BROWSER_TAB_POOL_SIZE\", \"3\"))\n\n\ndef is_set_app_config_on_ui_enabled() -> bool:\n    return os.getenv(\"ENABLE_SET_APP_CONFIG_ON_UI\", \"true\") == \"true\"\n\n\ndef is_s3_enabled() -> bool:\n    return os.getenv(\"S3_AWS_ACCESS_KEY_ID\") is not None\n"
  },
  {
    "path": "packages/backend/app/flask/app_routes/__init__.py",
    "content": ""
  },
  {
    "path": "packages/backend/app/flask/app_routes/image_routes.py",
    "content": "from app.env_config import (get_local_storage_folder_path)\nfrom flask import Blueprint, send_from_directory\n\nimage_blueprint = Blueprint('image_blueprint', __name__)\n\n@image_blueprint.route(\"/image/<path:filename>\")\ndef serve_image(filename):\n    \"\"\"\n        Serve image from local storage.\n    \"\"\"\n    return send_from_directory(get_local_storage_folder_path(), filename)"
  },
  {
    "path": "packages/backend/app/flask/app_routes/node_routes.py",
    "content": "import json\n\nfrom flask import Blueprint, request\n\nfrom ...utils.node_extension_utils import get_dynamic_extension_config, get_extensions\n\n# from ...utils.openapi_reader import OpenAPIReader\nfrom ...utils.replicate_utils import (\n    get_highlighted_models_info,\n    get_model_openapi_schema,\n    get_replicate_collection_models,\n    get_replicate_collections,\n    get_replicate_models,\n)\n\nnode_blueprint = Blueprint(\"node_blueprint\", __name__)\n\n\n@node_blueprint.route(\"/node/extensions\")\ndef get_node_extensions():\n    extensions = get_extensions()\n    return {\"extensions\": extensions}\n\n\n@node_blueprint.route(\"/node/extensions/dynamic\", methods=[\"POST\"])\ndef get_dynamic_extension():\n    request_body = request.json\n\n    if request_body is None:\n        raise Exception(\"Missing data\")\n\n    processor_type = request_body.get(\"processorType\")\n    data = request_body.get(\"data\")\n\n    config = get_dynamic_extension_config(processor_type, data)\n\n    return config.dict()\n\n\n@node_blueprint.route(\"/node/models\")\ndef get_public_models():\n    cursor = request.args.get(\"cursor\", None)\n\n    public = get_replicate_models(cursor=cursor)\n    highlighted = get_highlighted_models_info()\n\n    return {\"public\": public, \"highlighted\": highlighted}\n\n\n@node_blueprint.route(\"/node/collections\")\ndef get_collections():\n    return get_replicate_collections()\n\n\n@node_blueprint.route(\"/node/collections/<path:collection>\")\ndef get_collection_models(collection):\n    cursor = request.args.get(\"cursor\", None)\n    return get_replicate_collection_models(collection, cursor=cursor)\n\n\n@node_blueprint.route(\"/node/replicate/config/<path:model>\")\ndef get_config(model):\n    return get_model_openapi_schema(model)\n\n\n# @node_blueprint.route(\"/node/openapi/<path:api_name>/models\")\n# def get_openapi_models(api_name):\n#     api_reader = OpenAPIReader(f\"./resources/openapi/{api_name}.json\")\n#     return api_reader.get_all_paths()\n\n\n# @node_blueprint.route(\"/node/openapi/<path:api_name>/config/<path:id>\")\n# def get_openapi_model_config(api_name, id):\n#     api_reader = OpenAPIReader(f\"./resources/openapi/{api_name}.json\")\n#     return api_reader.get_request_schema(id)\n"
  },
  {
    "path": "packages/backend/app/flask/app_routes/parameters_routes.py",
    "content": "import os\nimport yaml\nfrom flask import Blueprint\n\nparameters_blueprint = Blueprint(\"parameters_blueprint\", __name__)\n\n\ndef load_config():\n    with open(\"config.yaml\", \"r\") as file:\n        return yaml.safe_load(file)\n\n\n@parameters_blueprint.route(\"/parameters\", methods=[\"GET\"])\ndef parameters():\n    config = load_config()\n    return config\n"
  },
  {
    "path": "packages/backend/app/flask/app_routes/static_routes.py",
    "content": "import os\nfrom flask import Blueprint, send_from_directory\n\nfrom ...env_config import get_static_folder\n\nstatic_blueprint = Blueprint('static_blueprint', __name__)\n\n\n@static_blueprint.route(\"/\", defaults={\"path\": \"\"})\n@static_blueprint.route(\"/<path:path>\")\ndef serve(path):\n    \"\"\"\n        Serve UI static files from the static folder. \n    \"\"\"\n    static_folder = get_static_folder()\n    if path != \"\" and os.path.exists(os.path.join(static_folder, path)):\n        return send_from_directory(static_folder, path)\n    else:\n        return send_from_directory(static_folder, \"index.html\")"
  },
  {
    "path": "packages/backend/app/flask/app_routes/upload_routes.py",
    "content": "import logging\nfrom flask import Blueprint\nfrom ...storage.storage_strategy import StorageStrategy\n\nfrom ...root_injector import get_root_injector\nfrom flask import request\n\nupload_blueprint = Blueprint(\"upload_blueprint\", __name__)\n\n\n@upload_blueprint.route(\"/upload\")\ndef upload_file():\n    \"\"\"\n    Serve image from local storage.\n    \"\"\"\n\n    logging.info(\"Uploading file\")\n    storage_strategy = get_root_injector().get(StorageStrategy)\n\n    filename = request.args.get(\"filename\")\n    try:\n        data = storage_strategy.get_upload_link(filename)\n    except Exception as e:\n        logging.error(e)\n        raise Exception(\n            \"Error uploading file. \"\n            \"Please check your S3 configuration. \"\n            \"If you've not configured S3 please refer to docs.ai-flow.net/docs/file-upload\"\n        )\n\n    json_link = {\n        \"upload_data\": data[0],\n        \"download_link\": data[1],\n    }\n\n    return json_link\n"
  },
  {
    "path": "packages/backend/app/flask/decorators.py",
    "content": "from functools import wraps\n\nfrom flask import jsonify, request, g\nfrom flask_socketio import emit\nimport json\n\n\ndef with_flow_data_validations(*validation_funcs):\n    def decorator(func):\n        @wraps(func)\n        def wrapper(data, *args, **kwargs):\n            try:\n                flow_data = json.loads(data.get(\"jsonFile\", \"{}\"))\n\n                for validation_func in validation_funcs:\n                    validation_func(flow_data)\n\n                return func(data, *args, **kwargs)\n            except Exception as e:\n                emit(\"error\", {\"error\": str(e)})\n\n        return wrapper\n\n    return decorator\n"
  },
  {
    "path": "packages/backend/app/flask/flask_app.py",
    "content": "import logging\nfrom flask import Flask, request, redirect\nfrom flask_cors import CORS\nimport os\n\nfrom ..env_config import get_flask_secret_key, get_static_folder\n\ndef create_app():\n    app = Flask(__name__, static_folder=get_static_folder())\n\n    if get_flask_secret_key() is not None : \n        logging.info(\"Flask secret key set\")\n        app.config['SECRET_KEY'] = get_flask_secret_key()\n    else :\n        logging.warning(\"Flask secret key not set\")\n        app.config['SECRET_KEY'] = \"default_secret\"\n        \n    CORS(app)\n\n\n    if os.getenv(\"USE_HTTPS\", \"false\").lower() == \"true\":\n\n        @app.before_request\n        def before_request():\n            if not request.is_secure:\n                url = request.url.replace(\"http://\", \"https://\", 1)\n                return redirect(url, code=301)\n    \n    logging.info(\"App created\")        \n    return app\n"
  },
  {
    "path": "packages/backend/app/flask/routes.py",
    "content": "import logging\nfrom app.env_config import is_server_static_files_enabled, is_local_environment\nfrom app.flask.socketio_init import flask_app\nfrom .utils.constants import HTTP_OK\n\n\n@flask_app.route(\"/healthcheck\", methods=[\"GET\"])\ndef healthcheck():\n    return \"OK\", HTTP_OK\n\n\nfrom .app_routes.node_routes import node_blueprint\n\nflask_app.register_blueprint(node_blueprint)\n\nfrom .app_routes.upload_routes import upload_blueprint\n\nflask_app.register_blueprint(upload_blueprint)\n\nfrom .app_routes.parameters_routes import parameters_blueprint\n\nflask_app.register_blueprint(parameters_blueprint)\n\nif is_server_static_files_enabled():\n    from .app_routes.static_routes import static_blueprint\n\n    logging.info(\"Visual interface will be available at http://localhost:5000\")\n    flask_app.register_blueprint(static_blueprint)\n\nif is_local_environment():\n    from .app_routes.image_routes import image_blueprint\n\n    logging.info(\"Environment set to LOCAL\")\n    flask_app.register_blueprint(image_blueprint)\n"
  },
  {
    "path": "packages/backend/app/flask/socketio_init.py",
    "content": "import eventlet\n\neventlet.monkey_patch(all=False, socket=True)\n\nfrom flask_socketio import SocketIO\nfrom .flask_app import create_app\n\nflask_app = create_app()\nsocketio = SocketIO(flask_app, cors_allowed_origins=\"*\", async_mode=\"eventlet\")"
  },
  {
    "path": "packages/backend/app/flask/sockets.py",
    "content": "import eventlet\nfrom ..env_config import is_set_app_config_on_ui_enabled\n\neventlet.monkey_patch(all=False, socket=True)\n\nfrom app.flask.socketio_init import flask_app\nfrom app.flask.socketio_init import socketio\nimport logging\nimport json\n\nfrom flask import g, request, session\nfrom flask_socketio import emit\nfrom ..root_injector import (\n    get_root_injector,\n    refresh_root_injector,\n)\nfrom .utils.constants import PARAMETERS_FIELD_NAME, ENV_API_KEYS\n\nfrom ..processors.launcher.processor_launcher import ProcessorLauncher\nfrom ..processors.context.processor_context_flask_request import (\n    ProcessorContextFlaskRequest,\n)\nimport traceback\nimport os\n\n\ndef populate_request_global_object(data):\n    \"\"\"\n    This function is responsible for initializing individual request objects either from the\n    environmental variables or from the data passed as arguments, ensuring that the necessary API\n    keys are available throughout the request for different processes.\n\n    Parameters:\n        data (dict): A dictionary containing potentially necessary keys: \"openai_api_key\" and \"stabilityai_api_key\".\n    \"\"\"\n    use_env = os.getenv(\"USE_ENV_API_KEYS\", \"false\").lower()\n    logging.debug(\"use_env: %s\", use_env)\n\n    if use_env == \"true\":\n        for key in ENV_API_KEYS:\n            env_key = key.upper()\n            value = os.getenv(env_key)\n            if not value:\n                raise Exception(f\"Required {env_key} not provided in environment.\")\n            setattr(g, f\"session_{key}\", value)\n    else:\n        if not PARAMETERS_FIELD_NAME in data:\n            raise Exception(f\"No {PARAMETERS_FIELD_NAME} provided in data.\")\n\n        for key, value in data[PARAMETERS_FIELD_NAME].items():\n            if value:\n                setattr(g, f\"session_{key}\", value)\n            else:\n                raise Exception(f\"No {key} provided in data.\")\n\n\n@socketio.on(\"connect\")\ndef handle_connect():\n    logging.info(\"Client connected\")\n\n\n@socketio.on(\"process_file\")\ndef handle_process_file(data):\n    \"\"\"\n    This event handler is activated when a \"process_file\" event is received via Socket.IO. It allows to run every node in\n    the file, even if they have been executed before.\n\n    Parameters:\n        data (dict): A dictionary encompassing the event's payload, which comprises the JSON configuration file\n                    (\"jsonFile\").\n\n    \"\"\"\n    try:\n        populate_request_global_object(data)\n        flow_data = json.loads(data.get(\"jsonFile\"))\n        launcher = get_root_injector().get(ProcessorLauncher)\n        launcher.set_context(ProcessorContextFlaskRequest(g, session, request.sid))\n\n        if flow_data:\n            processors = launcher.load_processors(flow_data)\n            output = launcher.launch_processors(processors)\n\n            logging.debug(\"Emitting processing_result event with output: %s\", output)\n            emit(\"run_end\", {\"output\": output})\n        else:\n            logging.warning(\"Invalid input or missing configuration file\")\n            emit(\"error\", {\"error\": \"Invalid input or missing configuration file\"})\n    except Exception as e:\n        emit(\"error\", {\"error\": str(e)})\n        traceback.print_exc()\n        logging.error(f\"An error occurred: {str(e)}\")\n\n\n@socketio.on(\"run_node\")\ndef handle_run_node(data):\n    \"\"\"\n    This event handler is activated when a \"run_node\" event is received via Socket.IO. 
It facilitates the processing\n    of the specified node in the data payload, launching only the designated node and preceding nodes if they\n    haven't been executed earlier.\n\n    Parameters:\n        data (dict): A dictionary encompassing the event's payload, which comprises the JSON configuration file\n                    (\"jsonFile\") and the name of the node to run (\"nodeName\").\n\n    \"\"\"\n    try:\n        populate_request_global_object(data)\n        flow_data = json.loads(data.get(\"jsonFile\"))\n        node_name = data.get(\"nodeName\")\n\n        launcher = get_root_injector().get(ProcessorLauncher)\n        launcher.set_context(ProcessorContextFlaskRequest(g, session, request.sid))\n\n        if flow_data and node_name:\n            processors = launcher.load_processors_for_node(flow_data, node_name)\n            output = launcher.launch_processors_for_node(processors, node_name)\n            logging.debug(\"Emitting processing_result event with output: %s\", output)\n            emit(\"run_end\", {\"output\": output})\n        else:\n            logging.warning(\"Invalid input or missing parameters\")\n            emit(\"error\", {\"error\": \"Invalid input or missing parameters\"})\n    except Exception as e:\n        emit(\n            \"error\",\n            {\"error\": str(e), \"nodeName\": node_name},\n        )\n        traceback.print_exc()\n        logging.error(f\"An error occurred: {node_name} - {str(e)}\")\n\n\n@socketio.on(\"disconnect\")\ndef handle_disconnect():\n    logging.info(\"Client disconnected\")\n\n\n@socketio.on(\"update_app_config\")\ndef handle_update_app_config(data):\n    if not is_set_app_config_on_ui_enabled():\n        return\n\n    logging.info(\"Updating app config\")\n    config_keys = [\n        \"S3_BUCKET_NAME\",\n        \"S3_AWS_ACCESS_KEY_ID\",\n        \"S3_AWS_SECRET_ACCESS_KEY\",\n        \"S3_AWS_REGION_NAME\",\n        \"S3_ENDPOINT_URL\",\n        \"REPLICATE_API_KEY\",\n    ]\n    for key in config_keys:\n        value = data.get(key)\n\n        if value is not None and str(value).strip():\n            logging.info(f\"Setting {key}\")\n            os.environ[key] = value\n\n    refresh_root_injector()\n"
  },
  {
    "path": "packages/backend/app/flask/utils/constants.py",
    "content": "HTTP_OK = 200\nHTTP_BAD_REQUEST = 400\nHTTP_NOT_FOUND = 404\nHTTP_UNAUTHORIZED = 401\n\n\nSESSION_USER_ID_KEY = \"user_id\"\n\nPARAMETERS_FIELD_NAME = \"parameters\"\n\nENV_API_KEYS = [\n    \"openai_api_key\",\n    \"stabilityai_api_key\",\n    \"replicate_api_key\",\n    \"anthropic_api_key\",\n    \"openrouter_api_key\",\n]\n"
  },
  {
    "path": "packages/backend/app/llms/utils/max_token_for_model.py",
    "content": "import tiktoken\n\nDEFAULT_MAX_TOKEN = 4097\n\n\ndef max_token_for_model(model_name: str) -> int:\n    if \"gpt-4o\" in model_name:\n        return 128000\n    token_data = {\n        # GPT-4.1 models\n        \"gpt-4.1\": 1047576,\n        \"gpt-4.1-mini\": 1047576,\n        \"gpt-4.1-nano\": 1047576,\n        # GPT-4 models\n        \"gpt-4o\": 128000,\n        \"gpt-4o-2024-11-20\": 128000,\n        \"gpt-4o-mini\": 128000,\n        \"gpt-4-turbo\": 128000,\n        \"gpt-4-turbo-preview\": 128000,\n        \"gpt-4-1106-preview\": 128000,\n        \"gpt-4-vision-preview\": 128000,\n        \"gpt-4\": 8192,\n        \"gpt-4-0613\": 8192,\n        \"gpt-4-32k\": 32768,\n        \"gpt-4-32k-0613\": 32768,\n        \"gpt-4-0314\": 8192,\n        \"gpt-4-32k-0314\": 32768,\n        # GPT-3.5 models\n        \"gpt-3.5-turbo\": 16385,\n        \"gpt-3.5-turbo-1106\": 16385,\n        \"gpt-3.5-turbo-16k\": 16385,\n        \"gpt-3.5-turbo-instruct\": 4097,\n        \"gpt-3.5-turbo-0613\": 4097,\n        \"gpt-3.5-turbo-16k-0613\": 16385,\n        \"gpt-3.5-turbo-0301\": 4097,\n        # Other GPT-3.5 models\n        \"text-davinci-003\": 4097,\n        \"text-davinci-002\": 4097,\n        \"code-davinci-002\": 8001,\n    }\n    return token_data.get(model_name, DEFAULT_MAX_TOKEN)\n\n\ndef nb_token_for_input(input: str, model_name: str) -> int:\n    try:\n        return len(tiktoken.encoding_for_model(model_name).encode(input))\n    except Exception as e:\n        default_model_for_token = \"gpt-4o\"\n        return len(tiktoken.encoding_for_model(default_model_for_token).encode(input))\n"
  },
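A quick illustration of how the lookup above behaves for known names, `gpt-4o` variants, and unknown names (the import path assumes the backend `app` package is importable):

```python
# Values mirror the table defined in the module above.
from app.llms.utils.max_token_for_model import max_token_for_model, nb_token_for_input

assert max_token_for_model("gpt-4") == 8192
assert max_token_for_model("gpt-4o-2024-05-13") == 128000  # any name containing "gpt-4o"
assert max_token_for_model("unknown-model") == 4097        # DEFAULT_MAX_TOKEN fallback

print(nb_token_for_input("Hello world", "gpt-4o"))  # token count via tiktoken
```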
  {
    "path": "packages/backend/app/log_config.py",
    "content": "import logging\nimport colorlog\n\n\ndef setup_logger(name: str):\n    formatter = colorlog.ColoredFormatter(\n        \"%(log_color)s%(levelname)-8s%(reset)s %(message)s\",\n        datefmt=None,\n        reset=True,\n        log_colors={\n            \"DEBUG\": \"cyan\",\n            \"INFO\": \"green\",\n            \"WARNING\": \"yellow\",\n            \"ERROR\": \"red\",\n            \"CRITICAL\": \"red\",\n        },\n    )\n\n    logger = logging.getLogger(name)\n    handler = logging.StreamHandler()\n    handler.setFormatter(formatter)\n    logger.addHandler(handler)\n\n    return logger\n\n\nroot_logger = setup_logger(\"root\")\nroot_logger.setLevel(logging.INFO)\n"
  },
  {
    "path": "packages/backend/app/processors/components/__init__.py",
    "content": ""
  },
  {
    "path": "packages/backend/app/processors/components/core/__init__.py",
    "content": ""
  },
  {
    "path": "packages/backend/app/processors/components/core/ai_data_splitter_processor.py",
    "content": "import logging\nfrom ...context.processor_context import ProcessorContext\nfrom ..processor import ContextAwareProcessor\n\n\nfrom .processor_type_name_utils import ProcessorType\nfrom openai import OpenAI\n\n\ndef interpret_escape_sequences(separator):\n    escape_dict = {\n        r\"\\n\": \"\\n\",\n        r\"\\r\": \"\\r\",\n        r\"\\t\": \"\\t\",\n    }\n    return escape_dict.get(separator, separator)\n\n\nclass AIDataSplitterProcessor(ContextAwareProcessor):\n    processor_type = ProcessorType.AI_DATA_SPLITTER\n    DEFAULT_SEPARATOR = \";\"\n    AI_MODE = \"ai\"\n    MANUAL_MODE = \"manual\"\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n\n        self.nb_output = 0\n        self.model = \"gpt-4o\"\n        self.api_key = context.get_value(\"openai_api_key\")\n\n    def get_llm_response(self, messages):\n        client = OpenAI(api_key=self.api_key)\n\n        kwargs = {\"model\": self.model, \"input\": messages}\n        response = client.responses.create(**kwargs)\n        return response.output_text\n\n    def process(self):\n        if self.get_input_processor() is None:\n            return \"\"\n\n        input_data = self.get_input_processor().get_output(\n            self.get_input_node_output_key()\n        )\n\n        mode = self.get_input_by_name(\"mode\", self.AI_MODE)\n\n        if mode == self.AI_MODE:\n            self.init_context(input_data)\n\n            answer = self.get_llm_response(self.messages)\n\n            data_to_split = answer.encode(\"utf-8\").decode(\"utf8\")\n            self.set_output(\n                data_to_split.split(AIDataSplitterProcessor.DEFAULT_SEPARATOR)\n            )\n            self.nb_output = len(self._output)\n\n        if mode == self.MANUAL_MODE:\n            separator = self.get_input_by_name(\n                \"separator\", AIDataSplitterProcessor.DEFAULT_SEPARATOR\n            )\n            separator = interpret_escape_sequences(separator)\n            self.set_output(input_data.split(separator))\n            self.nb_output = len(self._output)\n\n        return self._output\n\n    def init_context(self, input_data: str) -> None:\n        \"\"\"\n        Initialize the context for the OpenAI Chat model with a set of standard messages.\n        Additional user input data can be provided, which will be added to the messages.\n\n        :param input_data: Additional information or text provided by the user that needs processing.\n        \"\"\"\n        # Define the system message with clear instructions and examples\n        system_msg = (\n            \"You are an assistant whose task is to separate ideas or concepts from the input text using semicolons (;). \"\n            \"Do not include any meta-comments or self-references in your responses. \"\n            \"Here are some examples of how to perform the task: \"\n            \"\\n\\n\"\n            \"Example 1:\\n\"\n            \"Input: 'The main idea is that dogs are very popular pets, and many people enjoy walking them in parks. 
Another important concept is that dogs need a lot of exercise to stay healthy.'\\n\"\n            \"Output: 'Dogs are very popular pets; many people enjoy walking them in parks; dogs need a lot of exercise to stay healthy.'\\n\\n\"\n            \"Example 2:\\n\"\n            \"Input: '1) A picture of a woman 2) A video with a bird 3) Air conditioner'\\n\"\n            \"Output: 'A picture of a woman; A video with a bird; Air conditioner.'\\n\\n\"\n            \"Example 3:\\n\"\n            \"Input: 'Here are two ideas: - Dogs are better than cats - Birds are beautiful'\\n\"\n            \"Output: 'Dogs are better than cats; Birds are beautiful.'\\n\\n\"\n            \"Example 4:\\n\"\n            \"Input: 'Crée une interprétation artistique numérique de la ville de New York la nuit sous la pluie, mettant l'accent sur les reflets lumineux sur les surfaces mouillées. Imagine et dessine un nouveau type de fleur qui n'existe pas encore dans la nature. Assure-toi qu'elle a une allure exotique et utilise des couleurs vives et uniques que l'on ne trouve pas couramment chez les fleurs. Conçois une image représentant une scène du futur, avec des villes futuristes, des technologies avancées et des formes de vie artificielles coexistant avec des formes de vie naturelles.'\\n\"\n            \"Output: 'Crée une interprétation artistique numérique de la ville de New York la nuit sous la pluie, mettant l'accent sur les reflets lumineux sur les surfaces mouillées; Imagine et dessine un nouveau type de fleur qui n'existe pas encore dans la nature. Assure-toi qu'elle a une allure exotique et utilise des couleurs vives et uniques que l'on ne trouve pas couramment chez les fleurs; Conçois une image représentant une scène du futur, avec des villes futuristes, des technologies avancées et des formes de vie artificielles coexistant avec des formes de vie naturelles.'\\n\\n\"\n            \"After reading the input, output each distinct idea or concept separated by semicolons.\"\n        )\n\n        user_nb_output = self.get_input_by_name(\"nb_output\", 0)\n        if user_nb_output > 1:\n            system_msg += f\"\\nThe estimated number of outputs for the next message is {user_nb_output}.\"\n\n        self.messages = [\n            {\"role\": \"system\", \"content\": system_msg},\n            {\"role\": \"user\", \"content\": input_data},\n        ]\n\n    def cancel(self):\n        pass\n"
  },
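In manual mode the splitter first maps the literal two-character sequences `\n`, `\r`, and `\t` to their real control characters before splitting, presumably because the UI sends separators as plain text. A standalone sketch of that behaviour:

```python
# Standalone illustration of the manual-mode separator handling above.
escape_dict = {r"\n": "\n", r"\r": "\r", r"\t": "\t"}

def interpret_escape_sequences(separator):
    return escape_dict.get(separator, separator)

text = "first idea\nsecond idea\nthird idea"
separator = interpret_escape_sequences(r"\n")  # the literal backslash-n string
print(text.split(separator))                   # ['first idea', 'second idea', 'third idea']
```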
  {
    "path": "packages/backend/app/processors/components/core/dall_e_prompt_processor.py",
    "content": "from ...context.processor_context import ProcessorContext\nfrom ..processor import ContextAwareProcessor\n\nfrom openai import OpenAI\n\nfrom .processor_type_name_utils import ProcessorType\n\n\nclass DallEPromptProcessor(ContextAwareProcessor):\n    processor_type = ProcessorType.DALLE_PROMPT\n\n    DEFAULT_MODEL = \"dall-e-3\"\n    DEFAULT_SIZE = \"1024x1024\"\n    DEFAULT_QUALITY = \"standard\"\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n        self.prompt = config.get(\"prompt\")\n        self.size = config.get(\"size\", DallEPromptProcessor.DEFAULT_SIZE)\n        self.quality = config.get(\"quality\", DallEPromptProcessor.DEFAULT_QUALITY)\n\n    def process(self):\n        if self.get_input_processor() is not None:\n            self.prompt = (\n                self.get_input_processor().get_output(self.get_input_node_output_key())\n                if self.prompt is None or len(self.prompt) == 0\n                else self.prompt\n            )\n\n        api_key = self._processor_context.get_value(\"openai_api_key\")\n        client = OpenAI(\n            api_key=api_key,\n        )\n\n        response = client.images.generate(\n            model=DallEPromptProcessor.DEFAULT_MODEL,\n            prompt=self.prompt,\n            n=1,\n            size=self.size,\n            quality=self.quality,\n        )\n\n        return response.data[0].url\n\n    def cancel(self):\n        pass\n"
  },
  {
    "path": "packages/backend/app/processors/components/core/display_processor.py",
    "content": "from .processor_type_name_utils import ProcessorType\nfrom ..processor import BasicProcessor\n\n\nclass DisplayProcessor(BasicProcessor):\n    processor_type = \"display\"\n\n    def __init__(self, config):\n        super().__init__(config)\n\n    def process(self):\n        input_data = None\n        if self.get_input_processor() is None:\n            return \"\"\n\n        input_data = self.get_input_processor().get_output(\n            self.get_input_node_output_key()\n        )\n\n        return input_data\n"
  },
  {
    "path": "packages/backend/app/processors/components/core/file_processor.py",
    "content": "from .processor_type_name_utils import ProcessorType\nfrom ..processor import BasicProcessor\n\n\nclass FileProcessor(BasicProcessor):\n    processor_type = ProcessorType.FILE\n\n    def __init__(self, config):\n        super().__init__(config)\n        self.url = config[\"fileUrl\"]\n\n    def process(self):\n        return self.url\n"
  },
  {
    "path": "packages/backend/app/processors/components/core/gpt_vision_processor.py",
    "content": "import re\nfrom typing import Any, List\n\nfrom ...launcher.event_type import EventType\nfrom ...launcher.processor_event import ProcessorEvent\nfrom ...context.processor_context import ProcessorContext\nfrom ..processor import ContextAwareProcessor\nfrom .processor_type_name_utils import ProcessorType\nfrom openai import OpenAI\nfrom urllib.parse import urlparse\n\n\nclass GPTVisionProcessor(ContextAwareProcessor):\n    processor_type = ProcessorType.GPT_VISION\n    DEFAULT_MODEL = \"gpt-4o\"\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n\n    def _gather_image_url_values(self) -> List[Any]:\n        \"\"\"\n        Pull the value of `element` plus every `element_<n>` child field\n        in the order they appear in self.fields_names.\n        \"\"\"\n        # Match element_0, element_1, … whatever the UI generates\n        child_pattern = re.compile(r\"^image_url_\\d+$\")\n\n        # Preserve original order: parent first, then the children\n        ordered_field_names = [\n            fname\n            for fname in self.fields_names\n            if fname == \"image_url\" or child_pattern.match(fname)\n        ]\n\n        values = [self.get_input_by_name(fname, None) for fname in ordered_field_names]\n        return [v for v in values if v is not None]\n\n    def process(self):\n        self.vision_inputs = {\n            \"prompt\": self.get_input_by_name(\"prompt\"),\n        }\n\n        images_urls = self._gather_image_url_values()\n\n        if (\n            self.vision_inputs[\"prompt\"] is None\n            or len(self.vision_inputs[\"prompt\"]) == 0\n        ):\n            raise ValueError(\"No prompt provided.\")\n\n        if len(images_urls) == 0:\n            raise ValueError(\"No image provided.\")\n\n        for url in images_urls:\n            if not self.is_valid_url(url):\n                raise ValueError(f\"Invalid URL provided. \\n {url}\")\n\n        api_key = self._processor_context.get_value(\"openai_api_key\")\n        client = OpenAI(\n            api_key=api_key,\n        )\n        content = []\n\n        for image in images_urls:\n            content.append(\n                {\n                    \"type\": \"image_url\",\n                    \"image_url\": {\"url\": image},\n                }\n            )\n\n        content.append(\n            {\n                \"type\": \"text\",\n                \"text\": self.vision_inputs[\"prompt\"],\n            }\n        )\n\n        response = client.chat.completions.create(\n            model=GPTVisionProcessor.DEFAULT_MODEL,\n            messages=[\n                {\n                    \"role\": \"user\",\n                    \"content\": content,\n                }\n            ],\n            max_tokens=4096,\n            stream=True,\n        )\n\n        final_response = \"\"\n        for chunk in response:\n            if not chunk.choices[0].delta.content:\n                continue\n            final_response += chunk.choices[0].delta.content\n            event = ProcessorEvent(self, final_response)\n            self.notify(EventType.STREAMING, event)\n\n        return final_response\n\n    def is_valid_url(self, url):\n        try:\n            result = urlparse(url)\n            return all([result.scheme, result.netloc])\n        except Exception:\n            return False\n\n    def cancel(self):\n        pass\n"
  },
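A standalone sketch of the image-URL gathering used by `_gather_image_url_values` above: the parent `image_url` field plus any `image_url_<n>` children are kept in field order, and missing values are dropped (the field names and values below are hypothetical):

```python
# Illustrative only; fields_names and values stand in for the node's real configuration.
import re

child_pattern = re.compile(r"^image_url_\d+$")
fields_names = ["prompt", "image_url", "image_url_0", "image_url_1"]
values = {"image_url": "https://example.com/a.png", "image_url_1": "https://example.com/b.png"}

ordered = [f for f in fields_names if f == "image_url" or child_pattern.match(f)]
urls = [values.get(f) for f in ordered]
print([u for u in urls if u is not None])
# ['https://example.com/a.png', 'https://example.com/b.png']
```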
  {
    "path": "packages/backend/app/processors/components/core/input_image_processor.py",
    "content": "from .processor_type_name_utils import ProcessorType\nfrom ..processor import BasicProcessor\n\n\nclass InputImageProcessor(BasicProcessor):\n    processor_type = ProcessorType.INPUT_IMAGE\n\n    def __init__(self, config):\n        super().__init__(config)\n        self.inputText = config[\"inputText\"]\n\n    def process(self):\n        return self.inputText\n"
  },
  {
    "path": "packages/backend/app/processors/components/core/input_processor.py",
    "content": "from .processor_type_name_utils import ProcessorType\nfrom ..processor import BasicProcessor\n\n\nclass InputProcessor(BasicProcessor):\n    processor_type = ProcessorType.INPUT_TEXT\n\n    def __init__(self, config):\n        super().__init__(config)\n        self.inputText = config[\"inputText\"]\n\n    def process(self):\n        return self.inputText\n"
  },
  {
    "path": "packages/backend/app/processors/components/core/llm_prompt_processor.py",
    "content": "import logging\n\nfrom app.processors.exceptions import LightException\n\nfrom ...launcher.processor_event import ProcessorEvent\nfrom ...launcher.event_type import EventType\nfrom ....llms.utils.max_token_for_model import max_token_for_model, nb_token_for_input\nfrom ...context.processor_context import ProcessorContext\nfrom ..processor import ContextAwareProcessor\nfrom openai import OpenAI\n\nfrom .processor_type_name_utils import ProcessorType\n\n\nclass LLMPromptProcessor(ContextAwareProcessor):\n    processor_type = ProcessorType.LLM_PROMPT\n    DEFAULT_MODEL = \"gpt-4o\"\n    streaming = True\n    models_with_web_search = [\n        \"gpt-4o\",\n        \"gpt-4o-mini\",\n        \"gpt-4.1\",\n        \"gpt-4.1-mini\",\n    ]\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n\n        self.model = config.get(\"model\", LLMPromptProcessor.DEFAULT_MODEL)\n        self.prompt = config.get(\"prompt\", None)\n\n    def handle_stream_answer(self, awnser):\n        event = ProcessorEvent(self, awnser)\n        self.notify(EventType.STREAMING, event)\n\n    def nb_tokens_from_messages(self, messages, model):\n        \"\"\"\n        Calculates the total number of tokens in a list of messages using nb_token_for_input.\n        \"\"\"\n        total_tokens = 0\n        token_overhead = 3\n        for message in messages:\n            content_tokens = nb_token_for_input(message[\"content\"], model)\n            total_tokens += content_tokens + token_overhead\n        total_tokens += token_overhead\n        return total_tokens\n\n    def check_for_html_tags(self, text):\n        \"\"\"\n        Checks if the given text contains HTML tags or attributes.\n        \"\"\"\n        if \"<html\" in text or \"<body\" in text:\n            return True\n        return False\n\n    def process(self):\n        api_key = self._processor_context.get_value(\"openai_api_key\")\n\n        search_enabled = False\n        if self.model in self.models_with_web_search:\n            search_enabled = self.get_input_by_name(\"web_search\", False)\n            search_context_size = self.get_input_by_name(\"search_context_size\", None)\n\n        if api_key is None:\n            raise Exception(\"No OpenAI API key found\")\n\n        af_node_version = self.get_input_by_name(\"af_node_version\", 1)\n\n        context = None\n        if af_node_version > 1:\n            context = self.get_input_by_name(\"context\", None)\n            self.prompt = self.get_input_by_name(\"prompt\", None)\n        else:\n            if self.get_input_processor() is not None:\n                context = self.get_input_processor().get_output(\n                    self.get_input_node_output_key()\n                )\n\n        if self.prompt is None:\n            raise Exception(\"No prompt provided\")\n\n        self.init_context(context)\n        total_tokens = self.nb_tokens_from_messages(self.messages, self.model)\n        model_max_tokens = max_token_for_model(self.model)\n\n        if total_tokens > model_max_tokens:\n            logging.warning(\"Messages size: \" + str(total_tokens))\n            logging.warning(\"Model capacity: \" + str(model_max_tokens))\n            message = (\n                \"The text size exceeds the model's capacity. 
\"\n                \"Consider using a model with greater context handling capabilities or utilize the 'Find Similar Text' node to create a cohesive, condensed version of the context.\"\n            )\n            if (\n                context and self.check_for_html_tags(context)\n            ) or self.check_for_html_tags(self.prompt):\n                message += (\n                    \"\\n\\n\"\n                    \"Note: HTML tags or attributes are detected within the data provided. If they are unnecessary for this task, removing them could significantly reduce the context size.\"\n                )\n            raise Exception(message)\n\n        client = OpenAI(api_key=api_key)\n\n        kwargs = {\"model\": self.model, \"input\": self.messages, \"stream\": self.streaming}\n\n        if search_enabled:\n            kwargs[\"tools\"] = [\n                {\n                    \"type\": \"web_search_preview\",\n                    \"search_context_size\": search_context_size,\n                }\n            ]\n\n        stream = client.responses.create(**kwargs)\n\n        final_response = \"\"\n\n        for event in stream:\n            type = event.type\n            if type == \"response.output_text.delta\":\n                final_response += event.delta\n                self.handle_stream_answer(final_response)\n            if type == \"response.completed\":\n                response_data = event.response\n                final_response = response_data.output_text\n            if type == \"response.failed\":\n                response_data = event.response\n                if not hasattr(response_data, \"error\"):\n                    logging.warning(f\"Error from OpenAI with no data: {response_data}\")\n                    continue\n\n                raise LightException(\n                    f\"Error from OpenAI : {response_data.error.message}\"\n                )\n            if type == \"error\":\n                raise LightException(f\"Error from OpenAI : {event.message}\")\n\n        return final_response\n\n    def init_context(self, context: str) -> None:\n        \"\"\"\n        Initialise the context for the LLM model with a standard set of messages.\n        Additional user input data can be provided, which will be added to the messages.\n\n        :param context: additional information to be used by the assistant.\n        \"\"\"\n        if context is None:\n            system_msg = \"You are a helpful assistant. \"\n            user_msg_content = self.prompt\n        else:\n            system_msg = (\n                \"You are a helpful assistant. \"\n                \"You will respond to requests indicated by the '#Request' tag, \"\n                \"using the context provided under the '#Context' tag.\"\n                \"Your response should feel natural and seamless, as if you've internalized the context \"\n                \"and are answering the request without needing to directly point back to the information provided\"\n            )\n            user_msg_content = f\"#Context: {context} \\n\\n#Request: {self.prompt}\"\n\n        self.messages = [\n            {\"role\": \"system\", \"content\": system_msg},\n            {\"role\": \"user\", \"content\": user_msg_content},\n        ]\n\n    def cancel(self):\n        pass\n"
  },
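A hedged sketch of the token-budget check performed above, reusing the same helpers and the 3-token-per-message overhead (the messages are placeholders; the import path assumes the backend `app` package is importable):

```python
# Mirrors nb_tokens_from_messages plus the capacity check in process().
from app.llms.utils.max_token_for_model import max_token_for_model, nb_token_for_input

model = "gpt-4o"
messages = [
    {"role": "system", "content": "You are a helpful assistant. "},
    {"role": "user", "content": "#Context: some long context \n\n#Request: summarize it"},
]

token_overhead = 3  # same per-message overhead used above
total_tokens = sum(
    nb_token_for_input(m["content"], model) + token_overhead for m in messages
) + token_overhead

if total_tokens > max_token_for_model(model):
    raise Exception("The text size exceeds the model's capacity.")
```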
  {
    "path": "packages/backend/app/processors/components/core/merge_processor.py",
    "content": "from ..processor import ContextAwareProcessor\nfrom .processor_type_name_utils import ProcessorType, MergeModeEnum\n\nclass MergeProcessor(ContextAwareProcessor):\n    processor_type = ProcessorType.MERGER_PROMPT\n\n    def __init__(self, config, context):\n        super().__init__(config, context)\n\n        self.merge_mode = MergeModeEnum(int(config[\"mergeMode\"]))\n\n    def update_prompt(self, inputs):\n        for idx, value in enumerate(inputs, start=1):\n            placeholder = f\"${{input-{idx}}}\"\n            self.prompt = self.prompt.replace(placeholder, str(value))\n\n    def process(self):\n        self.prompt = self.get_input_by_name(\"prompt\", \"\")\n        input_names = self.get_input_names_from_config()\n        inputs = [self.get_input_by_name(name, \"\") for name in input_names]\n\n        self.update_prompt(inputs)\n\n        return self.prompt\n\n    def cancel(self):\n        pass\n"
  },
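A standalone illustration of the `${input-N}` substitution performed by `update_prompt`, with hypothetical upstream outputs:

```python
# Placeholders ${input-1}, ${input-2}, ... are replaced in order of the node's inputs.
prompt = "Write a tweet that combines ${input-1} and ${input-2}."
inputs = ["a haiku about rain", "a product launch announcement"]  # hypothetical upstream outputs

for idx, value in enumerate(inputs, start=1):
    prompt = prompt.replace(f"${{input-{idx}}}", str(value))

print(prompt)
# Write a tweet that combines a haiku about rain and a product launch announcement.
```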
  {
    "path": "packages/backend/app/processors/components/core/processor_type_name_utils.py",
    "content": "from enum import Enum\n\n\nclass MergeModeEnum(Enum):\n    MERGE = 1\n    MERGE_AND_PROMPT = 2\n\n\nclass ProcessorType(Enum):\n    INPUT_TEXT = \"input-text\"\n    INPUT_IMAGE = \"input-image\"\n    URL_INPUT = \"url_input\"\n    LLM_PROMPT = \"llm-prompt\"\n    GPT_VISION = \"gpt-vision\"\n    YOUTUBE_TRANSCRIPT_INPUT = \"youtube_transcript_input\"\n    DALLE_PROMPT = \"dalle-prompt\"\n    STABLE_DIFFUSION_STABILITYAI_PROMPT = \"stable-diffusion-stabilityai-prompt\"\n    STABLE_VIDEO_DIFFUSION_REPLICATE = \"stable-video-diffusion-replicate\"\n    REPLICATE = \"replicate\"\n    MERGER_PROMPT = \"merger-prompt\"\n    AI_DATA_SPLITTER = \"ai-data-splitter\"\n    TRANSITION = \"transition\"\n    DISPLAY = \"display\"\n    FILE = \"file\"\n    STABLE_DIFFUSION_THREE = \"stabilityai-stable-diffusion-3-processor\"\n    TEXT_TO_SPEECH = \"openai-text-to-speech-processor\"\n    DOCUMENT_TO_TEXT = \"document-to-text-processor\"\n    STABILITYAI = \"stabilityai-generic-processor\"\n    CLAUDE = \"claude-anthropic-processor\"\n    REPLACE_TEXT = \"replace-text\"\n"
  },
  {
    "path": "packages/backend/app/processors/components/core/replicate_processor.py",
    "content": "from datetime import datetime\nimport logging\nfrom queue import Queue\nimport time\nfrom urllib.parse import urlparse\n\nfrom app.env_config import is_s3_enabled\n\n\nfrom ...launcher.event_type import EventType\nfrom ...launcher.processor_event import ProcessorEvent\n\nfrom ....utils.processor_utils import stream_download_file_as_binary\n\nfrom ...exceptions import LightException\nfrom ....utils.replicate_utils import (\n    get_input_schema_from_open_API_schema,\n    get_model_openapi_schema,\n    get_output_schema_from_open_API_schema,\n)\n\nfrom ...context.processor_context import ProcessorContext\nfrom ..processor import ContextAwareProcessor\nimport replicate\nfrom .processor_type_name_utils import ProcessorType\nfrom ....tasks.task_exception import TaskAlreadyRegisteredError\nfrom ....tasks.thread_pool_task_manager import add_task, register_task_processor\nfrom ....tasks.task_utils import wait_for_result\n\n\nclass ReplicateProcessor(ContextAwareProcessor):\n    processor_type = ProcessorType.REPLICATE\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n        self.is_processing = False\n        self.config = config\n        self.model = config.get(\"model\")\n\n        if self.model is None:\n            self.model = config.get(\"config\").get(\"nodeName\")\n\n        if \":\" not in self.model:\n            logging.warning(f\"Model {self.model} has no version\")\n            raise Exception(f\"Cannot find version for this model : {self.model}.\")\n\n        self.model_name_withouth_version = self.model.split(\":\")[0]\n\n    def get_prediction_result(\n        self, prediction, processor, timeout=3600.0, initial_sleep=0.1, max_sleep=5.0\n    ):\n        results_queue = Queue()\n        add_task(\"replicate_prediction_wait\", (prediction, processor), results_queue)\n\n        try:\n            prediction = wait_for_result(\n                results_queue, timeout, initial_sleep, max_sleep\n            )\n        except TimeoutError as e:\n            raise TimeoutError(\"Prediction result timed out\")\n\n        return prediction\n\n    @staticmethod\n    def wait_for_prediction_task(task_data):\n        prediction, processor = task_data\n        while prediction.status not in [\"succeeded\", \"failed\", \"canceled\"]:\n            time.sleep(prediction._client.poll_interval)\n            if prediction.status == \"processing\":\n                processor.is_processing = True\n            prediction.reload()\n        return prediction\n\n    def register_background_task(self):\n        try:\n            register_task_processor(\n                \"replicate_prediction_wait\",\n                self.wait_for_prediction_task,\n                max_concurrent_tasks=100,\n            )\n        except TaskAlreadyRegisteredError as e:\n            pass\n\n    def process(self):\n        api_key = self._processor_context.get_value(\"replicate_api_key\")\n\n        self.schema = get_model_openapi_schema(self.model_name_withouth_version)\n        input_processors = self.get_input_processors()\n        input_output_keys = self.get_input_node_output_keys()\n        input_names = self.get_input_names()\n\n        if input_processors:\n            for processor, name, key in zip(\n                input_processors, input_names, input_output_keys\n            ):\n                output = processor.get_output(key)\n\n                if output is None:\n                    continue\n\n                input_type = 
self._get_nested_input_schema_property(name, \"type\")\n\n                if input_type == \"integer\":\n                    output = int(output)\n                if input_type == \"number\":\n                    output = float(output)\n\n                self.config[name] = output\n\n        api = replicate.Client(api_token=api_key)\n\n        output_schema = get_output_schema_from_open_API_schema(self.schema[\"schema\"])\n        logging.debug(f\"Output schema : {output_schema}\")\n        output_type = output_schema.get(\"type\")\n        output_array_display = output_schema.get(\"x-cog-array-display\")\n        output_format = output_schema.get(\"format\")\n\n        if not \":\" in self.model:\n            logging.warning(f\"Model {self.model} has no version\")\n            raise Exception(\"Cannot find version for this model\")\n\n        rest, version_id = self.model.split(\":\")\n\n        self.config[\"disable_safety_checker\"] = True\n\n        try:\n            self.prediction = api.predictions.create(\n                version=version_id, input=self.config\n            )\n        except Exception as e:\n            logging.warning(f\"Error while creating prediction : {e}\")\n            raise LightException(\n                \"Please review your input to ensure it aligns with the expected format. \\n\\n\"\n                \"For reference, you can review the examples here: \\n\"\n                f\"https://replicate.com/{self.model_name_withouth_version}/examples\\n\\n\"\n                f\"Error message from Replicate: \\n\\n {e}\"\n            )\n\n        self.register_background_task()\n\n        self.prediction = self.get_prediction_result(self.prediction, self)\n\n        if self.prediction.status != \"succeeded\":\n            replicate_error_message = self.prediction.error\n            message_str = f\"Your Replicate prediction ended with status : {self.prediction.status} \\n\\n\"\n\n            if replicate_error_message and self.prediction.status != \"canceled\":\n                message_str += (\n                    f\"There may be an issue with the parameters provided for the model '{self.model_name_withouth_version}'. \\n\\n\"\n                    \"Please review your input to ensure it aligns with the expected format. 
\\n\\n\"\n                    \"For reference, you can review the examples here: \\n\"\n                    f\"https://replicate.com/{self.model_name_withouth_version}/examples\\n\\n\"\n                    f\"Error message from Replicate: {replicate_error_message}\"\n                )\n            exception = Exception(message_str)\n            exception.rollback_not_needed = True\n            raise exception\n\n        output = self.prediction.output\n        self.metrics = self.prediction.metrics\n        isUriOutput = output_format == \"uri\"\n\n        if output_type == \"array\" and output_array_display == \"concatenate\":\n            output = \"\".join(output)\n        elif output_type == \"array\":\n            items_type = output_schema.get(\"items\").get(\"type\")\n            items_format = output_schema.get(\"items\").get(\"format\")\n            isUriOutput = items_format == \"uri\"\n            output = output\n        elif output_type == \"string\":\n            if isinstance(output, list):\n                output = \"\".join(output)\n        else:\n            output = [output]\n\n        event = ProcessorEvent(self, output)\n        self.notify(EventType.STREAMING, event)\n\n        if isUriOutput:\n            if isinstance(output, list):\n                new_output = []\n                for uri in output:\n                    new_uri = self.upload_replicate_uri_to_storage(uri)\n                    new_output.append(new_uri)\n                output = new_output\n            else:\n                output = self.upload_replicate_uri_to_storage(output)\n\n        return output\n\n    def upload_replicate_uri_to_storage(self, uri):\n        if not is_s3_enabled():\n            return uri\n\n        storage = self.get_storage()\n        timestamp_str = datetime.now().strftime(\"%Y%m%d%H%M%S%f\")\n\n        extension = None\n        try:\n            parsed = urlparse(uri)\n            path = parsed.path\n            if \".\" in path:\n                extension = path.split(\".\")[-1]\n            else:\n                logging.warning(\"No extension found in URI: %s\", uri)\n        except Exception as e:\n            logging.warning(\"Error extracting extension from URI (%s): %s\", uri, str(e))\n\n        if not extension:\n            logging.warning(\"Aborting Upload - No extension found in URI: %s\", uri)\n            return uri\n\n        filename = f\"{self.name}-{timestamp_str}.{extension}\"\n        file = stream_download_file_as_binary(uri)\n\n        url = storage.save(filename, file)\n\n        return url\n\n    def _get_nested_input_schema_property(self, property_name, nested_key):\n        return (\n            get_input_schema_from_open_API_schema(self.schema.get(\"schema\", {}))\n            .get(\"properties\", {})\n            .get(property_name, {})\n            .get(nested_key)\n        )\n\n    def cancel(self):\n        api_key = self._processor_context.get_value(\"replicate_api_key\")\n        api = replicate.Client(api_token=api_key)\n        api.predictions.cancel(id=self.prediction.id)\n"
  },
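The Replicate node expects a pinned model string of the form `owner/name:version` and coerces upstream outputs to the types declared in the model's OpenAPI schema. A small sketch of that handling (the model string and values below are placeholders, not a real pinned version):

```python
# Sketch only: mirrors the version check and the integer/number coercion above.
model = "owner/model-name:0123456789abcdef"

if ":" not in model:
    raise Exception(f"Cannot find version for this model : {model}.")

model_name_without_version, version_id = model.split(":")

def coerce(value, schema_type):
    if schema_type == "integer":
        return int(value)
    if schema_type == "number":
        return float(value)
    return value

print(model_name_without_version, version_id)
print(coerce("4", "integer"), coerce("7.5", "number"))  # 4 7.5
```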
  {
    "path": "packages/backend/app/processors/components/core/stable_diffusion_stabilityai_prompt_processor.py",
    "content": "import base64\n\nfrom ...context.processor_context import ProcessorContext\nfrom ..processor import ContextAwareProcessor\nfrom datetime import datetime\nimport requests\n\nimport os\n\n\nfrom .processor_type_name_utils import ProcessorType\n\n\nclass StableDiffusionStabilityAIPromptProcessor(ContextAwareProcessor):\n    processor_type = ProcessorType.STABLE_DIFFUSION_STABILITYAI_PROMPT\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n        self.prompt = config.get(\"prompt\")\n\n        size = config.get(\"size\", \"1024x1024\")\n\n        self.height = int(size.split(\"x\")[0])\n        self.width = int(size.split(\"x\")[1])\n        self.style_preset = config.get(\"style_preset\", \"\")\n        self.samples = 1\n        self.engine_id = \"stable-diffusion-xl-1024-v1-0\"\n\n        self.api_host = os.getenv(\n            \"STABLE_DIFFUSION_STABILITYAI_API_HOST\", \"https://api.stability.ai\"\n        )\n\n    def prepare_and_process_response(self, response):\n        if response.status_code != 200:\n            raise Exception(\"Non-200 response: \" + str(response.text))\n\n        data = response.json()\n        first_image = data[\"artifacts\"][0][\"base64\"]\n        image_data = base64.b64decode(first_image)\n\n        storage = self.get_storage()\n        timestamp_str = datetime.now().strftime(\"%Y%m%d%H%M%S%f\")\n        filename = f\"{self.name}-{timestamp_str}.png\"\n        url = storage.save(filename, image_data)\n\n        return url\n\n    def setup_data_to_send(self):\n\n        if self.get_input_processor() is not None:\n            self.prompt = (\n                self.get_input_processor().get_output(self.get_input_node_output_key())\n                if self.prompt is None or len(self.prompt) == 0\n                else self.prompt\n            )\n\n        data_to_send = {\n            \"text_prompts\": [{\"text\": f\"{self.prompt}\"}],\n            \"cfg_scale\": 7,\n            \"height\": self.height,\n            \"width\": self.width,\n            \"samples\": self.samples,\n            \"steps\": 30,\n        }\n\n        return data_to_send\n\n    def process(self):\n        data_to_send = self.setup_data_to_send()\n        api_key = self._processor_context.get_value(\"stabilityai_api_key\")\n\n        response = requests.post(\n            f\"{self.api_host}/v1/generation/{self.engine_id}/text-to-image\",\n            headers={\n                \"Content-Type\": \"application/json\",\n                \"Accept\": \"application/json\",\n                \"Authorization\": f\"Bearer {api_key}\",\n            },\n            json=data_to_send,\n        )\n\n        return self.prepare_and_process_response(response)\n\n    def cancel(self):\n        pass\n"
  },
  {
    "path": "packages/backend/app/processors/components/core/stable_video_diffusion_replicate.py",
    "content": "import os\nfrom urllib.parse import urlparse\nfrom ...context.processor_context import ProcessorContext\nfrom ..processor import ContextAwareProcessor\nimport replicate\n\nfrom .processor_type_name_utils import ProcessorType\n\n\nclass StableVideoDiffusionReplicaterocessor(ContextAwareProcessor):\n    processor_type = ProcessorType.STABLE_VIDEO_DIFFUSION_REPLICATE\n\n    stable_video_diffusion_model = \"stability-ai/stable-video-diffusion:3f0457e4619daac51203dedb472816fd4af51f3149fa7a9e0b5ffcf1b8172438\"\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n\n        self.length = config.get(\"length\", \"14_frames_with_svd\")\n        self.frames_per_second = config.get(\"frames_per_second\", \"6\")\n\n    def process(self):\n        input_image_url = None\n        if self.get_input_processor() is not None:\n            input_image_url = self.get_input_processor().get_output(\n                self.get_input_node_output_key()\n            )\n\n        if input_image_url is None:\n            return \"No image provided.\"\n\n        if not self.is_valid_url(input_image_url):\n            return \"Invalid URL provided.\"\n\n        api_key = self._processor_context.get_value(\"replicate_api_key\")\n        api = replicate.Client(api_token=api_key)\n\n        output = api.run(\n            StableVideoDiffusionReplicaterocessor.stable_video_diffusion_model,\n            input={\n                \"cond_aug\": 0.02,\n                \"decoding_t\": 7,\n                \"input_image\": input_image_url,\n                \"video_length\": self.length,\n                \"sizing_strategy\": \"maintain_aspect_ratio\",\n                \"motion_bucket_id\": 127,\n                \"frames_per_second\": int(self.frames_per_second),\n            },\n        )\n\n        return output\n\n    def is_valid_url(self, url):\n        try:\n            result = urlparse(url)\n            return all([result.scheme, result.netloc])\n        except Exception:\n            return False\n\n    def cancel(self):\n        pass\n"
  },
  {
    "path": "packages/backend/app/processors/components/core/transition_processor.py",
    "content": "from .processor_type_name_utils import ProcessorType\nfrom ..processor import BasicProcessor\n\n\nclass TransitionProcessor(BasicProcessor):\n    processor_type = ProcessorType.TRANSITION\n\n    def __init__(self, config):\n        super().__init__(config)\n\n    def process(self):\n        input_data = None\n        if self.get_input_processor() is None:\n            return \"\"\n\n        input_data = self.get_input_processor().get_output(\n            self.get_input_node_output_key()\n        )\n\n        return input_data\n"
  },
  {
    "path": "packages/backend/app/processors/components/core/url_input_processor.py",
    "content": "import random\nfrom bs4 import BeautifulSoup\n\nimport requests\n\nfrom ....utils.processor_utils import is_valid_url\nfrom ..processor import BasicProcessor\n\nfrom .processor_type_name_utils import ProcessorType\nimport logging\nfrom markdownify import markdownify\n\n\nclass URLInputProcessor(BasicProcessor):\n    WAIT_TIMEOUT = 60\n    GET_TIMEOUT = 20\n    processor_type = ProcessorType.URL_INPUT\n\n    USER_AGENTS = [\n        \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36\",\n        \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.1 Safari/605.1.15\",\n        \"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:97.0) Gecko/20100101 Firefox/97.0\",\n    ]\n\n    def __init__(self, config):\n        super().__init__(config)\n\n    def get_random_user_agent():\n        return random.choice(URLInputProcessor.USER_AGENTS)\n\n    def fetch_content_simple(self):\n        \"\"\"\n        Fetches the website content using a simple GET request.\n        \"\"\"\n        try:\n            headers = {\"User-Agent\": URLInputProcessor.get_random_user_agent()}\n            response = requests.get(self.url, headers=headers, timeout=self.GET_TIMEOUT)\n\n            response.raise_for_status()\n            return response.text\n        except requests.RequestException as e:\n            logging.warning(f\"Failed to fetch content using simple GET: {e}\")\n            return None\n\n    def process(self):\n        self.url = self.get_input_by_name(\"url\")\n        self.loading_mode = self.get_input_by_name(\"loading_mode\", \"browser\")\n        self.effective_load_mode = self.loading_mode\n\n        # Validate URL input\n        if not self.url or not isinstance(self.url, str) or self.url.strip() == \"\":\n            raise Exception(\"No URL provided.\", \"noURLProvided\")\n\n        self.url = self.url.strip()\n        self.original_url = self.url\n\n        if not (self.url.startswith(\"https://\") or self.url.startswith(\"http://\")):\n            logging.warning(\n                \"URL does not start with 'https://' or 'http://' - compensating by prepending 'https://'.\"\n            )\n            self.url = \"https://\" + self.url\n\n        if not is_valid_url(self.url):\n            logging.warning(f\"Invalid URL: {self.url}\")\n            raise Exception(\n                f\"The provided URL '{self.original_url}' is not valid.\\n\\n\"\n                \"Please ensure the URL follows the correct format, e.g., 'https://www.example.com' or 'https://example.com'.\"\n            )\n\n        # Get additional parameters\n        self.selectors = self.get_input_by_name(\"selectors\", [])\n        self.selectors_to_remove = self.get_input_by_name(\"selectors_to_remove\", [])\n        self.with_html_tags = self.get_input_by_name(\"with_html_tags\", False)\n        self.with_html_attributes = self.get_input_by_name(\n            \"with_html_attributes\", False\n        )\n\n        response = None\n\n        task_data = {\n            \"url\": self.url,\n            \"selectors\": self.selectors,\n            \"selectors_to_remove\": self.selectors_to_remove,\n            \"with_html_tags\": self.with_html_tags,\n            \"with_html_attributes\": self.with_html_attributes,\n        }\n\n        content = self.fetch_content_simple()\n        response = self.process_content_with_beautiful_soup(content, task_data)\n\n        return response\n\n    def 
process_content_with_beautiful_soup(self, content, task_data):\n        \"\"\"\n        Process the HTML content using BeautifulSoup while considering the following parameters:\n        - selectors: a list of CSS selectors; if provided, only matching elements are kept.\n        - selectors_to_remove: a list of CSS selectors for elements that should be removed.\n        - with_html_tags: if True, the returned result will include HTML tags; otherwise, plain text.\n        - with_html_attributes: if True (and with_html_tags is True), HTML attributes will be kept;\n            otherwise, they will be stripped.\n        \"\"\"\n        if not content:\n            return \"\"\n\n        soup = BeautifulSoup(content, \"html.parser\")\n\n        selectors = task_data.get(\"selectors\", [])\n        if isinstance(selectors, str):\n            selectors = [selectors]\n\n        selectors_to_remove = task_data.get(\"selectors_to_remove\", [])\n        if isinstance(selectors_to_remove, str):\n            selectors_to_remove = [selectors_to_remove]\n\n        for selector in selectors_to_remove:\n            for element in soup.select(selector):\n                element.decompose()\n\n        if selectors:\n            selected_elements = soup.select(\", \".join(selectors))\n            if not selected_elements:\n                selected_elements = [soup]\n        else:\n            selected_elements = [soup]\n\n        with_html_tags = task_data.get(\"with_html_tags\", False)\n        with_html_attributes = task_data.get(\"with_html_attributes\", False)\n\n        if with_html_tags:\n            if not with_html_attributes:\n                for element in selected_elements:\n                    if hasattr(element, \"attrs\"):\n                        element.attrs = {}\n                    for tag in element.find_all(True):\n                        tag.attrs = {}\n            html_output = \"\".join(str(element) for element in selected_elements)\n            return html_output\n        else:\n            html_output = \"\".join(str(element) for element in selected_elements)\n            text_output = markdownify(html_output)\n            return text_output\n"
  },
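A minimal standalone version of the HTML clean-up performed by `process_content_with_beautiful_soup`: remove unwanted selectors, optionally keep only selected elements, then convert the remainder to Markdown (the HTML and selectors below are made up for illustration):

```python
# Illustrative only; mirrors the selectors_to_remove / selectors / markdownify flow above.
from bs4 import BeautifulSoup
from markdownify import markdownify

html = "<html><body><nav>menu</nav><article><h1>Title</h1><p>Body text.</p></article></body></html>"

soup = BeautifulSoup(html, "html.parser")
for element in soup.select("nav"):           # selectors_to_remove
    element.decompose()

selected = soup.select("article") or [soup]  # selectors (fall back to the whole page)
print(markdownify("".join(str(e) for e in selected)))
```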
  {
    "path": "packages/backend/app/processors/components/core/youtube_transcript_input_processor.py",
    "content": "import logging\n\nfrom ...utils.retry_mixin import RetryMixin\n\nfrom ...exceptions import LightException\n\nfrom ..processor import BasicProcessor\nfrom youtube_transcript_api import (\n    YouTubeTranscriptApi,\n    TranscriptsDisabled,\n    NoTranscriptFound,\n    VideoUnavailable,\n)\n\nfrom .processor_type_name_utils import ProcessorType\n\n\nclass YoutubeTranscriptInputProcessor(BasicProcessor, RetryMixin):\n    processor_type = ProcessorType.YOUTUBE_TRANSCRIPT_INPUT\n\n    def __init__(self, config):\n        super().__init__(config)\n        self.max_retries = 2\n        self.retry_delay = 0\n\n    def get_video_id(self):\n        if \"watch?v=\" in self.url:\n            return self.url.split(\"watch?v=\")[-1].split(\"&\")[0]\n        elif \"youtu.be/\" in self.url:\n            return self.url.split(\"youtu.be/\")[-1].split(\"?\")[0]\n        else:\n            raise LightException(f\"Invalid YouTube URL {self.url}\")\n\n    def process_with_youtube_transcript_api(self):\n        video_id = self.get_video_id()\n\n        try:\n            transcript_data = self.get_transcript(video_id)\n\n        except (TranscriptsDisabled, NoTranscriptFound) as e:\n            logging.warning(\n                f\"Transcript not available or disabled for video {self.url}\"\n            )\n            logging.debug(e)\n            raise Exception(f\"No transcription found for {self.url}\")\n\n        except VideoUnavailable as e:\n            logging.warning(f\"Video is unavailable\")\n            logging.debug(e)\n            raise Exception(f\"Video is unavailable for {self.url}\")\n\n        except Exception as e:\n            logging.warning(f\"Failed to retrieve transcript\")\n            logging.debug(e)\n            raise Exception(self.create_no_transcript_error_message(e))\n\n        content = \" \".join([entry[\"text\"] for entry in transcript_data])\n\n        if not content:\n            raise Exception(f\"No transcription found for {self.url}\")\n\n        return content\n\n    def get_transcript(self, video_id):\n        \"\"\"Attempts to get the transcript in the requested language or translate if not available.\"\"\"\n        try:\n            # Try to get the transcript in the requested language\n            return YouTubeTranscriptApi.get_transcript(\n                video_id, languages=[self.language]\n            )\n\n        except NoTranscriptFound:\n            # If transcript in the requested language is not found, try to find a translatable one\n            return self.get_translatable_transcript(video_id)\n\n        except Exception as e:\n            logging.debug(f\"Failed to retrieve transcript with first proxy\")\n            logging.debug(e)\n            # Retry with a new proxy\n            return YouTubeTranscriptApi.get_transcript(\n                video_id, languages=[self.language]\n            )\n\n    def get_translatable_transcript(self, video_id):\n        \"\"\"Finds a translatable transcript and translates it to the requested language.\"\"\"\n        try:\n            # List all transcripts for the video\n            transcripts = YouTubeTranscriptApi.list_transcripts(video_id)\n\n            # Find an auto-generated, translatable transcript\n            for transcript in transcripts:\n                if transcript.is_translatable:\n                    return transcript.translate(self.language).fetch()\n\n            # Raise an exception if no translatable transcript is found\n            raise NoTranscriptFound(\n                f\"No 
translatable transcript available for video {video_id}\"\n            )\n\n        except Exception as e:\n            logging.warning(f\"Failed to find a translatable transcript\")\n            logging.debug(e)\n            raise\n\n    def create_no_transcript_error_message(self, e):\n        requested_languages = getattr(e, \"_requested_language_codes\", None)\n        transcript_data = getattr(e, \"_transcript_data\", None)\n        error_message = f\"Failed to retrieve transcript for {self.url} \\n\\nRequested Language: {requested_languages} \\n\\n{transcript_data}\"\n        return error_message\n\n    def retrieve_transcript(self, url, language):\n        self.url = url\n        self.language = language\n\n        if not self.url:\n            raise Exception(\"No URL provided\")\n\n        content = self.run_with_retry(self.process_with_youtube_transcript_api)\n\n        if not content:\n            raise Exception(f\"No transcription found for {self.url}\")\n\n        logging.info(f\"Transcription for {self.url} retrieved successfully\")\n        return content\n\n    def process(self):\n\n        url = self.get_input_by_name(\"url\")\n        language = self.get_input_by_name(\"language\")\n        logging.info(language)\n\n        return self.retrieve_transcript(url, language)\n"
  },
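A standalone illustration of the URL parsing used by `get_video_id` above, covering both `watch?v=` and `youtu.be/` forms:

```python
# Same parsing logic as the processor, lifted out as a plain function.
def get_video_id(url):
    if "watch?v=" in url:
        return url.split("watch?v=")[-1].split("&")[0]
    elif "youtu.be/" in url:
        return url.split("youtu.be/")[-1].split("?")[0]
    raise ValueError(f"Invalid YouTube URL {url}")

print(get_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ&t=42s"))  # dQw4w9WgXcQ
print(get_video_id("https://youtu.be/dQw4w9WgXcQ?si=abc"))                # dQw4w9WgXcQ
```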
  {
    "path": "packages/backend/app/processors/components/extension/__init__.py",
    "content": ""
  },
  {
    "path": "packages/backend/app/processors/components/extension/claude_anthropic_processor.py",
    "content": "import logging\nfrom datetime import datetime\n\nimport anthropic\n\nfrom ...context.processor_context import ProcessorContext\nfrom ..model import Field, NodeConfig, Option, Condition, ConditionGroup\nfrom .extension_processor import ContextAwareExtensionProcessor\nfrom ...launcher.processor_event import ProcessorEvent\nfrom ...launcher.event_type import EventType\n\n\nclass ClaudeAnthropicProcessor(ContextAwareExtensionProcessor):\n    processor_type = \"claude-anthropic-processor\"\n\n    model_config_map = {\n        \"claude-3-7-sonnet-latest\": {\n            \"max_tokens\": 8192,\n            \"max_tokens_thinking\": 64000,\n        },\n        \"claude-3-5-haiku-latest\": {\n            \"max_tokens\": 8192,\n            \"max_tokens_thinking\": 8192,\n        },\n        \"claude-3-5-sonnet-latest\": {\n            \"max_tokens\": 8192,\n            \"max_tokens_thinking\": 8192,\n        },\n        \"claude-3-opus-latest\": {\n            \"max_tokens\": 4096,\n            \"max_tokens_thinking\": 4096,\n        },\n        \"claude-3-haiku-20240307\": {\n            \"max_tokens\": 4096,\n            \"max_tokens_thinking\": 4096,\n        },\n        \"claude-3-5-sonnet-20240620\": {\n            \"max_tokens\": 8192,\n            \"max_tokens_thinking\": 8192,\n        },\n        \"claude-opus-4-0\": {\n            \"max_tokens\": 32000,\n            \"max_tokens_thinking\": 32000,\n        },\n        \"claude-sonnet-4-0\": {\n            \"max_tokens\": 64000,\n            \"max_tokens_thinking\": 64000,\n        },\n    }\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n        self.reasoning_content = \"\"\n\n    def get_node_config(self):\n\n        # Conditions\n        claude_thinking_condition = Condition(\n            field=\"model\",\n            operator=\"in\",\n            value=[\"claude-3-7-sonnet-latest\", \"claude-opus-4-0\", \"claude-sonnet-4-0\"],\n        )\n\n        thinking_enabled_condition = Condition(\n            field=\"thinking\", operator=\"equals\", value=True\n        )\n\n        budget_token_condition = ConditionGroup(\n            conditions=[claude_thinking_condition, thinking_enabled_condition],\n            logic=\"AND\",\n        )\n\n        # Fields\n        prompt = Field(\n            name=\"prompt\",\n            label=\"prompt\",\n            type=\"textarea\",\n            required=True,\n            placeholder=\"InputTextPlaceholder\",\n            hasHandle=True,\n        )\n\n        prompt_context = Field(\n            name=\"context\",\n            label=\"context\",\n            type=\"textfield\",\n            placeholder=\"InputTextPlaceholder\",\n            hasHandle=True,\n            description=\"Additional context that will be used to answer your prompt.\",\n        )\n\n        temperature = Field(\n            name=\"temperature\",\n            label=\"temperature\",\n            type=\"slider\",\n            min=0,\n            max=1,\n            defaultValue=1,\n            placeholder=\"InputTextPlaceholder\",\n            description=\"Use temperature closer to 0.0 for analytical tasks, and closer to 1.0 for creative tasks.\",\n        )\n\n        budget_token = Field(\n            name=\"budget_tokens\",\n            label=\"budget_tokens\",\n            type=\"slider\",\n            defaultValue=1024,\n            max=63999,\n            min=1024,\n            condition=budget_token_condition,\n            description=(\n             
   \"Determines how many tokens Claude can use for its internal reasoning process. \"\n                \"Larger budgets can enable more thorough analysis for complex problems, improving response quality.\"\n            ),\n        )\n\n        model_options = [\n            Option(\n                default=False,\n                value=\"claude-3-7-sonnet-latest\",\n                label=\"Claude 3.7 Sonnet\",\n            ),\n            Option(\n                default=False,\n                value=\"claude-3-5-haiku-latest\",\n                label=\"Claude 3.5 Haiku\",\n            ),\n            Option(\n                default=False,\n                value=\"claude-3-5-sonnet-latest\",\n                label=\"Claude 3.5 Sonnet\",\n            ),\n            Option(\n                default=False,\n                value=\"claude-3-opus-latest\",\n                label=\"Claude 3 Opus\",\n            ),\n            Option(\n                default=False,\n                value=\"claude-3-haiku-20240307\",\n                label=\"Claude 3 Haiku\",\n            ),\n            Option(\n                default=False,\n                value=\"claude-opus-4-0\",\n                label=\"Claude 4 Opus\",\n            ),\n            Option(\n                default=True,\n                value=\"claude-sonnet-4-0\",\n                label=\"Claude 4 Sonnet\",\n            ),\n        ]\n\n        model = Field(\n            name=\"model\",\n            label=\"model\",\n            type=\"select\",\n            options=model_options,\n            required=True,\n        )\n\n        thinking = Field(\n            name=\"thinking\",\n            label=\"thinking\",\n            type=\"boolean\",\n            condition=claude_thinking_condition,\n        )\n\n        fields = [\n            prompt,\n            prompt_context,\n            model,\n            thinking,\n            budget_token,\n            temperature,\n        ]\n\n        config = NodeConfig(\n            nodeName=\"ClaudeAnthropic\",\n            processorType=self.processor_type,\n            icon=\"AnthropicLogo\",\n            fields=fields,\n            outputType=\"markdown\",\n            section=\"models\",\n            helpMessage=\"claudeAnthropichHelp\",\n            showHandlesNames=True,\n        )\n\n        return config\n\n    def handle_stream_awnser(self, awnser):\n        event = ProcessorEvent(self, awnser)\n        self.notify(EventType.STREAMING, event)\n\n    def process(self):\n        \"\"\"\n        Retrieve max_tokens from a map instead of the node config.\n        If 'thinking' is enabled and the model supports it, we choose a different max_tokens.\n        \"\"\"\n\n        prompt = self.get_input_by_name(\"prompt\")\n        prompt_context = self.get_input_by_name(\"context\", None)\n        model = self.get_input_by_name(\"model\", \"claude-3-5-sonnet-20240620\")\n        temperature = self.get_input_by_name(\"temperature\", 1)\n        thinking = self.get_input_by_name(\"thinking\", False)\n\n        if \"3-7\" not in model and \"4-0\" not in model:\n            thinking = False\n\n        budget_tokens = None\n        if thinking:\n            budget_tokens = self.get_input_by_name(\"budget_tokens\", 1024)\n\n        model_config = ClaudeAnthropicProcessor.model_config_map.get(\n            model,\n            ClaudeAnthropicProcessor.model_config_map[\"claude-3-5-sonnet-20240620\"],\n        )\n        if thinking:\n            max_tokens = model_config[\"max_tokens_thinking\"]\n      
  else:\n            max_tokens = model_config[\"max_tokens\"]\n\n        if prompt is None:\n            return None\n\n        api_key = self._processor_context.get_value(\"anthropic_api_key\")\n        if api_key is None:\n            raise Exception(\"No Anthropic API key found\")\n\n        client = anthropic.Anthropic(api_key=api_key)\n\n        awnser = \"\"\n\n        if prompt_context is not None:\n            messages = [\n                {\n                    \"role\": \"user\",\n                    \"content\": f\"Context: {prompt_context} \\n Prompt: {prompt}\",\n                }\n            ]\n        else:\n            messages = [{\"role\": \"user\", \"content\": prompt}]\n\n        stream_kwargs = {\n            \"model\": model,\n            \"temperature\": temperature,\n            \"max_tokens\": max_tokens,\n            \"messages\": messages,\n        }\n\n        if thinking:\n            stream_kwargs[\"thinking\"] = {\n                \"budget_tokens\": budget_tokens,\n                \"type\": \"enabled\",\n            }\n\n        with client.messages.stream(**stream_kwargs) as stream:\n            try:\n                current_block_type = None\n                for event in stream:\n                    if event.type == \"content_block_start\":\n                        current_block_type = event.content_block.type\n                    elif event.type == \"content_block_delta\":\n                        if event.delta.type == \"thinking_delta\":\n                            self.reasoning_content += event.delta.thinking\n                        elif event.delta.type == \"text_delta\":\n                            awnser += event.delta.text\n                            self.handle_stream_awnser(awnser)\n                    elif event.type == \"message_stop\":\n                        break\n            except Exception as e:\n                logging.error(f\"An error occurred during streaming : {e}\")\n                raise Exception(\"An error occurred during streaming\")\n            finally:\n                stream.close()\n\n        return awnser\n\n    def cancel(self):\n        pass\n"
  },
  {
    "path": "packages/backend/app/processors/components/extension/deepseek_processor.py",
    "content": "from ...launcher.event_type import EventType\nfrom ...launcher.processor_event import ProcessorEvent\nfrom ...context.processor_context import ProcessorContext\nfrom ..model import Field, NodeConfig, Option\nfrom .extension_processor import ContextAwareExtensionProcessor\nfrom openai import OpenAI\n\n\nclass DeepSeekProcessor(ContextAwareExtensionProcessor):\n    processor_type = \"deepseek-processor\"\n    streaming = True\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n        self.reasoning_content = \"\"\n\n    def get_node_config(self):\n        context = Field(\n            name=\"context\",\n            label=\"context\",\n            type=\"textfield\",\n            required=False,\n            placeholder=\"ContextPlaceholder\",\n            hasHandle=True,\n        )\n\n        text = Field(\n            name=\"prompt\",\n            label=\"prompt\",\n            type=\"textarea\",\n            required=True,\n            placeholder=\"PromptPlaceholder\",\n            hasHandle=True,\n        )\n\n        model_options = [\n            Option(\n                default=False,\n                value=\"deepseek-chat\",\n                label=\"V3\",\n            ),\n            Option(\n                default=True,\n                value=\"deepseek-reasoner\",\n                label=\"R1\",\n            ),\n        ]\n\n        model = Field(\n            name=\"model\",\n            type=\"option\",\n            options=model_options,\n            required=True,\n        )\n\n        fields = [model, context, text]\n\n        config = NodeConfig(\n            nodeName=\"DeepSeek\",\n            processorType=self.processor_type,\n            icon=\"DeepSeekLogo\",\n            fields=fields,\n            outputType=\"text\",\n            section=\"models\",\n            helpMessage=\"deepSeekHelp\",\n            showHandlesNames=True,\n        )\n\n        return config\n\n    def process(self):\n        prompt = self.get_input_by_name(\"prompt\")\n        context = self.get_input_by_name(\"context\", \"\")\n        model = self.get_input_by_name(\"model\")\n\n        if prompt is None:\n            return None\n\n        api_key = self._processor_context.get_value(\"deepseek_api_key\")\n\n        if api_key is None:\n            raise Exception(\"No DeepSeek API key found\")\n\n        client = OpenAI(api_key=api_key, base_url=\"https://api.deepseek.com\")\n\n        response = client.chat.completions.create(\n            model=model,\n            messages=[\n                {\n                    \"role\": \"user\",\n                    \"content\": f\"{context} {prompt}\",\n                }\n            ],\n            stream=self.streaming,\n        )\n\n        if self.streaming:\n            final_response = \"\"\n            for chunk in response:\n                r_content = getattr(chunk.choices[0].delta, \"reasoning_content\", None)\n                if r_content is not None:\n                    self.reasoning_content += r_content\n\n                if not chunk.choices[0].delta.content:\n                    continue\n                final_response += chunk.choices[0].delta.content\n                event = ProcessorEvent(self, final_response)\n                self.notify(EventType.STREAMING, event)\n\n            return final_response\n\n        return response.choices[0].message.content\n\n    def cancel(self):\n        pass\n"
  },
  {
    "path": "packages/backend/app/processors/components/extension/document_to_text_processor.py",
    "content": "import logging\nfrom queue import Queue\nimport requests\nfrom ....tasks.task_exception import TaskAlreadyRegisteredError\n\nfrom ..node_config_builder import FieldBuilder, NodeConfigBuilder\n\nfrom ....tasks.thread_pool_task_manager import add_task, register_task_processor\nfrom ....utils.processor_utils import (\n    create_temp_file_with_bytes_content,\n    get_max_file_size_in_mb,\n    is_accepted_url_file_size,\n    is_s3_file,\n    is_valid_url,\n)\nfrom ....tasks.task_utils import wait_for_result\nfrom ..model import NodeConfig\nfrom .extension_processor import BasicExtensionProcessor\nfrom langchain.document_loaders import (\n    UnstructuredPDFLoader,\n    UnstructuredHTMLLoader,\n    CSVLoader,\n    JSONLoader,\n    TextLoader,\n    PyMuPDFLoader,\n)\n\n\nclass DocumentToText(BasicExtensionProcessor):\n    processor_type = \"document-to-text-processor\"\n    WAIT_TIMEOUT = 60\n\n    def __init__(self, config):\n        super().__init__(config)\n        self.loaders = {\n            \"application/pdf\": PyMuPDFLoader,\n            \"text/plain\": TextLoader,\n            \"text/csv\": CSVLoader,\n            \"text/html\": UnstructuredHTMLLoader,\n            \"application/json\": JSONLoader,\n        }\n        self.accepted_mime_types = self.loaders.keys()\n\n    def get_node_config(self) -> NodeConfig:\n        urlField = (\n            FieldBuilder()\n            .set_name(\"document_url\")\n            .set_label(\"document_url\")\n            .set_type(\"textfield\")\n            .set_required(True)\n            .set_placeholder(\"URLPlaceholder\")\n            .set_has_handle(True)\n            .build()\n        )\n\n        return (\n            NodeConfigBuilder()\n            .set_node_name(\"DocumentToText\")\n            .set_processor_type(self.processor_type)\n            .set_icon(\"FaFile\")\n            .set_section(\"input\")\n            .set_help_message(\"documentToTextHelp\")\n            .set_show_handles(True)\n            .set_output_type(\"text\")\n            .set_default_hide_output(True)\n            .add_field(urlField)\n            .build()\n        )\n\n    def get_loader_for_mime_type(self, mime_type, path):\n        \"\"\"Return an instance of the loader class associated with the given mime_type.\"\"\"\n        loader_class = self.loaders.get(mime_type)\n        if loader_class:\n            return loader_class(file_path=path)\n        else:\n            return None\n\n    def load_document(self, loader):\n\n        results_queue = Queue()\n        add_task(\"document_loader\", loader, results_queue)\n        document = None\n\n        try:\n            document = wait_for_result(results_queue)\n        except TimeoutError as e:\n            raise TimeoutError(\"Timeout - The document took too long to load\")\n\n        return document\n\n    @staticmethod\n    def document_loader_task(loader):\n        return loader.load()\n\n    def register_background_task(self):\n        try:\n            register_task_processor(\"document_loader\", self.document_loader_task)\n        except TaskAlreadyRegisteredError as e:\n            pass\n\n    def process(self):\n        url = self.get_input_by_name(\"document_url\")\n\n        if not is_valid_url(url):\n            raise ValueError(\"Invalid URL\")\n\n        if not is_s3_file(url) and not is_accepted_url_file_size(url):\n            raise ValueError(\n                f\"File size is too large (Max : {get_max_file_size_in_mb()})\"\n            )\n\n        r = requests.get(url)\n        
if r.status_code != 200:\n            raise ValueError(\n                \"Check the url of your file; returned status code %s\" % r.status_code\n            )\n\n        mime_type = r.headers.get(\"Content-Type\")\n        if not is_s3_file(url) and mime_type not in self.accepted_mime_types:\n            raise ValueError(\"The file type is not supported.\")\n\n        temp_file, temp_dir = create_temp_file_with_bytes_content(r.content)\n        file_path = str(temp_file)\n\n        loader = self.get_loader_for_mime_type(mime_type, file_path)\n\n        self.register_background_task()\n\n        try:\n            document = self.load_document(loader)\n            if len(document) > 0:\n                output = \"\"\n                for doc in document:\n                    output += doc.page_content\n                return output\n            else:\n                return None\n        except Exception as e:\n            logging.warning(f\"Failed to load document from URL: {e}\")\n            raise e\n        finally:\n            temp_dir.cleanup()\n"
  },
  {
    "path": "packages/backend/app/processors/components/extension/extension_processor.py",
    "content": "from ..model import NodeConfig\nfrom ...context.processor_context import ProcessorContext\nfrom ..processor import BasicProcessor, ContextAwareProcessor\n\n\nclass ExtensionProcessor:\n    \"\"\"Base interface for extension processors\"\"\"\n\n    def get_node_config(self) -> NodeConfig:\n        pass\n\n\nclass DynamicExtensionProcessor:\n    \"\"\"Base interface for dynamic extension processors - These nodes config are populated by an API call after a user choice\"\"\"\n\n    def get_dynamic_node_config(self, data) -> NodeConfig:\n        pass\n\n\nclass BasicExtensionProcessor(ExtensionProcessor, BasicProcessor):\n    \"\"\"A basic extension processor that does not depend on user-specific parameters.\n\n    Inherits basic processing capabilities from BasicProcessor and schema handling from ExtensionProcessor.\n\n    Args:\n        config (dict): Configuration dictionary for processor setup.\n    \"\"\"\n\n    def __init__(self, config):\n        super().__init__(config)\n\n\nclass ContextAwareExtensionProcessor(ExtensionProcessor, ContextAwareProcessor):\n    \"\"\"An extension processor that requires context about the user, such as user-specific settings or keys.\n\n    This class supports context-aware processing by incorporating user context into the processing flow.\n\n    Args:\n        config (dict): Configuration dictionary for processor setup.\n        context (ProcessorContext, optional): Context object containing user-specific parameters. Defaults to None.\n    \"\"\"\n\n    def __init__(self, config, context: ProcessorContext = None):\n        super().__init__(config)\n        self._processor_context = context\n"
  },
  {
    "path": "packages/backend/app/processors/components/extension/generate_number_processor.py",
    "content": "import random\n\nfrom ..node_config_builder import FieldBuilder, NodeConfigBuilder\nfrom ...context.processor_context import ProcessorContext\nfrom .extension_processor import (\n    ContextAwareExtensionProcessor,\n    DynamicExtensionProcessor,\n)\n\n\nclass GenerateNumberProcessor(\n    ContextAwareExtensionProcessor, DynamicExtensionProcessor\n):\n    processor_type = \"generate-number-processor\"\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n\n    def get_node_config(self):\n        min_field = (\n            FieldBuilder()\n            .set_name(\"min\")\n            .set_label(\"Min\")\n            .set_type(\"numericfield\")\n            .set_description(\"minimumValueForTheRandomNumber\")\n            .set_default_value(0)\n            .build()\n        )\n        max_field = (\n            FieldBuilder()\n            .set_name(\"max\")\n            .set_label(\"Max\")\n            .set_type(\"numericfield\")\n            .set_description(\"maximumValueForTheRandomNumber\")\n            .set_default_value(1000)\n            .build()\n        )\n        return (\n            NodeConfigBuilder()\n            .set_node_name(\"Generate Number\")\n            .set_processor_type(self.processor_type)\n            .set_section(\"tools\")\n            .set_help_message(\"generateNumberHelp\")\n            .set_show_handles(True)\n            .add_field(min_field)\n            .add_field(max_field)\n            .set_output_type(\"text\")\n            .set_icon(\"GiPerspectiveDiceSix\")\n            .build()\n        )\n\n    def process(self):\n        # Retrieve optional parameters; default values are used if they are not provided.\n        min_val = self.get_input_by_name(\"min\")\n        max_val = self.get_input_by_name(\"max\")\n\n        try:\n            min_val = int(min_val) if min_val is not None else 0\n            max_val = int(max_val) if max_val is not None else 500\n        except ValueError:\n            raise ValueError(\"Both 'min' and 'max' should be valid numbers\")\n\n        if min_val > max_val:\n            raise ValueError(\"'min' should not be greater than 'max'\")\n\n        # Generate and return a random number in the inclusive range [min_val, max_val]\n        random_number = random.randint(min_val, max_val)\n        return [random_number]\n\n    def cancel(self):\n        pass\n"
  },
  {
    "path": "packages/backend/app/processors/components/extension/gpt_image_processor.py",
    "content": "import base64\nimport mimetypes\nimport os\nimport re\nfrom datetime import datetime\nfrom io import BytesIO\nfrom urllib.parse import unquote, urlparse\n\nimport requests\nfrom openai import OpenAI\n\nfrom ...context.processor_context import ProcessorContext\nfrom ..model import Field, NodeConfig, Option\nfrom ..node_config_builder import NodeConfigBuilder\nfrom .extension_processor import (\n    ContextAwareExtensionProcessor,\n    DynamicExtensionProcessor,\n)\n\n\nclass GPTImageProcessor(ContextAwareExtensionProcessor, DynamicExtensionProcessor):\n    processor_type = \"gpt-image-processor\"\n\n    # our two modes\n    methods = [\"generate\", \"edit\"]\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n        self.method = self.get_input_by_name(\"method\")\n\n    def get_node_config(self):\n        # top-level mode selector\n        method_options = [\n            Option(default=(m == \"generate\"), value=m, label=m.title())\n            for m in self.methods\n        ]\n        method_field = Field(\n            name=\"method\",\n            label=\"mode\",\n            type=\"select\",\n            options=method_options,\n            required=True,\n        )\n\n        return (\n            NodeConfigBuilder()\n            .set_node_name(\"GPT Image\")\n            .set_processor_type(self.processor_type)\n            .set_icon(\"OpenAILogo\")\n            .set_help_message(\"gptImageHelp\")\n            .set_section(\"models\")\n            .add_field(method_field)\n            .set_is_dynamic(True)\n            .build()\n        )\n\n    # — builders for each mode's fields —\n\n    def build_generate_config(self, builder):\n        # same fields as your original generate case\n        builder.add_field(\n            Field(\n                name=\"model\",\n                label=\"Model\",\n                type=\"select\",\n                options=[\n                    Option(default=True, value=\"gpt-image-1\", label=\"gpt-image-1\")\n                ],\n                required=True,\n            )\n        )\n\n        builder.add_field(\n            Field(\n                name=\"prompt\",\n                label=\"Prompt\",\n                type=\"textarea\",\n                required=True,\n                placeholder=\"InputTextPlaceholder\",\n                hasHandle=True,\n            )\n        )\n\n        builder.add_field(\n            Field(\n                name=\"size\",\n                label=\"Size\",\n                type=\"select\",\n                options=[\n                    Option(default=True, value=\"auto\", label=\"auto\"),\n                    Option(default=False, value=\"1024x1024\", label=\"1024x1024\"),\n                    Option(default=False, value=\"1536x1024\", label=\"1536x1024\"),\n                    Option(default=False, value=\"1024x1536\", label=\"1024x1536\"),\n                ],\n                required=True,\n            )\n        )\n        builder.add_field(\n            Field(\n                name=\"quality\",\n                label=\"Quality\",\n                type=\"select\",\n                options=[\n                    Option(default=True, value=\"auto\", label=\"auto\"),\n                    Option(default=False, value=\"low\", label=\"low\"),\n                    Option(default=False, value=\"medium\", label=\"medium\"),\n                    Option(default=False, value=\"high\", label=\"high\"),\n                ],\n                
required=True,\n            )\n        )\n        builder.add_field(\n            Field(\n                name=\"background\",\n                label=\"Background\",\n                type=\"select\",\n                options=[\n                    Option(default=True, value=\"opaque\", label=\"opaque\"),\n                    Option(default=False, value=\"transparent\", label=\"transparent\"),\n                ],\n                required=True,\n            )\n        )\n        builder.add_field(\n            Field(\n                name=\"moderation\",\n                label=\"Moderation\",\n                type=\"select\",\n                options=[\n                    Option(default=False, value=\"auto\", label=\"auto\"),\n                    Option(default=True, value=\"low\", label=\"low\"),\n                ],\n                required=True,\n            )\n        )\n        builder.set_output_type(\"imageUrl\")\n\n    def build_edit_config(self, builder):\n        # same fields as your original edit case\n        builder.add_field(\n            Field(\n                name=\"model\",\n                label=\"Model\",\n                type=\"select\",\n                options=[\n                    Option(default=True, value=\"gpt-image-1\", label=\"gpt-image-1\")\n                ],\n                required=True,\n            )\n        )\n        builder.add_field(\n            Field(\n                name=\"prompt\",\n                label=\"Prompt\",\n                type=\"textarea\",\n                required=True,\n                placeholder=\"InputTextPlaceholder\",\n                hasHandle=True,\n            )\n        )\n\n        builder.add_field(\n            Field(\n                name=\"mask\",\n                label=\"Mask\",\n                type=\"fileUpload\",\n                hasHandle=True,\n                description=\"gptImageMaskDescription\",\n            )\n        )\n\n        builder.add_field(\n            Field(\n                name=\"image\",\n                label=\"Image\",\n                type=\"fileUpload\",\n                hasHandle=True,\n                canAddChildrenFields=True,\n            )\n        )\n\n        builder.set_output_type(\"imageUrl\")\n\n    method_config_builders = {\n        \"generate\": build_generate_config,\n        \"edit\": build_edit_config,\n    }\n\n    def get_dynamic_node_config(self, data) -> NodeConfig:\n        method = data[\"method\"]\n        builder = (\n            NodeConfigBuilder()\n            .set_node_name(f\"GPT Image – {method.title()}\")\n            .set_processor_type(self.processor_type)\n            .set_icon(\"OpenAILogo\")\n            .set_section(\"models\")\n            .set_show_handles(True)\n        )\n        # inject the right fields\n        self.method_config_builders[method](self, builder)\n        return builder.build()\n\n    @staticmethod\n    def get_image_file_from_url(url):\n        response = requests.get(url)\n        response.raise_for_status()\n        parsed = urlparse(url)\n        filename = os.path.basename(parsed.path) or \"image.png\"\n        filename = unquote(filename)\n        if \".\" not in filename:\n            ext = mimetypes.guess_extension(response.headers.get(\"Content-Type\", \"\"))\n            filename += ext or \".png\"\n        buf = BytesIO(response.content)\n        buf.name = filename\n        return buf\n\n    def process(self):\n        prompt = self.get_input_by_name(\"prompt\")\n        model = self.get_input_by_name(\"model\")\n\n 
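       # Both modes use an OpenAI client authenticated with the user's API key from the processor context.\n 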
       api_key = self._processor_context.get_value(\"openai_api_key\")\n        if api_key is None:\n            raise Exception(\"No OpenAI API key found\")\n        client = OpenAI(api_key=api_key)\n\n        if self.method == \"edit\":\n            # gather all image_* fields just like before\n            images_fields = [\n                f for f in self.fields_names if re.match(r\"^image_\\d+$\", f)\n            ]\n            images_fields.insert(0, \"image\")\n            urls = [self.get_input_by_name(fld, None) for fld in images_fields]\n            urls = [u for u in urls if u]\n            files = [GPTImageProcessor.get_image_file_from_url(u) for u in urls]\n            mask = self.get_input_by_name(\"mask\", None)\n            if mask:\n                mask = GPTImageProcessor.get_image_file_from_url(mask)\n                result = client.images.edit(\n                    model=model,\n                    prompt=prompt,\n                    image=files,\n                    mask=mask,\n                )\n            else:\n                result = client.images.edit(\n                    model=model,\n                    prompt=prompt,\n                    image=files,\n                )\n\n        else:\n            # generate\n            size = self.get_input_by_name(\"size\")\n            quality = self.get_input_by_name(\"quality\")\n            background = self.get_input_by_name(\"background\")\n            moderation = self.get_input_by_name(\"moderation\")\n            result = client.images.generate(\n                model=model,\n                prompt=prompt,\n                size=size,\n                quality=quality,\n                background=background,\n                moderation=moderation,\n            )\n\n        img_b64 = result.data[0].b64_json\n        img_bytes = base64.b64decode(img_b64)\n        storage = self.get_storage()\n        fname = f\"{self.name}-{datetime.now():%Y%m%d%H%M%S%f}.png\"\n        return storage.save(fname, img_bytes)\n\n    def cancel(self):\n        pass\n"
  },
  {
    "path": "packages/backend/app/processors/components/extension/http_get_processor.py",
    "content": "import logging\nimport requests\nimport json\nfrom urllib.parse import urlparse\n\nfrom ..node_config_builder import FieldBuilder, NodeConfigBuilder\nfrom ...context.processor_context import ProcessorContext\nfrom .extension_processor import ContextAwareExtensionProcessor\n\n\nclass HttpGetProcessor(ContextAwareExtensionProcessor):\n    processor_type = \"http-get-processor\"\n    max_timeout = 5  # Maximum timeout in seconds\n    max_response_size_in_mb = 2\n    max_response_size = (\n        1024 * 1024 * max_response_size_in_mb\n    )  # Maximum response size in bytes (2 MB)\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n\n    def get_node_config(self):\n        url_field = (\n            FieldBuilder()\n            .set_name(\"url\")\n            .set_label(\"URL\")\n            .set_type(\"textfield\")\n            .set_required(True)\n            .set_placeholder(\"httpGetProcessorURLPlaceholder\")\n            .set_description(\"httpGetProcessorURLDescription\")\n            .set_has_handle(True)\n            .build()\n        )\n\n        headers_field = (\n            FieldBuilder()\n            .set_name(\"headers\")\n            .set_label(\"Headers\")\n            .set_type(\"dictionnary\")\n            .set_description(\"httpGetProcessorHeadersDescription\")\n            .build()\n        )\n\n        return (\n            NodeConfigBuilder()\n            .set_node_name(\"HTTP Get\")\n            .set_processor_type(self.processor_type)\n            .set_icon(\"TbHttpGet\")\n            .set_section(\"input\")\n            .set_help_message(\"httpGetProcessorHelp\")\n            .set_output_type(\"text\")\n            .set_show_handles(True)\n            .add_field(url_field)\n            .add_field(headers_field)\n            .build()\n        )\n\n    def convert_headers_array_to_json(self, headers_array):\n        headers = {}\n        for header in headers_array:\n            headers[header[\"key\"]] = header[\"value\"]\n        return json.dumps(headers)\n\n    def process(self):\n        url = self.get_input_by_name(\"url\")\n        headers = self.get_input_by_name(\"headers\")\n        timeout = self.get_input_by_name(\"timeout\")\n\n        if not url:\n            raise ValueError(\"URL is required.\")\n\n        # Validate URL to prevent misuse\n        parsed_url = urlparse(url)\n        if not parsed_url.scheme.startswith(\"http\"):\n            raise ValueError(\"Invalid URL scheme. 
Only HTTP and HTTPS are allowed.\")\n\n        timeout = HttpGetProcessor.max_timeout\n\n        if headers:\n            headers = self.convert_headers_array_to_json(headers)\n            try:\n                headers = json.loads(headers)\n\n            except json.JSONDecodeError:\n                raise Exception(\"Headers must be valid JSON.\")\n        else:\n            headers = {}\n\n        try:\n            response = requests.get(\n                url=url,\n                headers=headers,\n                timeout=timeout,\n                allow_redirects=False,\n                stream=True,\n            )\n            response.raise_for_status()\n        except requests.exceptions.RequestException as e:\n            logging.warning(f\"HTTP GET request failed: {str(e)}\")\n            raise Exception(f\"HTTP GET request failed: {str(e)}\")\n\n        # Limit the response size\n        content = bytes()\n        total_size = 0\n        try:\n            for chunk in response.iter_content(chunk_size=8192):\n                content += chunk\n                total_size += len(chunk)\n                if total_size > HttpGetProcessor.max_response_size:\n                    logging.warning(\"Response size exceeds maximum allowed limit.\")\n                    raise Exception(\n                        f\"Response size exceeds maximum allowed limit of {HttpGetProcessor.max_response_size_in_mb} MB. If you need to load a file, consider using the file node in URL mode.\"\n                    )\n        finally:\n            response.close()\n\n        content_type = response.headers.get(\"Content-Type\", \"\")\n\n        if \"application/json\" in content_type:\n            try:\n                return [json.loads(content.decode(response.encoding or \"utf-8\"))]\n\n            except ValueError:\n                raise Exception(\"Failed to parse JSON response.\")\n        else:\n            return content.decode(response.encoding or \"utf-8\", errors=\"replace\")\n\n    def cancel(self):\n        pass\n"
  },
  {
    "path": "packages/backend/app/processors/components/extension/open_router_processor.py",
    "content": "import logging\n\nfrom ....env_config import is_local_environment\n\nfrom ...launcher.event_type import EventType\nfrom ...launcher.processor_event import ProcessorEvent\nfrom ...context.processor_context import ProcessorContext\nfrom ..model import Field, NodeConfig, Option, Condition\nfrom .extension_processor import ContextAwareExtensionProcessor\nfrom openai import OpenAI\nimport requests\nfrom cachetools import TTLCache, cached\n\n\ndef load_models_from_file():\n    import json\n    import os\n\n    current_dir = os.path.dirname(os.path.abspath(__file__))\n    models_file_path = os.path.join(\n        current_dir,\n        \"..\",\n        \"..\",\n        \"..\",\n        \"..\",\n        \"resources\",\n        \"data\",\n        \"openrouter_models.json\",\n    )\n    with open(models_file_path, \"r\") as file:\n        models = json.load(file)\n\n    return models.get(\"data\", [])\n\n\n@cached(TTLCache(maxsize=1, ttl=120000))\ndef get_models():\n    \"\"\"\n    Fetches the list of available models from OpenRouter API.\n    Caches the result to avoid redundant API calls.\n    \"\"\"\n    url = \"https://openrouter.ai/api/v1/models\"\n    try:\n        response = requests.get(url, timeout=10)\n        response.raise_for_status()\n        models = response.json()\n        return models.get(\"data\", [])\n    except Exception as e:\n        logging.warning(\n            f\"Failed to fetch OpenRouter models - Loading from file instead: {e}\"\n        )\n        return load_models_from_file()\n\n\n@cached(TTLCache(maxsize=1, ttl=120000))\ndef get_text_to_image_model_ids():\n    \"\"\"\n    Returns a list of model IDs that support text to image generation.\n    \"\"\"\n    available_models = get_models()\n    text_image_model_ids = [\n        model[\"id\"]\n        for model in available_models\n        if model.get(\"architecture\").get(\"modality\") == \"text+image->text\"\n    ]\n    return text_image_model_ids\n\n\nclass OpenRouterProcessor(ContextAwareExtensionProcessor):\n    processor_type = \"openrouter-processor\"\n    streaming = True\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n\n    def get_node_config(self):\n        context = Field(\n            name=\"context\",\n            label=\"context\",\n            type=\"textfield\",\n            required=False,\n            placeholder=\"ContextPlaceholder\",\n            hasHandle=True,\n        )\n\n        text = Field(\n            name=\"prompt\",\n            label=\"prompt\",\n            type=\"textarea\",\n            required=True,\n            placeholder=\"PromptPlaceholder\",\n            hasHandle=True,\n        )\n\n        available_models = get_models()\n\n        target_default_model_id = \"google/gemma-2-9b-it:free\"\n\n        model_options = [\n            Option(\n                default=(model[\"id\"] == target_default_model_id),\n                value=model[\"id\"],\n                label=model.get(\"name\", model[\"id\"]),\n            )\n            for model in available_models\n        ]\n\n        model_field = Field(\n            name=\"model\",\n            label=\"model\",\n            type=\"select\",\n            options=model_options,\n            required=True,\n        )\n\n        text_image_model_ids = get_text_to_image_model_ids()\n\n        image_url_condition = Condition(\n            field=\"model\", operator=\"in\", value=text_image_model_ids\n        )\n\n        image_url = Field(\n            
name=\"image_url\",\n            label=\"Image URL\",\n            type=\"textfield\",\n            placeholder=\"InputImagePlaceholder\",\n            hasHandle=True,\n            condition=image_url_condition,\n        )\n\n        fields = [model_field, image_url, context, text]\n\n        config = NodeConfig(\n            nodeName=\"OpenRouter\",\n            processorType=self.processor_type,\n            icon=\"OpenRouterLogo\",\n            fields=fields,\n            outputType=\"text\",\n            section=\"models\",\n            helpMessage=\"openRouterHelp\",\n            showHandlesNames=True,\n        )\n\n        return config\n\n    def process(self):\n        prompt = self.get_input_by_name(\"prompt\")\n        context = self.get_input_by_name(\"context\", \"\")\n        model = self.get_input_by_name(\"model\")\n        image_url = self.get_input_by_name(\"image_url\", None)\n\n        if prompt is None:\n            return None\n\n        api_key = self._processor_context.get_value(\"openrouter_api_key\")\n\n        if api_key is None:\n            raise Exception(\"No OpenRouter API key found\")\n\n        client = OpenAI(base_url=\"https://openrouter.ai/api/v1\", api_key=api_key)\n\n        text_image_model_ids = get_text_to_image_model_ids()\n\n        if image_url is not None and model in text_image_model_ids:\n            content = [\n                {\n                    \"type\": \"image_url\",\n                    \"image_url\": {\"url\": image_url},\n                },\n                {\"type\": \"text\", \"text\": prompt},\n            ]\n        else:\n            content = f\"{context} {prompt}\"\n\n        response = client.chat.completions.create(\n            model=model,\n            messages=[{\"role\": \"user\", \"content\": content}],\n            stream=self.streaming,\n        )\n\n        if self.streaming:\n            final_response = \"\"\n            for chunk in response:\n                if not chunk.choices[0].delta.content:\n                    continue\n                final_response += chunk.choices[0].delta.content\n                event = ProcessorEvent(self, final_response)\n                self.notify(EventType.STREAMING, event)\n\n            return final_response\n\n        return response.choices[0].message.content\n\n    def cancel(self):\n        pass\n"
  },
  {
    "path": "packages/backend/app/processors/components/extension/openai_reasoning_processor.py",
    "content": "import logging\n\nfrom app.processors.exceptions import LightException\nfrom ...launcher.event_type import EventType\nfrom ...launcher.processor_event import ProcessorEvent\nfrom ...context.processor_context import ProcessorContext\nfrom ..model import Field, FieldCondition, NodeConfig, Option\nfrom .extension_processor import ContextAwareExtensionProcessor\nfrom openai import OpenAI\n\n\nclass OpenAIReasoningProcessor(ContextAwareExtensionProcessor):\n    processor_type = \"openai-reasoning-processor\"\n    streaming = True\n    models_with_reasoning_effort = [\"o3-mini\", \"o4-mini\", \"o3\"]\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n\n    def get_node_config(self):\n        context = Field(\n            name=\"context\",\n            label=\"context\",\n            type=\"textfield\",\n            required=False,\n            placeholder=\"ContextPlaceholder\",\n            hasHandle=True,\n        )\n\n        text = Field(\n            name=\"prompt\",\n            label=\"prompt\",\n            type=\"textarea\",\n            required=True,\n            placeholder=\"PromptPlaceholder\",\n            hasHandle=True,\n        )\n\n        model_options = [\n            Option(\n                default=True,\n                value=\"o4-mini\",\n                label=\"o4-mini\",\n            ),\n            Option(\n                default=False,\n                value=\"o3-mini\",\n                label=\"o3-mini\",\n            ),\n            Option(\n                default=False,\n                value=\"o3\",\n                label=\"o3\",\n            ),\n            Option(\n                default=False,\n                value=\"o1-pro\",\n                label=\"o1-pro\",\n            ),\n            Option(\n                default=False,\n                value=\"o1\",\n                label=\"o1\",\n            ),\n        ]\n\n        model = Field(\n            name=\"model\",\n            type=\"option\",\n            options=model_options,\n            required=True,\n        )\n\n        reasoning_effort_options = [\n            Option(\n                default=False,\n                value=\"low\",\n                label=\"low\",\n            ),\n            Option(\n                default=True,\n                value=\"medium\",\n                label=\"medium\",\n            ),\n            Option(\n                default=False,\n                value=\"high\",\n                label=\"high\",\n            ),\n        ]\n\n        reasoning_effort = Field(\n            name=\"reasoning_effort\",\n            label=\"reasoning_effort\",\n            type=\"select\",\n            options=reasoning_effort_options,\n            condition=FieldCondition(\n                field=\"model\",\n                operator=\"in\",\n                value=OpenAIReasoningProcessor.models_with_reasoning_effort,\n            ),\n        )\n\n        fields = [model, context, text, reasoning_effort]\n\n        config = NodeConfig(\n            nodeName=\"OpenAI o-series\",\n            processorType=self.processor_type,\n            icon=\"OpenAILogo\",\n            fields=fields,\n            outputType=\"text\",\n            section=\"models\",\n            helpMessage=\"openaio1Help\",\n            showHandlesNames=True,\n        )\n\n        return config\n\n    def handle_stream_answer(self, awnser):\n        event = ProcessorEvent(self, awnser)\n        self.notify(EventType.STREAMING, event)\n\n    
def process(self):\n        prompt = self.get_input_by_name(\"prompt\")\n        context = self.get_input_by_name(\"context\", \"\")\n        model = self.get_input_by_name(\"model\")\n        reasoning_effort = self.get_input_by_name(\"reasoning_effort\", \"medium\")\n\n        if prompt is None:\n            return None\n\n        api_key = self._processor_context.get_value(\"openai_api_key\")\n\n        if api_key is None:\n            raise Exception(\"No OpenAI API key found\")\n\n        client = OpenAI(api_key=api_key)\n\n        kwargs = {\n            \"model\": model,\n            \"input\": [{\"role\": \"user\", \"content\": f\"{context} {prompt}\"}],\n            \"stream\": self.streaming,\n        }\n\n        if model in OpenAIReasoningProcessor.models_with_reasoning_effort:\n            kwargs[\"reasoning\"] = {\"effort\": reasoning_effort}\n\n        stream = client.responses.create(**kwargs)\n        final_response = \"\"\n        for event in stream:\n            type = event.type\n            if type == \"response.output_text.delta\":\n                final_response += event.delta\n                self.handle_stream_answer(final_response)\n            if type == \"response.completed\":\n                response_data = event.response\n                final_response = response_data.output_text\n            if type == \"response.failed\":\n                response_data = event.response\n                if not hasattr(response_data, \"error\"):\n                    logging.warning(f\"Error from OpenAI with no data: {response_data}\")\n                    continue\n\n                raise LightException(\n                    f\"Error from OpenAI : {response_data.error.message}\"\n                )\n            if type == \"error\":\n                raise LightException(f\"Error from OpenAI : {event.message}\")\n\n        return final_response\n\n    def cancel(self):\n        pass\n"
  },
  {
    "path": "packages/backend/app/processors/components/extension/openai_text_to_speech_processor.py",
    "content": "import logging\nimport re\nfrom ...context.processor_context import ProcessorContext\nfrom ..model import Field, NodeConfig, Option, Condition\nfrom .extension_processor import ContextAwareExtensionProcessor\nfrom openai import OpenAI\nfrom datetime import datetime\nimport io\nfrom pydub import AudioSegment\nimport eventlet\n\n\nclass OpenAITextToSpeechProcessor(ContextAwareExtensionProcessor):\n    processor_type = \"openai-text-to-speech-processor\"\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n\n    def get_node_config(self):\n        text = Field(\n            name=\"text\",\n            label=\"text\",\n            type=\"textfield\",\n            required=True,\n            placeholder=\"InputTextPlaceholder\",\n            hasHandle=True,\n        )\n\n        voices_options = [\n            Option(\n                default=True,\n                value=\"alloy\",\n                label=\"alloy\",\n            ),\n            Option(\n                default=False,\n                value=\"ash\",\n                label=\"ash\",\n            ),\n            Option(\n                default=False,\n                value=\"ballad\",\n                label=\"ballad\",\n            ),\n            Option(\n                default=False,\n                value=\"coral\",\n                label=\"coral\",\n            ),\n            Option(\n                default=False,\n                value=\"echo\",\n                label=\"echo\",\n            ),\n            Option(\n                default=False,\n                value=\"fable\",\n                label=\"fable\",\n            ),\n            Option(\n                default=False,\n                value=\"onyx\",\n                label=\"onyx\",\n            ),\n            Option(\n                default=False,\n                value=\"nova\",\n                label=\"nova\",\n            ),\n            Option(\n                default=False,\n                value=\"sage\",\n                label=\"sage\",\n            ),\n            Option(\n                default=False,\n                value=\"shimmer\",\n                label=\"shimmer\",\n            ),\n        ]\n\n        voice = Field(\n            name=\"voice\",\n            label=\"voice\",\n            type=\"select\",\n            options=voices_options,\n            required=True,\n        )\n\n        model_options = [\n            Option(\n                default=True,\n                value=\"gpt-4o-mini-tts\",\n                label=\"gpt-4o-mini-tts\",\n            ),\n            Option(\n                default=False,\n                value=\"tts-1\",\n                label=\"tts-1\",\n            ),\n            Option(\n                default=False,\n                value=\"tts-1-hd\",\n                label=\"tts-1-hd\",\n            ),\n        ]\n\n        model = Field(\n            name=\"model\",\n            label=\"model\",\n            type=\"select\",\n            options=model_options,\n            required=True,\n        )\n\n        instructions_enabled_condition = Condition(\n            field=\"model\", operator=\"equals\", value=\"gpt-4o-mini-tts\"\n        )\n\n        instructions = Field(\n            name=\"instruction\",\n            label=\"instruction\",\n            type=\"textfield\",\n            required=False,\n            placeholder=\"TTSInstructionPlaceholder\",\n            description=\"TTSInstructionDescription\",\n            
hasHandle=True,\n            condition=instructions_enabled_condition,\n        )\n\n        fields = [text, model, voice, instructions]\n\n        config = NodeConfig(\n            nodeName=\"TextToSpeech\",\n            processorType=self.processor_type,\n            icon=\"OpenAILogo\",\n            fields=fields,\n            outputType=\"audioUrl\",\n            section=\"models\",\n            helpMessage=\"textToSpeechHelp\",\n            showHandlesNames=True,\n            keywords=[\"Audio\", \"Speech\", \"OpenAI\", \"TTS\"],\n        )\n\n        return config\n\n    def split_text_into_chunks(text, max_length=4096):\n        \"\"\"\n        Split text into chunks of up to max_length characters by packing as many whole sentences as possible.\n        If a single sentence exceeds max_length, split it into smaller parts.\n        \"\"\"\n        # Split text by sentence-ending punctuation followed by whitespace.\n        sentences = re.split(r\"(?<=[.!?])\\s+\", text)\n        chunks = []\n        current_sentences = []\n        current_length = 0\n\n        for sentence in sentences:\n            sentence_length = len(sentence)\n            # Add a space if there is already a sentence in the current chunk.\n            additional_length = (\n                sentence_length if not current_sentences else sentence_length + 1\n            )\n\n            if current_length + additional_length <= max_length:\n                # Append sentence to the current chunk.\n                current_sentences.append(sentence)\n                current_length += additional_length\n            else:\n                # Flush the current chunk if it's not empty.\n                if current_sentences:\n                    chunks.append(\" \".join(current_sentences))\n                    current_sentences = []\n                    current_length = 0\n\n                # If the sentence itself is too long, split it into parts.\n                if sentence_length > max_length:\n                    parts = [\n                        sentence[i : i + max_length]\n                        for i in range(0, sentence_length, max_length)\n                    ]\n                    # All full parts are separate chunks.\n                    chunks.extend(parts[:-1])\n                    # The last part might be less than max_length; add it to current chunk.\n                    current_sentences = [parts[-1]]\n                    current_length = len(parts[-1])\n                else:\n                    # Start a new chunk with the sentence.\n                    current_sentences = [sentence]\n                    current_length = sentence_length\n\n        if current_sentences:\n            chunks.append(\" \".join(current_sentences))\n        return chunks\n\n    def process(self):\n        text = self.get_input_by_name(\"text\")\n        voice = self.get_input_by_name(\"voice\")\n        model = self.get_input_by_name(\"model\")\n        instruction = self.get_input_by_name(\"instruction\", None)\n\n        if text is None:\n            return None\n\n        api_key = self._processor_context.get_value(\"openai_api_key\")\n\n        if api_key is None:\n            raise Exception(\"No OpenAI API key found\")\n\n        client = OpenAI(api_key=api_key)\n\n        # Split text into chunks that are each less than or equal to 4096 characters.\n        chunks = OpenAITextToSpeechProcessor.split_text_into_chunks(text, 4096)\n        pool = eventlet.GreenPool(2)\n\n        def create_audio_segment(chunk):\n           
 kwargs = {\n                \"model\": model,\n                \"voice\": voice,\n                \"input\": chunk,\n            }\n\n            if instruction is not None:\n                kwargs[\"instructions\"] = instruction\n\n            response = client.audio.speech.create(**kwargs)\n            if response is None:\n                return None\n            # Convert the response content (mp3 bytes) into an AudioSegment.\n            return AudioSegment.from_file(io.BytesIO(response.content), format=\"mp3\")\n\n        # Process chunks concurrently; imap preserves the order of chunks.\n        audio_segments = list(pool.imap(create_audio_segment, chunks))\n        # Filter out any None segments.\n        audio_segments = [segment for segment in audio_segments if segment is not None]\n\n        if not audio_segments:\n            return None\n\n        # Merge the audio segments.\n        merged_audio = audio_segments[0]\n        for seg in audio_segments[1:]:\n            merged_audio += seg\n\n        # Export merged audio to a bytes buffer.\n        merged_audio_buffer = io.BytesIO()\n        merged_audio.export(merged_audio_buffer, format=\"mp3\")\n        merged_audio_buffer.seek(0)\n\n        storage = self.get_storage()\n        timestamp_str = datetime.now().strftime(\"%Y%m%d%H%M%S%f\")\n        filename = f\"{self.name}-{timestamp_str}.mp3\"\n        url = storage.save(filename, merged_audio_buffer.read())\n\n        # cleanup\n        merged_audio_buffer.close()\n        del merged_audio_buffer\n        del merged_audio\n        del audio_segments\n\n        return url\n\n    def cancel(self):\n        pass\n"
  },
  {
    "path": "packages/backend/app/processors/components/extension/replace_text_processor.py",
    "content": "import logging\nimport re\n\nfrom ..node_config_builder import FieldBuilder, NodeConfigBuilder\nfrom .extension_processor import BasicExtensionProcessor\nfrom ..core.processor_type_name_utils import ProcessorType\n\n\nclass ReplaceTextProcessor(BasicExtensionProcessor):\n    processor_type = ProcessorType.REPLACE_TEXT\n\n    def __init__(self, config):\n        super().__init__(config)\n\n    def get_node_config(self):\n        input_text_field = (\n            FieldBuilder()\n            .set_name(\"input_text\")\n            .set_label(\"Input Text\")\n            .set_type(\"textarea\")\n            .set_required(True)\n            .set_placeholder(\"ReplaceTextInputPlaceholder\")\n            .set_has_handle(True)\n            .build()\n        )\n\n        search_text_field = (\n            FieldBuilder()\n            .set_name(\"search_text\")\n            .set_label(\"Search Text\")\n            .set_type(\"textfield\")\n            .set_required(True)\n            .set_placeholder(\"ReplaceTextSearchPlaceholder\")\n            .set_has_handle(True)\n            .build()\n        )\n\n        replacement_text_field = (\n            FieldBuilder()\n            .set_name(\"replacement_text\")\n            .set_label(\"Replacement Text\")\n            .set_type(\"textfield\")\n            .set_required(True)\n            .set_placeholder(\"ReplaceTextReplacePlaceholder\")\n            .set_has_handle(True)\n            .build()\n        )\n\n        replace_all_field = (\n            FieldBuilder()\n            .set_name(\"replace_all\")\n            .set_label(\"Replace All Occurrences\")\n            .set_type(\"boolean\")\n            .set_default_value(True)\n            .build()\n        )\n\n        use_regex_field = (\n            FieldBuilder()\n            .set_name(\"use_regex\")\n            .set_label(\"Use Regular Expression\")\n            .set_type(\"boolean\")\n            .set_default_value(False)\n            .build()\n        )\n\n        case_sensitivity_field = (\n            FieldBuilder()\n            .set_name(\"case_sensitivity\")\n            .set_label(\"Case Sensitive\")\n            .set_type(\"boolean\")\n            .set_default_value(True)\n            .build()\n        )\n\n        return (\n            NodeConfigBuilder()\n            .set_node_name(\"ReplaceText\")\n            .set_processor_type(self.processor_type.value)\n            .set_section(\"tools\")\n            .set_help_message(\"replaceTextNodeHelp\")\n            .set_show_handles(True)\n            .set_output_type(\"text\")\n            .set_default_hide_output(False)\n            .add_field(input_text_field)\n            .add_field(search_text_field)\n            .add_field(replacement_text_field)\n            .add_field(replace_all_field)\n            .add_field(case_sensitivity_field)\n            .add_field(use_regex_field)\n            .set_icon(\"MdSwapHoriz\")\n            .build()\n        )\n\n    def process(self):\n        input_text = self.get_input_by_name(\"input_text\")\n        search_text = self.get_input_by_name(\"search_text\")\n        replacement_text = self.get_input_by_name(\"replacement_text\")\n        replace_all = self.get_input_by_name(\"replace_all\")\n        use_regex = self.get_input_by_name(\"use_regex\")\n        case_sensitivity = self.get_input_by_name(\"case_sensitivity\")\n\n        flags = 0\n        if not case_sensitivity:\n            flags |= re.IGNORECASE\n\n        if use_regex:\n            try:\n                pattern = 
re.compile(search_text, flags)\n                count = 0 if replace_all else 1\n                result_text = pattern.sub(replacement_text, input_text, count=count)\n            except re.error as e:\n                logging.warning(f\"Invalid regular expression: {e}\")\n                result_text = input_text\n        else:\n            if not case_sensitivity:\n                escaped_search_text = re.escape(search_text)\n                pattern = re.compile(escaped_search_text, flags)\n                count = 0 if replace_all else 1\n                result_text = pattern.sub(replacement_text, input_text, count=count)\n            else:\n                if replace_all:\n                    result_text = input_text.replace(search_text, replacement_text)\n                else:\n                    result_text = input_text.replace(search_text, replacement_text, 1)\n\n        return [result_text]\n"
  },
  {
    "path": "packages/backend/app/processors/components/extension/stabilityai_generic_processor.py",
    "content": "import json\nimport logging\nimport os\nfrom ..node_config_utils import get_sub_configuration\n\nfrom ....utils.openapi_client import Client\n\nfrom ....utils.processor_utils import (\n    stream_download_file_as_binary,\n)\n\nfrom ....utils.openapi_converter import OpenAPIConverter\n\nfrom ....utils.openapi_reader import OpenAPIReader\n\nfrom ..node_config_builder import FieldBuilder, NodeConfigBuilder\nfrom ...context.processor_context import ProcessorContext\nfrom ..model import NodeConfig, Option\nfrom .extension_processor import (\n    ContextAwareExtensionProcessor,\n    DynamicExtensionProcessor,\n)\nfrom datetime import datetime\nimport re\n\n\nclass StabilityAIGenericProcessor(\n    ContextAwareExtensionProcessor, DynamicExtensionProcessor\n):\n    processor_type = \"stabilityai-generic-processor\"\n    openapi_file_path = \"./resources/openapi/stabilityai.json\"\n    paths_denied = [\n        re.compile(r\"/v1/\"),  # Contains'/v1/'\n        re.compile(r\"/user/\"),  # Contains 'user'\n        re.compile(r\"/engines/\"),  # Contains 'engines'\n        re.compile(r\"/result/\"),  # Contains 'result'\n        re.compile(r\"/v2alpha/\"),  # Contains 'v2alpha'\n        re.compile(r\"/result\"),\n        # Temporary\n        re.compile(r\"/chat\"),  # api returns 404 for now\n    ]\n\n    api_reader = None\n    all_paths_cache = None\n    pooling_paths_cache = None\n    allowed_paths_cache = None\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n\n        if StabilityAIGenericProcessor.allowed_paths_cache is None:\n            StabilityAIGenericProcessor.initialize_allowed_paths_cache()\n\n        self.api_host = os.getenv(\n            \"STABLE_DIFFUSION_STABILITYAI_API_HOST\", \"https://api.stability.ai\"\n        )\n        self.path = self.get_input_by_name(\"path\")\n        self.initialize_api_config()\n        self.final_node_config = self.get_dynamic_node_config(dict(path=self.path))\n\n    @classmethod\n    def initialize_allowed_paths_cache(cls):\n        cls.api_reader = OpenAPIReader(StabilityAIGenericProcessor.openapi_file_path)\n        paths_names = cls.api_reader.get_all_paths_names()\n        cls.all_paths_cache = paths_names\n        cls.pooling_paths_cache = [path for path in paths_names if \"/result/\" in path]\n        cls.allowed_paths_cache = [\n            path\n            for path in paths_names\n            if not cls.is_path_banned(path, cls.paths_denied)\n        ]\n\n    @staticmethod\n    def is_path_banned(path, denied_patterns):\n        return any(pattern.search(path) for pattern in denied_patterns)\n\n    @staticmethod\n    def get_pooling_path(path_selected):\n        for path in StabilityAIGenericProcessor.pooling_paths_cache:\n            if path.startswith(path_selected):\n                return path\n        return None\n\n    def transform_path_options_labels(options):\n        transformed_options = []\n        for option in options:\n            # Remove the first path element and split the rest\n            parts = re.sub(r\"^/[^/]+/\", \"\", option.label).split(\"/\")\n\n            # Take the last two elements, or one if alone\n            if len(parts) > 1:\n                label = f\"{parts[-2].capitalize()} - {parts[-1].replace('-', ' ').capitalize()}\"\n            else:\n                label = parts[-1].replace(\"-\", \" \").capitalize()\n\n            transformed_option = Option(\n                default=option.default, value=option.value, label=label\n           
 )\n            transformed_options.append(transformed_option)\n\n        transformed_options.sort(key=lambda option: option.label)\n        return transformed_options\n\n    def get_node_config(self):\n        if StabilityAIGenericProcessor.allowed_paths_cache is None:\n            StabilityAIGenericProcessor.initialize_allowed_paths_cache()\n\n        path_options = [\n            Option(default=False, value=name, label=name)\n            for i, name in enumerate(StabilityAIGenericProcessor.allowed_paths_cache)\n        ]\n\n        path_options = self.transform_path_options_labels(path_options)\n        path_options[0].default = True\n\n        path = (\n            FieldBuilder()\n            .set_name(\"path\")\n            .set_label(\"Path\")\n            .set_type(\"select\")\n            .set_options(path_options)\n            .build()\n        )\n\n        return (\n            NodeConfigBuilder()\n            .set_node_name(\"StabilityAI\")\n            .set_processor_type(self.processor_type)\n            .set_icon(\"StabilityAILogo\")\n            .set_section(\"models\")\n            .set_help_message(\"stableDiffusionPromptHelp\")\n            .set_show_handles(True)\n            .add_field(path)\n            .set_is_dynamic(True)  # Important\n            .build()\n        )\n\n    def initialize_api_config(self):\n        response_content_path = self.path\n        response_method = \"post\"\n\n        self.path_accept = StabilityAIGenericProcessor.api_reader.get_path_accept(\n            self.path, \"post\"\n        )\n        self.pooling_path = self.get_pooling_path(self.path)\n\n        if self.pooling_path is not None:\n            response_content_path = self.pooling_path\n            response_method = \"get\"\n            self.pooling_path_accept = (\n                StabilityAIGenericProcessor.api_reader.get_path_accept(\n                    self.pooling_path, \"get\"\n                )\n            )\n\n        self.response_content_type = (\n            StabilityAIGenericProcessor.api_reader.get_response_content_type(\n                response_content_path, response_method\n            )[0]\n        )\n\n        print(f\"Response content type {self.response_content_type}\")\n\n    @staticmethod\n    def determine_output_type(path_accept):\n        if path_accept is None:\n            return None\n        elif path_accept == \"video/*\":\n            return \"videoUrl\"\n        elif \"model\" in path_accept:\n            return \"3dUrl\"\n        else:\n            return \"imageUrl\"\n\n    def get_dynamic_node_config(self, data) -> NodeConfig:\n        if StabilityAIGenericProcessor.allowed_paths_cache is None:\n            StabilityAIGenericProcessor.initialize_allowed_paths_cache()\n\n        selected_api_path = data[\"path\"]\n\n        schema = StabilityAIGenericProcessor.api_reader.get_request_schema_for_path(\n            selected_api_path, \"post\"\n        )\n        path_accept = StabilityAIGenericProcessor.api_reader.get_path_accept(\n            selected_api_path, \"post\"\n        )\n\n        output_type = StabilityAIGenericProcessor.determine_output_type(path_accept)\n        pooling_path = self.get_pooling_path(selected_api_path)\n\n        if pooling_path is not None:\n            pooling_path_accept = (\n                StabilityAIGenericProcessor.api_reader.get_path_accept(\n                    pooling_path, \"get\"\n                )\n            )\n            output_type = StabilityAIGenericProcessor.determine_output_type(\n                
pooling_path_accept\n            )\n\n        builder = OpenAPIConverter().convert_schema_to_node_config(schema)\n\n        path_components = selected_api_path.split(\"/\")\n        last_component = (\n            path_components[-1] if path_components[-1] else path_components[-2]\n        )\n        node_name = \" \".join(word.capitalize() for word in last_component.split(\"-\"))\n\n        (\n            builder.set_node_name(f\"StabilityAI - {node_name}\")\n            .set_processor_type(self.processor_type)\n            .set_icon(\"StabilityAILogo\")\n            .set_section(\"models\")\n            .set_help_message(\"stableDiffusionPromptHelp\")\n            .set_show_handles(True)\n        )\n        if output_type is not None:\n            builder.set_output_type(output_type)\n\n        return builder.build()\n\n    def perform_pooling(self, client, path):\n        return client.pooling(path=path, accept=self.pooling_path_accept)\n\n    def prepare_and_process_response(self, response):\n        storage = self.get_storage()\n        timestamp_str = datetime.now().strftime(\"%Y%m%d%H%M%S%f\")\n        extension = self.get_input_by_name(\"output_format\")\n        if extension:\n            filename = f\"{self.name}-{timestamp_str}.{extension}\"\n        else:\n            if \"gltf-binary\" in self.response_content_type:\n                extension = \"glb\"\n            else:\n                extension = self.response_content_type.split(\"/\")[-1]\n            filename = f\"{self.name}-{timestamp_str}.{extension}\"\n        url = storage.save(filename, response)\n\n        return url\n\n    def get_fields_from_config(self):\n        if self.final_node_config is None:\n            return []\n\n        if isinstance(self.final_node_config, NodeConfig):\n            return self.final_node_config.fields\n\n        discriminators_values = []\n        for discriminator_name in self.final_node_config.discriminatorFields:\n            value = self.get_input_by_name(discriminator_name)\n            discriminators_values.append(value)\n\n        corresponding_config = get_sub_configuration(\n            discriminators_values, self.final_node_config\n        )\n        if corresponding_config is None:\n            return []\n\n        return corresponding_config.config.fields\n\n    def quick_filter(self, data):\n        if \"mode\" in data:\n            if data[\"mode\"] == \"image-to-image\":\n                if \"aspect_ratio\" in data:\n                    del data[\"aspect_ratio\"]\n            if data[\"mode\"] == \"text-to-image\":\n                if \"strength\" in data:\n                    del data[\"strength\"]\n                if \"image\" in data:\n                    del data[\"image\"]\n\n    def process(self):\n\n        api_key = self._processor_context.get_value(\"stabilityai_api_key\")\n        fields = self.get_fields_from_config()\n        data = {field.name: self.get_input_by_name(field.name) for field in fields}\n        self.quick_filter(data)\n\n        binaryFieldNames = [field.name for field in fields if field.isBinary]\n        files = {} if len(binaryFieldNames) > 0 else {\"none\": (None, \"\")}\n\n        for field_name in binaryFieldNames:\n\n            if field_name not in data:\n                files[field_name] = None\n                continue\n\n            url = data[field_name]\n            data[field_name] = None\n            del data[field_name]\n\n            if url:\n                files[field_name] = stream_download_file_as_binary(url)\n            
else:\n                files[field_name] = None\n\n        client = Client(\n            api_token=api_key,\n            base_url=self.api_host,\n        )\n\n        response = client.post(\n            path=self.path, data=data, files=files, accept=self.path_accept\n        )\n\n        if self.pooling_path:\n            response_str = response.decode(\"utf-8\")\n            response_json = json.loads(response_str)\n            key_name = \"id\"\n            key_value = response_json[key_name]\n            updated_pooling_path = self.pooling_path.replace(\n                \"{\" + key_name + \"}\", str(key_value)\n            )\n            response = self.perform_pooling(client, updated_pooling_path)\n\n        return self.prepare_and_process_response(response)\n\n    def cancel(self):\n        pass\n"
  },
  {
    "path": "packages/backend/app/processors/components/extension/stable_diffusion_three_processor.py",
    "content": "import logging\nimport os\nimport requests\n\nfrom ..node_config_builder import FieldBuilder, NodeConfigBuilder\nfrom ...context.processor_context import ProcessorContext\nfrom ..model import Option\nfrom .extension_processor import ContextAwareExtensionProcessor\nfrom datetime import datetime\n\n\nclass StableDiffusionThreeProcessor(ContextAwareExtensionProcessor):\n    processor_type = \"stabilityai-stable-diffusion-3-processor\"\n\n    def __init__(self, config, context: ProcessorContext):\n        super().__init__(config, context)\n        self.api_host = os.getenv(\n            \"STABLE_DIFFUSION_STABILITYAI_API_HOST\", \"https://api.stability.ai\"\n        )\n\n    def get_node_config(self):\n        prompt = (\n            FieldBuilder()\n            .set_name(\"prompt\")\n            .set_label(\"Prompt\")\n            .set_type(\"textfield\")\n            .set_required(True)\n            .set_placeholder(\"GenericPromptPlaceholder\")\n            .set_has_handle(True)\n            .build()\n        )\n\n        negative_prompt = (\n            FieldBuilder()\n            .set_name(\"negative_prompt\")\n            .set_label(\"Negative Prompt\")\n            .set_type(\"textfield\")\n            .set_placeholder(\"GenericNegativePromptPlaceholder\")\n            .set_has_handle(True)\n            .build()\n        )\n\n        model_options = [\n            Option(\n                default=True, value=\"sd3.5-large\", label=\"Stable Diffusion 3.5 Large\"\n            ),\n            Option(\n                default=False,\n                value=\"sd3.5-large-turbo\",\n                label=\"Stable Diffusion 3.5 Large Turbo\",\n            ),\n            Option(default=False, value=\"sd3-large\", label=\"Stable Diffusion 3 Large\"),\n            Option(\n                default=False, value=\"sd3-medium\", label=\"Stable Diffusion 3 Medium\"\n            ),\n            Option(\n                default=False,\n                value=\"sd3-large-turbo\",\n                label=\"Stable Diffusion 3 Large Turbo\",\n            ),\n        ]\n\n        model = (\n            FieldBuilder()\n            .set_name(\"model\")\n            .set_label(\"Model\")\n            .set_type(\"select\")\n            .set_options(model_options)\n            .build()\n        )\n\n        aspect_ratio_options = [\n            Option(default=True, value=\"1:1\", label=\"1:1\"),\n            Option(default=False, value=\"16:9\", label=\"16:9\"),\n            Option(default=False, value=\"3:2\", label=\"3:2\"),\n            Option(default=False, value=\"2:3\", label=\"2:3\"),\n            Option(default=False, value=\"4:5\", label=\"4:5\"),\n            Option(default=False, value=\"5:4\", label=\"5:4\"),\n            Option(default=False, value=\"9:16\", label=\"9:16\"),\n            Option(default=False, value=\"9:21\", label=\"9:21\"),\n            Option(default=False, value=\"21:9\", label=\"21:9\"),\n        ]\n\n        aspect_ratio = (\n            FieldBuilder()\n            .set_name(\"aspect_ratio\")\n            .set_label(\"Aspect Ratio\")\n            .set_type(\"select\")\n            .set_options(aspect_ratio_options)\n            .build()\n        )\n\n        seed = (\n            FieldBuilder()\n            .set_name(\"seed\")\n            .set_label(\"Seed\")\n            .set_type(\"numericfield\")\n            .set_placeholder(\"Enter a numeric seed\")\n            .set_default_value(0)\n            .set_has_handle(True)\n            .build()\n        )\n\n        
return (\n            NodeConfigBuilder()\n            .set_node_name(\"Stable Diffusion 3.5\")\n            .set_processor_type(self.processor_type)\n            .set_icon(\"StabilityAILogo\")\n            .set_section(\"models\")\n            .set_help_message(\"stableDiffusionPromptHelp\")\n            .set_output_type(\"imageUrl\")\n            .set_show_handles(True)\n            .add_field(prompt)\n            .add_field(negative_prompt)\n            .add_field(model)\n            .add_field(aspect_ratio)\n            .add_field(seed)\n            .build()\n        )\n\n    def process(self):\n        prompt = self.get_input_by_name(\"prompt\")\n        model = self.get_input_by_name(\"model\")\n        seed = self.get_input_by_name(\"seed\")\n        aspect_ratio = self.get_input_by_name(\"aspect_ratio\")\n        negative_prompt = self.get_input_by_name(\"negative_prompt\")\n\n        if prompt is None:\n            return None\n\n        api_key = self._processor_context.get_value(\"stabilityai_api_key\")\n\n        data_to_send = {\n            \"prompt\": prompt,\n            \"negative_prompt\": negative_prompt if model != \"sd3-turbo\" else None,\n            \"model\": model,\n            \"seed\": seed,\n            \"aspect_ratio\": aspect_ratio,\n        }\n\n        response = requests.post(\n            f\"{self.api_host}/v2beta/stable-image/generate/sd3\",\n            headers={\n                \"Accept\": \"image/*\",\n                \"Authorization\": f\"Bearer {api_key}\",\n            },\n            files={\"none\": \"\"},\n            data=data_to_send,\n        )\n\n        return self.prepare_and_process_response(response)\n\n    def prepare_and_process_response(self, response):\n        if response.status_code != 200:\n            logging.warning(\n                f\"API call to StabilityAI failed with status {response.status_code}: {response.text}\"\n            )\n            logging.warning(\"User prompt : \" + self.get_input_by_name(\"prompt\") or \"\")\n            raise Exception(f\"Error message from StabilityAI : \\n {response.text}\")\n\n        storage = self.get_storage()\n        timestamp_str = datetime.now().strftime(\"%Y%m%d%H%M%S%f\")\n        filename = f\"{self.name}-{timestamp_str}.png\"\n        url = storage.save(filename, response.content)\n\n        return url\n\n    def cancel(self):\n        pass\n"
  },
  {
    "path": "packages/backend/app/processors/components/model.py",
    "content": "# generated by datamodel-codegen:\n#   filename:  schema.json\n#   timestamp: 2025-05-26T04:44:24+00:00\n\nfrom __future__ import annotations\n\nfrom typing import Any, Dict, List, Optional, Union\n\nfrom pydantic import BaseModel, RootModel\nfrom typing_extensions import Literal\n\n\nclass Model(RootModel[Any]):\n    root: Any\n\n\nclass FieldType(\n    RootModel[\n        Literal[\n            'boolean',\n            'dictionnary',\n            'fileUpload',\n            'imageMaskCreator',\n            'input',\n            'inputInt',\n            'inputNameBar',\n            'json',\n            'list',\n            'nonRendered',\n            'numericfield',\n            'option',\n            'select',\n            'slider',\n            'switch',\n            'textToDisplay',\n            'textarea',\n            'textfield',\n        ]\n    ]\n):\n    root: Literal[\n        'boolean',\n        'dictionnary',\n        'fileUpload',\n        'imageMaskCreator',\n        'input',\n        'inputInt',\n        'inputNameBar',\n        'json',\n        'list',\n        'nonRendered',\n        'numericfield',\n        'option',\n        'select',\n        'slider',\n        'switch',\n        'textToDisplay',\n        'textarea',\n        'textfield',\n    ]\n\n\nclass Operator(\n    RootModel[\n        Literal[\n            'equals',\n            'exists',\n            'greater than',\n            'in',\n            'less than',\n            'not equals',\n            'not exists',\n            'not in',\n        ]\n    ]\n):\n    root: Literal[\n        'equals',\n        'exists',\n        'greater than',\n        'in',\n        'less than',\n        'not equals',\n        'not exists',\n        'not in',\n    ]\n\n\nclass Option(BaseModel):\n    default: Optional[bool] = None\n    label: Optional[str] = None\n    value: Optional[str] = None\n\n\nclass OutputType(\n    RootModel[\n        Literal[\n            '3dUrl',\n            'audioUrl',\n            'fileUrl',\n            'imageBase64',\n            'imageUrl',\n            'markdown',\n            'pdfUrl',\n            'text',\n            'videoUrl',\n        ]\n    ]\n):\n    root: Literal[\n        '3dUrl',\n        'audioUrl',\n        'fileUrl',\n        'imageBase64',\n        'imageUrl',\n        'markdown',\n        'pdfUrl',\n        'text',\n        'videoUrl',\n    ]\n\n\nclass SectionType(RootModel[Literal['image-generation', 'input', 'models', 'tools']]):\n    root: Literal['image-generation', 'input', 'models', 'tools']\n\n\nclass Condition(BaseModel):\n    field: Optional[str] = None\n    operator: Optional[Operator] = None\n    value: Optional[Any] = None\n\n\nclass ConditionGroup(BaseModel):\n    conditions: Optional[List[Condition]] = None\n    logic: Optional[Literal['AND', 'OR']] = None\n\n\nclass FieldCondition(RootModel[Union[Condition, ConditionGroup]]):\n    root: Union[Condition, ConditionGroup]\n\n\nclass OmitNodeConfigFieldsOutputType(BaseModel):\n    defaultHideOutput: Optional[bool] = None\n    hasInputHandle: Optional[bool] = None\n    helpMessage: Optional[str] = None\n    hideFieldsIfParent: Optional[bool] = None\n    icon: Optional[str] = None\n    inputNames: Optional[List[str]] = None\n    isBeta: Optional[bool] = None\n    isDynamicallyGenerated: Optional[bool] = None\n    nodeName: Optional[str] = None\n    processorType: Optional[str] = None\n    section: Optional[SectionType] = None\n    showHandlesNames: Optional[bool] = None\n\n\nclass Field(BaseModel):\n    
allowDecimal: Optional[bool] = None\n    associatedField: Optional[str] = None\n    canAddChildrenFields: Optional[bool] = None\n    condition: Optional[FieldCondition] = None\n    defaultValue: Optional[Any] = None\n    description: Optional[str] = None\n    hasHandle: Optional[bool] = None\n    hidden: Optional[bool] = None\n    hideIfParent: Optional[bool] = None\n    isBinary: Optional[bool] = None\n    isChild: Optional[bool] = None\n    isLinked: Optional[bool] = None\n    label: Optional[str] = None\n    max: Optional[float] = None\n    min: Optional[float] = None\n    name: Optional[str] = None\n    options: Optional[List[Option]] = None\n    placeholder: Optional[str] = None\n    required: Optional[bool] = None\n    step: Optional[float] = None\n    type: Optional[FieldType] = None\n    withModalEdit: Optional[bool] = None\n\n\nclass NodeConfig(BaseModel):\n    defaultHideOutput: Optional[bool] = None\n    fields: Optional[List[Field]] = None\n    hasInputHandle: Optional[bool] = None\n    helpMessage: Optional[str] = None\n    hideFieldsIfParent: Optional[bool] = None\n    icon: Optional[str] = None\n    inputNames: Optional[List[str]] = None\n    isBeta: Optional[bool] = None\n    isDynamicallyGenerated: Optional[bool] = None\n    nodeName: Optional[str] = None\n    outputType: Optional[OutputType] = None\n    processorType: Optional[str] = None\n    section: Optional[SectionType] = None\n    showHandlesNames: Optional[bool] = None\n\n\nclass DiscriminatedNodeConfig(BaseModel):\n    config: Optional[NodeConfig] = None\n    discriminators: Optional[Dict[str, str]] = None\n\n\nclass NodeSubConfig(BaseModel):\n    discriminatorFields: Optional[List[str]] = None\n    subConfigurations: Optional[List[DiscriminatedNodeConfig]] = None\n\n\nclass NodeConfigVariant(NodeSubConfig, OmitNodeConfigFieldsOutputType):\n    pass\n"
  },
  {
    "path": "packages/backend/app/processors/components/node_config_builder.py",
    "content": "from typing import Dict, List, Optional, Union\nfrom .model import (\n    DiscriminatedNodeConfig,\n    Field,\n    FieldType,\n    NodeConfig,\n    NodeConfigVariant,\n    Option,\n    OutputType,\n    SectionType,\n)\n\n\nclass BaseNodeConfigBuilder:\n    def __init__(self):\n        self.nodeName: Optional[str] = None\n        self.processorType: Optional[str] = None\n        self.icon: Optional[str] = None\n        self.outputType: Optional[str] = None\n        self.section: Optional[str] = None\n        self.helpMessage: Optional[str] = None\n        self.showHandlesNames: Optional[bool] = False\n        self.isBeta: Optional[bool] = False\n        self.defaultHideOutput: Optional[bool] = False\n\n    def set_node_name(self, name: str) -> \"BaseNodeConfigBuilder\":\n        self.nodeName = name\n        return self\n\n    def set_processor_type(self, processor_type: str) -> \"BaseNodeConfigBuilder\":\n        self.processorType = processor_type\n        return self\n\n    def set_icon(self, icon: str) -> \"BaseNodeConfigBuilder\":\n        self.icon = icon\n        return self\n\n    def set_output_type(self, output_type: str) -> \"BaseNodeConfigBuilder\":\n        self.outputType = OutputType(root=output_type)\n        return self\n\n    def set_section(self, section: str) -> \"BaseNodeConfigBuilder\":\n        self.section = SectionType(root=section)\n        return self\n\n    def set_help_message(self, help_message: str) -> \"BaseNodeConfigBuilder\":\n        self.helpMessage = help_message\n        return self\n\n    def set_show_handles(self, show: bool) -> \"BaseNodeConfigBuilder\":\n        self.showHandlesNames = show\n        return self\n\n    def set_is_beta(self, beta: bool) -> \"NodeConfigBuilder\":\n        self.isBeta = beta\n        return self\n\n    def set_default_hide_output(self, hide: bool) -> \"NodeConfigBuilder\":\n        self.defaultHideOutput = hide\n        return self\n\n\nclass NodeConfigBuilder(BaseNodeConfigBuilder):\n    def __init__(self):\n        super().__init__()\n        self.fields: List[Field] = []\n        self.isDynamicallyGenerated: Optional[bool] = False\n        self.discriminators: Optional[Dict[str, str]] = None\n\n    def set_is_dynamic(self, dyna: bool) -> \"NodeConfigBuilder\":\n        self.isDynamicallyGenerated = dyna\n        return self\n\n    def set_fields(self, fields: List[Field]) -> \"NodeConfigBuilder\":\n        self.fields = fields\n        return self\n\n    def add_field(self, field: Field) -> \"NodeConfigBuilder\":\n        self.fields.append(field)\n        return self\n\n    def add_discriminator(self, key, value) -> \"NodeConfigBuilder\":\n        if self.discriminators is None:\n            self.discriminators = {}\n        self.discriminators[key] = value\n        return self\n\n    def build(self) -> NodeConfig:\n        baseConfig = NodeConfig(\n            nodeName=self.nodeName,\n            processorType=self.processorType,\n            icon=self.icon,\n            fields=self.fields,\n            outputType=self.outputType,\n            section=self.section,\n            helpMessage=self.helpMessage,\n            showHandlesNames=self.showHandlesNames,\n            isDynamicallyGenerated=self.isDynamicallyGenerated,\n            isBeta=self.isBeta,\n            defaultHideOutput=self.defaultHideOutput,\n        )\n        if self.discriminators is not None:\n            return DiscriminatedNodeConfig(\n                config=baseConfig, discriminators=self.discriminators\n            )\n     
   else:\n            return baseConfig\n\n\nclass NodeConfigVariantBuilder(BaseNodeConfigBuilder):\n    def __init__(self):\n        super().__init__()\n        self.subConfigurations: List[NodeConfig] = []\n        self.discriminatorFields: Optional[List[str]] = []\n\n    def add_discriminator_field(self, field: str) -> \"NodeConfigVariantBuilder\":\n        if self.discriminatorFields is None:\n            self.discriminatorFields = []\n        self.discriminatorFields.append(field)\n        return self\n\n    def add_sub_configuration(\n        self, sub_configuration: NodeConfig\n    ) -> \"NodeConfigVariantBuilder\":\n        self.subConfigurations.append(sub_configuration)\n        return self\n\n    def build(self) -> NodeConfigVariant:\n        for subConfig in self.subConfigurations:\n            config = subConfig.config\n            config.showHandlesNames = self.showHandlesNames\n            config.icon = self.icon\n            config.nodeName = self.nodeName\n            config.outputType = self.outputType\n            config.section = self.section\n            config.processorType = self.processorType\n            config.helpMessage = self.helpMessage\n\n        return NodeConfigVariant(\n            subConfigurations=self.subConfigurations,\n            discriminatorFields=self.discriminatorFields,\n        )\n\n\nclass FieldBuilder:\n    def __init__(self):\n        self._field = Field()\n\n    def set_name(self, name: str) -> \"FieldBuilder\":\n        self._field.name = name\n        return self\n\n    def set_label(self, label: str) -> \"FieldBuilder\":\n        self._field.label = label\n        return self\n\n    def set_description(self, description: str) -> \"FieldBuilder\":\n        self._field.description = description\n        return self\n\n    def set_type(self, field_type: str) -> \"FieldBuilder\":\n        self._field.type = FieldType(root=field_type)\n        return self\n\n    def set_min(self, min: float) -> \"FieldBuilder\":\n        self._field.min = min\n        return self\n\n    def set_max(self, max: float) -> \"FieldBuilder\":\n        self._field.max = max\n        return self\n\n    def set_is_binary(self, binary: bool) -> \"FieldBuilder\":\n        self._field.isBinary = binary\n        return self\n\n    def set_placeholder(self, placeholder: str) -> \"FieldBuilder\":\n        self._field.placeholder = placeholder\n        return self\n\n    def set_required(self, required: bool) -> \"FieldBuilder\":\n        self._field.required = required\n        return self\n\n    def set_options(self, options: List[Option]) -> \"FieldBuilder\":\n        self._field.options = options\n        return self\n\n    def add_option(self, option: Option) -> \"FieldBuilder\":\n        if not self._field.options:\n            self._field.options = []\n        self._field.options.append(option)\n        return self\n\n    def set_default_value(self, default_value: Union[str, float]) -> \"FieldBuilder\":\n        self._field.defaultValue = default_value\n        return self\n\n    def set_has_handle(self, has_handle: bool) -> \"FieldBuilder\":\n        self._field.hasHandle = has_handle\n        return self\n\n    def build(self) -> Field:\n        return self._field\n"
  },
  {
    "path": "packages/backend/app/processors/components/node_config_utils.py",
    "content": "from .model import NodeConfigVariant\n\n\ndef get_sub_configuration(discriminators_values, node_config: NodeConfigVariant):\n    for subconfig in node_config.subConfigurations:\n        subconfig_discriminator_values = [\n            subconfig.discriminators[discriminator]\n            for discriminator in subconfig.discriminators\n        ]\n        if subconfig_discriminator_values == discriminators_values:\n            return subconfig\n"
  },
  {
    "path": "packages/backend/app/processors/components/processor.py",
    "content": "from abc import ABC, abstractmethod\nimport json\nimport logging\nfrom typing import Any, List, Optional, TypedDict, Union, Dict\n\nfrom ..launcher.processor_event import ProcessorEvent\nfrom ..launcher.event_type import EventType\n\nfrom ..observer.observer import Observer\n\nfrom .core.processor_type_name_utils import ProcessorType\n\nfrom ...storage.storage_strategy import StorageStrategy\n\nfrom ..context.processor_context import ProcessorContext\n\n\nclass BadKeyInputIndex(Exception):\n    \"\"\"Exception raised for index out of bounds in the output list.\"\"\"\n\n    def __init__(self, message=\"This input key does not exists\"):\n        self.message = message\n        super().__init__(self.message)\n\n\nclass InputItem(TypedDict, total=False):\n    inputName: Optional[str]\n    inputNode: str\n    inputNodeOutputKey: int\n\n\nclass Processor(ABC):\n    processor_type: Optional[\"ProcessorType\"] = None\n    \"\"\"The type of the processor\"\"\"\n\n    observers: List[Observer] = []\n    \"\"\"The observers of the processor\"\"\"\n\n    storage_strategy: Optional[\"StorageStrategy\"]\n    \"\"\"The storage strategy used by the processor\"\"\"\n\n    _processor_context: Optional[\"ProcessorContext\"]\n    \"\"\"The context data of the processor\"\"\"\n\n    name: str\n    \"\"\"The name of the processor\"\"\"\n\n    _output: Optional[Any]\n    \"\"\"The output of the processor\"\"\"\n\n    inputs: Optional[List[InputItem]]\n    \"\"\"A list of inputs accepted by the processor.\"\"\"\n\n    input_processors: List[\"Processor\"]\n    \"\"\"The processors set as inputs\"\"\"\n\n    is_processing: bool\n    \"\"\"Flag indicating if the processor has started working, useful when using API with cold start\"\"\"\n\n    is_finished: bool\n    \"\"\"Flag indicating if the processor's has produced his output\"\"\"\n\n    _has_dynamic_behavior: bool\n    \"\"\"Flag indicating if the processor's behavior and execution time are unpredictable and subject to change at runtime.\"\"\"\n\n    def __init__(self, config: Dict[str, Any]) -> None:\n        self.name = config[\"name\"]\n        self.processor_type = config[\"processorType\"]\n        self.observers = []\n        self._output = None\n        self.inputs = None\n        self._processor_context = None\n        self.input_processors = []\n        self.storage_strategy = None\n        self.is_finished = False\n        self._has_dynamic_behavior = False\n        self._config = config\n        if (\n            config.get(\"config\") is not None\n            and config.get(\"config\").get(\"fields\") is not None\n            and config.get(\"config\").get(\"fields\") != []\n        ):\n            self.fields = config.get(\"config\").get(\"fields\")\n            self.fields_names = [field[\"name\"] for field in self.fields]\n        if config.get(\"inputs\") is not None and config.get(\"inputs\") != []:\n            self.inputs = config.get(\"inputs\")\n\n    def cleanup(self) -> None:\n        self.input_processors = None\n        self._processor_context = None\n        self._output = None\n        self.storage_strategy = None\n\n    def process_and_update(self):\n        output = self.process()\n        if output is not None:\n            self.set_output(output)\n        return output\n\n    @abstractmethod\n    def process(self):\n        pass\n\n    @abstractmethod\n    def cancel(self) -> None:\n        pass\n\n    def add_observer(self, observer):\n        self.observers.append(observer)\n\n    def remove_observer(self, 
observer):\n        self.observers.remove(observer)\n        if len(self.observers) == 0:\n            self.observers = None\n        return self.observers\n\n    def notify(self, event: EventType, data: ProcessorEvent):\n        for observer in self.observers:\n            observer.notify(event, data)\n\n    def get_output(self, input_key=None) -> Optional[str]:\n        output = getattr(self, \"_output\", None)\n        if output is not None and isinstance(output, list) and len(output) > 0:\n            if input_key is not None:\n                if input_key < 0 or input_key >= len(output):\n                    logging.warning(\n                        f\"Index {input_key} out of bounds for output of size {len(output)}.\"\n                    )\n                    return None\n                return output[input_key]\n            else:\n                return output\n        return None\n\n    def set_output(self, value: Union[List, str]) -> None:\n        if isinstance(value, list):\n            self._output = value\n        elif isinstance(value, str):\n            self._output = [value]\n        else:\n            raise TypeError(\"Value should be either a list or a string.\")\n        self.is_finished = True\n\n    def get_inputs(self) -> Optional[List[InputItem]]:\n        return self.inputs\n\n    def get_input_processor(self) -> Optional[\"Processor\"]:\n        if self.input_processors is None or len(self.input_processors) == 0:\n            return None\n        return self.input_processors[0]\n\n    def get_input_processors(self) -> List[\"Processor\"]:\n        return self.input_processors\n\n    def get_input_node_output_key(self) -> Optional[int]:\n        if self.inputs is None or len(self.inputs) == 0:\n            return None\n        if self.inputs[0].get(\"inputNodeOutputKey\") is None:\n            return 0\n        return self.inputs[0].get(\"inputNodeOutputKey\")\n\n    def get_input_node_output_key_by_node_name(\n        self, input_node_name: str\n    ) -> Optional[int]:\n        keys = []\n        for input in self.inputs:\n            if input.get(\"inputNode\") == input_node_name:\n                keys.append(input.get(\"inputNodeOutputKey\"))\n        return keys\n\n    def get_input_node_output_keys(self) -> Optional[List[int]]:\n        if self.inputs is None or len(self.inputs) == 0:\n            return None\n        return [input.get(\"inputNodeOutputKey\") for input in self.inputs]\n\n    def get_input_names(self) -> Optional[List[str]]:\n        if self.inputs is None or len(self.inputs) == 0:\n            return None\n        return [input.get(\"inputName\") for input in self.inputs]\n\n    def get_input_names_from_config(self) -> Optional[List[str]]:\n        return self._config.get(\"config\").get(\"inputNames\")\n\n    def get_input_by_name(\n        self, name: str, default=None, accept_object=False\n    ) -> Optional[InputItem]:\n        input = self._config.get(name, default)\n\n        input_processors = self.get_input_processors()\n        input_output_keys = self.get_input_node_output_keys()\n        input_names = self.get_input_names()\n\n        if input_processors:\n            for processor, input_name, key in zip(\n                input_processors, input_names, input_output_keys\n            ):\n                if input_name == name:\n                    input_processor_output = processor.get_output(key)\n                    if (\n                        isinstance(input_processor_output, dict)\n                        or 
isinstance(input_processor_output, list)\n                        and not accept_object\n                    ):\n                        input_processor_output = json.dumps(input_processor_output)\n                    return input_processor_output\n\n        return input\n\n    def add_input_processor(self, input_processor: \"Processor\") -> None:\n        self.input_processors.append(input_processor)\n\n    def set_storage_strategy(self, storage_strategy: \"StorageStrategy\") -> None:\n        self.storage_strategy = storage_strategy\n\n    def __str__(self) -> str:\n        return f\"Processor(name={self.name}, type={self.processor_type})\"\n\n    def get_context(self) -> Optional[\"ProcessorContext\"]:\n        return self._processor_context\n\n    def get_storage(self) -> Optional[\"StorageStrategy\"]:\n        return self.storage_strategy\n\n    def has_dynamic_behavior(self) -> bool:\n        return self._has_dynamic_behavior\n\n\nclass BasicProcessor(Processor):\n    def __init__(self, config):\n        super().__init__(config)\n\n    def cancel(self):\n        pass\n\n\nclass ContextAwareProcessor(Processor):\n    def __init__(self, config, context: ProcessorContext = None):\n        super().__init__(config)\n        self._processor_context = context\n"
  },
  {
    "path": "packages/backend/app/processors/context/processor_context.py",
    "content": "from abc import ABC, abstractmethod\nfrom typing import Optional\nfrom typing import List\n\n\nclass ProcessorContext(ABC):\n    @abstractmethod\n    def get_context(self) -> \"ProcessorContext\":\n        pass\n\n    @abstractmethod\n    def get_current_user_id(self) -> Optional[str]:\n        pass\n\n    @abstractmethod\n    def get_session_id(self) -> Optional[str]:\n        pass\n\n    @abstractmethod\n    def get_parameter_names(self) -> List[str]:\n        \"\"\"\n        List all the parameter names currently stored in the context.\n        Returns:\n            A list of parameter names.\n        \"\"\"\n        pass\n\n    @abstractmethod\n    def get_value(self, name) -> Optional[str]:\n        \"\"\"\n        Retrieve the value associated with the specified parameter name.\n        Returns:\n            The value of the parameter if found, otherwise None.\n        \"\"\"\n        pass\n"
  },
  {
    "path": "packages/backend/app/processors/context/processor_context_flask_request.py",
    "content": "from typing import List, Optional\nfrom ...flask.utils.constants import SESSION_USER_ID_KEY\nfrom .processor_context import ProcessorContext\n\nfrom copy import deepcopy\n\n\nclass ProcessorContextFlaskRequest(ProcessorContext):\n    parameter_prefix = \"session_\"\n\n    def __init__(self, g_context=None, session_data=None, session_id=None):\n        self.g_context = deepcopy(g_context) if g_context is not None else {}\n        self.session_data = deepcopy(session_data) if session_data is not None else {}\n        self.session_id = deepcopy(session_id) if session_id is not None else None\n\n    def get_context(self) -> \"ProcessorContext\":\n        \"\"\"Retrieve the stored Flask global context.\"\"\"\n        return self.g_context\n\n    def get_current_user_id(self) -> str:\n        \"\"\"Retrieve the current user ID from the stored session data.\"\"\"\n        return self.session_data.get(SESSION_USER_ID_KEY)\n\n    def get_session_id(self) -> str:\n        return self.session_id\n\n    def get_parameter_names(self) -> List[str]:\n        return [\n            key.replace(self.parameter_prefix, \"\")\n            for key in dir(self.g_context)\n            if not key.startswith(\"_\") and key not in dir(type(self.g_context))\n        ]\n\n    def get_value(self, name) -> Optional[str]:\n        return self.g_context.get(self.parameter_prefix + name)\n"
  },
  {
    "path": "packages/backend/app/processors/exceptions.py",
    "content": "class LightException(Exception):\n    def __init__(\n        self,\n        message: str,\n        langvar_message: str = \"LightException\",\n        langvar_values: dict = None,\n    ):\n        self.message = message\n        self.langvar_message = langvar_message\n        self.langvar_values = langvar_values\n        super().__init__(f\"{message}\")\n"
  },
  {
    "path": "packages/backend/app/processors/factory/processor_factory.py",
    "content": "from abc import ABC, abstractmethod\n\nfrom ...storage.storage_strategy import StorageStrategy\nfrom ..context.processor_context import ProcessorContext\n\n\nclass ProcessorFactory(ABC):\n    @abstractmethod\n    def create_processor(\n        self,\n        config,\n        context: ProcessorContext = None,\n        storage_strategy: StorageStrategy = None,\n    ):\n        pass\n\n    @abstractmethod\n    def load_processors(self):\n        pass\n"
  },
  {
    "path": "packages/backend/app/processors/factory/processor_factory_iter_modules.py",
    "content": "from enum import Enum\nimport importlib\nimport logging\nimport pkgutil\nimport inspect\nfrom ..components.processor import Processor\nfrom .processor_factory import ProcessorFactory\nfrom injector import singleton\n\n\n@singleton\nclass ProcessorFactoryIterModules(ProcessorFactory):\n    def __init__(self):\n        self._processors = {}\n\n    def register_processor(self, processor_type, processor_class):\n        self._processors[processor_type] = processor_class\n\n    def create_processor(self, config, context_data=None, storage_strategy=None):\n        processor_type = config[\"processorType\"]\n        processor_class = self._processors.get(processor_type)\n        if not processor_class:\n            raise ValueError(f\"Processor type '{processor_type}' not supported\")\n\n        params = inspect.signature(processor_class.__init__).parameters\n        context_param = params.get(\"context\")\n\n        processor = None\n        if context_param is not None:\n            processor = processor_class(config=config, context=context_data)\n        else:\n            processor = processor_class(config=config)\n        processor.set_storage_strategy(storage_strategy)\n\n        return processor\n\n    def load_processors(self):\n        self._load_recursive(\"app.processors.components\")\n\n    def _load_recursive(self, package_name):\n        package = importlib.import_module(package_name)\n        prefix = package.__name__ + \".\"\n        for importer, module_name, is_pkg in pkgutil.iter_modules(\n            package.__path__, prefix\n        ):\n            if is_pkg:\n                self._load_recursive(module_name)\n            else:\n                module = __import__(module_name, fromlist=\"dummy\")\n                for attribute_name in dir(module):\n                    attribute = getattr(module, attribute_name)\n                    if isinstance(attribute, type) and issubclass(attribute, Processor):\n                        if attribute.processor_type is not None:\n                            processor_type_key = (\n                                attribute.processor_type.value\n                                if isinstance(attribute.processor_type, Enum)\n                                else attribute.processor_type\n                            )\n                            self.register_processor(processor_type_key, attribute)\n"
  },
  {
    "path": "packages/backend/app/processors/launcher/abstract_topological_processor_launcher.py",
    "content": "from abc import abstractmethod\nimport json\nimport logging\nfrom typing import List\nfrom injector import inject\nfrom .processor_launcher import ProcessorLauncher\nfrom .event_type import EventType\nfrom .processor_launcher_event import ProcessorLauncherEvent\n\nfrom ..context.processor_context import ProcessorContext\n\nfrom ..observer.observer import Observer\n\nfrom ...storage.storage_strategy import StorageStrategy\nfrom ..factory.processor_factory import ProcessorFactory\n\n\nclass AbstractTopologicalProcessorLauncher(ProcessorLauncher):\n    \"\"\"\n    Basic Processor Launcher emiting event through flask_socketio websockets\n\n    A class that launches processors based on configuration data.\n    \"\"\"\n\n    processor_factory: ProcessorFactory\n    storage_strategy: StorageStrategy\n    observers: List[Observer]\n    context: ProcessorContext\n\n    @inject\n    def __init__(\n        self,\n        processor_factory: ProcessorFactory,\n        storage_strategy: StorageStrategy,\n        observers: List[Observer] = None,\n    ) -> None:\n        self.processor_factory = processor_factory\n        self.storage_strategy = storage_strategy\n        self.processor_factory.load_processors()\n        self.observers = observers or []\n        self.context = None\n\n    def set_context(self, context: ProcessorContext):\n        self.context = context\n\n    def add_observer(self, observer):\n        self.observers.append(observer)\n\n    def _load_config_data(self, fileName):\n        with open(fileName, \"r\") as file:\n            config_data = json.load(file)\n        return config_data\n\n    def _link_processors(self, processors):\n        for processor in processors.values():\n            if hasattr(processor, \"inputs\") and processor.inputs is not None:\n                for input in processor.inputs:\n                    input_processor = processors.get(input.get(\"inputNode\"))\n                    if not input_processor:\n                        logging.error(\n                            f\"Link_processors - processor name : '{processor.name}' - input_processor : '{input.get('inputNode')}'\"\n                        )\n                        raise ValueError(\n                            f\"Input processor '{input.get('inputNode')}' not found\"\n                        )\n                    processor.add_input_processor(input_processor)\n\n    def load_processors(self, config_data):\n        processors = {\n            config[\"name\"]: self.processor_factory.create_processor(\n                config, self.context, self.storage_strategy\n            )\n            for config in config_data\n        }\n\n        self._link_processors(processors)\n        return processors\n\n    def get_node_by_name(self, config_data, node_name):\n        \"\"\"\n        Retrieves a node by its name from the available nodes.\n\n        Parameters:\n            config_data (list): A list of dictionaries containing the configuration data for each processor.\n            node_name (str): The name of the node to find.\n\n        Returns:\n            The node with the given name if found, otherwise None.\n        \"\"\"\n        for node in config_data:\n            if node.get(\"name\") == node_name:\n                return node\n        return None\n\n    def notify_error(self, processor, e):\n        error_event_data = ProcessorLauncherEvent(\n            instance_name=processor.name,\n            user_id=self.context.get_current_user_id(),\n            processor=processor,\n  
          error=e,\n            session_id=self.context.get_session_id(),\n            processor_type=processor.processor_type,\n        )\n        self.notify_observers(EventType.ERROR.value, error_event_data)\n\n    def notify_streaming(self, processor, output, isDone=False, duration=0):\n        streaming_event_data = ProcessorLauncherEvent(\n            instance_name=processor.name,\n            user_id=self.context.get_current_user_id(),\n            output=output,\n            processor=processor,\n            isDone=isDone,\n            processor_type=processor.processor_type,\n            session_id=self.context.get_session_id(),\n            duration=duration,\n        )\n        self.notify_observers(EventType.STREAMING.value, streaming_event_data)\n\n    def notify_progress(self, processor, output, isDone=False, duration=0):\n        progress_event_data = ProcessorLauncherEvent(\n            instance_name=processor.name,\n            user_id=self.context.get_current_user_id(),\n            output=output,\n            processor=processor,\n            isDone=isDone,\n            processor_type=processor.processor_type,\n            session_id=self.context.get_session_id(),\n            duration=duration,\n        )\n        self.notify_observers(EventType.PROGRESS.value, progress_event_data)\n\n    def notify_current_node_running(self, processor):\n        current_node_running_event_data = ProcessorLauncherEvent(\n            instance_name=processor.name,\n            user_id=self.context.get_current_user_id(),\n            processor=processor,\n            session_id=self.context.get_session_id(),\n            processor_type=processor.processor_type,\n        )\n\n        self.notify_observers(\n            EventType.CURRENT_NODE_RUNNING.value, current_node_running_event_data\n        )\n\n    def load_required_processors(self, config_data, node_name):\n        \"\"\"\n        Loads the necessary processors based on the given configuration data and node name.\n\n        Parameters:\n            config_data (list): A list of dictionaries containing the configuration data for each processor.\n            node_name (str): The name of the node being processed.\n\n        Returns:\n            dict: A dictionary mapping processor names to their respective instances.\n\n        The function operates as follows:\n            - Iterates over each configuration in config_data.\n            - Creates a new processor instance based on the configuration.\n            - If outputData is not None and differs from node_name, the processor's output is set accordingly.\n            - Stores each processor instance in a dictionary with its name as the key.\n        \"\"\"\n        processors = {}\n        node = self.get_node_by_name(config_data, node_name)\n        if node and not node.get(\"inputs\"):\n            processor = self.processor_factory.create_processor(\n                node, self.context, self.storage_strategy\n            )\n            processors[node[\"name\"]] = processor\n            logging.debug(f\"Created single processor for node - {node_name}\")\n        else:\n            related_config_data = self.get_related_config_data(\n                config_data, node_name, []\n            )\n            related_config_data.reverse()\n            for config in related_config_data:\n                config_output = config.get(\"outputData\", None)\n                if config_output is None or config[\"name\"] == node_name:\n                    logging.debug(f\"Empty or current node - 
{config['name']}\")\n                    processor = self.processor_factory.create_processor(\n                        config, self.context, self.storage_strategy\n                    )\n                    processors[config[\"name\"]] = processor\n                else:\n                    logging.debug(f\"Non empty node -  {config['name']}\")\n                    processor = self.processor_factory.create_processor(\n                        config, self.context, self.storage_strategy\n                    )\n                    processor.set_output(config_output)\n                    processors[config[\"name\"]] = processor\n        return processors\n\n    def get_related_config_data(self, config_data, node_name, visited):\n        if node_name in visited:\n            return []\n        visited.append(node_name)\n\n        current_config = next(\n            (config for config in config_data if config[\"name\"] == node_name), None\n        )\n\n        if not current_config:\n            return []\n\n        related_configs = [current_config]\n\n        for input in current_config.get(\"inputs\", []):\n            related_configs.extend(\n                self.get_related_config_data(\n                    config_data, input.get(\"inputNode\"), visited\n                )\n            )\n\n        return related_configs\n\n    def load_processors_for_node(self, config_data, node_name):\n        processors = self.load_required_processors(config_data, node_name)\n\n        self._link_processors(processors)\n        return processors\n\n    @abstractmethod\n    def launch_processors(self, processors):\n        pass\n\n    @abstractmethod\n    def launch_processors_for_node(self, processors, node_name=None):\n        pass\n\n    def notify_observers(self, event, data):\n        for observer in self.observers:\n            observer.notify(event, data)\n"
  },
  {
    "path": "packages/backend/app/processors/launcher/async_processor_launcher.py",
    "content": "import gc\nimport threading\nimport time\nimport eventlet\nfrom eventlet.semaphore import Semaphore\nimport logging\nimport traceback\n\nfrom typing import Dict, List\nfrom enum import Enum\n\nfrom .processor_event import ProcessorEvent\n\nfrom .event_type import EventType\n\nfrom ..observer.observer import Observer\n\nfrom ..components.processor import Processor\nfrom .abstract_topological_processor_launcher import (\n    AbstractTopologicalProcessorLauncher,\n)\n\n\nclass AsyncProcessorLauncher(AbstractTopologicalProcessorLauncher, Observer):\n    \"\"\"\n    AsyncProcessorLauncher extends the functionality of the Basic Processor Launcher.\n\n    The main enhancement in this class is the implementation of the 'launch_processors' method,\n    which leverages eventlet greenthreads. This allows for asynchronous execution of processors,\n    enabling efficient handling of I/O-bound tasks and improving the overall performance of processor execution.\n    \"\"\"\n\n    GREENTHREAD_POOL_SIZE = 7\n\n    class NodeState(Enum):\n        PENDING = 1\n        RUNNING = 2\n        COMPLETED = 3\n        ERROR = 4\n\n    class Node:\n        def __init__(self, id: str, parent_ids: List[str], processor: Processor):\n            self.id = id\n            self.parent_ids = parent_ids\n            self.state = AsyncProcessorLauncher.NodeState.PENDING\n            self.output = None\n            self.processor = processor\n            self.lock = Semaphore(1)\n\n        def run(self):\n            with self.lock:\n                if self.state != AsyncProcessorLauncher.NodeState.PENDING:\n                    logging.warning(\n                        f\"Node {self.id} is already being processed or completed.\"\n                    )\n                    return self.output\n\n                self.state = AsyncProcessorLauncher.NodeState.RUNNING\n\n                try:\n                    self.output = self.processor.process_and_update()\n                except Exception as e:\n                    self.state = AsyncProcessorLauncher.NodeState.ERROR\n                    raise e\n\n                self.state = AsyncProcessorLauncher.NodeState.COMPLETED\n                return self.output\n\n        def get_processor(self):\n            return self.processor\n\n    def get_input_processor_names(self, processor: Processor):\n        return [\n            input_processor.name for input_processor in processor.get_input_processors()\n        ]\n\n    def convert_processors_to_node_dict(self, processors: List[Processor]):\n        nodes = {}\n        for processor in processors.values():\n            nodes[processor.name] = self.Node(\n                processor.name, self.get_input_processor_names(processor), processor\n            )\n        return nodes\n\n    def launch_processors(self, processors: List[Processor]):\n        for processor in processors.values():\n            processor.add_observer(self)\n\n        nodes = self.convert_processors_to_node_dict(processors)\n\n        pool = eventlet.GreenPool(AsyncProcessorLauncher.GREENTHREAD_POOL_SIZE)\n\n        logging.debug(nodes)\n\n        initialized_nodes = set()\n\n        while nodes:\n            error_detected = any(\n                node.state == AsyncProcessorLauncher.NodeState.ERROR\n                for node in nodes.values()\n            )\n\n            if error_detected:\n                logging.debug(\"A node is in ERROR state. 
Halting processing.\")\n                break\n\n            for id, node in nodes.items():\n                if (\n                    node.state == AsyncProcessorLauncher.NodeState.PENDING\n                    and self.can_run(node, nodes)\n                    and id not in initialized_nodes\n                ):\n                    logging.debug(f\"Spawning green thread for node {id}.\")\n                    initialized_nodes.add(id)\n                    pool.spawn(self.run_node, node)\n\n            eventlet.sleep(0.5)\n\n            nodes = self.remove_completed_nodes(nodes)\n            logging.debug(f\"Remaining nodes: {[node.id for node in nodes.values()]}\")\n\n        pool.waitall()\n\n    def remove_completed_nodes(self, nodes: List[Node]):\n        return {\n            id: n\n            for id, n in nodes.items()\n            if n.state not in [AsyncProcessorLauncher.NodeState.COMPLETED]\n        }\n\n    def can_run(self, node: Node, nodes: List[Node]):\n        # If parents aren't in the list, then the node can run\n        return all(parent_id not in nodes for parent_id in node.parent_ids)\n\n    def launch_processors_for_node(self, processors: List[Processor], node_name=None):\n        for processor in processors.values():\n            if processor.get_output() is None or processor.name == node_name:\n                processor.add_observer(self)\n                self.run_processor(processor)\n\n            if processor.name == node_name:\n                break\n\n    def run_processor(self, processor: \"Processor\"):\n        try:\n            self.notify_current_node_running(processor)\n\n            start_time = time.time()\n            output = processor.process_and_update()\n\n            end_time = time.time()\n            duration = end_time - start_time\n            self.notify_progress(processor, output, duration=duration, isDone=True)\n        except Exception as e:\n            self.notify_error(processor, e)\n            raise e\n\n    def run_node(self, node: Node):\n        try:\n            processor = node.get_processor()\n            self.notify_current_node_running(processor)\n\n            start_time = time.time()\n            output = node.run()\n            end_time = time.time()\n            duration = end_time - start_time\n            self.notify_progress(node.get_processor(), output, duration=duration)\n        except Exception as e:\n            node.state = AsyncProcessorLauncher.NodeState.ERROR\n            self.notify_error(node.get_processor(), e)\n            traceback.print_exc()\n            raise e\n\n    def notify(self, event: EventType, data: ProcessorEvent):\n        if event == EventType.STREAMING:\n            self.notify_streaming(data.source, data.output)\n"
  },
  {
    "path": "packages/backend/app/processors/launcher/basic_processor_launcher.py",
    "content": "from .abstract_topological_processor_launcher import AbstractTopologicalProcessorLauncher\n\n\nclass BasicProcessorLauncher(AbstractTopologicalProcessorLauncher):\n    \"\"\"\n    Basic Processor Launcher emiting event\n\n    A class that launches processors based on configuration data.\n    \"\"\"\n\n    def launch_processors(self, processors):\n        for processor in processors.values():\n            self.notify_current_node_running(processor)\n            try :\n                    output = processor.process()\n                    self.notify_progress(processor, output)\n                    \n            except Exception as e:\n                self.notify_error(processor, e)\n                raise e\n\n    def launch_processors_for_node(self, processors, node_name=None):\n        for processor in processors.values():\n            if processor.get_output() is None or processor.name == node_name:\n                \n                self.notify_current_node_running(processor)\n                try :\n                    output = processor.process()\n                    self.notify_progress(processor, output)\n                    \n                except Exception as e:\n                    self.notify_error(processor, e)\n                    raise e\n\n            if processor.name == node_name:\n                break\n"
  },
  {
    "path": "packages/backend/app/processors/launcher/event_type.py",
    "content": "from enum import Enum\n\n\nclass EventType(Enum):\n    PROGRESS = \"progress\"\n    STREAMING = \"streaming\"\n    CURRENT_NODE_RUNNING = \"current_node_running\"\n    ERROR = \"error\"\n"
  },
  {
    "path": "packages/backend/app/processors/launcher/processor_event.py",
    "content": "from dataclasses import dataclass, field\nfrom typing import Any\n\n\n@dataclass\nclass ProcessorEvent:\n    source: Any = field(default=None)\n    output: Any = field(default=None)\n    error: str = field(default=None)\n"
  },
  {
    "path": "packages/backend/app/processors/launcher/processor_launcher.py",
    "content": "from abc import ABC, abstractmethod\n\nfrom ..context.processor_context import ProcessorContext\n\n\nclass ProcessorLauncher(ABC):\n    @abstractmethod\n    def load_processors(self, config_data):\n        pass\n\n    @abstractmethod\n    def load_processors_for_node(self, config_data, node_name):\n        pass\n\n    @abstractmethod\n    def launch_processors(self, processor):\n        pass\n\n    @abstractmethod\n    def launch_processors_for_node(self, processors, node_name):\n        pass\n    \n    @abstractmethod\n    def set_context(self, context: ProcessorContext):\n        pass\n"
  },
  {
    "path": "packages/backend/app/processors/launcher/processor_launcher_event.py",
    "content": "from dataclasses import dataclass, field\nfrom typing import Any\n\nfrom ..components.processor import Processor\n\n\n@dataclass\nclass ProcessorLauncherEvent:\n    instance_name: str\n    user_id: int = field(default=None)\n    output: Any = field(default=None)\n    processor_type: str = field(default=None)\n    processor: Processor = field(default=None)\n    isDone: bool = field(default=False)\n    error: str = field(default=None)\n    session_id: str = field(default=None)\n    duration: float = field(default=0)\n"
  },
  {
    "path": "packages/backend/app/processors/observer/observer.py",
    "content": "from abc import ABC, abstractmethod\n\n\nclass Observer(ABC):\n    @abstractmethod\n    def notify(self, event, data):\n        pass\n"
  },
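The `Observer` interface above only requires a `notify(event, data)` method. A minimal sketch of an alternative observer is shown below; the class name `LoggingObserver` and its log format are illustrative and not part of the repo.

```python
# A minimal sketch of an Observer implementation, assuming only the
# notify(event, data) contract defined in observer.py.
import logging

from app.processors.observer.observer import Observer


class LoggingObserver(Observer):
    def notify(self, event, data):
        # Log every event reported by a launcher instead of emitting it over SocketIO.
        logging.info("event=%s data=%s", event, data)
```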
  {
    "path": "packages/backend/app/processors/observer/socketio_event_emitter.py",
    "content": "from ..launcher.event_type import EventType\nfrom ..launcher.processor_launcher_event import ProcessorLauncherEvent\n\nfrom .observer import Observer\nimport logging\nfrom ...flask.socketio_init import socketio\n\n\nclass SocketIOEventEmitter(Observer):\n    \"\"\"\n    A SocketIO event emitter that emits events to clients connected via WebSocket.\n\n    This class implements the Observer pattern and is designed to emit events\n    to specific client sessions in a Flask-SocketIO application. It can be safely\n    executed within greenthreads, making it suitable for use in environments\n    where asynchronous operations and real-time communication are required.\n\n    Attributes:\n        None\n\n    Methods:\n        notify(event, data): Emits the specified event to the client associated\n                            with the session ID in `data`. Handles exceptions\n                            gracefully and logs emission details.\n    \"\"\"\n\n    def notify(self, event: EventType, data: ProcessorLauncherEvent):\n        if event == EventType.STREAMING.value:\n            event = EventType.PROGRESS.value\n\n        json_event = {}\n\n        json_event[\"instanceName\"] = data.instance_name\n\n        if data.output is not None:\n            json_event[\"output\"] = data.output\n\n        if data.isDone is not None:\n            json_event[\"isDone\"] = data.isDone\n\n        if data.error is not None:\n            json_event[\"error\"] = str(data.error)\n\n        try:\n            socketio.emit(event, json_event, to=data.session_id)\n            logging.debug(\n                f\"Successfully emitted event {event} with data {json_event} to {data.session_id}\"\n            )\n        except Exception as e:\n            logging.error(f\"Error emitting event {event}: {e}\")\n"
  },
  {
    "path": "packages/backend/app/processors/utils/retry_mixin.py",
    "content": "import time\nimport logging\n\n\nclass RetryMixin:\n    def run_with_retry(self, func, *args, **kwargs):\n        \"\"\"\n        Executes `func` with retries as defined in the processor configuration.\n        Expected configuration keys:\n            - max_retries: number of extra attempts (default 0 means no retry)\n            - retry_delay: delay (in seconds) between attempts (default 0)\n        \"\"\"\n        retries = getattr(self, \"max_retries\", 0)\n        delay = getattr(self, \"retry_delay\", 0)\n        for attempt in range(retries + 1):\n            try:\n                return func(*args, **kwargs)\n            except Exception as e:\n                logging.warning(\n                    f\"Attempt {attempt+1}/{retries+1} for {func.__name__} failed\"\n                )\n                if attempt == retries:\n                    raise\n                if delay:\n                    time.sleep(delay)\n"
  },
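Per the `RetryMixin` docstring, `run_with_retry` reads `max_retries` and `retry_delay` from the host object via `getattr`. A minimal usage sketch follows; `FlakyFetcher` and `fetch` are illustrative names, not part of the repo.

```python
# A minimal sketch of mixing RetryMixin into a class and letting
# run_with_retry pick up max_retries / retry_delay from the instance.
from app.processors.utils.retry_mixin import RetryMixin


class FlakyFetcher(RetryMixin):
    max_retries = 2    # two extra attempts after the first failure
    retry_delay = 0.5  # seconds to wait between attempts

    def fetch(self, url):
        def _do_request():
            # Replace with a real call; raising here triggers a retry.
            raise ConnectionError(f"could not reach {url}")

        return self.run_with_retry(_do_request)
```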
  {
    "path": "packages/backend/app/root_injector.py",
    "content": "from typing import List\nfrom injector import Injector, Binder, Module\nfrom tests.utils.processor_factory_mock import ProcessorFactoryMock\nfrom app.processors.launcher.async_processor_launcher import AsyncProcessorLauncher\n\nfrom app.processors.observer.socketio_event_emitter import SocketIOEventEmitter\nfrom app.processors.observer.observer import Observer\nfrom app.storage.local_storage_strategy import LocalStorageStrategy\nfrom app.storage.s3_storage_strategy import S3StorageStrategy\nfrom app.storage.storage_strategy import StorageStrategy\nfrom app.env_config import is_mock_env, is_s3_enabled\nfrom app.processors.factory.processor_factory import ProcessorFactory\nfrom app.processors.factory.processor_factory_iter_modules import (\n    ProcessorFactoryIterModules,\n)\nfrom app.processors.launcher.processor_launcher import ProcessorLauncher\nimport logging\n\n\nclass ProcessorFactoryModule(Module):\n    def configure(self, binder: Binder):\n        if is_mock_env():\n            fake_factory = ProcessorFactoryMock(with_delay=True)\n            binder.bind(ProcessorFactory, to=fake_factory)\n        else:\n            binder.bind(ProcessorFactory, to=ProcessorFactoryIterModules)\n\n\nclass StorageModule(Module):\n    def configure(self, binder: Binder):\n        if is_s3_enabled():\n            logging.info(\"Using S3 storage strategy\")\n            binder.bind(StorageStrategy, to=S3StorageStrategy)\n        else:\n            logging.info(\"Using local storage strategy\")\n            binder.bind(StorageStrategy, to=LocalStorageStrategy)\n\n\nclass ProcessorLauncherModule(Module):\n    def configure(self, binder: Binder):\n        binder.bind(ProcessorLauncher, to=AsyncProcessorLauncher)\n        observer_list = [SocketIOEventEmitter()]\n\n        binder.multibind(List[Observer], to=observer_list)\n\n\ndef create_application_injector() -> Injector:\n    injector = Injector(\n        [\n            ProcessorFactoryModule(),\n            StorageModule(),\n            ProcessorLauncherModule(),\n        ],\n        auto_bind=True,\n    )\n    return injector\n\n\n_current_injector: Injector = create_application_injector()\n\n\ndef get_root_injector() -> Injector:\n    return _current_injector\n\n\ndef refresh_root_injector() -> None:\n    global _current_injector\n    _current_injector = create_application_injector()\n"
  },
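The root injector binds `ProcessorLauncher` to `AsyncProcessorLauncher` and `StorageStrategy` to either the S3 or local strategy. The sketch below shows how a caller could resolve those bindings; whether this is exactly how the Flask routes obtain their dependencies is an assumption.

```python
# A minimal sketch of resolving bindings from the root injector.
from app.root_injector import get_root_injector
from app.processors.launcher.processor_launcher import ProcessorLauncher
from app.storage.storage_strategy import StorageStrategy

injector = get_root_injector()

# Resolves to AsyncProcessorLauncher per ProcessorLauncherModule.
launcher = injector.get(ProcessorLauncher)

# Resolves to S3StorageStrategy or LocalStorageStrategy depending on is_s3_enabled().
storage = injector.get(StorageStrategy)
```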
  {
    "path": "packages/backend/app/storage/local_storage_strategy.py",
    "content": "from typing import Any\nfrom ..storage.storage_strategy import StorageStrategy\nfrom werkzeug.utils import secure_filename\nimport os\nfrom app.env_config import (\n    get_local_storage_folder_path,\n)\nfrom injector import singleton\n\n\n@singleton\nclass LocalStorageStrategy(StorageStrategy):\n    \"\"\"Local storage strategy. To be used only when you're running the app on your own machine.\n    Every generated image is saved in a local directory.\"\"\"\n\n    LOCAL_DIR = get_local_storage_folder_path()\n\n    def save(self, filename: str, data: Any) -> str:\n        if not os.path.exists(self.LOCAL_DIR):\n            os.makedirs(self.LOCAL_DIR)\n\n        secure_name = secure_filename(filename)\n        filepath = os.path.join(self.LOCAL_DIR, secure_name)\n        with open(filepath, \"wb\") as f:\n            f.write(data)\n\n        return self.get_url(secure_name)\n\n    def get_url(self, filename: str) -> str:\n        port = os.getenv(\"PORT\")\n        return f\"http://localhost:{port}/image/{filename}\"\n\n    def get_file(self, filename: str) -> bytes:\n        pass\n"
  },
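A minimal usage sketch of the local strategy, assuming `PORT` and the local storage folder are configured through the usual environment variables; the file name and payload are illustrative.

```python
# Saves bytes under the local folder and returns a localhost URL
# of the form http://localhost:<PORT>/image/<filename>.
from app.storage.local_storage_strategy import LocalStorageStrategy

storage = LocalStorageStrategy()
url = storage.save("example.png", b"\x89PNG...")
print(url)
```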
  {
    "path": "packages/backend/app/storage/s3_storage_strategy.py",
    "content": "import logging\nfrom typing import Any\nimport uuid\nfrom ..storage.storage_strategy import CloudStorageStrategy\nimport boto3\nfrom botocore.config import Config\nimport os\nfrom datetime import timedelta\nfrom injector import singleton\nimport mimetypes\nimport requests\n\n\n@singleton\nclass S3StorageStrategy(CloudStorageStrategy):\n    \"\"\"S3 storage strategy. For the cloud version, every generated image is saved in an S3 bucket for 12H.\"\"\"\n\n    EXPIRATION = timedelta(hours=24)\n    UPLOAD_EXPIRATION = timedelta(minutes=10)\n    MAX_UPLOAD_SIZE_BYTES = int(os.getenv(\"MAX_UPLOAD_SIZE_MB\", \"300\")) * 1024 * 1024\n    MAX_POOL_CONNECTIONS = int(os.getenv(\"MAX_POOL_CONNECTIONS\", \"100\"))\n\n    def __init__(self):\n        self.BUCKET_NAME = os.getenv(\"S3_BUCKET_NAME\")\n        endpoint_url = os.getenv(\"S3_ENDPOINT_URL\")\n\n        if not endpoint_url:\n            endpoint_url = None\n\n        kwargs = {\n            \"aws_access_key_id\": os.getenv(\"S3_AWS_ACCESS_KEY_ID\"),\n            \"aws_secret_access_key\": os.getenv(\"S3_AWS_SECRET_ACCESS_KEY\"),\n            \"region_name\": os.getenv(\"S3_AWS_REGION_NAME\"),\n            \"config\": Config(max_pool_connections=self.MAX_POOL_CONNECTIONS),\n        }\n\n        if endpoint_url is not None:\n            kwargs[\"endpoint_url\"] = endpoint_url\n\n        self.s3_client = boto3.client(\n            \"s3\",\n            **kwargs,\n        )\n\n    def save(self, filename: str, data: Any, bucket_name: str = None) -> str:\n        if bucket_name is None:\n            bucket_name = self.BUCKET_NAME\n\n        self.s3_client.put_object(Bucket=bucket_name, Key=filename, Body=data)\n\n        url = self.s3_client.generate_presigned_url(\n            ClientMethod=\"get_object\",\n            Params={\"Bucket\": bucket_name, \"Key\": filename},\n            ExpiresIn=int(self.EXPIRATION.total_seconds()),\n        )\n\n        return url\n\n    def get_upload_link(self, filename=None) -> str:\n        file_key = f\"uploads/{uuid.uuid4()}\"\n\n        content_type = None\n\n        if not mimetypes.guess_type(\"test.webp\")[0]:\n            mimetypes.add_type(\"image/webp\", \".webp\")\n\n        if not mimetypes.guess_type(\"test.safetensors\")[0]:\n            mimetypes.add_type(\"application/octet-stream\", \".safetensors\")\n\n        if filename:\n            extension = filename.split(\".\")[-1]\n            file_key += f\".{extension}\"\n            mime_type, _ = mimetypes.guess_type(filename)\n            content_type = mime_type\n\n        try:\n            upload_data = self.s3_client.generate_presigned_post(\n                Bucket=self.BUCKET_NAME,\n                Key=file_key,\n                Fields=None,\n                Conditions=[[\"content-length-range\", 0, self.MAX_UPLOAD_SIZE_BYTES]],\n                ExpiresIn=int(self.UPLOAD_EXPIRATION.total_seconds()),\n            )\n\n            download_url = self.s3_client.generate_presigned_url(\n                ClientMethod=\"get_object\",\n                Params={\n                    \"Bucket\": self.BUCKET_NAME,\n                    \"Key\": file_key,\n                    \"ResponseContentType\": content_type,\n                },\n                ExpiresIn=int(self.EXPIRATION.total_seconds()),\n            )\n        except Exception as e:\n            logging.error(e)\n            raise Exception(\n                \"Error uploading file. \"\n                \"Please check your S3 configuration. 
\"\n                \"If you've not configured S3 please refer to docs.ai-flow.net/docs/file-upload\"\n            )\n\n        return upload_data, download_url\n\n    def get_url(self, filename: str, bucket_name: str = None) -> str:\n        \"\"\"Get presigned URL based on filename (URI)\"\"\"\n        if bucket_name is None:\n            bucket_name = self.BUCKET_NAME\n        try:\n            url = self.s3_client.generate_presigned_url(\n                ClientMethod=\"get_object\",\n                Params={\n                    \"Bucket\": bucket_name,\n                    \"Key\": filename,\n                },\n                ExpiresIn=int(self.EXPIRATION.total_seconds()),\n            )\n            return url\n        except Exception as e:\n            logging.error(f\"Error generating presigned URL for {filename}: {e}\")\n            raise Exception(\"Error generating presigned URL. \")\n\n    def get_file(self, filename: str, bucket_name: str = None) -> bytes:\n        \"\"\"Get file based on filename (URI)\"\"\"\n\n        if filename.startswith(\"s3://\"):\n            filename = filename[len(\"s3://\") :]\n            filename = filename[filename.index(\"/\") + 1 :]\n\n        if bucket_name is None:\n            bucket_name = self.BUCKET_NAME\n        try:\n            response = self.s3_client.get_object(Bucket=bucket_name, Key=filename)\n            return response[\"Body\"].read()\n        except Exception as e:\n            logging.error(f\"Error getting file {filename}: {e}\")\n            raise Exception(\"Error getting file. \")\n\n    def upload_and_get_link(self, filename: str, bucket_name: str = None) -> str:\n        \"\"\"Upload file and get link based on filename (URI)\"\"\"\n        if bucket_name is None:\n            bucket_name = self.BUCKET_NAME\n\n        upload_data, download_url = self.get_upload_link(filename)\n        url = upload_data[\"url\"]\n        fields = upload_data[\"fields\"]\n\n        filepath = filename\n        if not os.path.isfile(filepath):\n            raise FileNotFoundError(f\"File '{filename}' does not exist.\")\n\n        with open(filepath, \"rb\") as file:\n            files = {\"file\": file}\n\n            response = requests.post(url, data=fields, files=files)\n\n            if response.status_code == 204:\n                logging.info(\"File uploaded successfully.\")\n                return download_url\n            else:\n                logging.error(f\"Failed to upload file: {response.text}\")\n                response.raise_for_status()\n"
  },
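A minimal sketch of the S3 strategy, assuming `S3_BUCKET_NAME`, `S3_AWS_ACCESS_KEY_ID`, `S3_AWS_SECRET_ACCESS_KEY` and `S3_AWS_REGION_NAME` (optionally `S3_ENDPOINT_URL`) are already set; the object key and local path are illustrative.

```python
from app.storage.s3_storage_strategy import S3StorageStrategy

storage = S3StorageStrategy()

# put_object followed by a presigned GET URL valid for EXPIRATION (24h in the code).
url = storage.save("generated/example.png", b"\x89PNG...")

# Presigned-POST upload of an existing local file, returning a presigned download URL.
download_url = storage.upload_and_get_link("local/path/to/model.safetensors")
```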
  {
    "path": "packages/backend/app/storage/storage_strategy.py",
    "content": "from abc import ABC, abstractmethod\nfrom typing import Any, Optional\n\n\nclass StorageStrategy(ABC):\n    \"\"\"Storage strategy interface. We use this storage strategy to save and get the url of documents.\n    This is especially useful for the image generated by the stable diffusion model.\"\"\"\n\n    @abstractmethod\n    def save(self, filename: str, data: Any) -> Optional[str]:\n        pass\n\n    @abstractmethod\n    def get_url(self, filename: str) -> str:\n        pass\n\n    @abstractmethod\n    def get_file(self, filename: str, *args) -> bytes:\n        pass\n\n\nclass CloudStorageStrategy(StorageStrategy):\n    @abstractmethod\n    def get_upload_link(self, filename: str) -> str:\n        pass\n"
  },
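To illustrate the interface above, here is a minimal in-memory implementation sketch; it is only an example of satisfying the abstract methods and is not part of the repo.

```python
# A minimal sketch of a custom StorageStrategy (in-memory), for illustration only.
from typing import Any, Optional

from app.storage.storage_strategy import StorageStrategy


class InMemoryStorageStrategy(StorageStrategy):
    def __init__(self):
        self._files = {}

    def save(self, filename: str, data: Any) -> Optional[str]:
        self._files[filename] = data
        return self.get_url(filename)

    def get_url(self, filename: str) -> str:
        return f"memory://{filename}"

    def get_file(self, filename: str, *args) -> bytes:
        return self._files[filename]
```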
  {
    "path": "packages/backend/app/tasks/green_pool_task_manager.py",
    "content": "import logging\nfrom queue import Queue\nimport eventlet\nfrom eventlet.green import threading\n\n\nfrom .task_exception import TaskAlreadyRegisteredError\n\nfrom ..env_config import get_background_task_max_workers\n\ntask_queues = {}\ntask_processors = {}\ntask_semaphores = {}\n\npool = eventlet.GreenPool(size=get_background_task_max_workers())\n\n\ndef register_task_processor(task_name, processor_func, max_concurrent_tasks=2):\n    if task_name in task_queues:\n        raise TaskAlreadyRegisteredError(task_name=task_name)\n\n    task_queue = Queue()\n    task_queues[task_name] = task_queue\n    task_processors[task_name] = processor_func\n    task_semaphores[task_name] = threading.Semaphore(max_concurrent_tasks)\n\n    logging.info(\n        f\"Registered green pool task processor '{task_name}' with max_concurrent_tasks={max_concurrent_tasks}\"\n    )\n\n\ndef process_task(task_name, task_data, task_result_queue):\n    semaphore = task_semaphores.get(task_name)\n    if semaphore is not None:\n        with semaphore:\n            if task_name in task_processors:\n                processor_func = task_processors[task_name]\n                result = processor_func(task_data)\n                task_result_queue.put(result)\n            else:\n                raise ValueError(f\"Nao task processor registered for {task_name}\")\n    else:\n        raise ValueError(f\"No semaphore registered for {task_name}\")\n\n\ndef add_task(task_name, task_data, result_queue):\n    if task_name in task_queues:\n        return pool.spawn(process_task, task_name, task_data, result_queue)\n    else:\n        raise ValueError(f\"No task processor registered for {task_name}\")\n"
  },
  {
    "path": "packages/backend/app/tasks/single_thread_tasks/browser/async_browser_task.py",
    "content": "import logging\nimport re\nimport asyncio\nimport threading\nfrom ....utils.web_scrapping.async_browser_manager import (\n    AsyncBrowserManager,\n)\n\nbrowser_task_queue = None\nevent_loop = None\n\n\nasync def accept_cookies(page, cookies_consent_label, timeout=5000):\n    try:\n        await page.wait_for_selector(\n            f\"button:has-text('{cookies_consent_label}')\", timeout=timeout\n        )\n        accept_button = page.locator(\n            f\"button:has-text('{cookies_consent_label}')\"\n        ).first\n        if not accept_button:\n            return\n        await accept_button.click()\n        await page.wait_for_timeout(2000)\n    except Exception as e:\n        logging.warning(\"Could not find or click the cookie accept button:\", e)\n\n\ndef strip_attributes(html):\n    return re.sub(r\"(<\\w+)(\\s+[^>]+)?(>)\", r\"\\1\\3\", html)\n\n\nasync def fetch_url_content(\n    url,\n    browser_manager,\n    with_html_tags=False,\n    with_html_attributes=False,\n    selectors=None,\n    selectors_to_remove=None,\n    auto_consent_cookies=False,\n    enable_ad_blocker=False,\n    cookies_consent_label=None,\n):\n    page, context = await browser_manager.get_tab()\n    try:\n        await page.goto(url, timeout=30000, wait_until=\"domcontentloaded\")\n    except Exception as e:\n        logging.error(f\"Failed to load page: {str(e)}\")\n        return \"\"\n    try:\n        await page.wait_for_load_state(\"networkidle\", timeout=10000)\n        content_attempts = 0\n        max_attempts = 3\n\n        while True:\n            content_attempts += 1\n\n            try:\n                content = await page.content()\n                break\n            except Exception as e:\n                if \"Page.content\" in str(e):\n                    if content_attempts >= max_attempts:\n                        logging.error(\n                            f\"Failed to retrieve page {url} content after {content_attempts} attempts: {str(e)}\"\n                        )\n                        return \"\"\n\n                    await page.wait_for_load_state(\"load\", timeout=5000)\n                else:\n                    raise\n\n        if selectors_to_remove:\n            for selector in selectors_to_remove:\n                elements = await page.query_selector_all(selector)\n                for element in elements:\n                    await page.evaluate(\"(element) => element.remove()\", element)\n        content = \"\"\n        if selectors and len(selectors) > 0:\n            for selector in selectors:\n                await page.wait_for_selector(selector, timeout=3000)\n                elements = await page.query_selector_all(selector)\n                for element in elements:\n                    content_piece = (\n                        await element.inner_html()\n                        if with_html_tags\n                        else await element.inner_text()\n                    )\n                    content += content_piece + \"\\n\"\n        else:\n            content = (\n                await page.content()\n                if with_html_tags\n                else await page.inner_text(\"body\")\n            )\n        if with_html_tags and not with_html_attributes:\n            content = strip_attributes(content)\n    except Exception as e:\n        logging.error(f\"Error processing {url} page content: {str(e)}\")\n\n        content = \"\"\n    finally:\n        await browser_manager.release_tab(page, context)\n    return content\n\n\nasync def 
scrapping_task(task_data, browser_manager):\n    selectors = task_data.get(\"selectors\", [])\n    selectors_to_remove = task_data.get(\"selectors_to_remove\", [])\n    with_html_tags = task_data.get(\"with_html_tags\", False)\n    with_html_attributes = task_data.get(\"with_html_attributes\", False)\n    url = task_data.get(\"url\")\n    cookies_consent_label = task_data.get(\"cookies_consent_label\", None)\n    auto_consent_cookies = task_data.get(\"auto_consent_cookies\", False)\n    enable_ad_blocker = task_data.get(\"enable_ad_blocker\", False)\n    content = await fetch_url_content(\n        url,\n        browser_manager,\n        with_html_tags=with_html_tags,\n        with_html_attributes=with_html_attributes,\n        selectors=selectors,\n        selectors_to_remove=selectors_to_remove,\n        cookies_consent_label=cookies_consent_label,\n        auto_consent_cookies=auto_consent_cookies,\n        enable_ad_blocker=enable_ad_blocker,\n    )\n    return content\n\n\nasync def add_task(task_data, result_queue):\n    await browser_task_queue.put((task_data, result_queue))\n\n\nasync def browser_task_worker():\n    global browser_task_queue\n    browser_task_queue = asyncio.Queue()\n    browser_manager = AsyncBrowserManager()\n    await browser_manager.initialize_browser()\n\n    logging.info(\"Starting browser task worker\")\n    while True:\n        task_data, result_queue = await browser_task_queue.get()\n        if task_data is None:  # Exit signal\n            logging.info(\"Exiting browser task worker\")\n            break\n        try:\n            result = await scrapping_task(task_data, browser_manager)\n            result_queue.put(result)\n        except Exception as e:\n            logging.error(f\"Error in browser task worker: {e}\")\n            import traceback\n\n            traceback.print_exc()\n\n\nevent_loop = None\n\n\ndef start_event_loop():\n    global event_loop\n    event_loop = asyncio.new_event_loop()\n    asyncio.set_event_loop(event_loop)\n    event_loop.run_until_complete(browser_task_worker())\n    event_loop.run_forever()\n\n\ndef stop_event_loop():\n    event_loop.run_until_complete(browser_manager.close_browser())\n    event_loop.stop()\n    event_loop.close()\n\n\ndef add_task_sync(task_data, result_queue):\n    future = asyncio.run_coroutine_threadsafe(\n        add_task(task_data, result_queue), event_loop\n    )\n    return future\n\n\nevent_loop_thread = threading.Thread(target=start_event_loop)\nevent_loop_thread.start()\n"
  },
  {
    "path": "packages/backend/app/tasks/single_thread_tasks/browser/browser_task.py",
    "content": "import logging\nimport queue\nimport re\nimport threading\nimport time\nfrom ....utils.web_scrapping.browser_manager import BrowserManager\n\n\nbrowser_task_queue = queue.Queue()\n\n\ndef accept_cookies(page, cookies_consent_label, timeout=5000):\n    try:\n        page.wait_for_selector(\n            f\"button:has-text('{cookies_consent_label}')\", timeout=timeout\n        )\n        accept_button = page.locator(\n            f\"button:has-text('{cookies_consent_label}')\"\n        ).first\n        accept_button.click()\n        page.wait_for_timeout(2000)\n    except Exception as e:\n        logging.warning(\"Could not find or click the cookie accept button:\", e)\n\n\ndef strip_attributes(html):\n    return re.sub(r\"(<\\w+)(\\s+[^>]+)?(>)\", r\"\\1\\3\", html)\n\n\ndef fetch_url_content(\n    url,\n    browser_manager,\n    with_html_tags=False,\n    with_html_attributes=False,\n    selectors=None,\n    selectors_to_remove=None,\n    cookies_consent_label=None,\n):\n    page, context = browser_manager.get_tab()\n    try:\n        page.goto(url, timeout=30000)\n    except Exception as e:\n        logging.error(f\"Failed to load page: {str(e)}\")\n        return \"\"\n    try:\n        if cookies_consent_label:\n            accept_cookies(page, cookies_consent_label)\n        if selectors_to_remove:\n            for selector in selectors_to_remove:\n                elements = page.query_selector_all(selector)\n                for element in elements:\n                    page.evaluate(\"(element) => element.remove()\", element)\n        content = \"\"\n        if selectors and len(selectors) > 0:\n            for selector in selectors:\n                elements = page.query_selector_all(selector)\n                for element in elements:\n                    content_piece = (\n                        element.inner_html() if with_html_tags else element.inner_text()\n                    )\n                    content += content_piece + \"\\n\"\n        else:\n            content = page.content() if with_html_tags else page.inner_text(\"body\")\n        if with_html_tags and not with_html_attributes:\n            content = strip_attributes(content)\n    except Exception as e:\n        logging.error(f\"Error processing page content: {str(e)}\")\n        content = \"\"\n    finally:\n        browser_manager.release_tab(page, context)\n    return content\n\n\ndef scrapping_task(task_data, browser_manager):\n\n    selectors = task_data.get(\"selectors\", [])\n    selectors_to_remove = task_data.get(\"selectors_to_remove\", [])\n    with_html_tags = task_data.get(\"with_html_tags\", False)\n    with_html_attributes = task_data.get(\"with_html_attributes\", False)\n    url = task_data.get(\"url\")\n    cookies_consent_label = task_data.get(\"cookies_consent_label\", None)\n    content = fetch_url_content(\n        url,\n        browser_manager,\n        with_html_tags=with_html_tags,\n        with_html_attributes=with_html_attributes,\n        selectors=selectors,\n        selectors_to_remove=selectors_to_remove,\n        cookies_consent_label=cookies_consent_label,\n    )\n    return content\n\n\ndef add_task_sync(task_data, result_queue):\n    browser_task_queue.put((task_data, result_queue))\n\n\ndef browser_thread_func(task_queue):\n    logging.info(\"Starting browser thread\")\n    browser_manager = BrowserManager()\n    browser_manager.initialize_browser()\n\n    while True:\n        try:\n            task_data, result_queue = task_queue.get()\n            if task_data is 
None:  # Exit signal\n                logging.info(\"Exiting browser thread\")\n                break\n\n            result = scrapping_task(task_data, browser_manager)\n            result_queue.put(result)\n\n        except Exception as e:\n            logging.error(f\"Error in browser thread: {e}\")\n        finally:\n            time.sleep(0.1)\n\n\ndef stop_browser_thread():\n    browser_task_queue.put((None, None))\n    browser_thread.join()\n\n\nbrowser_thread = threading.Thread(\n    target=browser_thread_func, args=(browser_task_queue,)\n)\nbrowser_thread.start()\n"
  },
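A minimal sketch of queueing a scraping task against the synchronous browser worker above; note that importing `browser_task` starts the Playwright worker thread as a side effect, and the URL and selector used here are illustrative.

```python
from queue import Queue

from app.tasks.single_thread_tasks.browser.browser_task import add_task_sync

result_queue = Queue()
add_task_sync(
    {
        "url": "https://example.com",
        "selectors": ["h1"],      # optional: restrict extraction to these nodes
        "with_html_tags": False,  # inner_text instead of inner_html
    },
    result_queue,
)

# The worker thread puts the extracted page content into the queue when done.
content = result_queue.get(timeout=60)
print(content)
```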
  {
    "path": "packages/backend/app/tasks/task_exception.py",
    "content": "class TaskAlreadyRegisteredError(Exception):\n    \"\"\"Exception raised when attempting to register a task that is already registered.\"\"\"\n\n    def __init__(self, task_name):\n        self.task_name = task_name\n        super().__init__(f\"Task '{task_name}' is already registered.\")\n"
  },
  {
    "path": "packages/backend/app/tasks/task_manager.py",
    "content": "import logging\nfrom queue import Queue\nfrom concurrent.futures import ThreadPoolExecutor\nimport threading\n\nfrom .task_exception import TaskAlreadyRegisteredError\n\nfrom ..env_config import get_background_task_max_workers\n\ntask_queues = {}\ntask_processors = {}\ntask_semaphores = {}\n\nexecutor = ThreadPoolExecutor(max_workers=get_background_task_max_workers())\n\n\ndef register_task_processor(task_name, processor_func, max_concurrent_tasks=2):\n    if task_name in task_queues:\n        raise TaskAlreadyRegisteredError(task_name=task_name)\n\n    task_queue = Queue()\n    task_queues[task_name] = task_queue\n    task_processors[task_name] = processor_func\n    task_semaphores[task_name] = threading.Semaphore(max_concurrent_tasks)\n\n    logging.info(\n        f\"Registered task processor '{task_name}' with max_concurrent_tasks={max_concurrent_tasks}\"\n    )\n\n\ndef process_task(task_name, task_data, task_result_queue):\n    semaphore = task_semaphores.get(task_name)\n    if semaphore is not None:\n        with semaphore:\n            if task_name in task_processors:\n                processor_func = task_processors[task_name]\n                result = processor_func(task_data)\n                task_result_queue.put(result)\n            else:\n                raise ValueError(f\"No task processor registered for {task_name}\")\n    else:\n        raise ValueError(f\"No semaphore registered for {task_name}\")\n\n\ndef add_task(task_name, task_data, result_queue):\n    if task_name in task_queues:\n        executor.submit(process_task, task_name, task_data, result_queue)\n    else:\n        raise ValueError(f\"No task processor registered for {task_name}\")\n"
  },
  {
    "path": "packages/backend/app/tasks/task_utils.py",
    "content": "from queue import Empty\nimport time\nimport eventlet\n\n\ndef wait_for_result(queue, timeout=120, initial_sleep=0.1, max_sleep=5.0):\n    start_time = time.time()\n    sleep_duration = initial_sleep\n\n    while True:\n        try:\n            result = queue.get_nowait()\n            return result\n        except Empty:\n            if time.time() - start_time >= timeout:\n                raise TimeoutError(\"Operation timed out after the specified timeout\")\n\n            eventlet.sleep(sleep_duration)\n            sleep_duration = min(sleep_duration * 1.5, max_sleep)\n"
  },
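The task managers and `wait_for_result` are meant to be used together: register a processor once, enqueue task data with a result queue, then poll the queue. A minimal sketch follows; the task name `"image_resize"` and the processor body are illustrative, not part of the repo.

```python
from queue import Queue

from app.tasks.task_manager import register_task_processor, add_task
from app.tasks.task_utils import wait_for_result


def resize_image(task_data):
    # Placeholder for real work on task_data["url"].
    return f"resized:{task_data['url']}"


register_task_processor("image_resize", resize_image, max_concurrent_tasks=2)

result_queue = Queue()
add_task("image_resize", {"url": "https://example.com/cat.png"}, result_queue)

# Polls the queue with exponential backoff (eventlet-friendly sleep).
result = wait_for_result(result_queue, timeout=30)
```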
  {
    "path": "packages/backend/app/tasks/thread_pool_task_manager.py",
    "content": "from concurrent.futures import ThreadPoolExecutor\nimport logging\nfrom queue import Queue\nimport threading\n\n\nfrom .task_exception import TaskAlreadyRegisteredError\n\nfrom ..env_config import get_background_task_max_workers\n\ntask_queues = {}\ntask_processors = {}\ntask_semaphores = {}\n\nexecutor = ThreadPoolExecutor(max_workers=get_background_task_max_workers())\n\n\ndef register_task_processor(task_name, processor_func, max_concurrent_tasks=2):\n    if task_name in task_queues:\n        raise TaskAlreadyRegisteredError(task_name=task_name)\n\n    task_queue = Queue()\n    task_queues[task_name] = task_queue\n    task_processors[task_name] = processor_func\n    task_semaphores[task_name] = threading.Semaphore(max_concurrent_tasks)\n\n    logging.info(\n        f\"Registered thread pool task processor '{task_name}' with max_concurrent_tasks={max_concurrent_tasks}\"\n    )\n\n\ndef process_task(task_name, task_data, task_result_queue):\n    semaphore = task_semaphores.get(task_name)\n    if semaphore is not None:\n        with semaphore:\n            if task_name in task_processors:\n                processor_func = task_processors[task_name]\n                result = processor_func(task_data)\n                task_result_queue.put(result)\n            else:\n                raise ValueError(f\"Nao task processor registered for {task_name}\")\n    else:\n        raise ValueError(f\"No semaphore registered for {task_name}\")\n\n\ndef add_task(task_name, task_data, result_queue):\n    if task_name in task_queues:\n        return executor.submit(process_task, task_name, task_data, result_queue)\n    else:\n        raise ValueError(f\"No task processor registered for {task_name}\")\n"
  },
  {
    "path": "packages/backend/app/utils/node_extension_utils.py",
    "content": "import importlib\nimport logging\nimport os\nimport pkgutil\nfrom cachetools import TTLCache, cached\n\nfrom ..processors.components.extension.extension_processor import (\n    DynamicExtensionProcessor,\n    ExtensionProcessor,\n)\n\nvery_long_ttl_cache = 120000\n\nraw_blacklist = os.getenv(\"EXTENSIONS_BLACKLIST\", \"\").strip()\nEXTENSIONS_BLACKLIST = raw_blacklist.split(\",\") if raw_blacklist else []\n\nraw_whitelist = os.getenv(\"EXTENSIONS_WHITELIST\", \"\").strip()\nEXTENSIONS_WHITELIST = raw_whitelist.split(\",\") if raw_whitelist else []\n\n\ndef _load_dynamic_extension(processor_type, data):\n    package = importlib.import_module(\"app.processors.components.extension\")\n    prefix = package.__name__ + \".\"\n\n    for importer, module_name, is_pkg in pkgutil.iter_modules(package.__path__, prefix):\n        module = importlib.import_module(module_name)\n\n        for attribute_name in dir(module):\n            attribute = getattr(module, attribute_name)\n            if isinstance(attribute, type) and issubclass(\n                attribute, DynamicExtensionProcessor\n            ):\n                if hasattr(attribute, \"get_dynamic_node_config\") and hasattr(\n                    attribute, \"processor_type\"\n                ):\n                    if attribute.processor_type == processor_type:\n                        schema = attribute.get_dynamic_node_config(attribute, data)\n                        return schema\n    return None\n\n\ndef _load_all_extension_schemas():\n    schemas = []\n    package = importlib.import_module(\"app.processors.components.extension\")\n    prefix = package.__name__ + \".\"\n\n    for importer, module_name, is_pkg in pkgutil.iter_modules(package.__path__, prefix):\n        module = importlib.import_module(module_name)\n\n        for attribute_name in dir(module):\n            try:\n                attribute = getattr(module, attribute_name)\n                if isinstance(attribute, type) and issubclass(\n                    attribute, ExtensionProcessor\n                ):\n                    if hasattr(attribute, \"get_node_config\"):\n                        schema = attribute.get_node_config(attribute)\n                        if schema is not None:\n                            schemas.append(schema)\n            except Exception as e:\n                logging.warning(\n                    f\"Error loading extension {module_name}.{attribute_name}\"\n                )\n                continue\n    return schemas\n\n\ndef filter_extensions(extensions):\n    if len(EXTENSIONS_WHITELIST) > 0:\n        extensions = [e for e in extensions if e.processorType in EXTENSIONS_WHITELIST]\n    if len(EXTENSIONS_BLACKLIST) > 0:\n        extensions = [\n            e for e in extensions if e.processorType not in EXTENSIONS_BLACKLIST\n        ]\n    return extensions\n\n\ndef get_extensions():\n    schemas = _load_all_extension_schemas()\n    schemas = filter_extensions(schemas)\n    schemas_dict = []\n    for schema in schemas:\n        if schema is not None:\n            schemas_dict.append(schema.dict())\n    return schemas_dict\n\n\ndef get_dynamic_extension_config(processor_type, data):\n    schema = _load_dynamic_extension(processor_type, data)\n    return schema\n"
  },
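Extension discovery can be narrowed with the `EXTENSIONS_WHITELIST` / `EXTENSIONS_BLACKLIST` environment variables, which are read at import time. A minimal sketch follows; the processor type names are purely illustrative, and the variables must be set before `node_extension_utils` is first imported.

```python
import os

# Assumed, illustrative processor type names.
os.environ["EXTENSIONS_BLACKLIST"] = "some-experimental-node,another-node"

from app.utils.node_extension_utils import get_extensions

# Returns node configs of every discovered ExtensionProcessor whose
# processorType is not blacklisted.
schemas = get_extensions()
```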
  {
    "path": "packages/backend/app/utils/openapi_client.py",
    "content": "import logging\nfrom typing import Optional\nimport requests\nimport eventlet\n\n\nclass Client:\n\n    def __init__(\n        self,\n        api_token: Optional[str] = None,\n        base_url: Optional[str] = None,\n        timeout_ms: int = None,\n        **kwargs,\n    ) -> None:\n        super().__init__()\n\n        self._api_token = api_token\n        self._base_url = base_url\n        self._timeout_ms = timeout_ms\n        self._client_kwargs = kwargs\n\n    def post(\n        self,\n        path: str,\n        data: Optional[dict] = None,\n        files: Optional[dict] = {\"none\": (None, \"\")},\n        content_type: str = None,\n        accept: str = \"application/json\",\n        **kwargs,\n    ) -> dict:\n        headers = {\n            \"Authorization\": f\"Bearer {self._api_token}\",\n            \"Accept\": accept,\n        }\n\n        if files:\n            response = requests.post(\n                f\"{self._base_url}{path}\",\n                headers=headers,\n                files=files,\n                data=data,\n                timeout=self._timeout_ms,\n                **self._client_kwargs,\n                **kwargs,\n            )\n        else:\n            logging.info(\"JSON BOI\")\n            response = requests.post(\n                f\"{self._base_url}{path}\",\n                headers=headers,\n                json=data,\n                timeout=self._timeout_ms,\n                **self._client_kwargs,\n                **kwargs,\n            )\n        if response.status_code == 200:\n            if accept == \"application/json\":\n                return response.json()\n            elif accept == \"image/*\":\n                return response.content\n            else:\n                return response.content\n        else:\n            raise Exception(response.text)\n\n    def get(\n        self,\n        path: str,\n        content_type: str = None,\n        accept: str = \"application/json\",\n        **kwargs,\n    ) -> dict:\n        response = requests.get(\n            f\"{self._base_url}{path}\",\n            headers={\n                \"Content-Type\": content_type,\n                \"Authorization\": f\"Bearer {self._api_token}\",\n                \"Accept\": accept,\n            },\n            timeout=self._timeout_ms,\n            **self._client_kwargs,\n            **kwargs,\n        )\n\n        if response.status_code == 200:\n            if accept == \"application/json\":\n                return response.json()\n            elif accept == \"image/*\":\n                return response.content\n            else:\n                return response.content\n        else:\n            print(response.status_code)\n            if response.json()[\"status\"] == \"in-progress\":\n                return response.json()\n            raise Exception(str(response.json()))\n\n    def pooling(\n        self,\n        path: str,\n        content_type: str = None,\n        accept: str = \"application/json\",\n        **kwargs,\n    ) -> dict:\n        pooling_response = self.get(path=path, accept=accept)\n        while (\n            isinstance(pooling_response, dict)\n            and pooling_response.get(\"status\") == \"in-progress\"\n        ):\n            print(\"Pooling...\")\n            eventlet.sleep(0.5)\n            pooling_response = self.get(path=path, accept=accept)\n        return pooling_response\n\n\n# if __name__ == \"__main__\":\n#     api_reader = OpenAPIReader(\"../../resources/openapi/stabilityai.json\")\n#     
print(\"API Key Name:\", api_reader.get_api_key_name())\n#     specific_path = \"/v2beta/stable-image/generate/core\"  # Remplacez par un chemin valide de votre fichier OpenAPI\n#     params = api_reader.get_request_schema_for_path(specific_path, \"POST\")\n#     print(params)\n\n#     requestBody = {\n#         \"prompt\": \"A cute baby sea otter\",\n#         \"aspect_ratio\": \"16:9\",\n#     }\n\n#     serverUrl = api_reader.get_servers()[0]\n\n#     print(serverUrl)\n\n#     client = Client(\n#         api_token=\"sk-rQKpzMtDplCq8NFjnPxKOKkFzjmdEsticRFMUWUk11uPrflL\",\n#         base_url=serverUrl,\n#     )\n\n#     response = client.post(specific_path, requestBody, accept=\"image/*\")\n\n#     print(response)\n"
  },
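A minimal sketch of the generic OpenAPI `Client`, mirroring the commented-out example at the bottom of the module; the base URL, endpoint and payload are illustrative and the API key is a placeholder.

```python
from app.utils.openapi_client import Client

client = Client(
    api_token="<YOUR_API_KEY>",
    base_url="https://api.stability.ai",
)

# POST is sent as multipart/form-data by default (a files dict is always passed);
# accept="image/*" returns the raw response bytes instead of parsed JSON.
image_bytes = client.post(
    "/v2beta/stable-image/generate/core",
    data={"prompt": "A cute baby sea otter", "aspect_ratio": "16:9"},
    accept="image/*",
)
```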
  {
    "path": "packages/backend/app/utils/openapi_converter.py",
    "content": "import json\n\nfrom ..processors.components.model import Option\nfrom ..processors.components.node_config_builder import (\n    FieldBuilder,\n    NodeConfigBuilder,\n    NodeConfigVariantBuilder,\n)\n\n\nclass OpenAPIConverter:\n    def __init__(self):\n        pass\n\n    def convert_enum_to_options(self, enum, defaultValue):\n        options = []\n        for value in enum:\n            options.append(\n                Option(default=(value == defaultValue), value=value, label=value)\n            )\n        return options\n\n    def convert_properties_to_fields(self, schema):\n        fields = []\n        properties = schema.get(\"properties\", {})\n        for key, value in properties.items():\n            field_builder = FieldBuilder()\n            field_builder.set_name(key)\n            field_builder.set_label(key)\n            field_builder.set_description(value.get(\"description\"))\n            field_type = value.get(\"type\")\n\n            if field_type == \"string\":\n                if \"enum\" in value:\n                    field_builder.set_type(\"select\")\n                    field_builder.set_options(\n                        self.convert_enum_to_options(\n                            value[\"enum\"], value.get(\"default\")\n                        )\n                    )\n                else:\n                    field_builder.set_type(\"textfield\")\n                    field_builder.set_has_handle(True)\n            elif field_type == \"number\":\n                if \"minimum\" or \"maximum\" in value:\n                    if value.get(\"maximum\") > 1000:\n                        field_builder.set_type(\"numericfield\")\n                    else:\n                        field_builder.set_type(\"slider\")\n                    field_builder.set_min(value.get(\"minimum\"))\n                    field_builder.set_max(value.get(\"maximum\"))\n                else:\n                    field_builder.set_type(\"numericfield\")\n                    field_builder.set_has_handle(True)\n            else:\n                field_builder.set_type(\"textfield\")\n\n            if \"example\" in value:\n                if \"format\" in value and value[\"format\"] == \"binary\":\n                    field_builder.set_placeholder(\"Resource URL\")\n                    field_builder.set_is_binary(True)\n                else:\n                    field_builder.set_placeholder(value[\"example\"])\n\n            if \"default\" in value:\n                field_builder.set_default_value(value[\"default\"])\n\n            required_fields = schema.get(\"required\", [])\n            if key in required_fields:\n                field_builder.set_required(True)\n\n            fields.append(field_builder.build())\n        return fields\n\n    def convert_schema_to_node_config(self, schema):\n        schema = schema.get(\"schema\")\n        if schema.get(\"oneOf\") and schema.get(\"discriminator\"):\n            builder = NodeConfigVariantBuilder()\n            discriminatorName = schema.get(\"discriminator\").get(\"propertyName\")\n            mapping = schema.get(\"discriminator\", {}).get(\"mapping\")\n            builder.add_discriminator_field(discriminatorName)\n            for key, value in mapping.items():\n                fields = self.convert_properties_to_fields(value)\n                builder.add_sub_configuration(\n                    NodeConfigBuilder()\n                    .add_discriminator(discriminatorName, key)\n                    .set_fields(fields)\n           
         .build()\n                )\n        else:\n            builder = NodeConfigBuilder()\n            # print(schema)\n            fields = self.convert_properties_to_fields(schema)\n            builder.set_fields(fields)\n\n        return builder\n"
  },
  {
    "path": "packages/backend/app/utils/openapi_reader.py",
    "content": "import json\nimport hashlib\nfrom openapi_spec_validator.readers import read_from_filename\n\n\nclass OpenAPIReader:\n    HTTP_METHODS_LIST = [\"get\", \"post\", \"put\", \"delete\", \"patch\", \"options\", \"head\"]\n\n    def __init__(self, file_path):\n        spec_dict, base_uri = read_from_filename(file_path)\n        self._api_data = spec_dict\n        self._paths = self.get_all_paths()\n\n    def get_api_key_name(self):\n        security_schemes = self._api_data.get(\"components\", {}).get(\n            \"securitySchemes\", {}\n        )\n        for key, value in security_schemes.items():\n            if value.get(\"type\") == \"apiKey\":\n                return key\n        return None\n\n    def get_servers(self):\n        servers = self._api_data.get(\"servers\", [])\n        return [server[\"url\"] for server in servers if \"url\" in server]\n\n    def get_all_paths(self):\n        paths = self._api_data.get(\"paths\", {})\n        path_details = {}\n        for path, operations in paths.items():\n            operations_info = {}\n            for operation_type, operation_details in operations.items():\n                if operation_type in OpenAPIReader.HTTP_METHODS_LIST:\n                    operations_info[operation_type.upper()] = {\n                        \"summary\": operation_details.get(\n                            \"summary\", \"No summary provided\"\n                        ),\n                        \"description\": operation_details.get(\n                            \"description\", \"No description provided\"\n                        ),\n                    }\n            path_details[path] = operations_info\n        return path_details\n\n    def get_all_paths_names(self):\n        return list(self._paths.keys())\n\n    def get_params_for_path(self, path, method):\n        path_details = self._api_data.get(\"paths\", {}).get(path, {})\n        params = {}\n        for path_method, details in path_details.items():\n            if method.upper() == path_method.upper():\n                params[method.upper()] = details.get(\"parameters\", [])\n        return json.dumps(params, indent=4)\n\n    def get_path_accept(self, path, method):\n        path_details = self._api_data.get(\"paths\", {}).get(path, {})\n        parameters = path_details[method].get(\"parameters\", {})\n        for param in parameters:\n            if param[\"name\"] == \"accept\":\n                return param[\"schema\"][\"default\"]\n        return None\n\n    def get_response_content_type(self, path, method):\n        path_details = self._api_data.get(\"paths\", {}).get(path, {})\n        responses = path_details[method].get(\"responses\", {})\n        response_200 = responses.get(\"200\", {})\n        content_types = response_200.get(\"content\", {})\n        return list(content_types.keys())\n\n    def get_request_schema_for_path(self, path, method, content_type=None):\n        path_details = self._api_data.get(\"paths\", {}).get(path, {})\n        if method.lower() in path_details:\n            request_body = path_details[method.lower()].get(\"requestBody\", {})\n            content_types = request_body.get(\"content\", {})\n            resolved_request_body = {}\n            for request_content_type, content_details in content_types.items():\n                schema = content_details.get(\"schema\", {})\n                resolved_schema = self.resolve_schema(\n                    schema\n                )  # Recursively resolve schema including nested $ref\n                
resolved_request_body[request_content_type] = {\n                    \"schema\": resolved_schema\n                }\n\n            if content_type and content_type in resolved_request_body:\n                return resolved_request_body[content_type]\n            elif resolved_request_body:\n                first_content_type = next(iter(resolved_request_body))\n                return resolved_request_body[first_content_type]\n        return \"{}\"\n\n    def merge_schemas(base_schema, additions):\n        if \"required\" in additions:\n            if \"required\" in base_schema:\n                base_schema[\"required\"].extend(additions[\"required\"])\n            else:\n                base_schema[\"required\"] = additions[\"required\"]\n        return base_schema\n\n    def resolve_schema(self, schema):\n        \"\"\"Recursively resolve schema references including nested references.\"\"\"\n        if \"$ref\" in schema:\n            return self.resolve_ref(schema[\"$ref\"])\n        elif \"schema\" in schema:\n            return self.resolve_schema(schema[\"schema\"])\n        elif \"allOf\" in schema:\n            allOf = schema[\"allOf\"]\n            resolved_parts = []\n            for item in allOf:\n                resolved_parts.append(self.resolve_schema(item))\n\n            resolved_path = resolved_parts[0]\n            for part in resolved_parts[1:]:\n                resolved_path = OpenAPIReader.merge_schemas(resolved_path, part)\n            return resolved_path\n        elif \"oneOf\" in schema:\n            oneOf = schema[\"oneOf\"]\n            resolved_parts = []\n            for item in oneOf:\n                resolved_parts.append(self.resolve_schema(item))\n\n            schema[\"oneOf\"] = resolved_parts\n\n            if \"discriminator\" in schema:\n                discriminator = schema[\"discriminator\"]\n                discriminator_mapping = discriminator[\"mapping\"]\n                discriminator_resolved = {}\n                for key, value in discriminator_mapping.items():\n                    if type(value) != str:\n                        discriminator_resolved[key] = value\n                    else:\n                        discriminator_resolved[key] = self.resolve_ref(value)\n\n                discriminator[\"mapping\"] = discriminator_resolved\n\n            return schema\n        elif schema.get(\"type\") == \"object\" and \"properties\" in schema:\n            # If the schema is of type object and has properties, recursively resolve each property\n            resolved_properties = {}\n            for prop, prop_schema in schema[\"properties\"].items():\n                resolved_properties[prop] = self.resolve_schema(prop_schema)\n            # Return the schema with resolved properties, preserving other aspects of the schema such as 'required'\n            return {\n                \"type\": \"object\",\n                \"properties\": resolved_properties,\n                \"required\": schema.get(\"required\", []),\n            }\n        elif schema.get(\"type\") == \"array\" and \"items\" in schema:\n            resolved_items = self.resolve_schema(schema[\"items\"])\n            return {\"type\": \"array\", \"items\": resolved_items}\n\n        return schema\n\n    def resolve_ref(self, ref):\n        ref_parts = ref.split(\"/\")\n        if ref_parts[0] == \"#\":  # Assumes the first part is '#'\n            component_part = self._api_data\n            for part in ref_parts[1:]:\n                component_part = component_part.get(part, 
{})\n            return component_part\n        return {}\n\n    def get_response_schema_for_path(self, path, method, content_type=None):\n        path_details = self._api_data.get(\"paths\", {}).get(path, {})\n        if method.lower() in path_details:\n            responses = path_details[method.lower()].get(\"responses\", {})\n            response_200 = responses.get(\"200\", {})\n            if \"content\" in response_200:\n                content = response_200[\"content\"]\n                if content_type:\n                    # If a specific content type is requested, return that schema\n                    if content_type in content:\n                        return self.resolve_schema(content[content_type])\n                else:\n                    # Otherwise, return the schema for the first available content type\n                    first_content_type = list(content.keys())[0]\n                    return self.resolve_schema(content[first_content_type])\n        return \"{}\"\n\n\n# Usage de la classe\nif __name__ == \"__main__\":\n\n    def resolve_references(schema, root):\n        if isinstance(schema, dict):\n            if \"$ref\" in schema:\n                ref_path = schema[\"$ref\"]\n                ref_parts = ref_path.strip(\"#/\").split(\"/\")\n                ref_schema = root\n                for part in ref_parts:\n                    ref_schema = ref_schema[part]\n                return resolve_references(ref_schema, root)\n            else:\n                return {k: resolve_references(v, root) for k, v in schema.items()}\n        elif isinstance(schema, list):\n            return [resolve_references(item, root) for item in schema]\n        else:\n            return schema\n\n    spec_dict, base_uri = read_from_filename(\"../../resources/openapi/stabilityai.json\")\n    reader = OpenAPIReader(\"../../resources/openapi/stabilityai.json\")\n    path_img_to_video = reader.get_request_schema_for_path(\n        \"/v2beta/image-to-video\", \"post\"\n    )\n    resolved_path = reader.resolve_schema(path_img_to_video)\n    print(json.dumps(resolved_path, indent=4))\n"
  },
  {
    "path": "packages/backend/app/utils/processor_utils.py",
    "content": "from datetime import datetime\nimport os\nimport tempfile\nfrom urllib.parse import urlparse\nimport requests\n\n\ndef create_empty_tmp_file(prefix=\"tmp\"):\n    temp_dir = tempfile.TemporaryDirectory()\n    timestamp_str = datetime.now().strftime(\"%Y%m%d%H%M%S%f\")\n    temp_file = os.path.join(temp_dir.name, f\"{prefix}-{timestamp_str}\")\n    return temp_file, temp_dir\n\n\ndef create_temp_file_with_str_content(content):\n    temp_file, temp_dir = create_empty_tmp_file()\n    with open(temp_file, \"w\") as f:\n        f.write(content)\n    return temp_file, temp_dir\n\n\ndef create_temp_file_with_bytes_content(content):\n    temp_file, temp_dir = create_empty_tmp_file()\n    with open(temp_file, \"wb\") as f:\n        f.write(content)\n    return temp_file, temp_dir\n\n\ndef get_file_size_from_url(url):\n    response = requests.head(url)\n    if response.status_code != 200:\n        raise ValueError(\n            f\"Failed to reach the URL: returned status code {response.status_code}\"\n        )\n\n    content_length = response.headers.get(\"Content-Length\")\n    return content_length\n\n\ndef get_file_size_from_url_in_mb(url):\n    content_length = get_file_size_from_url(url)\n    if content_length is None:\n        raise ValueError(\"Failed to get file size from URL\")\n    return round(int(content_length) / (1024 * 1024), 2)\n\n\ndef get_max_file_size_in_mb():\n    return int(os.getenv(\"MAX_TMP_FILE_SIZE_MB\", 300))\n\n\ndef is_s3_file(url):\n    bucket_name = os.getenv(\"S3_BUCKET_NAME\")\n    if not bucket_name:\n        return False\n    return url.startswith(f\"s3://{bucket_name}\") or url.startswith(\n        f\"https://{bucket_name}.s3.amazonaws.com\"\n    )\n\n\ndef is_accepted_url_file_size(url):\n    size_mb = get_file_size_from_url_in_mb(url)\n    if size_mb > get_max_file_size_in_mb():\n        return False\n    return True\n\n\ndef is_valid_url(url):\n    result = urlparse(url)\n    if not all([result.scheme, result.netloc]):\n        return False\n    return True\n\n\ndef file_downloable_check(url):\n    if is_s3_file(url):\n        return\n    if not is_accepted_url_file_size(url):\n        raise ValueError(\n            f\"File size is too large. Max file size is {get_max_file_size_in_mb()} MB\"\n        )\n    if not is_valid_url(url):\n        raise ValueError(f\"Invalid URL: {url}\")\n\n\ndef download_file_as_binary(url):\n    try:\n        file_downloable_check(url)\n    except ValueError as e:\n        raise ValueError(f\"Can't download file: {e}\")\n\n    response = requests.get(url)\n    response.raise_for_status()\n\n    return response.content\n\n\ndef stream_download_file_as_binary(url):\n    try:\n        file_downloable_check(url)\n    except ValueError as e:\n        raise ValueError(f\"Can't download file: {e}\")\n\n    with requests.get(url, stream=True) as response:\n        response.raise_for_status()\n        chunks = []\n        for chunk in response.iter_content(chunk_size=8192):\n            chunks.append(chunk)\n        return b\"\".join(chunks)\n"
  },
  {
    "path": "packages/backend/app/utils/replicate_utils.py",
    "content": "import os\nimport requests\nfrom ..env_config import get_replicate_api_key\nfrom cachetools import TTLCache, cached\nimport logging\n\n\nshort_ttl_cache = 600\nlong_ttl_cache = 12000\nvery_long_ttl_cache = 120000\n\nREPLICATE_API_URL = \"https://api.replicate.com\"\nREPLICATE_MODEL_API_URL = f\"{REPLICATE_API_URL}/v1/models\"\nREPLICATE_COLLECTION_API_URL = f\"{REPLICATE_API_URL}/v1/collections\"\n\n\n@cached(TTLCache(maxsize=100, ttl=600))\ndef get_replicate_models(cursor: str = None):\n    api_token = get_replicate_api_key()\n\n    if not api_token:\n        raise Exception(\"Replicate API token not found in environment variables\")\n\n    headers = {\"Authorization\": f\"Token {api_token}\"}\n\n    url = REPLICATE_MODEL_API_URL\n\n    if cursor:\n        url += f\"?cursor={cursor}\"\n\n    response = requests.get(url=url, headers=headers)\n\n    if response.status_code != 200:\n        raise Exception(f\"Failed to fetch models: {response.status_code}\")\n\n    data = response.json()\n\n    for model in data[\"results\"]:\n        if \"latest_version\" in model:\n            del model[\"latest_version\"]\n        if \"default_example\" in model:\n            del model[\"default_example\"]\n\n    return data\n\n\n@cached(TTLCache(maxsize=100, ttl=long_ttl_cache))\ndef get_replicate_collections():\n    api_token = get_replicate_api_key()\n\n    if not api_token:\n        raise Exception(\"Replicate API token not found in environment variables\")\n\n    headers = {\"Authorization\": f\"Token {api_token}\"}\n\n    response = requests.get(REPLICATE_COLLECTION_API_URL, headers=headers)\n\n    if response.status_code != 200:\n        raise Exception(f\"Failed to fetch collections: {response.status_code}\")\n\n    collections = response.json()\n\n    return collections\n\n\n@cached(TTLCache(maxsize=100, ttl=long_ttl_cache))\ndef get_replicate_collection_models(collection_slug: str, cursor=None):\n    api_token = get_replicate_api_key()\n\n    if not api_token:\n        raise Exception(\"Replicate API token not found in environment variables\")\n\n    headers = {\"Authorization\": f\"Token {api_token}\"}\n\n    url = f\"{REPLICATE_COLLECTION_API_URL}/{collection_slug}\"\n\n    if cursor:\n        url += f\"?cursor={cursor}\"\n\n    response = requests.get(url=url, headers=headers)\n\n    if response.status_code != 200:\n        raise Exception(\n            f\"Failed to fetch collections: {collection_slug} - {response.status_code}\"\n        )\n\n    data = response.json()\n\n    for model in data[\"models\"]:\n        if \"latest_version\" in model:\n            del model[\"latest_version\"]\n        if \"default_example\" in model:\n            del model[\"default_example\"]\n\n    return data\n\n\n@cached(TTLCache(maxsize=100, ttl=very_long_ttl_cache))\ndef get_highlighted_models_info():\n    models_str = os.getenv(\"REPLICATE_MODELS_HIGHLIGHTED\", None)\n\n    models = []\n    if models_str:\n        models = models_str.split(\",\")\n        models = [model.strip() for model in models]\n    else:\n        return None\n    models_info = []\n\n    for model in models:\n        try:\n            info = get_model_info(model)\n            models_info.append(info)\n        except Exception as e:\n            logging.error(f\"Failed to fetch model info for {model}: {e}\")\n            continue\n\n    return models_info\n\n\n@cached(TTLCache(maxsize=100, ttl=long_ttl_cache))\ndef get_model_info(model_id: str):\n    api_token = get_replicate_api_key()\n    url = 
f\"{REPLICATE_MODEL_API_URL}/{model_id}\"\n\n    headers = {\"Authorization\": f\"Token {api_token}\"}\n\n    response = requests.get(url, headers=headers)\n\n    if response.status_code != 200:\n        raise Exception(\n            f\"Failed to fetch model info: {model_id} - {response.status_code}\"\n        )\n\n    model = response.json()\n\n    if \"latest_version\" in model:\n        del model[\"latest_version\"]\n    if \"default_example\" in model:\n        del model[\"default_example\"]\n\n    return model\n\n\n@cached(TTLCache(maxsize=100, ttl=long_ttl_cache))\ndef get_model_openapi_schema(model_id: str):\n    api_token = get_replicate_api_key()\n    url = f\"{REPLICATE_MODEL_API_URL}/{model_id}\"\n\n    headers = {\"Authorization\": f\"Token {api_token}\"}\n\n    response = requests.get(url, headers=headers)\n\n    if response.status_code != 200:\n        raise Exception(\n            f\"Failed to fetch model schema: {model_id} - {response.status_code}\"\n        )\n\n    version = response.json().get(\"latest_version\")\n    schema = version.get(\"openapi_schema\", None)\n\n    if not schema:\n        raise Exception(\n            f\"OpenAPI schema not found in the response - model_id : {model_id}\"\n        )\n\n    return {\n        \"schema\": schema,\n        \"modelId\": version[\"id\"],\n    }\n\n\ndef get_input_schema_from_open_API_schema(openapi_schema):\n    input_schema = openapi_schema[\"components\"][\"schemas\"][\"Input\"]\n    return input_schema\n\n\ndef get_output_schema_from_open_API_schema(openapi_schema):\n    output_schema = openapi_schema[\"components\"][\"schemas\"][\"Output\"]\n    return output_schema\n"
  },
  {
    "path": "packages/backend/app/utils/web_scrapping/async_browser_manager.py",
    "content": "import logging\nimport asyncio\nfrom asyncio import Queue\nfrom queue import Empty\nimport tempfile\nimport time\nimport zipfile\n\nfrom ...env_config import get_browser_tab_max_usage, get_browser_tab_pool_size\nfrom playwright.async_api import async_playwright\n\n\nclass AsyncBrowserManager:\n    def __init__(self):\n        self.playwright = None\n        self.browser = None\n        self.pool_size = get_browser_tab_pool_size()\n        self.max_usage = get_browser_tab_max_usage()\n        self.lock = asyncio.Lock()\n        self.tab_pool = Queue(maxsize=self.pool_size)\n        self.tab_usage_count = {}\n\n    async def initialize_browser(self):\n        self.playwright = await async_playwright().start()\n        await self.initialize_pool()\n        logging.info(\"Browser initialized\")\n\n    def unzip_extension(zip_path, extract_to):\n        with zipfile.ZipFile(zip_path, \"r\") as zip_ref:\n            zip_ref.extractall(extract_to)\n\n    async def launch_context(self):\n        user_data_dir = tempfile.mkdtemp()\n        args = []\n        args.append(\"--headless=new\")\n\n        context = await self.playwright.chromium.launch_persistent_context(\n            user_data_dir,\n            headless=False,\n            args=args,\n            viewport={\"width\": 1920, \"height\": 1080},\n        )\n\n        return context\n\n    async def initialize_pool(self):\n        for _ in range(self.pool_size):\n            context = await self.launch_context()\n            page = await context.new_page()\n            await self.tab_pool.put((page, context))\n            self.tab_usage_count[page] = 0\n\n    async def check_extensions_loaded(self, take_extensions_screenshot=False):\n        page, context = await self.get_tab()\n        await page.goto(\"chrome://extensions/\")\n\n        # Wait for the extensions list to load\n        await page.wait_for_selector(\"extensions-manager\")\n\n        # Extract the extensions displayed\n        extensions = await page.evaluate(\n            \"\"\"() => {\n            return new Promise((resolve, reject) => {\n                try {\n                    chrome.management.getAll((extensions) => {\n                        resolve(extensions);\n                    });\n                } catch (error) {\n                    reject(error);\n                }\n            });\n        }\"\"\"\n        )\n\n        extension_names = [ext[\"name\"] for ext in extensions]\n        logging.info(f\"Extensions loaded in Chromium : {extension_names}\")\n        if take_extensions_screenshot:\n            screenshot_path = \"extensions_screenshot.png\"\n            await page.screenshot(path=screenshot_path)\n            logging.info(f\"Screenshot saved to {screenshot_path}\")\n\n    async def close_browser(self):\n        if self.browser:\n            await self.browser.close()\n        if self.playwright:\n            await self.playwright.stop()\n\n    async def _recycle_tab(self, page, context):\n        try:\n            await context.close()  # Close existing context to free memory\n        except Exception as e:\n            logging.error(f\"Error closing context: {e}\")\n        context = await self.launch_context()\n        page = await context.new_page()\n        self.tab_usage_count[page] = 1\n        return page, context\n\n    async def get_tab(self, timeout=10):\n        start_time = time.time()\n        while time.time() - start_time < timeout:\n            try:\n                async with self.lock:\n                    page, 
context = self.tab_pool.get_nowait()\n\n                self.tab_usage_count[page] += 1\n                if self.tab_usage_count[page] > self.max_usage:\n                    # Recycle tab if max usage is reached\n                    page, context = await self._recycle_tab(page, context)\n\n                return page, context\n            except asyncio.QueueEmpty:\n                await asyncio.sleep(0.1)  # Yield control to other tasks\n        raise Exception(\"No available tabs in the pool after waiting\")\n\n    async def release_tab(self, page, context):\n        async with self.lock:\n            try:\n                await context.clear_cookies()\n                await self.tab_pool.put((page, context))\n            except Exception as e:\n                logging.error(f\"Error releasing tab: {e}\")\n                # Recycle the tab if an error occurs during release\n                page, context = await self._recycle_tab(page, context)\n                await self.tab_pool.put((page, context))\n"
  },
  {
    "path": "packages/backend/app/utils/web_scrapping/browser_manager.py",
    "content": "import logging\nimport os\nfrom queue import Empty, Queue\nimport time\nimport threading\nfrom ...env_config import get_browser_tab_max_usage, get_browser_tab_pool_size\nfrom playwright.sync_api import sync_playwright\n\n\nclass BrowserManager:\n    def __init__(self, pool_size=None, max_usage=None):\n        self.playwright = None\n        self.browser = None\n        self.pool_size = get_browser_tab_pool_size() if pool_size is None else pool_size\n        self.max_usage = get_browser_tab_max_usage() if max_usage is None else max_usage\n        self.lock = threading.Lock()\n        self.tab_pool = Queue(maxsize=self.pool_size)\n        self.tab_usage_count = {}\n\n    def initialize_browser(self):\n        self.playwright = sync_playwright().start()\n        self.browser = self.playwright.chromium.launch(headless=True)\n        self.initialize_pool()\n        logging.info(\"Browser initialized\")\n\n    def initialize_pool(self):\n        for _ in range(self.pool_size):\n            context = self.browser.new_context()\n            page = context.new_page()\n            self.tab_pool.put((page, context))\n            self.tab_usage_count[page] = 0\n\n    def close_browser(self):\n        if self.browser:\n            self.browser.close()\n        if self.playwright:\n            self.playwright.stop()\n\n    def get_browser(self):\n        if not self.browser:\n            raise Exception(\"Browser not initialized\")\n        return self.browser\n\n    def _recycle_tab(self, page, context):\n        context.close()\n        context = self.browser.new_context()\n        page = context.new_page()\n        self.tab_usage_count[page] = 1\n        return page, context\n\n    def get_tab(self, timeout=10):\n        start_time = time.time()\n        while time.time() - start_time < timeout:\n            try:\n                with self.lock:\n                    page, context = self.tab_pool.get_nowait()\n\n                self.tab_usage_count[page] += 1\n                if self.tab_usage_count[page] > self.max_usage:\n                    # Recycle tab if max usage is reached\n                    page, context = self._recycle_tab(page, context)\n\n                return page, context\n            except Empty:\n                time.sleep(0.1)  # Yield control to other threads\n        raise Exception(\"No available tabs in the pool after waiting\")\n\n    def release_tab(self, page, context):\n        with self.lock:\n            page.goto(\"about:blank\")  # Clear the page by navigating to a blank page\n            context.clear_cookies()\n\n            self.tab_pool.put((page, context))\n"
  },
  {
    "path": "packages/backend/config.yaml",
    "content": "core:\n  openai_api_key:\n    tag: \"core\"\n    description: \"API key for accessing OpenAI services.\"\n  \n  stabilityai_api_key:\n    tag: \"core\"\n    description: \"API key for accessing Stability AI services.\"\n  \n  replicate_api_key:\n    tag: \"core\"\n    description: \"API key for accessing Replicate services.\"\n\nextension:\n  anthropic_api_key:\n    tag: \"core\"\n    description: \"API key for accessing Anthropic services.\"\n  \n  deepseek_api_key:\n    tag: \"core\"\n    description: \"API key for accessing DeepSeek services.\"\n\n  openrouter_api_key:\n    tag: \"core\"\n    description: \"API key for accessing OpenRouter services.\""
  },
  {
    "path": "packages/backend/hooks/hook-app.processors.py",
    "content": "from PyInstaller.utils.hooks import collect_submodules\n\nhiddenimports = collect_submodules('app.processors')"
  },
  {
    "path": "packages/backend/pyproject.toml",
    "content": "[tool.poetry]\nname = \"ai-flow-back\"\nversion = \"0.11.3\"\ndescription = \"\"\nauthors = [\"DahnM20 <you@example.com>\"]\nreadme = \"README.md\"\npackages = [{ include = \"app\" }]\n\n\n[tool.poetry.dependencies]\npython = \">=3.9,<3.12\"\npython-dotenv = \"^1.0.0\"\nopenai = \"1.76.0\"\nflask = \"^2.2.3\"\nflask-socketio = \"^5.3.3\"\nflask-cors = \"^3.0.10\"\nunstructured = \"^0.6.3\"\nlangchain = \">=0.0.303\"\npython-magic = \"^0.4.14\"\npytesseract = \"^0.3.10\"\nrequests = \"^2.31.0\"\ntabulate = \"^0.9.0\"\npdf2image = \"^1.16.3\"\ncolorlog = \"^6.7.0\"\neventlet = \"^0.33.3\"\nplaywright = \"1.39\"\nyoutube-transcript-api = \"^0.6.1\"\npytube = \"^15.0.0\"\nboto3 = \"^1.28.52\"\npyjwt = \"^2.8.0\"\njwcrypto = \"^1.5.0\"\npython-jose = \"^3.3.0\"\ncachetools = \"^5.3.1\"\nflask-injector = \"^0.15.0\"\nsetuptools = \"^68.2.2\"\npypdf = \"4.2.0\"\nreplicate = \"0.22.0\"\ntiktoken = \"0.7.0\"\nopenapi-spec-validator = \"^0.7.1\"\nanthropic = \"0.49.0\"\npymupdf = \"^1.24.7\"\npydub = \"^0.25.1\"\nmarkdownify = \"^1.1.0\"\nbeautifulsoup4 = \"^4.13.3\"\n\n\n[tool.poetry.group.dev.dependencies]\ndatamodel-code-generator = \"^0.25.5\"\n\n[build-system]\nrequires = [\"poetry-core\"]\nbuild-backend = \"poetry.core.masonry.api\"\n"
  },
  {
    "path": "packages/backend/requirements_windows.txt",
    "content": "python-magic-bin"
  },
  {
    "path": "packages/backend/resources/data/openrouter_models.json",
    "content": "{\"data\":[{\"id\":\"deepseek/deepseek-chat\",\"name\":\"DeepSeek V3\",\"created\":1735241320,\"description\":\"DeepSeek-V3 is the latest model from the DeepSeek team, building upon the instruction following and coding abilities of the previous versions. Pre-trained on nearly 15 trillion tokens, the reported evaluations reveal that the model outperforms other open-source models and rivals leading closed-source models. For model details, please visit [the DeepSeek-V3 repo](https://github.com/deepseek-ai/DeepSeek-V3) for more information.\",\"context_length\":64000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Other\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00000014\",\"completion\":\"0.00000028\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":64000,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"qwen/qvq-72b-preview\",\"name\":\"Qwen: QvQ 72B Preview\",\"created\":1735088567,\"description\":\"QVQ-72B-Preview is an experimental research model developed by the [Qwen](/qwen) team, focusing on enhancing visual reasoning capabilities.\\n\\n## Performance\\n\\n|                | **QVQ-72B-Preview** | o1-2024-12-17 | gpt-4o-2024-05-13 | Claude3.5 Sonnet-20241022 | Qwen2VL-72B |\\n|----------------|-----------------|---------------|-------------------|----------------------------|-------------|\\n| MMMU(val)      | 70.3            | 77.3          | 69.1              | 70.4                       | 64.5        |\\n| MathVista(mini) | 71.4            | 71.0          | 63.8              | 65.3                       | 70.5        |\\n| MathVision(full)   | 35.9            | –             | 30.4              | 35.6                       | 25.9        |\\n| OlympiadBench  | 20.4            | –             | 25.9              | –                          | 11.2        |\\n\\n\\n## Limitations\\n\\n1. **Language Mixing and Code-Switching:** The model might occasionally mix different languages or unexpectedly switch between them, potentially affecting the clarity of its responses.\\n2. **Recursive Reasoning Loops:**  There's a risk of the model getting caught in recursive reasoning loops, leading to lengthy responses that may not even arrive at a final answer.\\n3. **Safety and Ethical Considerations:** Robust safety measures are needed to ensure reliable and safe performance. Users should exercise caution when deploying this model.\\n4. **Performance and Benchmark Limitations:** Despite the improvements in visual reasoning, QVQ doesn’t entirely replace the capabilities of [Qwen2-VL-72B](/qwen/qwen-2-vl-72b-instruct). During multi-step visual reasoning, the model might gradually lose focus on the image content, leading to hallucinations. Moreover, QVQ doesn’t show significant improvement over [Qwen2-VL-72B](/qwen/qwen-2-vl-72b-instruct) in basic recognition tasks like identifying people, animals, or plants.\\n\\nNote: Currently, the model only supports single-round dialogues and image outputs. 
It does not support video inputs.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Qwen\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00000025\",\"completion\":\"0.0000005\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/gemini-2.0-flash-thinking-exp:free\",\"name\":\"Google: Gemini 2.0 Flash Thinking Experimental (free)\",\"created\":1734650026,\"description\":\"Gemini 2.0 Flash Thinking Mode is an experimental model that's trained to generate the \\\"thinking process\\\" the model goes through as part of its response. As a result, Thinking Mode is capable of stronger reasoning capabilities in its responses than the [base Gemini 2.0 Flash model](/google/gemini-2.0-flash-exp).\",\"context_length\":40000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":40000,\"max_completion_tokens\":8000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"sao10k/l3.3-euryale-70b\",\"name\":\"Sao10K: Llama 3.3 Euryale 70B\",\"created\":1734535928,\"description\":\"Euryale L3.3 70B is a model focused on creative roleplay from [Sao10k](https://ko-fi.com/sao10k). It is the successor of [Euryale L3 70B v2.2](/models/sao10k/l3-euryale-70b).\",\"context_length\":16384,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.0000015\",\"completion\":\"0.0000015\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16384,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"inflatebot/mn-mag-mell-r1\",\"name\":\"Inflatebot: Mag Mell R1 12B\",\"created\":1734535439,\"description\":\"Mag Mell is a merge of pre-trained language models created using mergekit, based on [Mistral Nemo](/mistralai/mistral-nemo). It is a great roleplay and storytelling model which combines the best parts of many other models to be a general purpose solution for many usecases.\\n\\nIntended to be a general purpose \\\"Best of Nemo\\\" model for any fictional, creative use case. \\n\\nMag Mell is composed of 3 intermediate parts:\\n- Hero (RP, trope coverage)\\n- Monk (Intelligence, groundedness)\\n- Deity (Prose, flair)\",\"context_length\":16000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.0000009\",\"completion\":\"0.0000009\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openai/o1\",\"name\":\"OpenAI: o1\",\"created\":1734459999,\"description\":\"The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding. The o1 model series is trained with large-scale reinforcement learning to reason using chain of thought. \\n\\nThe o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. 
Learn more in the [launch announcement](https://openai.com/o1).\\n\",\"context_length\":200000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000015\",\"completion\":\"0.00006\",\"image\":\"0.021675\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":100000,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"eva-unit-01/eva-llama-3.33-70b\",\"name\":\"EVA Llama 3.33 70b\",\"created\":1734377303,\"description\":\"EVA Llama 3.33 70b is a roleplay and storywriting specialist model. It is a full-parameter finetune of [Llama-3.3-70B-Instruct](https://openrouter.ai/meta-llama/llama-3.3-70b-instruct) on mixture of synthetic and natural data.\\n\\nIt uses Celeste 70B 0.1 data mixture, greatly expanding it to improve versatility, creativity and \\\"flavor\\\" of the resulting model\\n\\nThis model was built with Llama by Meta.\\n\",\"context_length\":16384,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.000004\",\"completion\":\"0.000006\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16384,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"x-ai/grok-2-vision-1212\",\"name\":\"xAI: Grok 2 Vision 1212\",\"created\":1734237338,\"description\":\"Grok 2 Vision 1212 advances image-based AI with stronger visual comprehension, refined instruction-following, and multilingual support. From object recognition to style analysis, it empowers developers to build more intuitive, visually aware applications. Its enhanced steerability and reasoning establish a robust foundation for next-generation image solutions.\\n\\nTo read more about this model, check out [xAI's announcement](https://x.ai/blog/grok-1212).\",\"context_length\":32768,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Grok\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000002\",\"completion\":\"0.00001\",\"image\":\"0.0036\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"x-ai/grok-2-1212\",\"name\":\"xAI: Grok 2 1212\",\"created\":1734232814,\"description\":\"Grok 2 1212 introduces significant enhancements to accuracy, instruction adherence, and multilingual support, making it a powerful and flexible choice for developers seeking a highly steerable, intelligent model.\",\"context_length\":131072,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Grok\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000002\",\"completion\":\"0.00001\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":131072,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"cohere/command-r7b-12-2024\",\"name\":\"Cohere: Command R7B (12-2024)\",\"created\":1734158152,\"description\":\"Command R7B (12-2024) is a small, fast update of the Command R+ model, delivered in December 2024. 
It excels at RAG, tool use, agents, and similar tasks requiring complex reasoning and multiple steps.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Cohere\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000000375\",\"completion\":\"0.00000015\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":4000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/gemini-2.0-flash-exp:free\",\"name\":\"Google: Gemini Flash 2.0 Experimental (free)\",\"created\":1733937523,\"description\":\"Gemini Flash 2.0 offers a significantly faster time to first token (TTFT) compared to [Gemini Flash 1.5](google/gemini-flash-1.5), while maintaining quality on par with larger models like [Gemini Pro 1.5](google/gemini-pro-1.5). It introduces notable enhancements in multimodal understanding, coding capabilities, complex instruction following, and function calling. These advancements come together to deliver more seamless and robust agentic experiences.\",\"context_length\":1048576,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":1048576,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/gemini-exp-1206:free\",\"name\":\"Google: Gemini Experimental 1206 (free)\",\"created\":1733507713,\"description\":\"Experimental release (December 6, 2024) of Gemini.\",\"context_length\":2097152,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":2097152,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.3-70b-instruct\",\"name\":\"Meta: Llama 3.3 70B Instruct\",\"created\":1733506137,\"description\":\"The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction tuned generative model in 70B (text in/text out). The Llama 3.3 instruction tuned text only model is optimized for multilingual dialogue use cases and outperforms many of the available open source and closed chat models on common industry benchmarks.\\n\\nSupported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.\\n\\n[Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/MODEL_CARD.md)\",\"context_length\":131072,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.00000012\",\"completion\":\"0.0000003\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":131072,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"amazon/nova-lite-v1\",\"name\":\"Amazon: Nova Lite 1.0\",\"created\":1733437363,\"description\":\"Amazon Nova Lite 1.0 is a very low-cost multimodal model from Amazon that focused on fast processing of image, video, and text inputs to generate text output. 
Amazon Nova Lite can handle real-time customer interactions, document analysis, and visual question-answering tasks with high accuracy.\\n\\nWith an input context of 300K tokens, it can analyze multiple images or up to 30 minutes of video in a single input.\",\"context_length\":300000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Nova\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00000006\",\"completion\":\"0.00000024\",\"image\":\"0.00009\",\"request\":\"0\"},\"top_provider\":{\"context_length\":300000,\"max_completion_tokens\":5120,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"amazon/nova-micro-v1\",\"name\":\"Amazon: Nova Micro 1.0\",\"created\":1733437237,\"description\":\"Amazon Nova Micro 1.0 is a text-only model that delivers the lowest latency responses in the Amazon Nova family of models at a very low cost. With a context length of 128K tokens and optimized for speed and cost, Amazon Nova Micro excels at tasks such as text summarization, translation, content classification, interactive chat, and brainstorming. It has  simple mathematical reasoning and coding abilities.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Nova\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000000035\",\"completion\":\"0.00000014\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":5120,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"amazon/nova-pro-v1\",\"name\":\"Amazon: Nova Pro 1.0\",\"created\":1733436303,\"description\":\"Amazon Nova Pro 1.0 is a capable multimodal model from Amazon focused on providing a combination of accuracy, speed, and cost for a wide range of tasks. As of December 2024, it achieves state-of-the-art performance on key benchmarks including visual question answering (TextVQA) and video understanding (VATEX).\\n\\nAmazon Nova Pro demonstrates strong capabilities in processing both visual and textual information and at analyzing financial documents.\\n\\n**NOTE**: Video input is not supported at this time.\",\"context_length\":300000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Nova\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000008\",\"completion\":\"0.0000032\",\"image\":\"0.0012\",\"request\":\"0\"},\"top_provider\":{\"context_length\":300000,\"max_completion_tokens\":5120,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"qwen/qwq-32b-preview\",\"name\":\"Qwen: QwQ 32B Preview\",\"created\":1732754541,\"description\":\"QwQ-32B-Preview is an experimental research model focused on AI reasoning capabilities developed by the Qwen Team. As a preview release, it demonstrates promising analytical abilities while having several important limitations:\\n\\n1. **Language Mixing and Code-Switching**: The model may mix languages or switch between them unexpectedly, affecting response clarity.\\n2. **Recursive Reasoning Loops**: The model may enter circular reasoning patterns, leading to lengthy responses without a conclusive answer.\\n3. **Safety and Ethical Considerations**: The model requires enhanced safety measures to ensure reliable and secure performance, and users should exercise caution when deploying it.\\n4. 
**Performance and Benchmark Limitations**: The model excels in math and coding but has room for improvement in other areas, such as common sense reasoning and nuanced language understanding.\\n\\n\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Qwen\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00000012\",\"completion\":\"0.00000018\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/gemini-exp-1121:free\",\"name\":\"Google: Gemini Experimental 1121 (free)\",\"created\":1732216725,\"description\":\"Experimental release (November 21st, 2024) of Gemini.\",\"context_length\":40960,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":40960,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/learnlm-1.5-pro-experimental:free\",\"name\":\"Google: LearnLM 1.5 Pro Experimental (free)\",\"created\":1732216551,\"description\":\"An experimental version of [Gemini 1.5 Pro](/google/gemini-pro-1.5) from Google.\",\"context_length\":40960,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":40960,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"eva-unit-01/eva-qwen-2.5-72b\",\"name\":\"EVA Qwen2.5 72B\",\"created\":1732210606,\"description\":\"EVA Qwen2.5 72B is a roleplay and storywriting specialist model. It's a full-parameter finetune of Qwen2.5-72B on mixture of synthetic and natural data.\\n\\nIt uses Celeste 70B 0.1 data mixture, greatly expanding it to improve versatility, creativity and \\\"flavor\\\" of the resulting model.\",\"context_length\":16384,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Qwen\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.000004\",\"completion\":\"0.000006\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16384,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openai/gpt-4o-2024-11-20\",\"name\":\"OpenAI: GPT-4o (2024-11-20)\",\"created\":1732127594,\"description\":\"The 2024-11-20 version of GPT-4o offers a leveled-up creative writing ability with more natural, engaging, and tailored writing to improve relevance & readability. It’s also better at working with uploaded files, providing deeper insights & more thorough responses.\\n\\nGPT-4o (\\\"o\\\" for \\\"omni\\\") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while being twice as fast and 50% more cost-effective. 
GPT-4o also offers improved performance in processing non-English languages and enhanced visual capabilities.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000025\",\"completion\":\"0.00001\",\"image\":\"0.003613\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":16384,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"mistralai/mistral-large-2411\",\"name\":\"Mistral Large 2411\",\"created\":1731978685,\"description\":\"Mistral Large 2 2411 is an update of [Mistral Large 2](/mistralai/mistral-large) released together with [Pixtral Large 2411](/mistralai/pixtral-large-2411)\\n\\nIt provides a significant upgrade on the previous [Mistral Large 24.07](/mistralai/mistral-large-2407), with notable improvements in long context understanding, a new system prompt, and more accurate function calling.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000002\",\"completion\":\"0.000006\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/mistral-large-2407\",\"name\":\"Mistral Large 2407\",\"created\":1731978415,\"description\":\"This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2407). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch announcement [here](https://mistral.ai/news/mistral-large-2407/).\\n\\nIt supports dozens of languages including French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean, along with 80+ coding languages including Python, Java, C, C++, JavaScript, and Bash. Its long context window allows precise information recall from large documents.\\n\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000002\",\"completion\":\"0.000006\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/pixtral-large-2411\",\"name\":\"Mistral: Pixtral Large 2411\",\"created\":1731977388,\"description\":\"Pixtral Large is a 124B parameter, open-weight, multimodal model built on top of [Mistral Large 2](/mistralai/mistral-large-2411). 
The model is able to understand documents, charts and natural images.\\n\\nThe model is available under the Mistral Research License (MRL) for research and educational use, and the Mistral Commercial License for experimentation, testing, and production for commercial purposes.\\n\\n\",\"context_length\":128000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000002\",\"completion\":\"0.000006\",\"image\":\"0.002888\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"x-ai/grok-vision-beta\",\"name\":\"xAI: Grok Vision Beta\",\"created\":1731976624,\"description\":\"Grok Vision Beta is xAI's experimental language model with vision capability.\\n\\n\",\"context_length\":8192,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Grok\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000005\",\"completion\":\"0.000015\",\"image\":\"0.009\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/gemini-exp-1114:free\",\"name\":\"Google: Gemini Experimental 1114 (free)\",\"created\":1731714740,\"description\":\"Gemini 11-14 (2024) experimental model features \\\"quality\\\" improvements.\",\"context_length\":40960,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":40960,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"infermatic/mn-inferor-12b\",\"name\":\"Infermatic: Mistral Nemo Inferor 12B\",\"created\":1731464428,\"description\":\"Inferor 12B is a merge of top roleplay models, expert on immersive narratives and storytelling.\\n\\nThis model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [anthracite-org/magnum-v4-12b](https://openrouter.ai/anthracite-org/magnum-v4-72b) as a base.\\n\",\"context_length\":32000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"mistral\"},\"pricing\":{\"prompt\":\"0.00000025\",\"completion\":\"0.0000005\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"qwen/qwen-2.5-coder-32b-instruct\",\"name\":\"Qwen2.5 Coder 32B Instruct\",\"created\":1731368400,\"description\":\"Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:\\n\\n- Significantly improvements in **code generation**, **code reasoning** and **code fixing**. \\n- A more comprehensive foundation for real-world applications such as **Code Agents**. 
Not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies.\\n\\nTo read more about its evaluation results, check out [Qwen 2.5 Coder's blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).\",\"context_length\":33000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Qwen\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.00000007\",\"completion\":\"0.00000016\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":33000,\"max_completion_tokens\":3000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"raifle/sorcererlm-8x22b\",\"name\":\"SorcererLM 8x22B\",\"created\":1731105083,\"description\":\"SorcererLM is an advanced RP and storytelling model, built as a Low-rank 16-bit LoRA fine-tuned on [WizardLM-2 8x22B](/microsoft/wizardlm-2-8x22b).\\n\\n- Advanced reasoning and emotional intelligence for engaging and immersive interactions\\n- Vivid writing capabilities enriched with spatial and contextual awareness\\n- Enhanced narrative depth, promoting creative and dynamic storytelling\",\"context_length\":16000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"vicuna\"},\"pricing\":{\"prompt\":\"0.0000045\",\"completion\":\"0.0000045\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"eva-unit-01/eva-qwen-2.5-32b\",\"name\":\"EVA Qwen2.5 32B\",\"created\":1731104847,\"description\":\"EVA Qwen2.5 32B is a roleplaying/storywriting specialist model. It's a full-parameter finetune of Qwen2.5-32B on mixture of synthetic and natural data.\\n\\nIt uses Celeste 70B 0.1 data mixture, greatly expanding it to improve versatility, creativity and \\\"flavor\\\" of the resulting model.\",\"context_length\":16384,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Qwen\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.0000026\",\"completion\":\"0.0000034\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16384,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"thedrummer/unslopnemo-12b\",\"name\":\"Unslopnemo 12b\",\"created\":1731103448,\"description\":\"UnslopNemo v4.1 is the latest addition from the creator of Rocinante, designed for adventure writing and role-play scenarios.\",\"context_length\":32000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"mistral\"},\"pricing\":{\"prompt\":\"0.0000005\",\"completion\":\"0.0000005\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthropic/claude-3.5-haiku-20241022:beta\",\"name\":\"Anthropic: Claude 3.5 Haiku (2024-10-22) (self-moderated)\",\"created\":1730678400,\"description\":\"Claude 3.5 Haiku features enhancements across all skill sets including coding, tool use, and reasoning. As the fastest model in the Anthropic lineup, it offers rapid response times suitable for applications that require high interactivity and low latency, such as user-facing chatbots and on-the-fly code completions. 
It also excels in specialized tasks like data extraction and real-time content moderation, making it a versatile tool for a broad range of industries.\\n\\nIt does not support image inputs.\\n\\nSee the launch announcement and benchmark results [here](https://www.anthropic.com/news/3-5-models-and-computer-use)\",\"context_length\":200000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000008\",\"completion\":\"0.000004\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthropic/claude-3.5-haiku-20241022\",\"name\":\"Anthropic: Claude 3.5 Haiku (2024-10-22)\",\"created\":1730678400,\"description\":\"Claude 3.5 Haiku features enhancements across all skill sets including coding, tool use, and reasoning. As the fastest model in the Anthropic lineup, it offers rapid response times suitable for applications that require high interactivity and low latency, such as user-facing chatbots and on-the-fly code completions. It also excels in specialized tasks like data extraction and real-time content moderation, making it a versatile tool for a broad range of industries.\\n\\nIt does not support image inputs.\\n\\nSee the launch announcement and benchmark results [here](https://www.anthropic.com/news/3-5-models-and-computer-use)\",\"context_length\":200000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000008\",\"completion\":\"0.000004\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":8192,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"anthropic/claude-3.5-haiku:beta\",\"name\":\"Anthropic: Claude 3.5 Haiku (self-moderated)\",\"created\":1730678400,\"description\":\"Claude 3.5 Haiku features offers enhanced capabilities in speed, coding accuracy, and tool use. Engineered to excel in real-time applications, it delivers quick response times that are essential for dynamic tasks such as chat interactions and immediate coding suggestions.\\n\\nThis makes it highly suitable for environments that demand both speed and precision, such as software development, customer service bots, and data management systems.\\n\\nThis model is currently pointing to [Claude 3.5 Haiku (2024-10-22)](/anthropic/claude-3-5-haiku-20241022).\",\"context_length\":200000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000008\",\"completion\":\"0.000004\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthropic/claude-3.5-haiku\",\"name\":\"Anthropic: Claude 3.5 Haiku\",\"created\":1730678400,\"description\":\"Claude 3.5 Haiku features offers enhanced capabilities in speed, coding accuracy, and tool use. 
Engineered to excel in real-time applications, it delivers quick response times that are essential for dynamic tasks such as chat interactions and immediate coding suggestions.\\n\\nThis makes it highly suitable for environments that demand both speed and precision, such as software development, customer service bots, and data management systems.\\n\\nThis model is currently pointing to [Claude 3.5 Haiku (2024-10-22)](/anthropic/claude-3-5-haiku-20241022).\",\"context_length\":200000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000008\",\"completion\":\"0.000004\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":8192,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"neversleep/llama-3.1-lumimaid-70b\",\"name\":\"NeverSleep: Lumimaid v0.2 70B\",\"created\":1729555200,\"description\":\"Lumimaid v0.2 70B is a finetune of [Llama 3.1 70B](/meta-llama/llama-3.1-70b-instruct) with a \\\"HUGE step up dataset wise\\\" compared to Lumimaid v0.1. Sloppy chats output were purged.\\n\\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":16384,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.000003375\",\"completion\":\"0.0000045\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16384,\"max_completion_tokens\":2048,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthracite-org/magnum-v4-72b\",\"name\":\"Magnum v4 72B\",\"created\":1729555200,\"description\":\"This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet(https://openrouter.ai/anthropic/claude-3.5-sonnet) and Opus(https://openrouter.ai/anthropic/claude-3-opus).\\n\\nThe model is fine-tuned on top of [Qwen2.5 72B](https://openrouter.ai/qwen/qwen-2.5-72b-instruct).\",\"context_length\":16384,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Qwen\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.000001875\",\"completion\":\"0.00000225\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16384,\"max_completion_tokens\":1024,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthropic/claude-3.5-sonnet:beta\",\"name\":\"Anthropic: Claude 3.5 Sonnet (self-moderated)\",\"created\":1729555200,\"description\":\"New Claude 3.5 Sonnet delivers better-than-Opus capabilities, faster-than-Sonnet speeds, at the same Sonnet prices. Sonnet is particularly good at:\\n\\n- Coding: Scores ~49% on SWE-Bench Verified, higher than the last best score, and without any fancy prompt scaffolding\\n- Data science: Augments human data science expertise; navigates unstructured data while using multiple tools for insights\\n- Visual processing: excelling at interpreting charts, graphs, and images, accurately transcribing text to derive insights beyond just the text alone\\n- Agentic tasks: exceptional tool use, making it great at agentic tasks (i.e. 
complex, multi-step problem solving tasks that require engaging with other systems)\\n\\n#multimodal\",\"context_length\":200000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000003\",\"completion\":\"0.000015\",\"image\":\"0.0048\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthropic/claude-3.5-sonnet\",\"name\":\"Anthropic: Claude 3.5 Sonnet\",\"created\":1729555200,\"description\":\"New Claude 3.5 Sonnet delivers better-than-Opus capabilities, faster-than-Sonnet speeds, at the same Sonnet prices. Sonnet is particularly good at:\\n\\n- Coding: Scores ~49% on SWE-Bench Verified, higher than the last best score, and without any fancy prompt scaffolding\\n- Data science: Augments human data science expertise; navigates unstructured data while using multiple tools for insights\\n- Visual processing: excelling at interpreting charts, graphs, and images, accurately transcribing text to derive insights beyond just the text alone\\n- Agentic tasks: exceptional tool use, making it great at agentic tasks (i.e. complex, multi-step problem solving tasks that require engaging with other systems)\\n\\n#multimodal\",\"context_length\":200000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000003\",\"completion\":\"0.000015\",\"image\":\"0.0048\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":8192,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"x-ai/grok-beta\",\"name\":\"xAI: Grok Beta\",\"created\":1729382400,\"description\":\"Grok Beta is xAI's experimental language model with state-of-the-art reasoning capabilities, best for complex and multi-step use cases.\\n\\nIt is the successor of [Grok 2](https://x.ai/blog/grok-2) with enhanced context length.\",\"context_length\":131072,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Grok\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000005\",\"completion\":\"0.000015\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":131072,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/ministral-8b\",\"name\":\"Mistral: Ministral 8B\",\"created\":1729123200,\"description\":\"Ministral 8B is an 8B parameter model featuring a unique interleaved sliding-window attention pattern for faster, memory-efficient inference. Designed for edge use cases, it supports up to 128k context length and excels in knowledge and reasoning tasks. It outperforms peers in the sub-10B category, making it perfect for low-latency, privacy-first applications.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000001\",\"completion\":\"0.0000001\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/ministral-3b\",\"name\":\"Mistral: Ministral 3B\",\"created\":1729123200,\"description\":\"Ministral 3B is a 3B parameter model optimized for on-device and edge computing. It excels in knowledge, commonsense reasoning, and function-calling, outperforming larger models like Mistral 7B on most benchmarks. 
Supporting up to 128k context length, it’s ideal for orchestrating agentic workflows and specialist tasks with efficient inference.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00000004\",\"completion\":\"0.00000004\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"qwen/qwen-2.5-7b-instruct\",\"name\":\"Qwen2.5 7B Instruct\",\"created\":1729036800,\"description\":\"Qwen2.5 7B is the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2:\\n\\n- Significantly more knowledge and has greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains.\\n\\n- Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g, tables), and generating structured outputs especially JSON. More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots.\\n\\n- Long-context Support up to 128K tokens and can generate up to 8K tokens.\\n\\n- Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.\\n\\nUsage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Qwen\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.00000027\",\"completion\":\"0.00000027\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"nvidia/llama-3.1-nemotron-70b-instruct\",\"name\":\"NVIDIA: Llama 3.1 Nemotron 70B Instruct\",\"created\":1728950400,\"description\":\"NVIDIA's Llama 3.1 Nemotron 70B is a language model designed for generating precise and useful responses. Leveraging [Llama 3.1 70B](/models/meta-llama/llama-3.1-70b-instruct) architecture and Reinforcement Learning from Human Feedback (RLHF), it excels in automatic alignment benchmarks. This model is tailored for applications requiring high accuracy in helpfulness and response generation, suitable for diverse user queries across multiple domains.\\n\\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).\",\"context_length\":131000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.00000012\",\"completion\":\"0.0000003\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":131000,\"max_completion_tokens\":131000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"inflection/inflection-3-pi\",\"name\":\"Inflection: Inflection 3 Pi\",\"created\":1728604800,\"description\":\"Inflection 3 Pi powers Inflection's [Pi](https://pi.ai) chatbot, including backstory, emotional intelligence, productivity, and safety. It has access to recent news, and excels in scenarios like customer support and roleplay.\\n\\nPi has been trained to mirror your tone and style, if you use more emojis, so will Pi! 
Try experimenting with various prompts and conversation styles.\",\"context_length\":8000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Other\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000025\",\"completion\":\"0.00001\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8000,\"max_completion_tokens\":1024,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"inflection/inflection-3-productivity\",\"name\":\"Inflection: Inflection 3 Productivity\",\"created\":1728604800,\"description\":\"Inflection 3 Productivity is optimized for following instructions. It is better for tasks requiring JSON output or precise adherence to provided guidelines. It has access to recent news.\\n\\nFor emotional intelligence similar to Pi, see [Inflect 3 Pi](/inflection/inflection-3-pi)\\n\\nSee [Inflection's announcement](https://inflection.ai/blog/enterprise) for more details.\",\"context_length\":8000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Other\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000025\",\"completion\":\"0.00001\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/gemini-flash-1.5-8b\",\"name\":\"Google: Gemini Flash 1.5 8B\",\"created\":1727913600,\"description\":\"Gemini Flash 1.5 8B is optimized for speed and efficiency, offering enhanced performance in small prompt tasks like chat, transcription, and translation. With reduced latency, it is highly effective for real-time and large-scale operations. This model focuses on cost-effective solutions while maintaining high-quality results.\\n\\n[Click here to learn more about this model](https://developers.googleblog.com/en/gemini-15-flash-8b-is-now-generally-available-for-use/).\\n\\nUsage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms).\",\"context_length\":1000000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000000375\",\"completion\":\"0.00000015\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":1000000,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthracite-org/magnum-v2-72b\",\"name\":\"Magnum v2 72B\",\"created\":1727654400,\"description\":\"From the maker of [Goliath](https://openrouter.ai/models/alpindale/goliath-120b), Magnum 72B is the seventh in a family of models designed to achieve the prose quality of the Claude 3 models, notably Opus & Sonnet.\\n\\nThe model is based on [Qwen2 72B](https://openrouter.ai/models/qwen/qwen-2-72b-instruct) and trained with 55 million tokens of highly curated roleplay (RP) data.\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Qwen\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.000003\",\"completion\":\"0.000003\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"liquid/lfm-40b\",\"name\":\"Liquid: LFM 40B MoE\",\"created\":1727654400,\"description\":\"Liquid's 40.3B Mixture of Experts (MoE) model. 
Liquid Foundation Models (LFMs) are large neural networks built with computational units rooted in dynamic systems.\\n\\nLFMs are general-purpose AI models that can be used to model any kind of sequential data, including video, audio, text, time series, and signals.\\n\\nSee the [launch announcement](https://www.liquid.ai/liquid-foundation-models) for benchmarks and more info.\",\"context_length\":66000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Other\",\"instruct_type\":\"vicuna\"},\"pricing\":{\"prompt\":\"0.00000015\",\"completion\":\"0.00000015\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":66000,\"max_completion_tokens\":66000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"thedrummer/rocinante-12b\",\"name\":\"Rocinante 12B\",\"created\":1727654400,\"description\":\"Rocinante 12B is designed for engaging storytelling and rich prose.\\n\\nEarly testers have reported:\\n- Expanded vocabulary with unique and expressive word choices\\n- Enhanced creativity for vivid narratives\\n- Adventure-filled and captivating stories\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Qwen\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.00000025\",\"completion\":\"0.0000005\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.2-3b-instruct:free\",\"name\":\"Meta: Llama 3.2 3B Instruct (free)\",\"created\":1727222400,\"description\":\"Llama 3.2 3B is a 3-billion-parameter multilingual large language model, optimized for advanced natural language processing tasks like dialogue generation, reasoning, and summarization. Designed with the latest transformer architecture, it supports eight languages, including English, Spanish, and Hindi, and is adaptable for additional languages.\\n\\nTrained on 9 trillion tokens, the Llama 3.2 3B model excels in instruction-following, complex reasoning, and tool use. Its balanced performance makes it ideal for applications needing accuracy and efficiency in text generation across multilingual settings.\\n\\nClick here for the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD.md).\\n\\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":2048,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.2-3b-instruct\",\"name\":\"Meta: Llama 3.2 3B Instruct\",\"created\":1727222400,\"description\":\"Llama 3.2 3B is a 3-billion-parameter multilingual large language model, optimized for advanced natural language processing tasks like dialogue generation, reasoning, and summarization. Designed with the latest transformer architecture, it supports eight languages, including English, Spanish, and Hindi, and is adaptable for additional languages.\\n\\nTrained on 9 trillion tokens, the Llama 3.2 3B model excels in instruction-following, complex reasoning, and tool use. 
Its balanced performance makes it ideal for applications needing accuracy and efficiency in text generation across multilingual settings.\\n\\nClick here for the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD.md).\\n\\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).\",\"context_length\":131000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.000000015\",\"completion\":\"0.000000025\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":131000,\"max_completion_tokens\":131000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.2-1b-instruct:free\",\"name\":\"Meta: Llama 3.2 1B Instruct (free)\",\"created\":1727222400,\"description\":\"Llama 3.2 1B is a 1-billion-parameter language model focused on efficiently performing natural language tasks, such as summarization, dialogue, and multilingual text analysis. Its smaller size allows it to operate efficiently in low-resource environments while maintaining strong task performance.\\n\\nSupporting eight core languages and fine-tunable for more, Llama 1.3B is ideal for businesses or developers seeking lightweight yet powerful AI solutions that can operate in diverse multilingual settings without the high computational demand of larger models.\\n\\nClick here for the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD.md).\\n\\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":2048,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.2-1b-instruct\",\"name\":\"Meta: Llama 3.2 1B Instruct\",\"created\":1727222400,\"description\":\"Llama 3.2 1B is a 1-billion-parameter language model focused on efficiently performing natural language tasks, such as summarization, dialogue, and multilingual text analysis. 
Its smaller size allows it to operate efficiently in low-resource environments while maintaining strong task performance.\\n\\nSupporting eight core languages and fine-tunable for more, Llama 1.3B is ideal for businesses or developers seeking lightweight yet powerful AI solutions that can operate in diverse multilingual settings without the high computational demand of larger models.\\n\\nClick here for the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD.md).\\n\\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).\",\"context_length\":131072,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.00000001\",\"completion\":\"0.00000001\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":131072,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.2-90b-vision-instruct:free\",\"name\":\"Meta: Llama 3.2 90B Vision Instruct (free)\",\"created\":1727222400,\"description\":\"The Llama 90B Vision model is a top-tier, 90-billion-parameter multimodal model designed for the most challenging visual reasoning and language tasks. It offers unparalleled accuracy in image captioning, visual question answering, and advanced image-text comprehension. Pre-trained on vast multimodal datasets and fine-tuned with human feedback, the Llama 90B Vision is engineered to handle the most demanding image-based AI tasks.\\n\\nThis model is perfect for industries requiring cutting-edge multimodal AI capabilities, particularly those dealing with complex, real-time visual and textual analysis.\\n\\nClick here for the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD_VISION.md).\\n\\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).\",\"context_length\":4096,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":2048,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.2-90b-vision-instruct\",\"name\":\"Meta: Llama 3.2 90B Vision Instruct\",\"created\":1727222400,\"description\":\"The Llama 90B Vision model is a top-tier, 90-billion-parameter multimodal model designed for the most challenging visual reasoning and language tasks. It offers unparalleled accuracy in image captioning, visual question answering, and advanced image-text comprehension. 
Pre-trained on vast multimodal datasets and fine-tuned with human feedback, the Llama 90B Vision is engineered to handle the most demanding image-based AI tasks.\\n\\nThis model is perfect for industries requiring cutting-edge multimodal AI capabilities, particularly those dealing with complex, real-time visual and textual analysis.\\n\\nClick here for the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD_VISION.md).\\n\\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).\",\"context_length\":131072,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.0000009\",\"completion\":\"0.0000009\",\"image\":\"0.001301\",\"request\":\"0\"},\"top_provider\":{\"context_length\":131072,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.2-11b-vision-instruct:free\",\"name\":\"Meta: Llama 3.2 11B Vision Instruct (free)\",\"created\":1727222400,\"description\":\"Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed to handle tasks combining visual and textual data. It excels in tasks such as image captioning and visual question answering, bridging the gap between language generation and visual reasoning. Pre-trained on a massive dataset of image-text pairs, it performs well in complex, high-accuracy image analysis.\\n\\nIts ability to integrate visual understanding with language processing makes it an ideal solution for industries requiring comprehensive visual-linguistic AI applications, such as content creation, AI-driven customer service, and research.\\n\\nClick here for the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD_VISION.md).\\n\\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).\",\"context_length\":8192,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.2-11b-vision-instruct\",\"name\":\"Meta: Llama 3.2 11B Vision Instruct\",\"created\":1727222400,\"description\":\"Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed to handle tasks combining visual and textual data. It excels in tasks such as image captioning and visual question answering, bridging the gap between language generation and visual reasoning. 
Pre-trained on a massive dataset of image-text pairs, it performs well in complex, high-accuracy image analysis.\\n\\nIts ability to integrate visual understanding with language processing makes it an ideal solution for industries requiring comprehensive visual-linguistic AI applications, such as content creation, AI-driven customer service, and research.\\n\\nClick here for the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD_VISION.md).\\n\\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).\",\"context_length\":131072,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.000000055\",\"completion\":\"0.000000055\",\"image\":\"0.00007948\",\"request\":\"0\"},\"top_provider\":{\"context_length\":131072,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"qwen/qwen-2.5-72b-instruct\",\"name\":\"Qwen2.5 72B Instruct\",\"created\":1726704000,\"description\":\"Qwen2.5 72B is the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2:\\n\\n- Significantly more knowledge and has greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains.\\n\\n- Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g, tables), and generating structured outputs especially JSON. More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots.\\n\\n- Long-context Support up to 128K tokens and can generate up to 8K tokens.\\n\\n- Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.\\n\\nUsage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).\",\"context_length\":32000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Qwen\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.00000023\",\"completion\":\"0.0000004\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32000,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"qwen/qwen-2-vl-72b-instruct\",\"name\":\"Qwen2-VL 72B Instruct\",\"created\":1726617600,\"description\":\"Qwen2 VL 72B is a multimodal LLM from the Qwen Team with the following key enhancements:\\n\\n- SoTA understanding of images of various resolution & ratio: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.\\n\\n- Understanding videos of 20min+: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.\\n\\n- Agent that can operate your mobiles, robots, etc.: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.\\n\\n- Multilingual Support: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, 
etc.\\n\\nFor more details, see this [blog post](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub repo](https://github.com/QwenLM/Qwen2-VL).\\n\\nUsage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).\",\"context_length\":4096,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Qwen\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000004\",\"completion\":\"0.0000004\",\"image\":\"0.000578\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"neversleep/llama-3.1-lumimaid-8b\",\"name\":\"NeverSleep: Lumimaid v0.2 8B\",\"created\":1726358400,\"description\":\"Lumimaid v0.2 8B is a finetune of [Llama 3.1 8B](/models/meta-llama/llama-3.1-8b-instruct) with a \\\"HUGE step up dataset wise\\\" compared to Lumimaid v0.1. Sloppy chats output were purged.\\n\\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.0000001875\",\"completion\":\"0.000001125\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":2048,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openai/o1-mini-2024-09-12\",\"name\":\"OpenAI: o1-mini (2024-09-12)\",\"created\":1726099200,\"description\":\"The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding.\\n\\nThe o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the [launch announcement](https://openai.com/o1).\\n\\nNote: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000003\",\"completion\":\"0.000012\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":65536,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"openai/o1-preview\",\"name\":\"OpenAI: o1-preview\",\"created\":1726099200,\"description\":\"The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding.\\n\\nThe o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. 
Learn more in the [launch announcement](https://openai.com/o1).\\n\\nNote: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000015\",\"completion\":\"0.00006\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":32768,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"openai/o1-preview-2024-09-12\",\"name\":\"OpenAI: o1-preview (2024-09-12)\",\"created\":1726099200,\"description\":\"The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding.\\n\\nThe o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the [launch announcement](https://openai.com/o1).\\n\\nNote: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000015\",\"completion\":\"0.00006\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":32768,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"openai/o1-mini\",\"name\":\"OpenAI: o1-mini\",\"created\":1726099200,\"description\":\"The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding.\\n\\nThe o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the [launch announcement](https://openai.com/o1).\\n\\nNote: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000003\",\"completion\":\"0.000012\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":65536,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"mistralai/pixtral-12b\",\"name\":\"Mistral: Pixtral 12B\",\"created\":1725926400,\"description\":\"The first multi-modal, text+image-to-text model from Mistral AI. Its weights were launched via torrent: https://x.com/mistralai/status/1833758285167722836.\",\"context_length\":4096,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000001\",\"completion\":\"0.0000001\",\"image\":\"0.0001445\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"cohere/command-r-08-2024\",\"name\":\"Cohere: Command R (08-2024)\",\"created\":1724976000,\"description\":\"command-r-08-2024 is an update of the [Command R](/models/cohere/command-r) with improved performance for multilingual retrieval-augmented generation (RAG) and tool use. 
More broadly, it is better at math, code and reasoning and is competitive with the previous version of the larger Command R+ model.\\n\\nRead the launch post [here](https://docs.cohere.com/changelog/command-gets-refreshed).\\n\\nUse of this model is subject to Cohere's [Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Cohere\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000001425\",\"completion\":\"0.00000057\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":4000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"cohere/command-r-plus-08-2024\",\"name\":\"Cohere: Command R+ (08-2024)\",\"created\":1724976000,\"description\":\"command-r-plus-08-2024 is an update of the [Command R+](/models/cohere/command-r-plus) with roughly 50% higher throughput and 25% lower latencies as compared to the previous Command R+ version, while keeping the hardware footprint the same.\\n\\nRead the launch post [here](https://docs.cohere.com/changelog/command-gets-refreshed).\\n\\nUse of this model is subject to Cohere's [Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Cohere\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000002375\",\"completion\":\"0.0000095\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":4000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"qwen/qwen-2-vl-7b-instruct\",\"name\":\"Qwen2-VL 7B Instruct\",\"created\":1724803200,\"description\":\"Qwen2 VL 7B is a multimodal LLM from the Qwen Team with the following key enhancements:\\n\\n- SoTA understanding of images of various resolution & ratio: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.\\n\\n- Understanding videos of 20min+: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.\\n\\n- Agent that can operate your mobiles, robots, etc.: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.\\n\\n- Multilingual Support: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.\\n\\nFor more details, see this [blog post](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub repo](https://github.com/QwenLM/Qwen2-VL).\\n\\nUsage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).\",\"context_length\":4096,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Qwen\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000001\",\"completion\":\"0.0000001\",\"image\":\"0.0001445\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/gemini-flash-1.5-exp\",\"name\":\"Google: Gemini Flash 1.5 Experimental\",\"created\":1724803200,\"description\":\"Gemini 1.5 Flash Experimental is an 
experimental version of the [Gemini 1.5 Flash](/models/google/gemini-flash-1.5) model.\\n\\nUsage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms).\\n\\n#multimodal\\n\\nNote: This model is experimental and not suited for production use-cases. It may be removed or redirected to another model in the future.\",\"context_length\":1000000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":1000000,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"sao10k/l3.1-euryale-70b\",\"name\":\"Sao10K: Llama 3.1 Euryale 70B v2.2\",\"created\":1724803200,\"description\":\"Euryale L3.1 70B v2.2 is a model focused on creative roleplay from [Sao10k](https://ko-fi.com/sao10k). It is the successor of [Euryale L3 70B v2.1](/models/sao10k/l3-euryale-70b).\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.00000035\",\"completion\":\"0.0000004\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/gemini-flash-1.5-8b-exp\",\"name\":\"Google: Gemini Flash 1.5 8B Experimental\",\"created\":1724803200,\"description\":\"Gemini Flash 1.5 8B Experimental is an experimental, 8B parameter version of the [Gemini Flash 1.5](/models/google/gemini-flash-1.5) model.\\n\\nUsage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms).\\n\\n#multimodal\\n\\nNote: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited.\",\"context_length\":1000000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":1000000,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"ai21/jamba-1-5-large\",\"name\":\"AI21: Jamba 1.5 Large\",\"created\":1724371200,\"description\":\"Jamba 1.5 Large is part of AI21's new family of open models, offering superior speed, efficiency, and quality.\\n\\nIt features a 256K effective context window, the longest among open models, enabling improved performance on tasks like document summarization and analysis.\\n\\nBuilt on a novel SSM-Transformer architecture, it outperforms larger models like Llama 3.1 70B on benchmarks while maintaining resource efficiency.\\n\\nRead their [announcement](https://www.ai21.com/blog/announcing-jamba-model-family) to learn more.\",\"context_length\":256000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Other\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000002\",\"completion\":\"0.000008\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":256000,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"ai21/jamba-1-5-mini\",\"name\":\"AI21: Jamba 1.5 Mini\",\"created\":1724371200,\"description\":\"Jamba 1.5 Mini is the world's first production-grade Mamba-based model, combining SSM and Transformer architectures for a 256K context window and high efficiency.\\n\\nIt works with 9 languages and can handle various writing and analysis tasks as 
well as or better than similar small models.\\n\\nThis model uses less computer memory and works faster with longer texts than previous designs.\\n\\nRead their [announcement](https://www.ai21.com/blog/announcing-jamba-model-family) to learn more.\",\"context_length\":256000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Other\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000002\",\"completion\":\"0.0000004\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":256000,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"microsoft/phi-3.5-mini-128k-instruct\",\"name\":\"Microsoft: Phi-3.5 Mini 128K Instruct\",\"created\":1724198400,\"description\":\"Phi-3.5 models are lightweight, state-of-the-art open models. These models were trained with Phi-3 datasets that include both synthetic data and the filtered, publicly available websites data, with a focus on high quality and reasoning-dense properties. Phi-3.5 Mini uses 3.8B parameters, and is a dense decoder-only transformer model using the same tokenizer as [Phi-3 Mini](/models/microsoft/phi-3-mini-128k-instruct).\\n\\nThe models underwent a rigorous enhancement process, incorporating both supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks that test common sense, language understanding, math, code, long context and logical reasoning, Phi-3.5 models showcased robust and state-of-the-art performance among models with less than 13 billion parameters.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Other\",\"instruct_type\":\"phi3\"},\"pricing\":{\"prompt\":\"0.0000001\",\"completion\":\"0.0000001\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"nousresearch/hermes-3-llama-3.1-70b\",\"name\":\"Nous: Hermes 3 70B Instruct\",\"created\":1723939200,\"description\":\"Hermes 3 is a generalist language model with many improvements over [Hermes 2](/models/nousresearch/nous-hermes-2-mistral-7b-dpo), including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board.\\n\\nHermes 3 70B is a competitive, if not superior finetune of the [Llama-3.1 70B foundation model](/models/meta-llama/llama-3.1-70b-instruct), focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user.\\n\\nThe Hermes 3 series builds and expands on the Hermes 2 set of capabilities, including more powerful and reliable function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills.\",\"context_length\":131000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.00000012\",\"completion\":\"0.0000003\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":131000,\"max_completion_tokens\":131000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"nousresearch/hermes-3-llama-3.1-405b\",\"name\":\"Nous: Hermes 3 405B Instruct\",\"created\":1723766400,\"description\":\"Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, 
reasoning, multi-turn conversation, long context coherence, and improvements across the board.\\n\\nHermes 3 405B is a frontier-level, full-parameter finetune of the Llama-3.1 405B foundation model, focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user.\\n\\nThe Hermes 3 series builds and expands on the Hermes 2 set of capabilities, including more powerful and reliable function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills.\\n\\nHermes 3 is competitive, if not superior, to Llama-3.1 Instruct models at general capabilities, with varying strengths and weaknesses attributable between the two.\",\"context_length\":131072,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.0000008\",\"completion\":\"0.0000008\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":131072,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"perplexity/llama-3.1-sonar-huge-128k-online\",\"name\":\"Perplexity: Llama 3.1 Sonar 405B Online\",\"created\":1723593600,\"description\":\"Llama 3.1 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance. The model is built upon the Llama 3.1 405B and has internet access.\",\"context_length\":127072,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000005\",\"completion\":\"0.000005\",\"image\":\"0\",\"request\":\"0.005\"},\"top_provider\":{\"context_length\":127072,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openai/chatgpt-4o-latest\",\"name\":\"OpenAI: ChatGPT-4o\",\"created\":1723593600,\"description\":\"OpenAI ChatGPT 4o is continually updated by OpenAI to point to the current version of GPT-4o used by ChatGPT. It therefore differs slightly from the API version of [GPT-4o](/models/openai/gpt-4o) in that it has additional RLHF. It is intended for research and evaluation.\\n\\nOpenAI notes that this model is not suited for production use-cases as it may be removed or redirected to another model in the future.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000005\",\"completion\":\"0.000015\",\"image\":\"0.007225\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":16384,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"sao10k/l3-lunaris-8b\",\"name\":\"Sao10K: Llama 3 8B Lunaris\",\"created\":1723507200,\"description\":\"Lunaris 8B is a versatile generalist and roleplaying model based on Llama 3. 
It's a strategic merge of multiple models, designed to balance creativity with improved logic and general knowledge.\\n\\nCreated by [Sao10k](https://huggingface.co/Sao10k), this model aims to offer an improved experience over Stheno v3.2, with enhanced creativity and logical reasoning.\\n\\nFor best results, use with Llama 3 Instruct context template, temperature 1.4, and min_p 0.1.\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.00000003\",\"completion\":\"0.00000006\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"aetherwiing/mn-starcannon-12b\",\"name\":\"Aetherwiing: Starcannon 12B\",\"created\":1723507200,\"description\":\"Starcannon 12B v2 is a creative roleplay and story writing model, based on Mistral Nemo, using [nothingiisreal/mn-celeste-12b](/nothingiisreal/mn-celeste-12b) as a base, with [intervitens/mini-magnum-12b-v1.1](https://huggingface.co/intervitens/mini-magnum-12b-v1.1) merged in using the [TIES](https://arxiv.org/abs/2306.01708) method.\\n\\nAlthough more similar to Magnum overall, the model remains very creative, with a pleasant writing style. It is recommended for people wanting more variety than Magnum, and yet more verbose prose than Celeste.\",\"context_length\":16384,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.0000008\",\"completion\":\"0.0000012\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16384,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openai/gpt-4o-2024-08-06\",\"name\":\"OpenAI: GPT-4o (2024-08-06)\",\"created\":1722902400,\"description\":\"The 2024-08-06 version of GPT-4o offers improved performance in structured outputs, with the ability to supply a JSON schema in the respone_format. Read more [here](https://openai.com/index/introducing-structured-outputs-in-the-api/).\\n\\nGPT-4o (\\\"o\\\" for \\\"omni\\\") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while being twice as fast and 50% more cost-effective. GPT-4o also offers improved performance in processing non-English languages and enhanced visual capabilities.\\n\\nFor benchmarking against other models, it was briefly called [\\\"im-also-a-good-gpt2-chatbot\\\"](https://twitter.com/LiamFedus/status/1790064963966370209)\",\"context_length\":128000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000025\",\"completion\":\"0.00001\",\"image\":\"0.003613\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":16384,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.1-405b\",\"name\":\"Meta: Llama 3.1 405B (base)\",\"created\":1722556800,\"description\":\"Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This is the base 405B pre-trained version.\\n\\nIt has demonstrated strong performance compared to leading closed-source models in human evaluations.\\n\\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). 
Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"none\"},\"pricing\":{\"prompt\":\"0.000002\",\"completion\":\"0.000002\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"nothingiisreal/mn-celeste-12b\",\"name\":\"Mistral Nemo 12B Celeste\",\"created\":1722556800,\"description\":\"A specialized story writing and roleplaying model based on Mistral's NeMo 12B Instruct. Fine-tuned on curated datasets including Reddit Writing Prompts and Opus Instruct 25K.\\n\\nThis model excels at creative writing, offering improved NSFW capabilities, with smarter and more active narration. It demonstrates remarkable versatility in both SFW and NSFW scenarios, with strong Out of Character (OOC) steering capabilities, allowing fine-tuned control over narrative direction and character behavior.\\n\\nCheck out the model's [HuggingFace page](https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9) for details on what parameters and prompts work best!\",\"context_length\":16384,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.0000008\",\"completion\":\"0.0000012\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16384,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"perplexity/llama-3.1-sonar-small-128k-chat\",\"name\":\"Perplexity: Llama 3.1 Sonar 8B\",\"created\":1722470400,\"description\":\"Llama 3.1 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance.\\n\\nThis is a normal offline LLM, but the [online version](/models/perplexity/llama-3.1-sonar-small-128k-online) of this model has Internet access.\",\"context_length\":131072,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000002\",\"completion\":\"0.0000002\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":131072,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/gemini-pro-1.5-exp\",\"name\":\"Google: Gemini Pro 1.5 Experimental\",\"created\":1722470400,\"description\":\"Gemini 1.5 Pro Experimental is a bleeding-edge version of the [Gemini 1.5 Pro](/models/google/gemini-pro-1.5) model. Because it's currently experimental, it will be **heavily rate-limited** by Google.\\n\\nUsage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms).\\n\\n#multimodal\",\"context_length\":1000000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":1000000,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"perplexity/llama-3.1-sonar-large-128k-chat\",\"name\":\"Perplexity: Llama 3.1 Sonar 70B\",\"created\":1722470400,\"description\":\"Llama 3.1 Sonar is Perplexity's latest model family. 
It surpasses their earlier Sonar models in cost-efficiency, speed, and performance.\\n\\nThis is a normal offline LLM, but the [online version](/models/perplexity/llama-3.1-sonar-large-128k-online) of this model has Internet access.\",\"context_length\":131072,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000001\",\"completion\":\"0.000001\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":131072,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"perplexity/llama-3.1-sonar-large-128k-online\",\"name\":\"Perplexity: Llama 3.1 Sonar 70B Online\",\"created\":1722470400,\"description\":\"Llama 3.1 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance.\\n\\nThis is the online version of the [offline chat model](/models/perplexity/llama-3.1-sonar-large-128k-chat). It is focused on delivering helpful, up-to-date, and factual responses. #online\",\"context_length\":127072,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000001\",\"completion\":\"0.000001\",\"image\":\"0\",\"request\":\"0.005\"},\"top_provider\":{\"context_length\":127072,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"perplexity/llama-3.1-sonar-small-128k-online\",\"name\":\"Perplexity: Llama 3.1 Sonar 8B Online\",\"created\":1722470400,\"description\":\"Llama 3.1 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance.\\n\\nThis is the online version of the [offline chat model](/models/perplexity/llama-3.1-sonar-small-128k-chat). It is focused on delivering helpful, up-to-date, and factual responses. #online\",\"context_length\":127072,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000002\",\"completion\":\"0.0000002\",\"image\":\"0\",\"request\":\"0.005\"},\"top_provider\":{\"context_length\":127072,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.1-405b-instruct:free\",\"name\":\"Meta: Llama 3.1 405B Instruct (free)\",\"created\":1721692800,\"description\":\"The highly anticipated 400B class of Llama3 is here! Clocking in at 128k context with impressive eval scores, the Meta AI team continues to push the frontier of open-source LLMs.\\n\\nMeta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 405B instruct-tuned version is optimized for high quality dialogue usecases.\\n\\nIt has demonstrated strong performance compared to leading closed-source models including GPT-4o and Claude 3.5 Sonnet in evaluations.\\n\\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). 
Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":8000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8000,\"max_completion_tokens\":4000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.1-405b-instruct\",\"name\":\"Meta: Llama 3.1 405B Instruct\",\"created\":1721692800,\"description\":\"The highly anticipated 400B class of Llama3 is here! Clocking in at 128k context with impressive eval scores, the Meta AI team continues to push the frontier of open-source LLMs.\\n\\nMeta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 405B instruct-tuned version is optimized for high quality dialogue usecases.\\n\\nIt has demonstrated strong performance compared to leading closed-source models including GPT-4o and Claude 3.5 Sonnet in evaluations.\\n\\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":32000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.0000008\",\"completion\":\"0.0000008\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32000,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.1-405b-instruct:nitro\",\"name\":\"Meta: Llama 3.1 405B Instruct (nitro)\",\"created\":1721692800,\"description\":\"The highly anticipated 400B class of Llama3 is here! Clocking in at 128k context with impressive eval scores, the Meta AI team continues to push the frontier of open-source LLMs.\\n\\nMeta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 405B instruct-tuned version is optimized for high quality dialogue usecases.\\n\\nIt has demonstrated strong performance compared to leading closed-source models including GPT-4o and Claude 3.5 Sonnet in evaluations.\\n\\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":8000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.00001462\",\"completion\":\"0.00001462\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.1-8b-instruct:free\",\"name\":\"Meta: Llama 3.1 8B Instruct (free)\",\"created\":1721692800,\"description\":\"Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 8B instruct-tuned version is fast and efficient.\\n\\nIt has demonstrated strong performance compared to leading closed-source models in human evaluations.\\n\\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). 
Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.1-8b-instruct\",\"name\":\"Meta: Llama 3.1 8B Instruct\",\"created\":1721692800,\"description\":\"Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 8B instruct-tuned version is fast and efficient.\\n\\nIt has demonstrated strong performance compared to leading closed-source models in human evaluations.\\n\\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":131072,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.00000002\",\"completion\":\"0.00000005\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":131072,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.1-70b-instruct:free\",\"name\":\"Meta: Llama 3.1 70B Instruct (free)\",\"created\":1721692800,\"description\":\"Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 70B instruct-tuned version is optimized for high quality dialogue usecases.\\n\\nIt has demonstrated strong performance compared to leading closed-source models in human evaluations.\\n\\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.1-70b-instruct\",\"name\":\"Meta: Llama 3.1 70B Instruct\",\"created\":1721692800,\"description\":\"Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 70B instruct-tuned version is optimized for high quality dialogue usecases.\\n\\nIt has demonstrated strong performance compared to leading closed-source models in human evaluations.\\n\\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":131072,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.00000012\",\"completion\":\"0.0000003\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":131072,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3.1-70b-instruct:nitro\",\"name\":\"Meta: Llama 3.1 70B Instruct (nitro)\",\"created\":1721692800,\"description\":\"Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. 
This 70B instruct-tuned version is optimized for high quality dialogue usecases.\\n\\nIt has demonstrated strong performance compared to leading closed-source models in human evaluations.\\n\\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":64000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.00000325\",\"completion\":\"0.00000325\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":64000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/mistral-nemo\",\"name\":\"Mistral: Mistral Nemo\",\"created\":1721347200,\"description\":\"A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA.\\n\\nThe model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi.\\n\\nIt supports function calling and is released under the Apache 2.0 license.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"mistral\"},\"pricing\":{\"prompt\":\"0.000000035\",\"completion\":\"0.00000008\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/codestral-mamba\",\"name\":\"Mistral: Codestral Mamba\",\"created\":1721347200,\"description\":\"A 7.3B parameter Mamba-based model designed for code and reasoning tasks.\\n\\n- Linear time inference, allowing for theoretically infinite sequence lengths\\n- 256k token context window\\n- Optimized for quick responses, especially beneficial for code productivity\\n- Performs comparably to state-of-the-art transformer models in code and reasoning tasks\\n- Available under the Apache 2.0 license for free use, modification, and distribution\",\"context_length\":256000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00000025\",\"completion\":\"0.00000025\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":256000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openai/gpt-4o-mini\",\"name\":\"OpenAI: GPT-4o-mini\",\"created\":1721260800,\"description\":\"GPT-4o mini is OpenAI's newest model after [GPT-4 Omni](/models/openai/gpt-4o), supporting both text and image inputs with text outputs.\\n\\nAs their most advanced small model, it is many multiples more affordable than other recent frontier models, and more than 60% cheaper than [GPT-3.5 Turbo](/models/openai/gpt-3.5-turbo). 
It maintains SOTA intelligence, while being significantly more cost-effective.\\n\\nGPT-4o mini achieves an 82% score on MMLU and presently ranks higher than GPT-4 on chat preferences [common leaderboards](https://arena.lmsys.org/).\\n\\nCheck out the [launch announcement](https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/) to learn more.\\n\\n#multimodal\",\"context_length\":128000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00000015\",\"completion\":\"0.0000006\",\"image\":\"0.007225\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":16384,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"openai/gpt-4o-mini-2024-07-18\",\"name\":\"OpenAI: GPT-4o-mini (2024-07-18)\",\"created\":1721260800,\"description\":\"GPT-4o mini is OpenAI's newest model after [GPT-4 Omni](/models/openai/gpt-4o), supporting both text and image inputs with text outputs.\\n\\nAs their most advanced small model, it is many multiples more affordable than other recent frontier models, and more than 60% cheaper than [GPT-3.5 Turbo](/models/openai/gpt-3.5-turbo). It maintains SOTA intelligence, while being significantly more cost-effective.\\n\\nGPT-4o mini achieves an 82% score on MMLU and presently ranks higher than GPT-4 on chat preferences [common leaderboards](https://arena.lmsys.org/).\\n\\nCheck out the [launch announcement](https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/) to learn more.\\n\\n#multimodal\",\"context_length\":128000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00000015\",\"completion\":\"0.0000006\",\"image\":\"0.007225\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":16384,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"qwen/qwen-2-7b-instruct:free\",\"name\":\"Qwen 2 7B Instruct (free)\",\"created\":1721088000,\"description\":\"Qwen2 7B is a transformer-based model that excels in language understanding, multilingual capabilities, coding, mathematics, and reasoning.\\n\\nIt features SwiGLU activation, attention QKV bias, and group query attention. It is pretrained on extensive data with supervised finetuning and direct preference optimization.\\n\\nFor more details, see this [blog post](https://qwenlm.github.io/blog/qwen2/) and [GitHub repo](https://github.com/QwenLM/Qwen2).\\n\\nUsage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Qwen\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"qwen/qwen-2-7b-instruct\",\"name\":\"Qwen 2 7B Instruct\",\"created\":1721088000,\"description\":\"Qwen2 7B is a transformer-based model that excels in language understanding, multilingual capabilities, coding, mathematics, and reasoning.\\n\\nIt features SwiGLU activation, attention QKV bias, and group query attention. 
It is pretrained on extensive data with supervised finetuning and direct preference optimization.\\n\\nFor more details, see this [blog post](https://qwenlm.github.io/blog/qwen2/) and [GitHub repo](https://github.com/QwenLM/Qwen2).\\n\\nUsage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Qwen\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.000000054\",\"completion\":\"0.000000054\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/gemma-2-27b-it\",\"name\":\"Google: Gemma 2 27B\",\"created\":1720828800,\"description\":\"Gemma 2 27B by Google is an open model built from the same research and technology used to create the [Gemini models](/models?q=gemini).\\n\\nGemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning.\\n\\nSee the [launch announcement](https://blog.google/technology/developers/google-gemma-2/) for more details. Usage of Gemma is subject to Google's [Gemma Terms of Use](https://ai.google.dev/gemma/terms).\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":\"gemma\"},\"pricing\":{\"prompt\":\"0.00000027\",\"completion\":\"0.00000027\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"alpindale/magnum-72b\",\"name\":\"Magnum 72B\",\"created\":1720656000,\"description\":\"From the maker of [Goliath](https://openrouter.ai/models/alpindale/goliath-120b), Magnum 72B is the first in a new family of models designed to achieve the prose quality of the Claude 3 models, notably Opus & Sonnet.\\n\\nThe model is based on [Qwen2 72B](https://openrouter.ai/models/qwen/qwen-2-72b-instruct) and trained with 55 million tokens of highly curated roleplay (RP) data.\",\"context_length\":16384,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Qwen\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.000001875\",\"completion\":\"0.00000225\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16384,\"max_completion_tokens\":1024,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/gemma-2-9b-it:free\",\"name\":\"Google: Gemma 2 9B (free)\",\"created\":1719532800,\"description\":\"Gemma 2 9B by Google is an advanced, open-source language model that sets a new standard for efficiency and performance in its size class.\\n\\nDesigned for a wide variety of tasks, it empowers developers and researchers to build innovative applications, while maintaining accessibility, safety, and cost-effectiveness.\\n\\nSee the [launch announcement](https://blog.google/technology/developers/google-gemma-2/) for more details. 
Usage of Gemma is subject to Google's [Gemma Terms of Use](https://ai.google.dev/gemma/terms).\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":\"gemma\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":2048,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/gemma-2-9b-it\",\"name\":\"Google: Gemma 2 9B\",\"created\":1719532800,\"description\":\"Gemma 2 9B by Google is an advanced, open-source language model that sets a new standard for efficiency and performance in its size class.\\n\\nDesigned for a wide variety of tasks, it empowers developers and researchers to build innovative applications, while maintaining accessibility, safety, and cost-effectiveness.\\n\\nSee the [launch announcement](https://blog.google/technology/developers/google-gemma-2/) for more details. Usage of Gemma is subject to Google's [Gemma Terms of Use](https://ai.google.dev/gemma/terms).\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":\"gemma\"},\"pricing\":{\"prompt\":\"0.00000003\",\"completion\":\"0.00000006\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"01-ai/yi-large\",\"name\":\"01.AI: Yi Large\",\"created\":1719273600,\"description\":\"The Yi Large model was designed by 01.AI with the following usecases in mind: knowledge search, data classification, human-like chat bots, and customer service.\\n\\nIt stands out for its multilingual proficiency, particularly in Spanish, Chinese, Japanese, German, and French.\\n\\nCheck out the [launch announcement](https://01-ai.github.io/blog/01.ai-yi-large-llm-launch) to learn more.\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Yi\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000003\",\"completion\":\"0.000003\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"ai21/jamba-instruct\",\"name\":\"AI21: Jamba Instruct\",\"created\":1719273600,\"description\":\"The Jamba-Instruct model, introduced by AI21 Labs, is an instruction-tuned variant of their hybrid SSM-Transformer Jamba model, specifically optimized for enterprise applications.\\n\\n- 256K Context Window: It can process extensive information, equivalent to a 400-page novel, which is beneficial for tasks involving large documents such as financial reports or legal documents\\n- Safety and Accuracy: Jamba-Instruct is designed with enhanced safety features to ensure secure deployment in enterprise environments, reducing the risk and cost of implementation\\n\\nRead their [announcement](https://www.ai21.com/blog/announcing-jamba) to learn more.\\n\\nJamba has a knowledge cutoff of February 2024.\",\"context_length\":256000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Other\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000005\",\"completion\":\"0.0000007\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":256000,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthropic/claude-3.5-sonnet-20240620:beta\",\"name\":\"Anthropic: Claude 3.5 Sonnet (2024-06-20) 
(self-moderated)\",\"created\":1718841600,\"description\":\"Claude 3.5 Sonnet delivers better-than-Opus capabilities, faster-than-Sonnet speeds, at the same Sonnet prices. Sonnet is particularly good at:\\n\\n- Coding: Autonomously writes, edits, and runs code with reasoning and troubleshooting\\n- Data science: Augments human data science expertise; navigates unstructured data while using multiple tools for insights\\n- Visual processing: excelling at interpreting charts, graphs, and images, accurately transcribing text to derive insights beyond just the text alone\\n- Agentic tasks: exceptional tool use, making it great at agentic tasks (i.e. complex, multi-step problem solving tasks that require engaging with other systems)\\n\\nFor the latest version (2024-10-23), check out [Claude 3.5 Sonnet](/anthropic/claude-3.5-sonnet).\\n\\n#multimodal\",\"context_length\":200000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000003\",\"completion\":\"0.000015\",\"image\":\"0.0048\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthropic/claude-3.5-sonnet-20240620\",\"name\":\"Anthropic: Claude 3.5 Sonnet (2024-06-20)\",\"created\":1718841600,\"description\":\"Claude 3.5 Sonnet delivers better-than-Opus capabilities, faster-than-Sonnet speeds, at the same Sonnet prices. Sonnet is particularly good at:\\n\\n- Coding: Autonomously writes, edits, and runs code with reasoning and troubleshooting\\n- Data science: Augments human data science expertise; navigates unstructured data while using multiple tools for insights\\n- Visual processing: excelling at interpreting charts, graphs, and images, accurately transcribing text to derive insights beyond just the text alone\\n- Agentic tasks: exceptional tool use, making it great at agentic tasks (i.e. complex, multi-step problem solving tasks that require engaging with other systems)\\n\\nFor the latest version (2024-10-23), check out [Claude 3.5 Sonnet](/anthropic/claude-3.5-sonnet).\\n\\n#multimodal\",\"context_length\":200000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000003\",\"completion\":\"0.000015\",\"image\":\"0.0048\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":8192,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"sao10k/l3-euryale-70b\",\"name\":\"Sao10k: Llama 3 Euryale 70B v2.1\",\"created\":1718668800,\"description\":\"Euryale 70B v2.1 is a model focused on creative roleplay from [Sao10k](https://ko-fi.com/sao10k).\\n\\n- Better prompt adherence.\\n- Better anatomy / spatial awareness.\\n- Adapts much better to unique and custom formatting / reply formats.\\n- Very creative, lots of unique swipes.\\n- Is not restrictive during roleplays.\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.00000035\",\"completion\":\"0.0000004\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"cognitivecomputations/dolphin-mixtral-8x22b\",\"name\":\"Dolphin 2.9.2 Mixtral 8x22B 🐬\",\"created\":1717804800,\"description\":\"Dolphin 2.9 is designed for instruction following, conversational, and coding. 
This model is a finetune of [Mixtral 8x22B Instruct](/models/mistralai/mixtral-8x22b-instruct). It features a 64k context length and was fine-tuned with a 16k sequence length using ChatML templates.\\n\\nThis model is a successor to [Dolphin Mixtral 8x7B](/models/cognitivecomputations/dolphin-mixtral-8x7b).\\n\\nThe model is uncensored and is stripped of alignment and bias. It requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at [erichartford.com/uncensored-models](https://erichartford.com/uncensored-models).\\n\\n#moe #uncensored\",\"context_length\":16000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.0000009\",\"completion\":\"0.0000009\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"qwen/qwen-2-72b-instruct\",\"name\":\"Qwen 2 72B Instruct\",\"created\":1717718400,\"description\":\"Qwen2 72B is a transformer-based model that excels in language understanding, multilingual capabilities, coding, mathematics, and reasoning.\\n\\nIt features SwiGLU activation, attention QKV bias, and group query attention. It is pretrained on extensive data with supervised finetuning and direct preference optimization.\\n\\nFor more details, see this [blog post](https://qwenlm.github.io/blog/qwen2/) and [GitHub repo](https://github.com/QwenLM/Qwen2).\\n\\nUsage of this model is subject to [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Qwen\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.00000034\",\"completion\":\"0.00000039\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/mistral-7b-instruct:free\",\"name\":\"Mistral: Mistral 7B Instruct (free)\",\"created\":1716768000,\"description\":\"A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.\\n\\n*Mistral 7B Instruct has multiple version variants, and this is intended to be the latest version.*\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"mistral\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/mistral-7b-instruct\",\"name\":\"Mistral: Mistral 7B Instruct\",\"created\":1716768000,\"description\":\"A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.\\n\\n*Mistral 7B Instruct has multiple version variants, and this is intended to be the latest version.*\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"mistral\"},\"pricing\":{\"prompt\":\"0.00000003\",\"completion\":\"0.000000055\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/mistral-7b-instruct:nitro\",\"name\":\"Mistral: Mistral 7B 
Instruct (nitro)\",\"created\":1716768000,\"description\":\"A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.\\n\\n*Mistral 7B Instruct has multiple version variants, and this is intended to be the latest version.*\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"mistral\"},\"pricing\":{\"prompt\":\"0.00000007\",\"completion\":\"0.00000007\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/mistral-7b-instruct-v0.3\",\"name\":\"Mistral: Mistral 7B Instruct v0.3\",\"created\":1716768000,\"description\":\"A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.\\n\\nAn improved version of [Mistral 7B Instruct v0.2](/models/mistralai/mistral-7b-instruct-v0.2), with the following changes:\\n\\n- Extended vocabulary to 32768\\n- Supports v3 Tokenizer\\n- Supports function calling\\n\\nNOTE: Support for function calling depends on the provider.\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"mistral\"},\"pricing\":{\"prompt\":\"0.00000003\",\"completion\":\"0.000000055\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"nousresearch/hermes-2-pro-llama-3-8b\",\"name\":\"NousResearch: Hermes 2 Pro - Llama-3 8B\",\"created\":1716768000,\"description\":\"Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.\",\"context_length\":131000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.000000025\",\"completion\":\"0.00000004\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":131000,\"max_completion_tokens\":131000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"microsoft/phi-3-mini-128k-instruct:free\",\"name\":\"Microsoft: Phi-3 Mini 128K Instruct (free)\",\"created\":1716681600,\"description\":\"Phi-3 Mini is a powerful 3.8B parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference adjustments, it excels in tasks involving common sense, mathematics, logical reasoning, and code processing.\\n\\nAt time of release, Phi-3 Medium demonstrated state-of-the-art performance among lightweight models. This model is static, trained on an offline dataset with an October 2023 cutoff date.\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Other\",\"instruct_type\":\"phi3\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"microsoft/phi-3-mini-128k-instruct\",\"name\":\"Microsoft: Phi-3 Mini 128K Instruct\",\"created\":1716681600,\"description\":\"Phi-3 Mini is a powerful 3.8B parameter model designed for advanced language understanding, reasoning, and instruction following. 
Optimized through supervised fine-tuning and preference adjustments, it excels in tasks involving common sense, mathematics, logical reasoning, and code processing.\\n\\nAt time of release, Phi-3 Medium demonstrated state-of-the-art performance among lightweight models. This model is static, trained on an offline dataset with an October 2023 cutoff date.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Other\",\"instruct_type\":\"phi3\"},\"pricing\":{\"prompt\":\"0.0000001\",\"completion\":\"0.0000001\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"microsoft/phi-3-medium-128k-instruct:free\",\"name\":\"Microsoft: Phi-3 Medium 128K Instruct (free)\",\"created\":1716508800,\"description\":\"Phi-3 128K Medium is a powerful 14-billion parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference adjustments, it excels in tasks involving common sense, mathematics, logical reasoning, and code processing.\\n\\nAt time of release, Phi-3 Medium demonstrated state-of-the-art performance among lightweight models. In the MMLU-Pro eval, the model even comes close to a Llama3 70B level of performance.\\n\\nFor 4k context length, try [Phi-3 Medium 4K](/models/microsoft/phi-3-medium-4k-instruct).\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Other\",\"instruct_type\":\"phi3\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"microsoft/phi-3-medium-128k-instruct\",\"name\":\"Microsoft: Phi-3 Medium 128K Instruct\",\"created\":1716508800,\"description\":\"Phi-3 128K Medium is a powerful 14-billion parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference adjustments, it excels in tasks involving common sense, mathematics, logical reasoning, and code processing.\\n\\nAt time of release, Phi-3 Medium demonstrated state-of-the-art performance among lightweight models. In the MMLU-Pro eval, the model even comes close to a Llama3 70B level of performance.\\n\\nFor 4k context length, try [Phi-3 Medium 4K](/models/microsoft/phi-3-medium-4k-instruct).\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Other\",\"instruct_type\":\"phi3\"},\"pricing\":{\"prompt\":\"0.000001\",\"completion\":\"0.000001\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"neversleep/llama-3-lumimaid-70b\",\"name\":\"NeverSleep: Llama 3 Lumimaid 70B\",\"created\":1715817600,\"description\":\"The NeverSleep team is back, with a Llama 3 70B finetune trained on their curated roleplay data. Striking a balance between eRP and RP, Lumimaid was designed to be serious, yet uncensored when necessary.\\n\\nTo enhance it's overall intelligence and chat capability, roughly 40% of the training data was not roleplay. 
This provides a breadth of knowledge to access, while still keeping roleplay as the primary strength.\\n\\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.000003375\",\"completion\":\"0.0000045\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":2048,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/gemini-flash-1.5\",\"name\":\"Google: Gemini Flash 1.5\",\"created\":1715644800,\"description\":\"Gemini 1.5 Flash is a foundation model that performs well at a variety of multimodal tasks such as visual understanding, classification, summarization, and creating content from image, audio and video. It's adept at processing visual and text inputs such as photographs, documents, infographics, and screenshots.\\n\\nGemini 1.5 Flash is designed for high-volume, high-frequency tasks where cost and latency matter. On most common tasks, Flash achieves comparable quality to other Gemini Pro models at a significantly reduced cost. Flash is well-suited for applications like chat assistants and on-demand content generation where speed and scale matter.\\n\\nUsage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms).\\n\\n#multimodal\",\"context_length\":1000000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000000075\",\"completion\":\"0.0000003\",\"image\":\"0.00004\",\"request\":\"0\"},\"top_provider\":{\"context_length\":1000000,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"perplexity/llama-3-sonar-large-32k-chat\",\"name\":\"Perplexity: Llama3 Sonar 70B\",\"created\":1715644800,\"description\":\"Llama3 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance.\\n\\nThis is a normal offline LLM, but the [online version](/models/perplexity/llama-3-sonar-large-32k-online) of this model has Internet access.\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000001\",\"completion\":\"0.000001\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"perplexity/llama-3-sonar-large-32k-online\",\"name\":\"Perplexity: Llama3 Sonar 70B Online\",\"created\":1715644800,\"description\":\"Llama3 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance.\\n\\nThis is the online version of the [offline chat model](/models/perplexity/llama-3-sonar-large-32k-chat). It is focused on delivering helpful, up-to-date, and factual responses. 
#online\",\"context_length\":28000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000001\",\"completion\":\"0.000001\",\"image\":\"0\",\"request\":\"0.005\"},\"top_provider\":{\"context_length\":28000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"deepseek/deepseek-chat-v2.5\",\"name\":\"DeepSeek V2.5\",\"created\":1715644800,\"description\":\"DeepSeek-V2.5 is an upgraded version that combines DeepSeek-V2-Chat and DeepSeek-Coder-V2-Instruct. The new model integrates the general and coding abilities of the two previous versions. For model details, please visit [DeepSeek-V2 page](https://github.com/deepseek-ai/DeepSeek-V2) for more information.\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Other\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000002\",\"completion\":\"0.000002\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"perplexity/llama-3-sonar-small-32k-chat\",\"name\":\"Perplexity: Llama3 Sonar 8B\",\"created\":1715644800,\"description\":\"Llama3 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance.\\n\\nThis is a normal offline LLM, but the [online version](/models/perplexity/llama-3-sonar-small-32k-online) of this model has Internet access.\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000002\",\"completion\":\"0.0000002\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openai/gpt-4o-2024-05-13\",\"name\":\"OpenAI: GPT-4o (2024-05-13)\",\"created\":1715558400,\"description\":\"GPT-4o (\\\"o\\\" for \\\"omni\\\") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while being twice as fast and 50% more cost-effective. GPT-4o also offers improved performance in processing non-English languages and enhanced visual capabilities.\\n\\nFor benchmarking against other models, it was briefly called [\\\"im-also-a-good-gpt2-chatbot\\\"](https://twitter.com/LiamFedus/status/1790064963966370209)\\n\\n#multimodal\",\"context_length\":128000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000005\",\"completion\":\"0.000015\",\"image\":\"0.007225\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-guard-2-8b\",\"name\":\"Meta: LlamaGuard 2 8B\",\"created\":1715558400,\"description\":\"This safeguard model has 8B parameters and is based on the Llama 3 family. Just like is predecessor, [LlamaGuard 1](https://huggingface.co/meta-llama/LlamaGuard-7b), it can do both prompt and response classification.\\n\\nLlamaGuard 2 acts as a normal LLM would, generating text that indicates whether the given input/output is safe/unsafe. 
If deemed unsafe, it will also share the content categories violated.\\n\\nFor best results, please use raw prompt input or the `/completions` endpoint, instead of the chat API.\\n\\nIt has demonstrated strong performance compared to leading closed-source models in human evaluations.\\n\\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"none\"},\"pricing\":{\"prompt\":\"0.00000018\",\"completion\":\"0.00000018\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openai/gpt-4o\",\"name\":\"OpenAI: GPT-4o\",\"created\":1715558400,\"description\":\"GPT-4o (\\\"o\\\" for \\\"omni\\\") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while being twice as fast and 50% more cost-effective. GPT-4o also offers improved performance in processing non-English languages and enhanced visual capabilities.\\n\\nFor benchmarking against other models, it was briefly called [\\\"im-also-a-good-gpt2-chatbot\\\"](https://twitter.com/LiamFedus/status/1790064963966370209)\\n\\n#multimodal\",\"context_length\":128000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000025\",\"completion\":\"0.00001\",\"image\":\"0.003613\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":16384,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"openai/gpt-4o:extended\",\"name\":\"OpenAI: GPT-4o (extended)\",\"created\":1715558400,\"description\":\"GPT-4o (\\\"o\\\" for \\\"omni\\\") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while being twice as fast and 50% more cost-effective. GPT-4o also offers improved performance in processing non-English languages and enhanced visual capabilities.\\n\\nFor benchmarking against other models, it was briefly called [\\\"im-also-a-good-gpt2-chatbot\\\"](https://twitter.com/LiamFedus/status/1790064963966370209)\\n\\n#multimodal\",\"context_length\":128000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000006\",\"completion\":\"0.000018\",\"image\":\"0.007225\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":64000,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"neversleep/llama-3-lumimaid-8b:extended\",\"name\":\"NeverSleep: Llama 3 Lumimaid 8B (extended)\",\"created\":1714780800,\"description\":\"The NeverSleep team is back, with a Llama 3 8B finetune trained on their curated roleplay data. Striking a balance between eRP and RP, Lumimaid was designed to be serious, yet uncensored when necessary.\\n\\nTo enhance it's overall intelligence and chat capability, roughly 40% of the training data was not roleplay. 
This provides a breadth of knowledge to access, while still keeping roleplay as the primary strength.\\n\\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":24576,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.0000001875\",\"completion\":\"0.000001125\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":24576,\"max_completion_tokens\":2048,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"neversleep/llama-3-lumimaid-8b\",\"name\":\"NeverSleep: Llama 3 Lumimaid 8B\",\"created\":1714780800,\"description\":\"The NeverSleep team is back, with a Llama 3 8B finetune trained on their curated roleplay data. Striking a balance between eRP and RP, Lumimaid was designed to be serious, yet uncensored when necessary.\\n\\nTo enhance it's overall intelligence and chat capability, roughly 40% of the training data was not roleplay. This provides a breadth of knowledge to access, while still keeping roleplay as the primary strength.\\n\\nUsage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":24576,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.0000001875\",\"completion\":\"0.000001125\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":24576,\"max_completion_tokens\":2048,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3-8b-instruct:free\",\"name\":\"Meta: Llama 3 8B Instruct (free)\",\"created\":1713398400,\"description\":\"Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 8B instruct-tuned version was optimized for high quality dialogue usecases.\\n\\nIt has demonstrated strong performance compared to leading closed-source models in human evaluations.\\n\\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3-8b-instruct\",\"name\":\"Meta: Llama 3 8B Instruct\",\"created\":1713398400,\"description\":\"Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 8B instruct-tuned version was optimized for high quality dialogue usecases.\\n\\nIt has demonstrated strong performance compared to leading closed-source models in human evaluations.\\n\\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). 
Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.00000003\",\"completion\":\"0.00000006\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3-8b-instruct:extended\",\"name\":\"Meta: Llama 3 8B Instruct (extended)\",\"created\":1713398400,\"description\":\"Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 8B instruct-tuned version was optimized for high quality dialogue usecases.\\n\\nIt has demonstrated strong performance compared to leading closed-source models in human evaluations.\\n\\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":16384,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.0000001875\",\"completion\":\"0.000001125\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16384,\"max_completion_tokens\":2048,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3-8b-instruct:nitro\",\"name\":\"Meta: Llama 3 8B Instruct (nitro)\",\"created\":1713398400,\"description\":\"Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 8B instruct-tuned version was optimized for high quality dialogue usecases.\\n\\nIt has demonstrated strong performance compared to leading closed-source models in human evaluations.\\n\\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.0000002\",\"completion\":\"0.0000002\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3-70b-instruct\",\"name\":\"Meta: Llama 3 70B Instruct\",\"created\":1713398400,\"description\":\"Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 70B instruct-tuned version was optimized for high quality dialogue usecases.\\n\\nIt has demonstrated strong performance compared to leading closed-source models in human evaluations.\\n\\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). 
Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.00000023\",\"completion\":\"0.0000004\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-3-70b-instruct:nitro\",\"name\":\"Meta: Llama 3 70B Instruct (nitro)\",\"created\":1713398400,\"description\":\"Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 70B instruct-tuned version was optimized for high quality dialogue usecases.\\n\\nIt has demonstrated strong performance compared to leading closed-source models in human evaluations.\\n\\nTo read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama3\",\"instruct_type\":\"llama3\"},\"pricing\":{\"prompt\":\"0.000000792\",\"completion\":\"0.000000792\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/mixtral-8x22b-instruct\",\"name\":\"Mistral: Mixtral 8x22B Instruct\",\"created\":1713312000,\"description\":\"Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its strengths include:\\n- strong math, coding, and reasoning\\n- large context length (64k)\\n- fluency in English, French, Italian, German, and Spanish\\n\\nSee benchmarks on the launch announcement [here](https://mistral.ai/news/mixtral-8x22b/).\\n#moe\",\"context_length\":65536,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"mistral\"},\"pricing\":{\"prompt\":\"0.0000009\",\"completion\":\"0.0000009\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":65536,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"microsoft/wizardlm-2-8x22b\",\"name\":\"WizardLM-2 8x22B\",\"created\":1713225600,\"description\":\"WizardLM-2 8x22B is Microsoft AI's most advanced Wizard model. It demonstrates highly competitive performance compared to leading proprietary models, and it consistently outperforms all existing state-of-the-art opensource models.\\n\\nIt is an instruct finetune of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b).\\n\\nTo read more about the model release, [click here](https://wizardlm.github.io/WizardLM2/).\\n\\n#moe\",\"context_length\":65536,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"vicuna\"},\"pricing\":{\"prompt\":\"0.0000005\",\"completion\":\"0.0000005\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":65536,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"microsoft/wizardlm-2-7b\",\"name\":\"WizardLM-2 7B\",\"created\":1713225600,\"description\":\"WizardLM-2 7B is the smaller variant of Microsoft AI's latest Wizard model. 
It is the fastest and achieves comparable performance with existing 10x larger opensource leading models\\n\\nIt is a finetune of [Mistral 7B Instruct](/models/mistralai/mistral-7b-instruct), using the same technique as [WizardLM-2 8x22B](/models/microsoft/wizardlm-2-8x22b).\\n\\nTo read more about the model release, [click here](https://wizardlm.github.io/WizardLM2/).\\n\\n#moe\",\"context_length\":32000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"vicuna\"},\"pricing\":{\"prompt\":\"0.000000055\",\"completion\":\"0.000000055\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32000,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/gemini-pro-1.5\",\"name\":\"Google: Gemini Pro 1.5\",\"created\":1712620800,\"description\":\"Google's latest multimodal model, supports image and video[0] in text or chat prompts.\\n\\nOptimized for language tasks including:\\n\\n- Code generation\\n- Text generation\\n- Text editing\\n- Problem solving\\n- Recommendations\\n- Information extraction\\n- Data extraction or generation\\n- AI agents\\n\\nUsage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms).\\n\\n* [0]: Video input is not available through OpenRouter at this time.\",\"context_length\":2000000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00000125\",\"completion\":\"0.000005\",\"image\":\"0.0006575\",\"request\":\"0\"},\"top_provider\":{\"context_length\":2000000,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openai/gpt-4-turbo\",\"name\":\"OpenAI: GPT-4 Turbo\",\"created\":1712620800,\"description\":\"The latest GPT-4 Turbo model with vision capabilities. Vision requests can now use JSON mode and function calling.\\n\\nTraining data: up to December 2023.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00001\",\"completion\":\"0.00003\",\"image\":\"0.01445\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"cohere/command-r-plus\",\"name\":\"Cohere: Command R+\",\"created\":1712188800,\"description\":\"Command R+ is a new, 104B-parameter LLM from Cohere. It's useful for roleplay, general consumer usecases, and Retrieval Augmented Generation (RAG).\\n\\nIt offers multilingual support for ten key languages to facilitate global business operations. See benchmarks and the launch post [here](https://txt.cohere.com/command-r-plus-microsoft-azure/).\\n\\nUse of this model is subject to Cohere's [Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Cohere\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00000285\",\"completion\":\"0.00001425\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":4000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"cohere/command-r-plus-04-2024\",\"name\":\"Cohere: Command R+ (04-2024)\",\"created\":1712016000,\"description\":\"Command R+ is a new, 104B-parameter LLM from Cohere. 
It's useful for roleplay, general consumer usecases, and Retrieval Augmented Generation (RAG).\\n\\nIt offers multilingual support for ten key languages to facilitate global business operations. See benchmarks and the launch post [here](https://txt.cohere.com/command-r-plus-microsoft-azure/).\\n\\nUse of this model is subject to Cohere's [Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Cohere\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00000285\",\"completion\":\"0.00001425\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":4000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"databricks/dbrx-instruct\",\"name\":\"Databricks: DBRX 132B Instruct\",\"created\":1711670400,\"description\":\"DBRX is a new open source large language model developed by Databricks. At 132B, it outperforms existing open source LLMs like Llama 2 70B and [Mixtral-8x7b](/models/mistralai/mixtral-8x7b) on standard industry benchmarks for language understanding, programming, math, and logic.\\n\\nIt uses a fine-grained mixture-of-experts (MoE) architecture. 36B parameters are active on any input. It was pre-trained on 12T tokens of text and code data. Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts.\\n\\nSee the launch announcement and benchmark results [here](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).\\n\\n#moe\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Other\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.00000108\",\"completion\":\"0.00000108\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"sophosympatheia/midnight-rose-70b\",\"name\":\"Midnight Rose 70B\",\"created\":1711065600,\"description\":\"A merge with a complex family tree, this model was crafted for roleplaying and storytelling. Midnight Rose is a successor to Rogue Rose and Aurora Nights and improves upon them both. 
It wants to produce lengthy output by default and is the best creative writing merge produced so far by sophosympatheia.\\n\\nDescending from earlier versions of Midnight Rose and [Wizard Tulu Dolphin 70B](https://huggingface.co/sophosympatheia/Wizard-Tulu-Dolphin-70B-v1.0), it inherits the best qualities of each.\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama2\",\"instruct_type\":\"airoboros\"},\"pricing\":{\"prompt\":\"0.0000008\",\"completion\":\"0.0000008\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"cohere/command\",\"name\":\"Cohere: Command\",\"created\":1710374400,\"description\":\"Command is an instruction-following conversational model that performs language tasks with high quality, more reliably and with a longer context than our base generative models.\\n\\nUse of this model is subject to Cohere's [Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Cohere\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00000095\",\"completion\":\"0.0000019\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":4000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"cohere/command-r\",\"name\":\"Cohere: Command R\",\"created\":1710374400,\"description\":\"Command-R is a 35B parameter model that performs conversational language tasks at a higher quality, more reliably, and with a longer context than previous models. It can be used for complex workflows like code generation, retrieval augmented generation (RAG), tool use, and agents.\\n\\nRead the launch post [here](https://txt.cohere.com/command-r/).\\n\\nUse of this model is subject to Cohere's [Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Cohere\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000000475\",\"completion\":\"0.000001425\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":4000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthropic/claude-3-haiku:beta\",\"name\":\"Anthropic: Claude 3 Haiku (self-moderated)\",\"created\":1710288000,\"description\":\"Claude 3 Haiku is Anthropic's fastest and most compact model for\\nnear-instant responsiveness. Quick and accurate targeted performance.\\n\\nSee the launch announcement and benchmark results [here](https://www.anthropic.com/news/claude-3-haiku)\\n\\n#multimodal\",\"context_length\":200000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00000025\",\"completion\":\"0.00000125\",\"image\":\"0.0004\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthropic/claude-3-haiku\",\"name\":\"Anthropic: Claude 3 Haiku\",\"created\":1710288000,\"description\":\"Claude 3 Haiku is Anthropic's fastest and most compact model for\\nnear-instant responsiveness. 
Quick and accurate targeted performance.\\n\\nSee the launch announcement and benchmark results [here](https://www.anthropic.com/news/claude-3-haiku)\\n\\n#multimodal\",\"context_length\":200000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00000025\",\"completion\":\"0.00000125\",\"image\":\"0.0004\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"anthropic/claude-3-opus:beta\",\"name\":\"Anthropic: Claude 3 Opus (self-moderated)\",\"created\":1709596800,\"description\":\"Claude 3 Opus is Anthropic's most powerful model for highly complex tasks. It boasts top-level performance, intelligence, fluency, and understanding.\\n\\nSee the launch announcement and benchmark results [here](https://www.anthropic.com/news/claude-3-family)\\n\\n#multimodal\",\"context_length\":200000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000015\",\"completion\":\"0.000075\",\"image\":\"0.024\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthropic/claude-3-opus\",\"name\":\"Anthropic: Claude 3 Opus\",\"created\":1709596800,\"description\":\"Claude 3 Opus is Anthropic's most powerful model for highly complex tasks. It boasts top-level performance, intelligence, fluency, and understanding.\\n\\nSee the launch announcement and benchmark results [here](https://www.anthropic.com/news/claude-3-family)\\n\\n#multimodal\",\"context_length\":200000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000015\",\"completion\":\"0.000075\",\"image\":\"0.024\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"anthropic/claude-3-sonnet:beta\",\"name\":\"Anthropic: Claude 3 Sonnet (self-moderated)\",\"created\":1709596800,\"description\":\"Claude 3 Sonnet is an ideal balance of intelligence and speed for enterprise workloads. Maximum utility at a lower price, dependable, balanced for scaled deployments.\\n\\nSee the launch announcement and benchmark results [here](https://www.anthropic.com/news/claude-3-family)\\n\\n#multimodal\",\"context_length\":200000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000003\",\"completion\":\"0.000015\",\"image\":\"0.0048\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthropic/claude-3-sonnet\",\"name\":\"Anthropic: Claude 3 Sonnet\",\"created\":1709596800,\"description\":\"Claude 3 Sonnet is an ideal balance of intelligence and speed for enterprise workloads. 
Maximum utility at a lower price, dependable, balanced for scaled deployments.\\n\\nSee the launch announcement and benchmark results [here](https://www.anthropic.com/news/claude-3-family)\\n\\n#multimodal\",\"context_length\":200000,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000003\",\"completion\":\"0.000015\",\"image\":\"0.0048\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"cohere/command-r-03-2024\",\"name\":\"Cohere: Command R (03-2024)\",\"created\":1709341200,\"description\":\"Command-R is a 35B parameter model that performs conversational language tasks at a higher quality, more reliably, and with a longer context than previous models. It can be used for complex workflows like code generation, retrieval augmented generation (RAG), tool use, and agents.\\n\\nRead the launch post [here](https://txt.cohere.com/command-r/).\\n\\nUse of this model is subject to Cohere's [Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Cohere\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000000475\",\"completion\":\"0.000001425\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":4000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/mistral-large\",\"name\":\"Mistral Large\",\"created\":1708905600,\"description\":\"This is Mistral AI's flagship model, Mistral Large 2 (version `mistral-large-2407`). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch announcement [here](https://mistral.ai/news/mistral-large-2407/).\\n\\nIt supports dozens of languages including French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean, along with 80+ coding languages including Python, Java, C, C++, JavaScript, and Bash. Its long context window allows precise information recall from large documents.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000002\",\"completion\":\"0.000006\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openai/gpt-3.5-turbo-0613\",\"name\":\"OpenAI: GPT-3.5 Turbo (older v0613)\",\"created\":1706140800,\"description\":\"GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks.\\n\\nTraining data up to Sep 2021.\",\"context_length\":4095,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000001\",\"completion\":\"0.000002\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4095,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openai/gpt-4-turbo-preview\",\"name\":\"OpenAI: GPT-4 Turbo Preview\",\"created\":1706140800,\"description\":\"The preview GPT-4 model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. 
Training data: up to Dec 2023.\\n\\n**Note:** heavily rate limited by OpenAI while in preview.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00001\",\"completion\":\"0.00003\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"nousresearch/nous-hermes-2-mixtral-8x7b-dpo\",\"name\":\"Nous: Hermes 2 Mixtral 8x7B DPO\",\"created\":1705363200,\"description\":\"Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](/models/mistralai/mixtral-8x7b).\\n\\nThe model was trained on over 1,000,000 entries of primarily [GPT-4](/models/openai/gpt-4) generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks.\\n\\n#moe\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.00000054\",\"completion\":\"0.00000054\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/mistral-small\",\"name\":\"Mistral Small\",\"created\":1704844800,\"description\":\"With 22 billion parameters, Mistral Small v24.09 offers a convenient mid-point between (Mistral NeMo 12B)[/mistralai/mistral-nemo] and (Mistral Large 2)[/mistralai/mistral-large], providing a cost-effective solution that can be deployed across various platforms and environments. It has better reasoning, exhibits more capabilities, can produce and reason about code, and is multiligual, supporting English, French, German, Italian, and Spanish.\",\"context_length\":32000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000002\",\"completion\":\"0.0000006\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/mistral-tiny\",\"name\":\"Mistral Tiny\",\"created\":1704844800,\"description\":\"This model is currently powered by Mistral-7B-v0.2, and incorporates a \\\"better\\\" fine-tuning than [Mistral 7B](/models/mistralai/mistral-7b-instruct-v0.1), inspired by community work. It's best used for large batch processing tasks where cost is a significant factor but reasoning capabilities are not crucial.\",\"context_length\":32000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00000025\",\"completion\":\"0.00000025\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/mistral-medium\",\"name\":\"Mistral Medium\",\"created\":1704844800,\"description\":\"This is Mistral AI's closed-source, medium-sided model. It's powered by a closed-source prototype and excels at reasoning, code, JSON, chat, and more. 
In benchmarks, it compares with many of the flagship models of other companies.\",\"context_length\":32000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00000275\",\"completion\":\"0.0000081\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32000,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/mistral-7b-instruct-v0.2\",\"name\":\"Mistral: Mistral 7B Instruct v0.2\",\"created\":1703721600,\"description\":\"A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.\\n\\nAn improved version of [Mistral 7B Instruct](/modelsmistralai/mistral-7b-instruct-v0.1), with the following changes:\\n\\n- 32k context window (vs 8k context in v0.1)\\n- Rope-theta = 1e6\\n- No Sliding-Window Attention\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"mistral\"},\"pricing\":{\"prompt\":\"0.00000018\",\"completion\":\"0.00000018\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"cognitivecomputations/dolphin-mixtral-8x7b\",\"name\":\"Dolphin 2.6 Mixtral 8x7B 🐬\",\"created\":1703116800,\"description\":\"This is a 16k context fine-tune of [Mixtral-8x7b](/models/mistralai/mixtral-8x7b). It excels in coding tasks due to extensive training with coding data and is known for its obedience, although it lacks DPO tuning.\\n\\nThe model is uncensored and is stripped of alignment and bias. It requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at [erichartford.com/uncensored-models](https://erichartford.com/uncensored-models).\\n\\n#moe #uncensored\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.0000005\",\"completion\":\"0.0000005\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/gemini-pro-vision\",\"name\":\"Google: Gemini Pro Vision 1.0\",\"created\":1702425600,\"description\":\"Google's flagship multimodal model, supporting image and video in text or chat prompts for a text or code response.\\n\\nSee the benchmarks and prompting guidelines from [Deepmind](https://deepmind.google/technologies/gemini/).\\n\\nUsage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms).\\n\\n#multimodal\",\"context_length\":16384,\"architecture\":{\"modality\":\"text+image->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000005\",\"completion\":\"0.0000015\",\"image\":\"0.0025\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16384,\"max_completion_tokens\":2048,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/gemini-pro\",\"name\":\"Google: Gemini Pro 1.0\",\"created\":1702425600,\"description\":\"Google's flagship text generation model. 
Designed to handle natural language tasks, multiturn text and code chat, and code generation.\\n\\nSee the benchmarks and prompting guidelines from [Deepmind](https://deepmind.google/technologies/gemini/).\\n\\nUsage of Gemini is subject to Google's [Gemini Terms of Use](https://ai.google.dev/terms).\",\"context_length\":32760,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Gemini\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000005\",\"completion\":\"0.0000015\",\"image\":\"0.0025\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32760,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/mixtral-8x7b\",\"name\":\"Mistral: Mixtral 8x7B (base)\",\"created\":1702166400,\"description\":\"Mixtral 8x7B is a pretrained generative Sparse Mixture of Experts, by Mistral AI. Incorporates 8 experts (feed-forward networks) for a total of 47B parameters. Base model (not fine-tuned for instructions) - see [Mixtral 8x7B Instruct](/models/mistralai/mixtral-8x7b-instruct) for an instruct-tuned model.\\n\\n#moe\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"none\"},\"pricing\":{\"prompt\":\"0.00000054\",\"completion\":\"0.00000054\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/mixtral-8x7b-instruct\",\"name\":\"Mistral: Mixtral 8x7B Instruct\",\"created\":1702166400,\"description\":\"Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts, by Mistral AI, for chat and instruction use. Incorporates 8 experts (feed-forward networks) for a total of 47 billion parameters.\\n\\nInstruct model fine-tuned by Mistral. #moe\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"mistral\"},\"pricing\":{\"prompt\":\"0.00000024\",\"completion\":\"0.00000024\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mistralai/mixtral-8x7b-instruct:nitro\",\"name\":\"Mistral: Mixtral 8x7B Instruct (nitro)\",\"created\":1702166400,\"description\":\"Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts, by Mistral AI, for chat and instruction use. Incorporates 8 experts (feed-forward networks) for a total of 47 billion parameters.\\n\\nInstruct model fine-tuned by Mistral. #moe\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"mistral\"},\"pricing\":{\"prompt\":\"0.00000054\",\"completion\":\"0.00000054\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openchat/openchat-7b:free\",\"name\":\"OpenChat 3.5 7B (free)\",\"created\":1701129600,\"description\":\"OpenChat 7B is a library of open-source language models, fine-tuned with \\\"C-RLFT (Conditioned Reinforcement Learning Fine-Tuning)\\\" - a strategy inspired by offline reinforcement learning. 
It has been trained on mixed-quality data without preference labels.\\n\\n- For OpenChat fine-tuned on Mistral 7B, check out [OpenChat 7B](/models/openchat/openchat-7b).\\n- For OpenChat fine-tuned on Llama 8B, check out [OpenChat 8B](/models/openchat/openchat-8b).\\n\\n#open-source\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"openchat\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openchat/openchat-7b\",\"name\":\"OpenChat 3.5 7B\",\"created\":1701129600,\"description\":\"OpenChat 7B is a library of open-source language models, fine-tuned with \\\"C-RLFT (Conditioned Reinforcement Learning Fine-Tuning)\\\" - a strategy inspired by offline reinforcement learning. It has been trained on mixed-quality data without preference labels.\\n\\n- For OpenChat fine-tuned on Mistral 7B, check out [OpenChat 7B](/models/openchat/openchat-7b).\\n- For OpenChat fine-tuned on Llama 8B, check out [OpenChat 8B](/models/openchat/openchat-8b).\\n\\n#open-source\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"openchat\"},\"pricing\":{\"prompt\":\"0.000000055\",\"completion\":\"0.000000055\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"neversleep/noromaid-20b\",\"name\":\"Noromaid 20B\",\"created\":1700956800,\"description\":\"A collab between IkariDev and Undi. This merge is suitable for RP, ERP, and general knowledge.\\n\\n#merge #uncensored\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama2\",\"instruct_type\":\"alpaca\"},\"pricing\":{\"prompt\":\"0.0000015\",\"completion\":\"0.00000225\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":2048,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthropic/claude-2:beta\",\"name\":\"Anthropic: Claude v2 (self-moderated)\",\"created\":1700611200,\"description\":\"Claude 2 delivers advancements in key capabilities for enterprises—including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts and a new beta feature: tool use.\",\"context_length\":200000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000008\",\"completion\":\"0.000024\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthropic/claude-2\",\"name\":\"Anthropic: Claude v2\",\"created\":1700611200,\"description\":\"Claude 2 delivers advancements in key capabilities for enterprises—including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts and a new beta feature: tool 
use.\",\"context_length\":200000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000008\",\"completion\":\"0.000024\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"anthropic/claude-2.1:beta\",\"name\":\"Anthropic: Claude v2.1 (self-moderated)\",\"created\":1700611200,\"description\":\"Claude 2 delivers advancements in key capabilities for enterprises—including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts and a new beta feature: tool use.\",\"context_length\":200000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000008\",\"completion\":\"0.000024\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthropic/claude-2.1\",\"name\":\"Anthropic: Claude v2.1\",\"created\":1700611200,\"description\":\"Claude 2 delivers advancements in key capabilities for enterprises—including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts and a new beta feature: tool use.\",\"context_length\":200000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000008\",\"completion\":\"0.000024\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":200000,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"teknium/openhermes-2.5-mistral-7b\",\"name\":\"OpenHermes 2.5 Mistral 7B\",\"created\":1700438400,\"description\":\"A continuation of [OpenHermes 2 model](/models/teknium/openhermes-2-mistral-7b), trained on additional code datasets.\\nPotentially the most interesting finding from training on a good ratio (est. of around 7-14% of the total dataset) of code instruction was that it has boosted several non-code benchmarks, including TruthfulQA, AGIEval, and GPT4All suite. It did however reduce BigBench benchmark score, but the net gain overall is significant.\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.00000017\",\"completion\":\"0.00000017\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"lizpreciatior/lzlv-70b-fp16-hf\",\"name\":\"lzlv 70B\",\"created\":1699747200,\"description\":\"A Mythomax/MLewd_13B-style merge of selected 70B models.\\nA multi-model merge of several LLaMA2 70B finetunes for roleplaying and creative work. 
The goal was to create a model that combines creativity with intelligence for an enhanced experience.\\n\\n#merge #uncensored\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama2\",\"instruct_type\":\"airoboros\"},\"pricing\":{\"prompt\":\"0.00000035\",\"completion\":\"0.0000004\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"undi95/toppy-m-7b:free\",\"name\":\"Toppy M 7B (free)\",\"created\":1699574400,\"description\":\"A wild 7B parameter model that merges several models using the new task_arithmetic merge method from mergekit.\\nList of merged models:\\n- NousResearch/Nous-Capybara-7B-V1.9\\n- [HuggingFaceH4/zephyr-7b-beta](/models/huggingfaceh4/zephyr-7b-beta)\\n- lemonilia/AshhLimaRP-Mistral-7B\\n- Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b\\n- Undi95/Mistral-pippa-sharegpt-7b-qlora\\n\\n#merge #uncensored\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"alpaca\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":2048,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"undi95/toppy-m-7b:nitro\",\"name\":\"Toppy M 7B (nitro)\",\"created\":1699574400,\"description\":\"A wild 7B parameter model that merges several models using the new task_arithmetic merge method from mergekit.\\nList of merged models:\\n- NousResearch/Nous-Capybara-7B-V1.9\\n- [HuggingFaceH4/zephyr-7b-beta](/models/huggingfaceh4/zephyr-7b-beta)\\n- lemonilia/AshhLimaRP-Mistral-7B\\n- Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b\\n- Undi95/Mistral-pippa-sharegpt-7b-qlora\\n\\n#merge #uncensored\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"alpaca\"},\"pricing\":{\"prompt\":\"0.00000007\",\"completion\":\"0.00000007\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"undi95/toppy-m-7b\",\"name\":\"Toppy M 7B\",\"created\":1699574400,\"description\":\"A wild 7B parameter model that merges several models using the new task_arithmetic merge method from mergekit.\\nList of merged models:\\n- NousResearch/Nous-Capybara-7B-V1.9\\n- [HuggingFaceH4/zephyr-7b-beta](/models/huggingfaceh4/zephyr-7b-beta)\\n- lemonilia/AshhLimaRP-Mistral-7B\\n- Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b\\n- Undi95/Mistral-pippa-sharegpt-7b-qlora\\n\\n#merge #uncensored\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"alpaca\"},\"pricing\":{\"prompt\":\"0.00000007\",\"completion\":\"0.00000007\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"alpindale/goliath-120b\",\"name\":\"Goliath 120B\",\"created\":1699574400,\"description\":\"A large LLM created by combining two fine-tuned Llama 70B models into one 120B model. 
Combines Xwin and Euryale.\\n\\nCredits to\\n- [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit).\\n- [@Undi95](https://huggingface.co/Undi95) for helping with the merge ratios.\\n\\n#merge\",\"context_length\":6144,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama2\",\"instruct_type\":\"airoboros\"},\"pricing\":{\"prompt\":\"0.000009375\",\"completion\":\"0.000009375\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":6144,\"max_completion_tokens\":512,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openrouter/auto\",\"name\":\"Auto Router (best for prompt)\",\"created\":1699401600,\"description\":\"Your prompt will be processed by a meta-model and routed to one of dozens of models (see below), optimizing for the best possible output.\\n\\nTo see which model was used, visit [Activity](/activity), or read the `model` attribute of the response. Your response will be priced at the same rate as the routed model.\\n\\nLearn more about how Not Diamond's meta-model works [here](https://docs.notdiamond.ai/docs/how-not-diamond-works).\\n\\nRequests will be routed to the following models:\\n- [openai/gpt-4o-2024-08-06](/openai/gpt-4o-2024-08-06)\\n- [openai/gpt-4o-2024-05-13](/openai/gpt-4o-2024-05-13)\\n- [openai/gpt-4o-mini-2024-07-18](/openai/gpt-4o-mini-2024-07-18)\\n- [openai/chatgpt-4o-latest](/openai/chatgpt-4o-latest)\\n- [openai/o1-preview-2024-09-12](/openai/o1-preview-2024-09-12)\\n- [openai/o1-mini-2024-09-12](/openai/o1-mini-2024-09-12)\\n- [anthropic/claude-3.5-sonnet](/anthropic/claude-3.5-sonnet)\\n- [anthropic/claude-3.5-haiku](/anthropic/claude-3.5-haiku)\\n- [anthropic/claude-3-opus](/anthropic/claude-3-opus)\\n- [anthropic/claude-2.1](/anthropic/claude-2.1)\\n- [google/gemini-pro-1.5](/google/gemini-pro-1.5)\\n- [google/gemini-flash-1.5](/google/gemini-flash-1.5)\\n- [mistralai/mistral-large-2407](/mistralai/mistral-large-2407)\\n- [mistralai/mistral-nemo](/mistralai/mistral-nemo)\\n- [meta-llama/llama-3.1-70b-instruct](/meta-llama/llama-3.1-70b-instruct)\\n- [meta-llama/llama-3.1-405b-instruct](/meta-llama/llama-3.1-405b-instruct)\\n- [mistralai/mixtral-8x22b-instruct](/mistralai/mixtral-8x22b-instruct)\\n- [cohere/command-r-plus](/cohere/command-r-plus)\\n- [cohere/command-r](/cohere/command-r)\",\"context_length\":2000000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Router\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"-1\",\"completion\":\"-1\",\"request\":\"-1\",\"image\":\"-1\"},\"top_provider\":{\"context_length\":null,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openai/gpt-3.5-turbo-1106\",\"name\":\"OpenAI: GPT-3.5 Turbo 16k (older v1106)\",\"created\":1699228800,\"description\":\"An older GPT-3.5 Turbo model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. 
Training data: up to Sep 2021.\",\"context_length\":16385,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000001\",\"completion\":\"0.000002\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16385,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"openai/gpt-4-1106-preview\",\"name\":\"OpenAI: GPT-4 Turbo (older v1106)\",\"created\":1699228800,\"description\":\"The latest GPT-4 Turbo model with vision capabilities. Vision requests can now use JSON mode and function calling.\\n\\nTraining data: up to April 2023.\",\"context_length\":128000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00001\",\"completion\":\"0.00003\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":128000,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"google/palm-2-chat-bison-32k\",\"name\":\"Google: PaLM 2 Chat 32k\",\"created\":1698969600,\"description\":\"PaLM 2 is a language model by Google with improved multilingual, reasoning and coding capabilities.\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"PaLM\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000001\",\"completion\":\"0.000002\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/palm-2-codechat-bison-32k\",\"name\":\"Google: PaLM 2 Code Chat 32k\",\"created\":1698969600,\"description\":\"PaLM 2 fine-tuned for chatbot conversations that help with code-related questions.\",\"context_length\":32768,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"PaLM\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000001\",\"completion\":\"0.000002\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32768,\"max_completion_tokens\":8192,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"jondurbin/airoboros-l2-70b\",\"name\":\"Airoboros 70B\",\"created\":1698537600,\"description\":\"A Llama 2 70B fine-tune using synthetic data (the Airoboros dataset).\\n\\nCurrently based on [jondurbin/airoboros-l2-70b](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1), but might get updated in the future.\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama2\",\"instruct_type\":\"airoboros\"},\"pricing\":{\"prompt\":\"0.0000005\",\"completion\":\"0.0000005\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"xwin-lm/xwin-lm-70b\",\"name\":\"Xwin 70B\",\"created\":1697328000,\"description\":\"Xwin-LM aims to develop and open-source alignment tech for LLMs. Our first release, built-upon on the [Llama2](/models/${Model.Llama_2_13B_Chat}) base models, ranked TOP-1 on AlpacaEval. Notably, it's the first to surpass [GPT-4](/models/${Model.GPT_4}) on this benchmark. 
The project will be continuously updated.\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama2\",\"instruct_type\":\"airoboros\"},\"pricing\":{\"prompt\":\"0.00000375\",\"completion\":\"0.00000375\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":512,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openai/gpt-3.5-turbo-instruct\",\"name\":\"OpenAI: GPT-3.5 Turbo Instruct\",\"created\":1695859200,\"description\":\"This model is a variant of GPT-3.5 Turbo tuned for instructional prompts and omitting chat-related optimizations. Training data: up to Sep 2021.\",\"context_length\":4095,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"GPT\",\"instruct_type\":\"chatml\"},\"pricing\":{\"prompt\":\"0.0000015\",\"completion\":\"0.000002\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4095,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"mistralai/mistral-7b-instruct-v0.1\",\"name\":\"Mistral: Mistral 7B Instruct v0.1\",\"created\":1695859200,\"description\":\"A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"mistral\"},\"pricing\":{\"prompt\":\"0.00000018\",\"completion\":\"0.00000018\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"pygmalionai/mythalion-13b\",\"name\":\"Pygmalion: Mythalion 13B\",\"created\":1693612800,\"description\":\"A blend of the new Pygmalion-13b and MythoMax. #merge\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama2\",\"instruct_type\":\"alpaca\"},\"pricing\":{\"prompt\":\"0.0000008\",\"completion\":\"0.0000012\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openai/gpt-3.5-turbo-16k\",\"name\":\"OpenAI: GPT-3.5 Turbo 16k\",\"created\":1693180800,\"description\":\"This model offers four times the context length of gpt-3.5-turbo, allowing it to support approximately 20 pages of text in a single request at a higher cost. Training data: up to Sep 2021.\",\"context_length\":16385,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000003\",\"completion\":\"0.000004\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16385,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"openai/gpt-4-32k\",\"name\":\"OpenAI: GPT-4 32k\",\"created\":1693180800,\"description\":\"GPT-4-32k is an extended version of GPT-4, with the same capabilities but quadrupled context length, allowing for processing up to 40 pages of text in a single pass. This is particularly beneficial for handling longer content like interacting with PDFs without an external vector database. 
Training data: up to Sep 2021.\",\"context_length\":32767,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00006\",\"completion\":\"0.00012\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32767,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"openai/gpt-4-32k-0314\",\"name\":\"OpenAI: GPT-4 32k (older v0314)\",\"created\":1693180800,\"description\":\"GPT-4-32k is an extended version of GPT-4, with the same capabilities but quadrupled context length, allowing for processing up to 40 pages of text in a single pass. This is particularly beneficial for handling longer content like interacting with PDFs without an external vector database. Training data: up to Sep 2021.\",\"context_length\":32767,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00006\",\"completion\":\"0.00012\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":32767,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"nousresearch/nous-hermes-llama2-13b\",\"name\":\"Nous: Hermes 13B\",\"created\":1692489600,\"description\":\"A state-of-the-art language model fine-tuned on over 300k instructions by Nous Research, with Teknium and Emozilla leading the fine tuning process.\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama2\",\"instruct_type\":\"alpaca\"},\"pricing\":{\"prompt\":\"0.00000017\",\"completion\":\"0.00000017\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"mancer/weaver\",\"name\":\"Mancer: Weaver (alpha)\",\"created\":1690934400,\"description\":\"An attempt to recreate Claude-style verbosity, but don't expect the same level of coherence or memory. Meant for use in roleplay/narrative situations.\",\"context_length\":8000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama2\",\"instruct_type\":\"alpaca\"},\"pricing\":{\"prompt\":\"0.0000015\",\"completion\":\"0.00000225\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8000,\"max_completion_tokens\":1000,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"huggingfaceh4/zephyr-7b-beta:free\",\"name\":\"Hugging Face: Zephyr 7B (free)\",\"created\":1690934400,\"description\":\"Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-β is the second model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](/models/mistralai/mistral-7b-instruct-v0.1) that was trained on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO).\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Mistral\",\"instruct_type\":\"zephyr\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":2048,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthropic/claude-2.0:beta\",\"name\":\"Anthropic: Claude v2.0 (self-moderated)\",\"created\":1690502400,\"description\":\"Anthropic's flagship model. Superior performance on tasks that require complex reasoning. 
Supports hundreds of pages of text.\",\"context_length\":100000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000008\",\"completion\":\"0.000024\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":100000,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"anthropic/claude-2.0\",\"name\":\"Anthropic: Claude v2.0\",\"created\":1690502400,\"description\":\"Anthropic's flagship model. Superior performance on tasks that require complex reasoning. Supports hundreds of pages of text.\",\"context_length\":100000,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Claude\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000008\",\"completion\":\"0.000024\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":100000,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"undi95/remm-slerp-l2-13b\",\"name\":\"ReMM SLERP 13B\",\"created\":1689984000,\"description\":\"A recreation trial of the original MythoMax-L2-B13 but with updated models. #merge\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama2\",\"instruct_type\":\"alpaca\"},\"pricing\":{\"prompt\":\"0.0000008\",\"completion\":\"0.0000012\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"undi95/remm-slerp-l2-13b:extended\",\"name\":\"ReMM SLERP 13B (extended)\",\"created\":1689984000,\"description\":\"A recreation trial of the original MythoMax-L2-B13 but with updated models. #merge\",\"context_length\":6144,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama2\",\"instruct_type\":\"alpaca\"},\"pricing\":{\"prompt\":\"0.000001125\",\"completion\":\"0.000001125\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":6144,\"max_completion_tokens\":512,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/palm-2-chat-bison\",\"name\":\"Google: PaLM 2 Chat\",\"created\":1689811200,\"description\":\"PaLM 2 is a language model by Google with improved multilingual, reasoning and coding capabilities.\",\"context_length\":9216,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"PaLM\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000001\",\"completion\":\"0.000002\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":9216,\"max_completion_tokens\":1024,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"google/palm-2-codechat-bison\",\"name\":\"Google: PaLM 2 Code Chat\",\"created\":1689811200,\"description\":\"PaLM 2 fine-tuned for chatbot conversations that help with code-related questions.\",\"context_length\":7168,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"PaLM\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.000001\",\"completion\":\"0.000002\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":7168,\"max_completion_tokens\":1024,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"gryphe/mythomax-l2-13b:free\",\"name\":\"MythoMax 13B (free)\",\"created\":1688256000,\"description\":\"One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. 
#merge\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama2\",\"instruct_type\":\"alpaca\"},\"pricing\":{\"prompt\":\"0\",\"completion\":\"0\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":2048,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"gryphe/mythomax-l2-13b\",\"name\":\"MythoMax 13B\",\"created\":1688256000,\"description\":\"One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. #merge\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama2\",\"instruct_type\":\"alpaca\"},\"pricing\":{\"prompt\":\"0.000000065\",\"completion\":\"0.000000065\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":4096,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"gryphe/mythomax-l2-13b:nitro\",\"name\":\"MythoMax 13B (nitro)\",\"created\":1688256000,\"description\":\"One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. #merge\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama2\",\"instruct_type\":\"alpaca\"},\"pricing\":{\"prompt\":\"0.0000002\",\"completion\":\"0.0000002\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"gryphe/mythomax-l2-13b:extended\",\"name\":\"MythoMax 13B (extended)\",\"created\":1688256000,\"description\":\"One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. #merge\",\"context_length\":8192,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama2\",\"instruct_type\":\"alpaca\"},\"pricing\":{\"prompt\":\"0.000001125\",\"completion\":\"0.000001125\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8192,\"max_completion_tokens\":512,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"meta-llama/llama-2-13b-chat\",\"name\":\"Meta: Llama 2 13B Chat\",\"created\":1687219200,\"description\":\"A 13 billion parameter language model from Meta, fine tuned for chat completions\",\"context_length\":4096,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"Llama2\",\"instruct_type\":\"llama2\"},\"pricing\":{\"prompt\":\"0.000000198\",\"completion\":\"0.000000198\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":4096,\"max_completion_tokens\":null,\"is_moderated\":false},\"per_request_limits\":null},{\"id\":\"openai/gpt-3.5-turbo\",\"name\":\"OpenAI: GPT-3.5 Turbo\",\"created\":1685232000,\"description\":\"GPT-3.5 Turbo is OpenAI's fastest model. 
It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks.\\n\\nTraining data up to Sep 2021.\",\"context_length\":16385,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000005\",\"completion\":\"0.0000015\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16385,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"openai/gpt-3.5-turbo-0125\",\"name\":\"OpenAI: GPT-3.5 Turbo 16k\",\"created\":1685232000,\"description\":\"The latest GPT-3.5 Turbo model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Training data: up to Sep 2021.\\n\\nThis version has a higher accuracy at responding in requested formats and a fix for a bug which caused a text encoding issue for non-English language function calls.\",\"context_length\":16385,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.0000005\",\"completion\":\"0.0000015\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":16385,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"openai/gpt-4\",\"name\":\"OpenAI: GPT-4\",\"created\":1685232000,\"description\":\"OpenAI's flagship model, GPT-4 is a large-scale multimodal language model capable of solving difficult problems with greater accuracy than previous models due to its broader general knowledge and advanced reasoning capabilities. Training data: up to Sep 2021.\",\"context_length\":8191,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00003\",\"completion\":\"0.00006\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8191,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null},{\"id\":\"openai/gpt-4-0314\",\"name\":\"OpenAI: GPT-4 (older v0314)\",\"created\":1685232000,\"description\":\"GPT-4-0314 is the first version of GPT-4 released, with a context length of 8,192 tokens, and was supported until June 14. Training data: up to Sep 2021.\",\"context_length\":8191,\"architecture\":{\"modality\":\"text->text\",\"tokenizer\":\"GPT\",\"instruct_type\":null},\"pricing\":{\"prompt\":\"0.00003\",\"completion\":\"0.00006\",\"image\":\"0\",\"request\":\"0\"},\"top_provider\":{\"context_length\":8191,\"max_completion_tokens\":4096,\"is_moderated\":true},\"per_request_limits\":null}]}"
  },
  {
    "path": "packages/backend/resources/openapi/stabilityai.json",
    "content": "{\n  \"openapi\": \"3.0.3\",\n  \"info\": {\n    \"version\": \"v2beta\",\n    \"title\": \"StabilityAI REST API\",\n    \"description\": \"Welcome to the Stability Platform API. As of March 2024, we are building the REST v2beta API service to be the primary API service for the Stability Platform. \\nAll AI services on other APIs (gRPC, REST v1, RESTv2alpha) will continue to be maintained, however they will not receive\\nnew features or parameters.\\n\\nIf you are a REST v2alpha user, we strongly recommend that you adjust the URL calls for the specific services that you are using over to the equivalent REST v2beta URL. Normally, this means simply replacing \\\"v2alpha\\\" with \\\"v2beta\\\". We are not deprecating v2alpha URLs at this time for users that are currently using them.\\n\\n#### Authentication\\n\\nYou will need your [Stability API key](https://platform.stability.ai/account/keys) in order to make requests to this API.\\nMake sure you never share your API key with anyone, and you never commit it to a public repository. Include this key in \\nthe `Authorization` header of your requests.\\n\\n#### Rate limiting\\n\\nThis API is rate-limited to 150 requests every 10 seconds. If you exceed this limit, you will receive a `429` response\\nand be timed out for 60 seconds. If you find this limit too restrictive, please reach out to us via [this form](https://stabilityplatform.freshdesk.com/support/home).\\n\\n#### Support\\n\\nPlease see our [FAQ](https://platform.stability.ai/faq) for answers to common questions. If you have any other questions or concerns,\\nplease reach out to us via [this form](https://stabilityplatform.freshdesk.com/support/tickets/new).\\n\\nTo see the health of our APIs, please check our [Status Page](https://stabilityai.instatus.com/).\"\n  },\n  \"servers\": [\n    {\n      \"url\": \"https://api.stability.ai\"\n    }\n  ],\n  \"security\": [\n    {\n      \"STABILITY_API_KEY\": []\n    }\n  ],\n  \"tags\": [\n    {\n      \"name\": \"Edit\",\n      \"description\": \"Tools for editing your own and generated images.\\n\\n**[Erase](/docs/api-reference#tag/Edit/paths/~1v2beta~1stable-image~1edit~1erase/post)**\\n\\nThe Erase service removes unwanted objects, such as blemishes on portraits or items on desks, using image masks.\\n\\n**[Outpaint](/docs/api-reference#tag/Edit/paths/~1v2beta~1stable-image~1edit~1outpaint/post)**\\n\\nThe outpaint service inserts additional content in an image to fill in the space in any direction, allowing you to \\\"zoom-out\\\" of an image.\\n\\n**[Inpaint](/docs/api-reference#tag/Edit/paths/~1v2beta~1stable-image~1edit~1inpaint/post)**\\n\\nThe Inpaint service modifies images by filling in or replacing specified areas with new content based on the content of a \\\"mask\\\" image.\\n\\n**[Search and Replace](/docs/api-reference#tag/Edit/paths/~1v2beta~1stable-image~1edit~1search-and-replace/post)**\\n\\nThe Search and Replace service, similar to inpaint, allows to replace specified areas with new content, but this time with the help of a prompt instead of a mask. The service will automatically segment the object and replace it with the object requested in the prompt.\\n\\n**[Search and Recolor](/docs/api-reference#tag/Edit/paths/~1v2beta~1stable-image~1edit~1search-and-recolor/post)**\\n\\nThe Search and Recolor service is another derivative of the inpaint service and provides the ability to change the color of a specific object in an image using a prompt. 
The Search and Recolor service will automatically segment the object and recolor it using the colors requested in the prompt.\\n\\n**[Remove Background](/docs/api-reference#tag/Edit/paths/~1v2beta~1stable-image~1edit~1remove-background/post)**\\n\\nThe Remove Background service accurately segments the foreground from an image to remove the background.\"\n    },\n    {\n      \"name\": \"Upscale\",\n      \"description\": \"Tools for increasing the size and resolution of your existing images.\\n\\n**[Fast Upscaler](/docs/api-reference#tag/Upscale/paths/~1v2beta~1stable-image~1upscale~1fast/post)**\\n\\nThis service enhances image resolution by 4x using predictive and generative AI. This lightweight and fast service (processing in ~1 second) is ideal for enhancing the quality of compressed images, making it suitable for social media posts and other applications.\\n\\n**[Conservative Upscaler](/docs/api-reference#tag/Upscale/paths/~1v2beta~1stable-image~1upscale~1conservative/post)**\\n\\nThis service can upscale images by 20 to 40 times up to a 4 megapixel output image with minimal alteration to the original image. The Conservative Upscaler can upscale images as small as 64x64 pixels directly to a 4 megapixel output. Use this option if you directly need a 4 megapixel output.\\n\\n**[Creative Upscaler](/docs/api-reference#tag/Upscale/paths/~1v2beta~1stable-image~1upscale~1creative/post)**\\n\\nThe service can upscale highly degraded images (lower than 1 megapixel) with a creative twist to provide high resolution results.\"\n    },\n    {\n      \"name\": \"Generate\",\n      \"description\": \"Tools to generate new images from text, or create variations of existing images. Our different services include:\\n\\n**[Stable Image Ultra](/docs/api-reference#tag/Generate/paths/~1v2beta~1stable-image~1generate~1ultra/post)**: Photorealistic, Large-Scale Output\\n\\nOur state of the art text to image model based on Stable Diffusion 3.5. Stable Image Ultra produces the highest quality, photorealistic outputs perfect for professional print media and large format applications. Stable Image Ultra excels at rendering exceptional detail and realism.\\n\\n**[Stable Image Core](/docs/api-reference#tag/Generate/paths/~1v2beta~1stable-image~1generate~1core/post)**: Fast and Affordable\\n\\nOptimized for fast and affordable image generation, great for rapidly iterating on concepts during ideation. Stable Image Core is the next generation model following Stable Diffusion XL.\\n\\n**[Stable Diffusion 3 & 3.5 Model Suite](/docs/api-reference#tag/Generate/paths/~1v2beta~1stable-image~1generate~1sd3/post)**: Stability AI's latest base models\\n\\nThe different versions of our open models are available via API, letting you test and adjust speed and quality based on your use case. All model versions strike a balance between generation speed and output quality and are ideal for creating high-volume, high-quality digital assets like websites, newsletters, and marketing materials.\"\n    },\n    {\n      \"name\": \"Control\",\n      \"description\": \"Tools for generating precise, controlled variations of existing images or sketches.\\n\\n**[Sketch](/docs/api-reference#tag/Control/paths/~1v2beta~1stable-image~1control~1sketch/post)**\\n\\nThis service upgrades sketches to refined outputs with precise control. For non-sketch images, it allows detailed manipulation of the final appearance by leveraging the contour lines and edges within the image. 
\\n\\n**[Structure](/docs/api-reference#tag/Control/paths/~1v2beta~1stable-image~1control~1structure/post)**\\n\\nThis service excels in generating images by maintaining the structure of an input image, making it especially valuable for advanced content creation scenarios such as recreating scenes or rendering characters from models.\\n\\n**[Style](/docs/api-reference#tag/Control/paths/~1v2beta~1stable-image~1control~1style/post)**\\n\\nThis service extracts stylistic elements from an input image (control image) and uses it to guide the creation of an output image based on the prompt. The result is a new image in the same style as the control image.\"\n    },\n    {\n      \"name\": \"Results\",\n      \"description\": \"Tools for fetching the results of your async generations.\"\n    },\n    {\n      \"name\": \"User\",\n      \"description\": \"Manage your Stability account, and view account/organization balances.\"\n    },\n    {\n      \"name\": \"Engines\",\n      \"description\": \"Enumerate engines that work with 'Version 1' REST API endpoints.\"\n    },\n    {\n      \"name\": \"SDXL 1.0 & SD1.6\",\n      \"description\": \"Generate images using SDXL 1.0 or SD1.6.\"\n    }\n  ],\n  \"paths\": {\n    \"/v2alpha/generation/image-to-video\": {\n      \"post\": {\n        \"tags\": [\"v2alpha/generation\"],\n        \"summary\": \"image-to-video\",\n        \"description\": \"Generate a short video based on an initial image with [Stable Video Diffusion](https://static1.squarespace.com/static/6213c340453c3f502425776e/t/655ce779b9d47d342a93c890/1700587395994/stable_video_diffusion.pdf),\\na latent video diffusion model. \\n\\n\\n\\n### How to generate a video\\nVideo generations are asynchronous, so after starting a generation use the `id` returned in the response to poll [/v2alpha/generation/image-to-video/result/{id}](#tag/v2alphageneration/paths/~1v2alpha~1generation~1image-to-video~1result~1%7Bid%7D/get) for results.\\n\\n### Price\\nFlat rate of 20 cents per generation.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2alpha/generation/image-to-video\\\",\\n    headers={\\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\"},\\n    files={\\\"image\\\": open(\\\"./kittens-in-space.png\\\", \\\"rb\\\")},\\n    data={\\n        \\\"seed\\\": 0,\\n        \\\"cfg_scale\\\": 1.8,\\n        \\\"motion_bucket_id\\\": 127\\n    },\\n)\\n\\nprint(\\\"Generation ID:\\\", response.json().get('id'))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst data = new FormData();\\ndata.append(\\\"image\\\", fs.readFileSync(\\\"./image.png\\\"), \\\"image.png\\\");\\ndata.append(\\\"seed\\\", 0);\\ndata.append(\\\"cfg_scale\\\", 1.8);\\ndata.append(\\\"motion_bucket_id\\\", 127);\\n\\nconst response = await axios.request({\\n  url: `https://api.stability.ai/v2alpha/generation/image-to-video`,\\n  method: \\\"post\\\",\\n  validateStatus: undefined,\\n  headers: {\\n    authorization: `Bearer sk-MYAPIKEY`,\\n    ...data.getHeaders(),\\n  },\\n  data: data,\\n});\\n\\nconsole.log(\\\"Generation ID:\\\", response.data.id);\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n           
 \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2alpha/generation/image-to-video\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -F image=@\\\"./image.png\\\" \\\\\\n  -F seed=0 \\\\\\n  -F cfg_scale=1.8 \\\\\\n  -F motion_bucket_id=127 \\\\\\n  -o \\\"./output.json\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"$ref\": \"#/components/schemas/ImageToVideoRequest\"\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Video generation started. Poll for results using the `id` in the response [here](#tag/v2alphageneration/paths/~1v2alpha~1generation~1image-to-video~1result~1%7Bid%7D/get).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"$ref\": \"#/components/schemas/GenerationID\"\n                    }\n                  },\n                  \"required\": [\"id\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2alpha/generation/image-to-video/result/{id}\": {\n      \"get\": {\n        \"tags\": [\"v2alpha/generation\"],\n        \"summary\": \"image-to-video/result\",\n        \"description\": \"Fetch the result of an 
image-to-video generation by ID. Make sure you use the same API key that you used to\\ngenerate the video, otherwise you will receive a `404` response.\\n\\n### How is progress reported?\\nYour generation is either `in-progress` (i.e. status code `202`) or it is complete (i.e. status code `200`). \\nWe may add more fine-grained progress reporting in the future (e.g. a numerical progress).\\n\\n### How long are results stored?\\nResults are stored for 24 hours after generation. After that, the results are deleted and you will need to \\nre-generate your video.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\ngeneration_id = \\\"e52772ac75b...\\\"\\n\\nresponse = requests.request(\\n    \\\"GET\\\",\\n    f\\\"https://api.stability.ai/v2alpha/generation/image-to-video/result/{generation_id}\\\",\\n    headers={\\n        'Accept': \\\"video/*\\\",  # Use 'application/json' to receive base64 encoded JSON\\n        'authorization': f\\\"Bearer sk-MYAPIKEY\\\"\\n    },\\n)\\n\\nif response.status_code == 202:\\n    print(\\\"Generation in-progress, try again in 10 seconds.\\\")\\nelif response.status_code == 200:\\n    print(\\\"Generation complete!\\\")\\n    with open(\\\"video.mp4\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import axios from \\\"axios\\\";\\nimport fs from \\\"node:fs\\\";\\n\\nconst generationID = \\\"e52772ac75b...\\\";\\n\\nconst response = await axios.request({\\n  url: `https://api.stability.ai/v2alpha/generation/image-to-video/result/${generationID}`,\\n  method: \\\"GET\\\",\\n  validateStatus: undefined,\\n  responseType: \\\"arraybuffer\\\",\\n  headers: {\\n    accept: \\\"video/*\\\", // Use 'application/json' to receive base64 encoded JSON\\n    authorization: `Bearer sk-MYAPIKEY`,\\n  },\\n});\\n\\nif (response.status === 202) {\\n  console.log(\\\"Generation is still running, try again in 10 seconds.\\\");\\n} else if (response.status === 200) {\\n  console.log(\\\"Generation is complete!\\\");\\n  fs.writeFileSync(\\\"video.mp4\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`Response ${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"generation_id=\\\"e52772ac75b...\\\"\\nurl=\\\"https://api.stability.ai/v2alpha/generation/image-to-video/result/$generation_id\\\"\\nhttp_status=$(curl -sS -f -o \\\"./output.mp4\\\" -w '%{http_code}' -H \\\"authorization: sk-MYAPIKEY\\\" -H 'accept: video/*' \\\"$url\\\")\\n\\ncase $http_status in\\n    202)\\n        echo \\\"Still processing. 
Retrying in 10 seconds...\\\"\\n        ;;\\n    200)\\n        echo \\\"Download complete!\\\"\\n        ;;\\n    4*|5*)\\n        mv \\\"./output.mp4\\\" \\\"./error.json\\\"\\n        echo \\\"Error: Check ./error.json for details.\\\"\\n        exit 1\\n        ;;\\nesac\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/GenerationID\"\n            },\n            \"required\": true,\n            \"name\": \"id\",\n            \"in\": \"path\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"video/*\",\n              \"description\": \"Specify `video/*` to get the video bytes directly. Otherwise specify `application/json` to receive the video as base64 encoded JSON.\",\n              \"enum\": [\"video/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          }\n        ],\n        \"responses\": {\n          \"200\": {\n            \"description\": \"The result of your video generation.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated video.\\n\\n To receive the bytes of the video directly, specify `video/*` in the accept header. To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"mp4\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"video/mp4\"\n                  },\n                  \"mp4JSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=video/mp4\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. 
\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and one or more frames have been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"video/mp4\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated video.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated mp4.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=video/mp4\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"video\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated video, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and one or more frames have been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"video\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"202\": {\n            \"description\": \"Your image-to-video generation is still in-progress.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"$ref\": \"#/components/schemas/GenerationID\"\n                    },\n                    \"status\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"in-progress\"],\n                      \"description\": \"The status of your generation.\"\n                    }\n                  },\n                  \"required\": [\"id\", \"status\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for 
details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"404\": {\n            \"description\": \"id: the generation either does not exist or has expired.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2bca35116bc5431d6dc4b4ea2ef3da2f\",\n                    \"name\": \"generation_not_found\",\n                    \"errors\": [\n                      \"id: the generation either does not exist or has expired.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2alpha/generation/stable-image/upscale\": {\n      \"post\": {\n        \"tags\": [\"v2alpha/generation\"],\n        \"summary\": \"stable-image/upscale\",\n        \"description\": \"Takes images between 64x64 and 1 megapixel and upscales them all the way to **4K** resolution.  
Put more \\ngenerally, it can upscale images ~20-40x while preserving, and often enhancing, quality.\\n\\n### How to use\\n  - Invoke this endpoint with the required parameters to start a generation\\n  - Use that `id` in the response to poll for results at the [upscale/result/{id}](#tag/v2alphageneration/paths/~1v2alpha~1generation~1stable-image~1upscale~1result~1%7Bid%7D/get) endpoint\\n    - Rate-limiting or other errors may occur if you poll more than once every 10 seconds\\n    \\n### Price\\nFlat rate of 25 cents per generation.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2alpha/generation/stable-image/upscale\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\"\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./kitten-in-space.png\\\", \\\"rb\\\")\\n    },\\n    data={\\n        \\\"prompt\\\": \\\"cute fluffy white kitten floating in space, pastel colors\\\",\\n        \\\"output_format\\\": \\\"webp\\\",\\n    },\\n)\\n\\nprint(\\\"Generation ID:\\\", response.json().get('id'))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst formData = {\\n  image: fs.createReadStream(\\\"./kitten-in-space.png\\\"),\\n  prompt: \\\"cute fluffy white kitten floating in space, pastel colors\\\",\\n  output_format: \\\"webp\\\"\\n};\\n\\nconst response = await axios.postForm(\\n  `https://api.stability.ai/v2alpha/generation/stable-image/upscale`,\\n  axios.toFormData(formData, new FormData()),\\n  {\\n    validateStatus: undefined,\\n    headers: { Authorization: `Bearer sk-MYAPIKEY` },\\n  },\\n);\\n\\nconsole.log(\\\"Generation ID:\\\", response.data.id);\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2alpha/generation/stable-image/upscale\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -F image=@\\\"./kitten-in-rainforest.png\\\" \\\\\\n  -F prompt=\\\"cute fluffy white kitten sitting in a rainforest, pastel colors\\\" \\\\\\n  -F output_format=webp \\\\\\n  -o \\\"./output.json\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. 
Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"The image you wish to upscale.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 1,048,576 pixels\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"prompt\": {\n                    \"type\": \"string\",\n                    \"minLength\": 1,\n                    \"maxLength\": 10000,\n                    \"description\": \"What you wish to see in the output image. A strong, descriptive prompt that clearly defines \\nelements, colors, and subjects will lead to better results. \\n\\nTo control the weight of a given word use the format `(word:weight)`, \\nwhere `word` is the word you'd like to control the weight of and `weight` \\nis a value between 0 and 1. For example: `The sky was a crisp (blue:0.3) and (green:0.8)`\\nwould convey a sky that was blue and green, but more green than blue.\"\n                  },\n                  \"negative_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"A blurb of text describing what you **do not** wish to see in the output image.  \\nThis is an advanced feature.\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"jpeg\", \"png\", \"webp\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  },\n                  \"seed\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 4294967294,\n                    \"default\": 0,\n                    \"description\": \"A specific value that is used to guide the 'randomness' of the generation. 
(Omit this parameter or pass `0` to use a random seed.)\"\n                  },\n                  \"creativity\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 0.35,\n                    \"default\": 0.3,\n                    \"description\": \"Indicates how creative the model should be when upscaling an image.\\nHigher values will result in more details being added to the image during upscaling.\"\n                  }\n                },\n                \"required\": [\"image\", \"prompt\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Upscaling was successful!\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"$ref\": \"#/components/schemas/GenerationID\"\n                    }\n                  },\n                  \"required\": [\"id\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. 
If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2alpha/generation/stable-image/upscale/result/{id}\": {\n      \"get\": {\n        \"tags\": [\"v2alpha/generation\"],\n        \"summary\": \"stable-image/upscale/result\",\n        \"description\": \"Fetch the result of an upscale generation by ID. Make sure to use the same API key to fetch the generation result\\nthat you used to create the generation, otherwise you will receive a `404` response.\\n\\n### How is progress reported?\\nYour generation is either `in-progress` (i.e. status code `202`) or it is complete (i.e. status code `200`). \\nWe may add more fine-grained progress reporting in the future (e.g. a numerical progress).\\n\\n### How long are results stored?\\nResults are stored for 24 hours after generation. 
After that, the results are deleted.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\ngeneration_id = \\\"e52772ac75b...\\\"\\n\\nresponse = requests.request(\\n    \\\"GET\\\",\\n    f\\\"https://api.stability.ai/v2alpha/generation/stable-image/upscale/result/{generation_id}\\\",\\n    headers={\\n        'Accept': \\\"image/*\\\",  # Use 'application/json' to receive base64 encoded JSON\\n        'authorization': f\\\"Bearer sk-MYAPIKEY\\\"\\n    },\\n)\\n\\nif response.status_code == 202:\\n    print(\\\"Generation in-progress, try again in 10 seconds.\\\")\\nelif response.status_code == 200:\\n    print(\\\"Generation complete!\\\")\\n    with open(\\\"upscaled.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import axios from \\\"axios\\\";\\nimport fs from \\\"node:fs\\\";\\n\\nconst generationID = \\\"e52772ac75b...\\\";\\n\\nconst response = await axios.request({\\n  url: `https://api.stability.ai/v2alpha/generation/stable-image/upscale/result/${generationID}`,\\n  method: \\\"GET\\\",\\n  validateStatus: undefined,\\n  responseType: \\\"arraybuffer\\\",\\n  headers: {\\n    accept: \\\"image/*\\\", // Use 'application/json' to receive base64 encoded JSON\\n    authorization: `Bearer sk-MYAPIKEY`,\\n  },\\n});\\n\\nif (response.status === 202) {\\n  console.log(\\\"Generation is still running, try again in 10 seconds.\\\");\\n} else if (response.status === 200) {\\n  console.log(\\\"Generation is complete!\\\");\\n  fs.writeFileSync(\\\"upscaled.webp\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`Response ${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"generation_id=\\\"e52772ac75b...\\\"\\nurl=\\\"https://api.stability.ai/v2alpha/generation/stable-image/upscale/result/$generation_id\\\"\\nhttp_status=$(curl -sS -f -o \\\"./upscaled.webp\\\" -w '%{http_code}' -H \\\"authorization: sk-MYAPIKEY\\\" -H 'accept: image/*' \\\"$url\\\")\\n\\ncase $http_status in\\n    202)\\n        echo \\\"Still processing. Retrying in 10 seconds...\\\"\\n        ;;\\n    200)\\n        echo \\\"Download complete!\\\"\\n        ;;\\n    4*|5*)\\n        mv \\\"./upscaled.webp\\\" \\\"./error.json\\\"\\n        echo \\\"Error: Check ./error.json for details.\\\"\\n        exit 1\\n        ;;\\nesac\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/GenerationID\"\n            },\n            \"required\": true,\n            \"name\": \"id\",\n            \"in\": \"path\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. 
Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to get the image bytes directly. Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          }\n        ],\n        \"responses\": {\n          \"200\": {\n            \"description\": \"The upscale was successful.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  },\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. 
\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                     
 \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"202\": {\n            \"description\": \"Your upscale generation is still in-progress.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"$ref\": \"#/components/schemas/GenerationID\"\n                    },\n                    \"status\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"in-progress\"],\n                      \"description\": \"The status of your generation.\"\n                    }\n                  },\n                  \"required\": [\"id\", \"status\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n      
      \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"404\": {\n            \"description\": \"id: the generation either does not exist or has expired.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2bca35116bc5431d6dc4b4ea2ef3da2f\",\n                    \"name\": \"generation_not_found\",\n                    \"errors\": [\n                      \"id: the generation either does not exist or has expired.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2alpha/generation/stable-image/inpaint\": {\n      \"post\": {\n        \"tags\": [\"v2alpha/generation\"],\n        \"summary\": \"stable-image/inpaint\",\n        \"description\": \"Inpaint an existing image, with or without a mask, using our latest-and-greatest inpainting model.\\n\\n### Search-and-Replace Mode\\nThis mode is ideal for individuals of all levels of skill in design. It can be used for straightforward \\nadjustments to images. The service will automatically mask the most appropriate object based on the contents\\nof the `search_prompt`, and replace it with a generated result based on the `prompt`.\\n\\n**How to use:** set the `mode` parameter to `search` and provide a short description of what to \\nsearch-and-replace in the `search_prompt` parameter.\\n\\n### Mask Mode\\nThis mode allows for precise control of generative fill tasks on an image, down to the level of \\nindividual pixels. Design professionals can provide a `mask` for the section of the image to be replaced, \\nand use standard image prompting to describe the full image as it should appear after the editing. \\nThe resulting image will incorporate all of the elements described in the `prompt`.\\n\\n**How to use:** set the `mode` parameter to `mask` and either pass in an `image` with an alpha channel \\nor provide an explicit mask image to the `mask` parameter. 
If both are present the `mask` parameter will\\ntake precedence.\\n\\n### Price\\n- Requests with `mode` set to `search` cost 4 cents.\\n- Requests with `mode` set to `mask` cost 3 cents.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2alpha/generation/stable-image/inpaint\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\"\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./husky-in-a-field.png\\\", \\\"rb\\\")\\n    },\\n    data={\\n        \\\"prompt\\\": \\\"golden retriever in a field\\\",\\n        \\\"mode\\\": \\\"search\\\",\\n        \\\"search_prompt\\\": \\\"dog\\\",\\n        \\\"output_format\\\": \\\"webp\\\",\\n    },\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./golden-retriever-in-a-field.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst formData = {\\n  image: fs.createReadStream(\\\"./husky-in-a-field.png\\\"),\\n  prompt: \\\"golden retriever standing in a field\\\",\\n  mode: \\\"search\\\",\\n  search_prompt: \\\"dog\\\",\\n  output_format: \\\"webp\\\"\\n};\\n\\nconst response = await axios.postForm(\\n  `https://api.stability.ai/v2alpha/generation/stable-image/inpaint`,\\n  axios.toFormData(formData, new FormData()),\\n  {\\n    validateStatus: undefined,\\n    responseType: \\\"arraybuffer\\\",\\n    headers: { Authorization: `Bearer sk-MYAPIKEY`, accept: \\\"image/*\\\" },\\n  },\\n);\\n\\nif(response.status === 200) {\\n  fs.writeFileSync(\\\"./golden-retriever-in-a-field.webp\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2alpha/generation/stable-image/inpaint\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -H \\\"accept: image/*\\\" \\\\\\n  -F image=@\\\"./husky-in-a-field.png\\\" \\\\\\n  -F prompt=\\\"golden retriever in a field\\\" \\\\\\n  -F mode=\\\"search\\\" \\\\\\n  -F search_prompt=\\\"dog\\\" \\\\\\n  -F output_format=\\\"webp\\\" \\\\\\n  -o \\\"./golden-retriever-in-a-field.webp\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. 
Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to get the image bytes directly. Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"oneOf\": [\n                  {\n                    \"$ref\": \"#/components/schemas/InpaintingSearchModeRequestBody\"\n                  },\n                  {\n                    \"$ref\": \"#/components/schemas/InpaintingMaskingModeRequestBody\"\n                  }\n                ],\n                \"discriminator\": {\n                  \"propertyName\": \"mode\",\n                  \"mapping\": {\n                    \"search\": \"#/components/schemas/InpaintingSearchModeRequestBody\",\n                    \"mask\": \"#/components/schemas/InpaintingMaskingModeRequestBody\"\n                  }\n                }\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Inpainting was successful.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. 
To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  },\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. \\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a 
result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- 
`CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/image-to-video\": {\n      \"post\": {\n        \"tags\": [\"Image-to-Video\"],\n        \"summary\": \"Start generation\",\n        \"description\": \"Generate a short video based on an initial image with [Stable Video Diffusion](https://static1.squarespace.com/static/6213c340453c3f502425776e/t/655ce779b9d47d342a93c890/1700587395994/stable_video_diffusion.pdf),\\na latent video diffusion model. \\n\\n\\n\\n### How to use\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. The body of the request must be\\n`multipart/form-data`.\\n\\nThe body of the request should include:\\n- `image`\\n\\nThe body may optionally include:\\n- `seed`\\n- `cfg_scale`\\n- `motion_bucket_id`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\nAfter invoking this endpoint with the required parameters, use the `id` in the response to poll for results at the\\n[image-to-video/result/{id}](#tag/Image-to-Video/paths/~1v2beta~1image-to-video~1result~1%7Bid%7D/get) endpoint.  Rate-limiting or other errors may occur if you poll more than once every 10 seconds.\\n\\n### Credits\\nFlat rate of 20 credits per successful generation.  
You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/image-to-video\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\"\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./kittens-in-space.png\\\", \\\"rb\\\")\\n    },\\n    data={\\n        \\\"seed\\\": 0,\\n        \\\"cfg_scale\\\": 1.8,\\n        \\\"motion_bucket_id\\\": 127\\n    },\\n)\\n\\nprint(\\\"Generation ID:\\\", response.json().get('id'))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst data = new FormData();\\ndata.append(\\\"image\\\", fs.readFileSync(\\\"./image.png\\\"), \\\"image.png\\\");\\ndata.append(\\\"seed\\\", 0);\\ndata.append(\\\"cfg_scale\\\", 1.8);\\ndata.append(\\\"motion_bucket_id\\\", 127);\\n\\nconst response = await axios.request({\\n  url: `https://api.stability.ai/v2beta/image-to-video`,\\n  method: \\\"post\\\",\\n  validateStatus: undefined,\\n  headers: {\\n    authorization: `Bearer sk-MYAPIKEY`,\\n    ...data.getHeaders(),\\n  },\\n  data: data,\\n});\\n\\nconsole.log(\\\"Generation ID:\\\", response.data.id);\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/image-to-video\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -F image=@\\\"./image.png\\\" \\\\\\n  -F seed=0 \\\\\\n  -F cfg_scale=1.8 \\\\\\n  -F motion_bucket_id=127 \\\\\\n  -o \\\"./output.json\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. 
Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"$ref\": \"#/components/schemas/ImageToVideoRequest\"\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Video generation started. Poll for results using the `id` in the response [here](#tag/Image-to-Video/paths/~1v2beta~1image-to-video~1result~1%7Bid%7D/get).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"$ref\": \"#/components/schemas/GenerationID\"\n                    }\n                  },\n                  \"required\": [\"id\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"422\": {\n            \"description\": \"Your request was well-formed, but rejected. See the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                },\n                \"examples\": {\n                  \"Invalid Language\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"invalid_language\",\n                      \"errors\": [\n                        \"English is the only supported language for this service.\"\n                      ]\n                    }\n                  },\n                  \"Public Figure Detected\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"public_figure\",\n                      \"errors\": [\n                        \"Our system detected the likeness of a public figure in your image. To comply with our guidelines, this request cannot be processed. Please upload a different image.\"\n                      ]\n                    }\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/image-to-video/result/{id}\": {\n      \"get\": {\n        \"tags\": [\"Image-to-Video\"],\n        \"summary\": \"Fetch generation result\",\n        \"description\": \"Fetch the result of an image-to-video generation by ID.\\n\\nMake sure to use the same API key to fetch the generation result that you used to create the generation, \\notherwise you will receive a `404` response.\\n\\n### How to use\\nPlease invoke this endpoint with a `GET` request.\\n\\nThe headers of the request must include an API key in the `authorization` field and the ID\\nof your generation must be in the path.\\n\\n### How is progress reported?\\nYour generation is either `in-progress` (i.e. status code `202`) or it is complete (i.e. status code `200`). \\nWe may add more fine-grained progress reporting in the future (e.g. a numerical progress).\\n\\n### How long are results stored?\\nResults are stored for 24 hours after generation. 
After that, the results are deleted and you will need to \\nre-generate your video.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\ngeneration_id = \\\"e52772ac75b...\\\"\\n\\nresponse = requests.request(\\n    \\\"GET\\\",\\n    f\\\"https://api.stability.ai/v2beta/image-to-video/result/{generation_id}\\\",\\n    headers={\\n        'accept': \\\"video/*\\\",  # Use 'application/json' to receive base64 encoded JSON\\n        'authorization': f\\\"Bearer sk-MYAPIKEY\\\"\\n    },\\n)\\n\\nif response.status_code == 202:\\n    print(\\\"Generation in-progress, try again in 10 seconds.\\\")\\nelif response.status_code == 200:\\n    print(\\\"Generation complete!\\\")\\n    with open(\\\"video.mp4\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import axios from \\\"axios\\\";\\nimport fs from \\\"node:fs\\\";\\n\\nconst generationID = \\\"e52772ac75b...\\\";\\n\\nconst response = await axios.request({\\n  url: `https://api.stability.ai/v2beta/image-to-video/result/${generationID}`,\\n  method: \\\"GET\\\",\\n  validateStatus: undefined,\\n  responseType: \\\"arraybuffer\\\",\\n  headers: {\\n    Authorization: `Bearer sk-MYAPIKEY`,\\n    Accept: \\\"video/*\\\", // Use 'application/json' to receive base64 encoded JSON\\n  },\\n});\\n\\nif (response.status === 202) {\\n  console.log(\\\"Generation is still running, try again in 10 seconds.\\\");\\n} else if (response.status === 200) {\\n  console.log(\\\"Generation is complete!\\\");\\n  fs.writeFileSync(\\\"video.mp4\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`Response ${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"generation_id=\\\"e52772ac75b...\\\"\\nurl=\\\"https://api.stability.ai/v2beta/image-to-video/result/$generation_id\\\"\\nhttp_status=$(curl -sS -f -o \\\"./output.mp4\\\" -w '%{http_code}' -H \\\"authorization: sk-MYAPIKEY\\\" -H 'accept: video/*' \\\"$url\\\")\\n\\ncase $http_status in\\n    202)\\n        echo \\\"Still processing. Retrying in 10 seconds...\\\"\\n        ;;\\n    200)\\n        echo \\\"Download complete!\\\"\\n        ;;\\n    4*|5*)\\n        mv \\\"./output.mp4\\\" \\\"./error.json\\\"\\n        echo \\\"Error: Check ./error.json for details.\\\"\\n        exit 1\\n        ;;\\nesac\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/GenerationID\"\n            },\n            \"required\": true,\n            \"name\": \"id\",\n            \"in\": \"path\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. 
Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"video/*\",\n              \"description\": \"Specify `video/*` to receive the bytes of the video directly. Otherwise specify `application/json` to receive the video as base64 encoded JSON.\",\n              \"enum\": [\"video/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"responses\": {\n          \"200\": {\n            \"description\": \"The result of your video generation.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated video.\\n\\n To receive the bytes of the video directly, specify `video/*` in the accept header. To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"mp4\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"video/mp4\"\n                  },\n                  \"mp4JSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=video/mp4\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. 
\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and one or more frames have been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"video/mp4\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated video.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated mp4.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=video/mp4\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"video\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated video, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and one or more frames have been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"video\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"202\": {\n            \"description\": \"Your image-to-video generation is still in-progress.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"$ref\": \"#/components/schemas/GenerationID\"\n                    },\n                    \"status\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"in-progress\"],\n                      \"description\": \"The status of your generation.\"\n                    }\n                  },\n                  \"required\": [\"id\", \"status\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for 
details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"404\": {\n            \"description\": \"id: the generation either does not exist or has expired.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2bca35116bc5431d6dc4b4ea2ef3da2f\",\n                    \"name\": \"generation_not_found\",\n                    \"errors\": [\n                      \"id: the generation either does not exist or has expired.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/3d/stable-fast-3d\": {\n      \"post\": {\n        \"tags\": [\"3D\"],\n        \"summary\": \"Stable Fast 3D\",\n        \"description\": \"Stable Fast 3D generates high-quality 3D assets from a single 2D input image.\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_3D_API.ipynb)\\n\\n### How to use\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. The body of the request must be\\n`multipart/form-data`.\\n\\nThe body of the request should include:\\n- `image`\\n\\nThe body may optionally include:\\n- `texture_resolution`\\n- `foreground_ratio`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Output\\nThe output is a binary blob that includes a glTF asset, including JSON, buffers, and images. \\nSee the [GLB File Format Specification](https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#glb-file-format-specification) for more details.\\n\\n### Credits\\nFlat rate of 2 credits per successful generation. 
You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/3d/stable-fast-3d\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./cat-statue.png\\\", \\\"rb\\\")\\n    },\\n    data={},\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./3d-cat-statue.glb\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\nimport fs from \\\"node:fs\\\";\\n\\nconst payload = {\\n    image: fs.createReadStream(\\\"./cat-statue.png\\\"),\\n};\\n\\nconst response = await axios.postForm(\\n    `https://api.stability.ai/v2beta/3d/stable-fast-3d`,\\n    axios.toFormData(payload, new FormData()),\\n    {\\n        validateStatus: undefined,\\n        responseType: \\\"arraybuffer\\\",\\n        headers: {\\n            Authorization: `Bearer sk-MYAPIKEY`,\\n        },\\n    },\\n);\\n\\nif (response.status === 200) {\\n    fs.writeFileSync(\\\"./3d-cat-statue.glb\\\", Buffer.from(response.data));\\n} else {\\n    throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/3d/stable-fast-3d\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -F image=@\\\"./cat-statue.png\\\" \\\\\\n  -o \\\"./3d-cat-statue.glb\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. 
Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"The image to generate a 3D model from.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 4,194,304 pixels\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"texture_resolution\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"512\", \"1024\", \"2048\"],\n                    \"default\": \"1024\",\n                    \"description\": \"Determines the resolution of the textures used for both the albedo (color) map\\nand the normal map. The resolution is specified in pixels, and a higher value\\ncorresponds to a higher level of detail in the textures, allowing for more\\nintricate and precise rendering of surfaces. However, increasing the resolution\\nalso results in larger asset sizes, which may impact loading times and\\nperformance. 1024 is a good default value and rarely requires changing.\"\n                  },\n                  \"foreground_ratio\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0.1,\n                    \"maximum\": 1,\n                    \"default\": 0.85,\n                    \"description\": \"Controls the amount of padding around the object to be processed within the frame.\\nThis ratio determines the relative size of the object compared to the total frame\\nsize. A higher ratio means less padding and a larger object, while a lower ratio\\nincreases the padding, effectively reducing the object’s size within the frame. This\\ncan be useful when a long and narrow object, such as a car or bus, is viewed from the\\nfront (the narrow side). Here, lowering the foreground ratio might help prevent the\\ngenerated 3D assets from appearing squished or distorted. 
The default value of 0.85 \\nis good for most objects.\"\n                  },\n                  \"remesh\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"none\", \"triangle\", \"quad\"],\n                    \"default\": \"none\",\n                    \"description\": \"Controls the remeshing algorithm used to generate the 3D model. The remeshing\\nalgorithm determines how the 3D model is constructed from the input image. The\\ndefault value of \\\"none\\\" means that the model is generated without remeshing,\\nwhich is suitable for most use cases. The \\\"triangle\\\" option generates a model\\nwith triangular faces, while the \\\"quad\\\" option generates a model with quadrilateral\\nfaces. The \\\"quad\\\" option is useful when the 3D model will be used in DCC tools such\\nas Maya or Blender.\"\n                  },\n                  \"vertex_count\": {\n                    \"type\": \"number\",\n                    \"minimum\": -1,\n                    \"maximum\": 20000,\n                    \"default\": -1,\n                    \"description\": \"If specified, the result will have approximately this many vertices (and consequently fewer faces) in the simplified mesh. \\n\\nSetting this value to -1 (the default value) means that a limit is not set.\"\n                  }\n                },\n                \"required\": [\"image\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Generation was successful.\",\n            \"headers\": {\n              \"content-type\": {\n                \"description\": \"The format of the 3D model.\",\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"example\": \"model/gltf-binary\"\n                }\n              }\n            },\n            \"content\": {\n              \"model/gltf-binary\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated 3D model.\",\n                  \"format\": \"binary\"\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/3d/stable-point-aware-3d\": {\n      \"post\": {\n        \"tags\": [\"3D\"],\n        \"summary\": \"Stable Point Aware 3D\",\n        \"description\": \"Stable Point Aware 3D (SPAR3D) can make real-time edits and create the complete structure \\nof a 3D object from a single image in a few seconds. SPAR3D combines the strengths of \\npoint-cloud diffusion (probabilistic) and mesh regression (deterministic) to have improved \\ndetails on the unseen back regions in the input image. \\n\\nCompared to our previous model [Stable Fast 3D](#tag/3D/paths/~1v2beta~13d~1stable-fast-3d/post), this new \\none allows editing of backside information using the point cloud representation and also \\nleverages a larger Diffusion model to generally improve the depth and backside \\npredictions.\\n\\nRead more about the model capabilities [here](https://bit.ly/4h7cpgF). \\n\\nThis API is currently in \\npreview. Please don’t hesitate to [contact us](https://stability.ai/contact) with any questions. \\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_3D_API.ipynb)\\n\\n### How to use\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. The body of the request must be\\n`multipart/form-data`.\\n\\nThe body of the request should include:\\n- `image`\\n\\nThe body may optionally include:\\n- `texture_resolution`\\n- `foreground_ratio`\\n- `remesh`\\n- `target_type`\\n- `target_count`\\n- `guidance_scale`\\n- `seed`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Output\\nThe output is a binary blob that includes a glTF asset, including JSON, buffers, and images. 
\\nSee the [GLB File Format Specification](https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#glb-file-format-specification) for more details.\\n\\n### Credits\\nFlat rate of 4 credits per successful generation. You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/3d/stable-point-aware-3d\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./cat-statue.png\\\", \\\"rb\\\")\\n    },\\n    data={},\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./3d-cat-statue.glb\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\nimport fs from \\\"node:fs\\\";\\n\\nconst payload = {\\n    image: fs.createReadStream(\\\"./cat-statue.png\\\"),\\n};\\n\\nconst response = await axios.postForm(\\n    `https://api.stability.ai/v2beta/3d/stable-point-aware-3d`,\\n    axios.toFormData(payload, new FormData()),\\n    {\\n        validateStatus: undefined,\\n        responseType: \\\"arraybuffer\\\",\\n        headers: {\\n            Authorization: `Bearer sk-MYAPIKEY`,\\n        },\\n    },\\n);\\n\\nif (response.status === 200) {\\n    fs.writeFileSync(\\\"./3d-cat-statue.glb\\\", Buffer.from(response.data));\\n} else {\\n    throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/3d/stable-point-aware-3d\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -F image=@\\\"./cat-statue.png\\\" \\\\\\n  -o \\\"./3d-cat-statue.glb\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. 
Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"The image to generate a 3D model from.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 4,194,304 pixels\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"texture_resolution\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"512\", \"1024\", \"2048\"],\n                    \"default\": \"1024\",\n                    \"description\": \"Determines the resolution of the textures used for both the albedo (color) map and the \\nnormal map. The resolution is specified in pixels, and a higher value corresponds to a \\nhigher level of detail in the textures, allowing for more intricate and precise rendering \\nof surfaces. However, increasing the resolution also results in larger asset sizes, which \\nmay impact loading times and performance. `1024` is a good default value and rarely requires \\nchanging.\"\n                  },\n                  \"foreground_ratio\": {\n                    \"type\": \"number\",\n                    \"minimum\": 1,\n                    \"maximum\": 2,\n                    \"default\": 1.3,\n                    \"description\": \"Controls the amount of padding around the object to be processed within the frame. This \\nratio determines the relative size of the object compared to the total frame size. A \\nhigher ratio means less padding and a larger object, while a lower ratio increases the \\npadding, effectively reducing the object’s size within the frame. This can be useful when \\na long and narrow object, such as a car or bus, is viewed from the front (the narrow \\nside). Here, lowering the foreground ratio might help prevent the generated 3D assets from \\nappearing squished or distorted. 
The default value of `1.3` is good for most objects.\"\n                  },\n                  \"remesh\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"none\", \"triangle\", \"quad\"],\n                    \"default\": \"none\",\n                    \"description\": \"Controls the remeshing algorithm used to generate the 3D model. The remeshing algorithm \\ndetermines how the 3D model is constructed from the input image. The default value of \\n\\\"none\\\" means that the model is generated without remeshing, which is suitable for most use \\ncases. The \\\"triangle\\\" option generates a model with triangular faces, while the \\\"quad\\\" \\noption generates a model with quadrilateral faces. The \\\"quad\\\" option is useful when the 3D \\nmodel will be used in DCC tools such as Maya or Blender.\"\n                  },\n                  \"target_type\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"none\", \"vertex\", \"face\"],\n                    \"default\": \"none\",\n                    \"description\": \"If set to `vertex` or `face`, the result will have approximately `target_count` vertices or \\nfaces in the simplified mesh, respectively.\"\n                  },\n                  \"target_count\": {\n                    \"type\": \"number\",\n                    \"minimum\": 100,\n                    \"maximum\": 20000,\n                    \"default\": 1000,\n                    \"description\": \"This sets the target vertex or face count defined by `target_type`. Selecting extremely low \\ncounts severely reduces the quality of the mesh; values of 1,000 - 10,000 are recommended.\"\n                  },\n                  \"guidance_scale\": {\n                    \"type\": \"number\",\n                    \"minimum\": 1,\n                    \"maximum\": 10,\n                    \"default\": 3,\n                    \"description\": \"This sets the guidance scaling of the point diffusion module. Lower values produce less \\ndetail and higher values can introduce artifacts. The default of `3` produces the best results.\"\n                  },\n                  \"seed\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 4294967294,\n                    \"default\": 0,\n                    \"description\": \"A specific value that is used to guide the 'randomness' of the generation. 
(Omit this parameter or pass `0` to use a random seed.)\"\n                  }\n                },\n                \"required\": [\"image\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Generation was successful.\",\n            \"headers\": {\n              \"content-type\": {\n                \"description\": \"The format of the 3D model.\",\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"example\": \"model/gltf-binary\"\n                }\n              }\n            },\n            \"content\": {\n              \"model/gltf-binary\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated 3D model.\",\n                  \"format\": \"binary\"\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/results/{id}\": {\n      \"get\": {\n        \"tags\": [\"Results\"],\n        \"summary\": \"Fetch async generation result\",\n        \"description\": \"Fetch the result of a generation by ID. \\n\\nMake sure to use the same API key to fetch the generation result that you used to create the generation, \\notherwise you will receive a `404` response.\\n\\n### How to use\\nPlease invoke this endpoint with a `GET` request.\\n\\nThe headers of the request must include an API key in the `authorization` field and the ID\\nof your generation must be in the path.\\n\\n### How is progress reported?\\nYour generation is either `in-progress` (i.e. status code `202`) or it is complete (i.e. status code `200`). \\nWe may add more fine-grained progress reporting in the future (e.g. a numerical progress).\\n\\n### How long are results stored?\\nResults are stored for 24 hours after generation. 
After that, the results are deleted.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\ngeneration_id = \\\"e52772ac75b...\\\"\\n\\nresponse = requests.request(\\n    \\\"GET\\\",\\n    f\\\"https://api.stability.ai/v2beta/results/{generation_id}\\\",\\n    headers={\\n        'accept': \\\"image/*\\\",  # Use 'application/json' to receive base64 encoded JSON\\n        'authorization': f\\\"Bearer sk-MYAPIKEY\\\"\\n    },\\n)\\n\\nif response.status_code == 202:\\n    print(\\\"Generation in-progress, try again in 10 seconds.\\\")\\nelif response.status_code == 200:\\n    print(\\\"Generation complete!\\\")\\n    with open(\\\"result.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import axios from \\\"axios\\\";\\nimport fs from \\\"node:fs\\\";\\n\\nconst generationID = \\\"e52772ac75b...\\\";\\n\\nconst response = await axios.request({\\n  url: `https://api.stability.ai/v2beta/results/${generationID}`,\\n  method: \\\"GET\\\",\\n  validateStatus: undefined,\\n  responseType: \\\"arraybuffer\\\",\\n  headers: {\\n    Authorization: `Bearer sk-MYAPIKEY`,\\n    Accept: \\\"image/*\\\", // Use 'application/json' to receive base64 encoded JSON\\n  },\\n});\\n\\nif (response.status === 202) {\\n  console.log(\\\"Generation is still running, try again in 10 seconds.\\\");\\n} else if (response.status === 200) {\\n  console.log(\\\"Generation is complete!\\\");\\n  fs.writeFileSync(\\\"result.webp\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`Response ${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"generation_id=\\\"e52772ac75b...\\\"\\nurl=\\\"https://api.stability.ai/v2beta/results/$generation_id\\\"\\nhttp_status=$(curl -sS -f -o \\\"./result.webp\\\" -w '%{http_code}' -H \\\"authorization: sk-MYAPIKEY\\\" -H 'accept: image/*' \\\"$url\\\")\\n\\ncase $http_status in\\n    202)\\n        echo \\\"Still processing. Retrying in 10 seconds...\\\"\\n        ;;\\n    200)\\n        echo \\\"Download complete!\\\"\\n        ;;\\n    4*|5*)\\n        mv \\\"./result.webp\\\" \\\"./error.json\\\"\\n        echo \\\"Error: Check ./error.json for details.\\\"\\n        exit 1\\n        ;;\\nesac\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/GenerationID\"\n            },\n            \"required\": true,\n            \"name\": \"id\",\n            \"in\": \"path\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. 
Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"*/*\",\n              \"description\": \"Specify `*/*` to receive the bytes of the result directly. Otherwise specify `application/json` to receive the result as base64 encoded JSON.\",\n              \"enum\": [\"*/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Generation finished.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. 
To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  },\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. \\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a 
result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- 
`CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"202\": {\n            \"description\": \"Your generation is still in-progress.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"$ref\": \"#/components/schemas/GenerationID\"\n                    },\n                    \"status\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"in-progress\"],\n                      \"description\": \"The status of your generation.\"\n                    }\n                  },\n                  \"required\": [\"id\", \"status\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"404\": {\n            \"description\": \"id: the generation either does not exist or has expired.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2bca35116bc5431d6dc4b4ea2ef3da2f\",\n                    \"name\": \"generation_not_found\",\n                    \"errors\": [\n                      \"id: the generation either does not exist or has expired.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. 
If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/upscale/conservative\": {\n      \"post\": {\n        \"tags\": [\"Upscale\"],\n        \"summary\": \"Conservative\",\n        \"description\": \"Takes images between 64x64 and 1 megapixel and upscales them all the way to 4K resolution. Put more generally, it can upscale images ~20-40x while preserving all aspects. Conservative Upscale minimizes alterations to the image and should not be used to reimagine an image.\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_Image_API_Public.ipynb#scrollTo=t1Q4w2uvvza0)\\n\\n### How to use\\n\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. 
The body of the request must be\\n`multipart/form-data`, and the `accept` header should be set to one of the following:\\n  - `image/*` to receive the image in the format specified by the `output_format` parameter.\\n  - `application/json` to receive the image encoded as base64 in a JSON response.\\n  \\nThe body of the request must include:\\n- `image`\\n- `prompt`\\n\\nOptionally, the body of the request may also include:\\n- `negative_prompt`\\n- `seed`\\n- `output_format`\\n- `creativity`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Output\\nThe resolution of the generated image will be 4 megapixels.\\n\\n### Credits\\nFlat rate of 25 credits per successful generation.  You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/stable-image/upscale/conservative\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n        \\\"accept\\\": \\\"image/*\\\"\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./low-res-flower.jpg\\\", \\\"rb\\\"),\\n    },\\n    data={\\n        \\\"prompt\\\": \\\"a flower\\\",\\n        \\\"output_format\\\": \\\"webp\\\",\\n    },\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./flower.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst payload = {\\n  image: fs.createReadStream(\\\"./low-res-flower.jpg\\\"),\\n  prompt: \\\"a flower\\\",\\n  output_format: \\\"webp\\\"\\n};\\n\\nconst response = await axios.postForm(\\n  `https://api.stability.ai/v2beta/stable-image/upscale/conservative`,\\n  axios.toFormData(payload, new FormData()),\\n  {\\n    validateStatus: undefined,\\n    responseType: \\\"arraybuffer\\\",\\n    headers: { \\n      Authorization: `Bearer sk-MYAPIKEY`, \\n      Accept: \\\"image/*\\\" \\n    },\\n  },\\n);\\n\\nif(response.status === 200) {\\n  fs.writeFileSync(\\\"./flower.webp\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/stable-image/upscale/conservative\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -H \\\"accept: image/*\\\" \\\\\\n  -F image=@\\\"./low-res-flower.jpg\\\" \\\\\\n  -F prompt=\\\"a flower\\\" \\\\\\n  -F output_format=\\\"webp\\\" \\\\\\n  -o \\\"./flower.webp\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. 
Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to receive the bytes of the image directly. Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"The image you wish to upscale.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 9,437,184 pixels\\n- The aspect ratio must be between 1:2.5 and 2.5:1\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"prompt\": {\n                    \"type\": \"string\",\n                    \"minLength\": 1,\n                    \"maxLength\": 10000,\n                    \"description\": \"What you wish to see in the output image. A strong, descriptive prompt that clearly defines \\nelements, colors, and subjects will lead to better results. \\n\\nTo control the weight of a given word use the format `(word:weight)`, \\nwhere `word` is the word you'd like to control the weight of and `weight` \\nis a value between 0 and 1. 
For example: `The sky was a crisp (blue:0.3) and (green:0.8)`\\nwould convey a sky that was blue and green, but more green than blue.\"\n                  },\n                  \"negative_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"A blurb of text describing what you **do not** wish to see in the output image.  \\nThis is an advanced feature.\"\n                  },\n                  \"seed\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 4294967294,\n                    \"default\": 0,\n                    \"description\": \"A specific value that is used to guide the 'randomness' of the generation. (Omit this parameter or pass `0` to use a random seed.)\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"jpeg\", \"png\", \"webp\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  },\n                  \"creativity\": {\n                    \"$ref\": \"#/components/schemas/Creativity\"\n                  }\n                },\n                \"required\": [\"image\", \"prompt\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Upscale was successful.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  },\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. 
\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                     
 \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"422\": {\n            \"description\": \"Your request was well-formed, but rejected. See the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                },\n                \"examples\": {\n                  \"Invalid Language\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"invalid_language\",\n                      \"errors\": [\n                        \"English is the only supported language for this service.\"\n                      ]\n                    }\n                  },\n                  \"Public Figure Detected\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"public_figure\",\n                      \"errors\": [\n                        \"Our system detected the likeness of a public figure in your image. To comply with our guidelines, this request cannot be processed. Please upload a different image.\"\n                      ]\n                    }\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/upscale/creative\": {\n      \"post\": {\n        \"tags\": [\"Upscale\"],\n        \"summary\": \"Creative Upscale (async)\",\n        \"description\": \"Takes images between 64x64 and 1 megapixel and upscales them all the way to **4K** resolution.  Put more \\ngenerally, it can upscale images ~20-40x times while preserving, and often enhancing, quality. \\nCreative Upscale **works best on highly degraded images and is not for photos of 1mp or above** as it performs \\nheavy reimagining (controlled by creativity scale).\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_Image_API_Public.ipynb#scrollTo=QXxi9tfI425t)\\n\\n\\n### How to use\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. The body of the request must be\\n`multipart/form-data`.\\n\\nThe body of the request should include:\\n- `image`\\n- `prompt`\\n\\nThe body may optionally include:\\n- `seed`\\n- `negative_prompt`\\n- `output_format`\\n- `creativity`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Results\\nAfter invoking this endpoint with the required parameters, use the `id` in the response to poll for results at the\\n[results/{id} endpoint](#tag/Results/paths/~1v2beta~1results~1%7Bid%7D/get).  Rate-limiting or other errors may occur if you poll more than once every 10 seconds.\\n\\n### Credits\\nFlat rate of 25 credits per successful generation.  
You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/stable-image/upscale/creative\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n        \\\"accept\\\": \\\"image/*\\\"\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./kitten-in-space.png\\\", \\\"rb\\\")\\n    },\\n    data={\\n        \\\"prompt\\\": \\\"cute fluffy white kitten floating in space, pastel colors\\\",\\n        \\\"output_format\\\": \\\"webp\\\",\\n    },\\n)\\n\\nprint(\\\"Generation ID:\\\", response.json().get('id'))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst payload = {\\n  image: fs.createReadStream(\\\"./kitten-in-space.png\\\"),\\n  prompt: \\\"cute fluffy white kitten floating in space, pastel colors\\\",\\n  output_format: \\\"webp\\\"\\n};\\n\\nconst response = await axios.postForm(\\n  `https://api.stability.ai/v2beta/stable-image/upscale/creative`,\\n  axios.toFormData(payload, new FormData()),\\n  {\\n    validateStatus: undefined,\\n    headers: { \\n      Authorization: `Bearer sk-MYAPIKEY`\\n    },\\n  },\\n);\\n\\nconsole.log(\\\"Generation ID:\\\", response.data.id);\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/stable-image/upscale/creative\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -F image=@\\\"./kitten-in-rainforest.png\\\" \\\\\\n  -F prompt=\\\"cute fluffy white kitten sitting in a rainforest, pastel colors\\\" \\\\\\n  -F output_format=webp \\\\\\n  -o \\\"./output.json\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. 
Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"The image you wish to upscale.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 1,048,576 pixels\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"prompt\": {\n                    \"type\": \"string\",\n                    \"minLength\": 1,\n                    \"maxLength\": 10000,\n                    \"description\": \"What you wish to see in the output image. A strong, descriptive prompt that clearly defines \\nelements, colors, and subjects will lead to better results. \\n\\nTo control the weight of a given word use the format `(word:weight)`, \\nwhere `word` is the word you'd like to control the weight of and `weight` \\nis a value between 0 and 1. For example: `The sky was a crisp (blue:0.3) and (green:0.8)`\\nwould convey a sky that was blue and green, but more green than blue.\"\n                  },\n                  \"negative_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"A blurb of text describing what you **do not** wish to see in the output image.  \\nThis is an advanced feature.\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"jpeg\", \"png\", \"webp\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  },\n                  \"seed\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 4294967294,\n                    \"default\": 0,\n                    \"description\": \"A specific value that is used to guide the 'randomness' of the generation. 
(Omit this parameter or pass `0` to use a random seed.)\"\n                  },\n                  \"creativity\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 0.35,\n                    \"default\": 0.3,\n                    \"description\": \"Indicates how creative the model should be when upscaling an image.\\nHigher values will result in more details being added to the image during upscaling.\"\n                  }\n                },\n                \"required\": [\"image\", \"prompt\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Upscale was started.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"$ref\": \"#/components/schemas/GenerationID\"\n                    }\n                  },\n                  \"required\": [\"id\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": 
\"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"422\": {\n            \"description\": \"Your request was well-formed, but rejected. See the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                },\n                \"examples\": {\n                  \"Invalid Language\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"invalid_language\",\n                      \"errors\": [\n                        \"English is the only supported language for this service.\"\n                      ]\n                    }\n                  },\n                  \"Public Figure Detected\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"public_figure\",\n                      \"errors\": [\n                        \"Our system detected the likeness of a public figure in your image. To comply with our guidelines, this request cannot be processed. Please upload a different image.\"\n                      ]\n                    }\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/upscale/creative/result/{id}\": {\n      \"get\": {\n        \"tags\": [],\n        \"summary\": \"Fetch Creative Upscale result\",\n        \"description\": \"Fetch the result of an upscale generation by ID. \\n\\nMake sure to use the same API key to fetch the generation result that you used to create the generation, \\notherwise you will receive a `404` response.\\n\\n### How to use\\nPlease invoke this endpoint with a `GET` request.\\n\\nThe headers of the request must include an API key in the `authorization` field and the ID\\nof your generation must be in the path.\\n\\n### How is progress reported?\\nYour generation is either `in-progress` (i.e. status code `202`) or it is complete (i.e. status code `200`). \\nWe may add more fine-grained progress reporting in the future (e.g. a numerical progress).\\n\\n### How long are results stored?\\nResults are stored for 24 hours after generation. 
After that, the results are deleted.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\ngeneration_id = \\\"e52772ac75b...\\\"\\n\\nresponse = requests.request(\\n    \\\"GET\\\",\\n    f\\\"https://api.stability.ai/v2beta/stable-image/upscale/creative/result/{generation_id}\\\",\\n    headers={\\n        'accept': \\\"image/*\\\",  # Use 'application/json' to receive base64 encoded JSON\\n        'authorization': f\\\"Bearer sk-MYAPIKEY\\\"\\n    },\\n)\\n\\nif response.status_code == 202:\\n    print(\\\"Generation in-progress, try again in 10 seconds.\\\")\\nelif response.status_code == 200:\\n    print(\\\"Generation complete!\\\")\\n    with open(\\\"upscaled.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import axios from \\\"axios\\\";\\nimport fs from \\\"node:fs\\\";\\n\\nconst generationID = \\\"e52772ac75b...\\\";\\n\\nconst response = await axios.request({\\n  url: `https://api.stability.ai/v2beta/stable-image/upscale/creative/result/${generationID}`,\\n  method: \\\"GET\\\",\\n  validateStatus: undefined,\\n  responseType: \\\"arraybuffer\\\",\\n  headers: {\\n    Authorization: `Bearer sk-MYAPIKEY`,\\n    Accept: \\\"image/*\\\", // Use 'application/json' to receive base64 encoded JSON\\n  },\\n});\\n\\nif (response.status === 202) {\\n  console.log(\\\"Generation is still running, try again in 10 seconds.\\\");\\n} else if (response.status === 200) {\\n  console.log(\\\"Generation is complete!\\\");\\n  fs.writeFileSync(\\\"upscaled.webp\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`Response ${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"generation_id=\\\"e52772ac75b...\\\"\\nurl=\\\"https://api.stability.ai/v2beta/stable-image/upscale/creative/result/$generation_id\\\"\\nhttp_status=$(curl -sS -f -o \\\"./upscaled.webp\\\" -w '%{http_code}' -H \\\"authorization: sk-MYAPIKEY\\\" -H 'accept: image/*' \\\"$url\\\")\\n\\ncase $http_status in\\n    202)\\n        echo \\\"Still processing. Retrying in 10 seconds...\\\"\\n        ;;\\n    200)\\n        echo \\\"Download complete!\\\"\\n        ;;\\n    4*|5*)\\n        mv \\\"./upscaled.webp\\\" \\\"./error.json\\\"\\n        echo \\\"Error: Check ./error.json for details.\\\"\\n        exit 1\\n        ;;\\nesac\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/GenerationID\"\n            },\n            \"required\": true,\n            \"name\": \"id\",\n            \"in\": \"path\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. 
Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to receive the bytes of the image directly. Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Upscale finished.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. 
To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  },\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. \\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a 
result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- 
`CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"202\": {\n            \"description\": \"Your upscale generation is still in-progress.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"$ref\": \"#/components/schemas/GenerationID\"\n                    },\n                    \"status\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"in-progress\"],\n                      \"description\": \"The status of your generation.\"\n                    }\n                  },\n                  \"required\": [\"id\", \"status\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"404\": {\n            \"description\": \"id: the generation either does not exist or has expired.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2bca35116bc5431d6dc4b4ea2ef3da2f\",\n                    \"name\": \"generation_not_found\",\n                    \"errors\": [\n                      \"id: the generation either does not exist or has expired.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. 
If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/upscale/fast\": {\n      \"post\": {\n        \"tags\": [\"Upscale\"],\n        \"summary\": \"Fast\",\n        \"description\": \"Our Fast Upscaler service enhances image resolution by 4x using predictive and generative AI. This lightweight and fast service (processing in ~1 second) is ideal for enhancing the quality of compressed images, making it suitable for social media posts and other applications.\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_Image_API_Public.ipynb#scrollTo=t1Q4w2uvvza0)\\n\\n### How to use\\n\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. 
The body of the request must be\\n`multipart/form-data`, and the `accept` header should be set to one of the following:\\n  - `image/*` to receive the image in the format specified by the `output_format` parameter.\\n  - `application/json` to receive the image encoded as base64 in a JSON response.\\n  \\nThe body of the request must include:\\n- `image`\\n\\nOptionally, the body of the request may also include:\\n- `output_format`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Output\\nThe resolution of the generated image is 4 times that of the input image with a maximum size of 16 megapixels.\\n\\n### Credits\\nFlat rate of 1 credit per successful generation. You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/stable-image/upscale/fast\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n        \\\"accept\\\": \\\"image/*\\\"\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./low-res-flower.jpg\\\", \\\"rb\\\"),\\n    },\\n    data={\\n        \\\"output_format\\\": \\\"webp\\\",\\n    },\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./flower.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst payload = {\\n  image: fs.createReadStream(\\\"./low-res-flower.jpg\\\"),\\n  output_format: \\\"webp\\\"\\n};\\n\\nconst response = await axios.postForm(\\n  `https://api.stability.ai/v2beta/stable-image/upscale/fast`,\\n  axios.toFormData(payload, new FormData()),\\n  {\\n    validateStatus: undefined,\\n    responseType: \\\"arraybuffer\\\",\\n    headers: { \\n      Authorization: `Bearer sk-MYAPIKEY`, \\n      Accept: \\\"image/*\\\" \\n    },\\n  },\\n);\\n\\nif(response.status === 200) {\\n  fs.writeFileSync(\\\"./flower.webp\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/stable-image/upscale/fast\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -H \\\"accept: image/*\\\" \\\\\\n  -F image=@\\\"./low-res-flower.jpg\\\" \\\\\\n  -F output_format=\\\"webp\\\" \\\\\\n  -o \\\"./flower.webp\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. 
Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to receive the bytes of the image directly. Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"The image you wish to upscale.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Width must be between 32 and 1,536 pixels\\n- Height must be between 32 and 1,536 pixels\\n- Total pixel count must be between 1,024 and 1,048,576 pixels\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"jpeg\", \"png\", \"webp\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  }\n                },\n                \"required\": [\"image\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Upscale was successful.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              
\"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  },\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. \\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation 
finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n       
               \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"422\": {\n            \"description\": \"Your request was well-formed, but rejected. See the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                },\n                \"examples\": {\n                  \"Invalid Language\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"invalid_language\",\n                      \"errors\": [\n                        \"English is the only supported language for this service.\"\n                      ]\n                    }\n                  },\n                  \"Public Figure Detected\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"public_figure\",\n                      \"errors\": [\n                        \"Our system detected the likeness of a public figure in your image. To comply with our guidelines, this request cannot be processed. Please upload a different image.\"\n                      ]\n                    }\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/edit/erase\": {\n      \"post\": {\n        \"tags\": [\"Edit\"],\n        \"summary\": \"Erase\",\n        \"description\": \"The Erase service removes unwanted objects, such as blemishes on portraits or items on desks, using image masks.\\n\\nThe mask is provided in one of two ways:\\n  1. Explicitly passing in a separate image via the `mask` parameter \\n  2. Derived from the alpha channel of the `image` parameter.\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_Image_API_Public.ipynb#scrollTo=t1Q4w2uvvza0)\\n\\n### How to use\\n\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. The body of the request must be\\n`multipart/form-data`, and the `accept` header should be set to one of the following:\\n  - `image/*` to receive the image in the format specified by the `output_format` parameter.\\n  - `application/json` to receive the image encoded as base64 in a JSON response.\\n  \\nThe body of the request must include:\\n- `image`\\n\\nOptionally, the body of the request may also include:\\n- `mask`\\n- `seed`\\n- `output_format`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Output\\nThe resolution of the generated image will be 4 megapixels.\\n\\n### Credits\\nFlat rate of 3 credits per successful generation.  
You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/stable-image/edit/erase\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n        \\\"accept\\\": \\\"image/*\\\"\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./kangaroo-on-the-beach.png\\\", \\\"rb\\\"),\\n        \\\"mask\\\": open(\\\"./mask-of-kangaroo.png\\\", \\\"rb\\\"),\\n    },\\n    data={\\n        \\\"output_format\\\": \\\"webp\\\",\\n    },\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./just-the-beach.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst payload = {\\n  image: fs.createReadStream(\\\"./kangaroo-on-the-beach.png\\\"),\\n  mask: fs.createReadStream(\\\"./mask-of-kangaroo.png\\\"),\\n  output_format: \\\"webp\\\"\\n};\\n\\nconst response = await axios.postForm(\\n  `https://api.stability.ai/v2beta/stable-image/edit/erase`,\\n  axios.toFormData(payload, new FormData()),\\n  {\\n    validateStatus: undefined,\\n    responseType: \\\"arraybuffer\\\",\\n    headers: { \\n      Authorization: `Bearer sk-MYAPIKEY`, \\n      Accept: \\\"image/*\\\" \\n    },\\n  },\\n);\\n\\nif(response.status === 200) {\\n  fs.writeFileSync(\\\"./just-the-beach.webp\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/stable-image/edit/erase\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -H \\\"accept: image/*\\\" \\\\\\n  -F image=@\\\"./kangaroo-on-the-beach.png\\\" \\\\\\n  -F mask=@\\\"./mask-of-kangaroo.png\\\" \\\\\\n  -F output_format=\\\"webp\\\" \\\\\\n  -o \\\"./just-the-beach.webp\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. 
Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to receive the bytes of the image directly. Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"The image you wish to erase from.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 9,437,184 pixels\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"mask\": {\n                    \"type\": \"string\",\n                    \"description\": \"Controls the strength of the inpainting process on a per-pixel basis, either via a \\nsecond image (passed into this parameter) or via the alpha channel of the `image` parameter.\\n\\n**Passing in a Mask**  \\n\\nThe image passed to this parameter should be a black and white image that represents, \\nat any pixel, the strength of inpainting based on how dark or light the given pixel is. 
\\nCompletely black pixels represent no inpainting strength while completely white pixels \\nrepresent maximum strength.\\n\\nIn the event the mask is a different size than the `image` parameter, it will be automatically resized.\\n\\n**Alpha Channel Support**\\n\\nIf you don't provide an explicit mask, one will be derived from the alpha channel of the `image` parameter.\\nTransparent pixels will be inpainted while opaque pixels will be preserved.\\n\\nIn the event an `image` with an alpha channel is provided along with a `mask`, the `mask` will take precedence.\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"grow_mask\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 20,\n                    \"default\": 5,\n                    \"description\": \"Grows the edges of the mask outward in all directions by the specified number of pixels. The expanded area around the mask will be blurred, which can help smooth the transition between inpainted content and the original image.\\n\\nTry this parameter if you notice seams or rough edges around the inpainted content.\\n\\n> Note: Excessive growth may obscure fine details in the mask and/or merge nearby masked regions.\"\n                  },\n                  \"seed\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 4294967294,\n                    \"default\": 0,\n                    \"description\": \"A specific value that is used to guide the 'randomness' of the generation. (Omit this parameter or pass `0` to use a random seed.)\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"jpeg\", \"png\", \"webp\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  }\n                },\n                \"required\": [\"image\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Erase was successful.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. 
To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  },\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. \\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a 
result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- 
`CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/edit/inpaint\": {\n      \"post\": {\n        \"tags\": [\"Edit\"],\n        \"summary\": \"Inpaint\",\n        \"description\": \"Intelligently modify images by filling in or replacing specified areas with new content based\\non the content of a \\\"mask\\\" image. \\n\\nThe \\\"mask\\\" is provided in one of two ways:\\n  1. Explicitly passing in a separate image via the `mask` parameter \\n  2. Derived from the alpha channel of the `image` parameter.\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_Image_API_Public.ipynb#scrollTo=t1Q4w2uvvza0)\\n\\n### How to use\\n\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. The body of the request must be\\n`multipart/form-data`, and the `accept` header should be set to one of the following:\\n  - `image/*` to receive the image in the format specified by the `output_format` parameter.\\n  - `application/json` to receive the image encoded as base64 in a JSON response.\\n  \\nThe body of the request must include:\\n- `image`\\n- `prompt`\\n\\nOptionally, the body of the request may also include:\\n- `mask`\\n- `negative_prompt`\\n- `seed`\\n- `output_format`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Output\\nThe resolution of the generated image will be 4 megapixels.\\n\\n### Credits\\nFlat rate of 3 credits per successful generation.  
You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/stable-image/edit/inpaint\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n        \\\"accept\\\": \\\"image/*\\\"\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./dog-wearing-vr-goggles.png\\\", \\\"rb\\\"),\\n        \\\"mask\\\": open(\\\"./mask.png\\\", \\\"rb\\\"),\\n    },\\n    data={\\n        \\\"prompt\\\": \\\"dog wearing black glasses\\\",\\n        \\\"output_format\\\": \\\"webp\\\",\\n    },\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./dog-wearing-black-glasses.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst payload = {\\n  image: fs.createReadStream(\\\"./dog-wearing-vr-goggles.png\\\"),\\n  mask: fs.createReadStream(\\\"./mask.png\\\"),\\n  prompt: \\\"dog wearing black glasses\\\",\\n  output_format: \\\"webp\\\"\\n};\\n\\nconst response = await axios.postForm(\\n  `https://api.stability.ai/v2beta/stable-image/edit/inpaint`,\\n  axios.toFormData(payload, new FormData()),\\n  {\\n    validateStatus: undefined,\\n    responseType: \\\"arraybuffer\\\",\\n    headers: { \\n      Authorization: `Bearer sk-MYAPIKEY`, \\n      Accept: \\\"image/*\\\" \\n    },\\n  },\\n);\\n\\nif(response.status === 200) {\\n  fs.writeFileSync(\\\"./dog-wearing-black-glasses.webp\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/stable-image/edit/inpaint\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -H \\\"accept: image/*\\\" \\\\\\n  -F image=@\\\"./dog-wearing-vr-goggles.png\\\" \\\\\\n  -F mask=@\\\"./mask.png\\\" \\\\\\n  -F prompt=\\\"dog wearing black glasses\\\" \\\\\\n  -F output_format=\\\"webp\\\" \\\\\\n  -o \\\"./dog-wearing-black-glasses.webp\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. 
Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to receive the bytes of the image directly. Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"The image you wish to inpaint.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 9,437,184 pixels\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"prompt\": {\n                    \"type\": \"string\",\n                    \"minLength\": 1,\n                    \"maxLength\": 10000,\n                    \"description\": \"What you wish to see in the output image. A strong, descriptive prompt that clearly defines \\nelements, colors, and subjects will lead to better results. \\n\\nTo control the weight of a given word use the format `(word:weight)`, \\nwhere `word` is the word you'd like to control the weight of and `weight` \\nis a value between 0 and 1. For example: `The sky was a crisp (blue:0.3) and (green:0.8)`\\nwould convey a sky that was blue and green, but more green than blue.\"\n                  },\n                  \"negative_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"A blurb of text describing what you **do not** wish to see in the output image.  
\\nThis is an advanced feature.\"\n                  },\n                  \"mask\": {\n                    \"type\": \"string\",\n                    \"description\": \"Controls the strength of the inpainting process on a per-pixel basis, either via a \\nsecond image (passed into this parameter) or via the alpha channel of the `image` parameter.\\n\\n**Passing in a Mask**  \\n\\nThe image passed to this parameter should be a black and white image that represents, \\nat any pixel, the strength of inpainting based on how dark or light the given pixel is. \\nCompletely black pixels represent no inpainting strength while completely white pixels \\nrepresent maximum strength.\\n\\nIn the event the mask is a different size than the `image` parameter, it will be automatically resized.\\n\\n**Alpha Channel Support**\\n\\nIf you don't provide an explicit mask, one will be derived from the alpha channel of the `image` parameter.\\nTransparent pixels will be inpainted while opaque pixels will be preserved.\\n\\nIn the event an `image` with an alpha channel is provided along with a `mask`, the `mask` will take precedence.\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"grow_mask\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 100,\n                    \"default\": 5,\n                    \"description\": \"Grows the edges of the mask outward in all directions by the specified number of pixels. The expanded area around the mask will be blurred, which can help smooth the transition between inpainted content and the original image.\\n\\nTry this parameter if you notice seams or rough edges around the inpainted content.\\n\\n> Note: Excessive growth may obscure fine details in the mask and/or merge nearby masked regions.\"\n                  },\n                  \"seed\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 4294967294,\n                    \"default\": 0,\n                    \"description\": \"A specific value that is used to guide the 'randomness' of the generation. (Omit this parameter or pass `0` to use a random seed.)\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"jpeg\", \"png\", \"webp\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  }\n                },\n                \"required\": [\"image\", \"prompt\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Inpainting was successful.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. 
To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  },\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. \\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a 
result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- 
`CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"422\": {\n            \"description\": \"Your request was well-formed, but rejected. See the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                },\n                \"examples\": {\n                  \"Invalid Language\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"invalid_language\",\n                      \"errors\": [\n                        \"English is the only supported language for this service.\"\n                      ]\n                    }\n                  },\n                  \"Public Figure Detected\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"public_figure\",\n                      \"errors\": [\n                        \"Our system detected the likeness of a public figure in your image. To comply with our guidelines, this request cannot be processed. Please upload a different image.\"\n                      ]\n                    }\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/edit/outpaint\": {\n      \"post\": {\n        \"tags\": [\"Edit\"],\n        \"summary\": \"Outpaint\",\n        \"description\": \"The Outpaint service inserts additional content in an image to fill in the space in any direction. \\nCompared to other automated or manual attempts to expand the content in an image, the Outpaint service \\nshould minimize artifacts and signs that the original image has been edited.\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_Image_API_Public.ipynb#scrollTo=bZ2yK7VQSgLw)\\n\\n### How to use\\n\\nPlease invoke this endpoint with a POST request.\\n\\nThe headers of the request must include an API key in the `authorization` field. The body of the request must be\\n`multipart/form-data`, and the `accept` header should be set to one of the following:\\n  - `image/*` to receive the image in the format specified by the `output_format` parameter.\\n  - `application/json` to receive the image encoded as base64 in a JSON response.\\n  \\nThe body of the request must include:\\n- `image`\\n\\nAlong with _at least one_ outpaint direction:\\n- `left`\\n- `right`\\n- `up`\\n- `down`\\n\\n> **Note:** for best quality use outpaint direction values smaller or equal to your source image dimensions.\\n    \\nEach of these parameters should be set to a number between 0 and 2000, representing the number of pixels to outpaint in that direction.\\n\\nOptionally, the body of the request may also include:\\n- `prompt`\\n- `seed`\\n- `output_format`\\n- `creativity`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Credits\\nFlat rate of 4 credits per successful generation.  
You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/stable-image/edit/outpaint\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n        \\\"accept\\\": \\\"image/*\\\"\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./husky-in-a-field.png\\\", \\\"rb\\\")\\n    },\\n    data={\\n        \\\"left\\\": 200,\\n        \\\"down\\\": 200,\\n        \\\"output_format\\\": \\\"webp\\\"\\n    },\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./husky-in-a-huge-field.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst payload = {\\n  image: fs.createReadStream(\\\"./husky-in-a-field.png\\\"),\\n  left: 200,\\n  down: 200,\\n  output_format: \\\"webp\\\",\\n};\\n\\nconst response = await axios.postForm(\\n  `https://api.stability.ai/v2beta/stable-image/edit/outpaint`,\\n  axios.toFormData(payload, new FormData()),\\n  {\\n    validateStatus: undefined,\\n    responseType: \\\"arraybuffer\\\",\\n    headers: { \\n      Authorization: `Bearer sk-MYAPIKEY`, \\n      Accept: \\\"image/*\\\" \\n    },\\n  },\\n);\\n\\nif(response.status === 200) {\\n  fs.writeFileSync(\\\"./husky-in-a-huge-field.webp\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/stable-image/edit/outpaint\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -H \\\"accept: image/*\\\" \\\\\\n  -F image=@\\\"./husky-in-a-field.png\\\" \\\\\\n  -F left=200 \\\\\\n  -F down=200 \\\\\\n  -F output_format=\\\"webp\\\" \\\\\\n  -o \\\"./husky-in-a-huge-field.webp\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. 
Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to receive the bytes of the image directly. Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"The image you wish to outpaint.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 9,437,184 pixels\\n- The aspect ratio must be between 1:2.5 and 2.5:1\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"left\": {\n                    \"type\": \"integer\",\n                    \"minimum\": 0,\n                    \"maximum\": 2000,\n                    \"default\": 0,\n                    \"description\": \"The number of pixels to outpaint on the left side of the image. At least one outpainting direction must be supplied with a non-zero value.\"\n                  },\n                  \"right\": {\n                    \"type\": \"integer\",\n                    \"minimum\": 0,\n                    \"maximum\": 2000,\n                    \"default\": 0,\n                    \"description\": \"The number of pixels to outpaint on the right side of the image. At least one outpainting direction must be supplied with a non-zero value.\"\n                  },\n                  \"up\": {\n                    \"type\": \"integer\",\n                    \"minimum\": 0,\n                    \"maximum\": 2000,\n                    \"default\": 0,\n                    \"description\": \"The number of pixels to outpaint on the top of the image. 
At least one outpainting direction must be supplied with a non-zero value.\"\n                  },\n                  \"down\": {\n                    \"type\": \"integer\",\n                    \"minimum\": 0,\n                    \"maximum\": 2000,\n                    \"default\": 0,\n                    \"description\": \"The number of pixels to outpaint on the bottom of the image. At least one outpainting direction must be supplied with a non-zero value.\"\n                  },\n                  \"creativity\": {\n                    \"allOf\": [\n                      {\n                        \"$ref\": \"#/components/schemas/Creativity\"\n                      },\n                      {\n                        \"minimum\": 0,\n                        \"maximum\": 1,\n                        \"default\": 0.5\n                      }\n                    ]\n                  },\n                  \"prompt\": {\n                    \"type\": \"string\",\n                    \"minLength\": 0,\n                    \"maxLength\": 10000,\n                    \"description\": \"What you wish to see in the output image. A strong, descriptive prompt that clearly defines \\nelements, colors, and subjects will lead to better results. \\n\\nTo control the weight of a given word use the format `(word:weight)`, \\nwhere `word` is the word you'd like to control the weight of and `weight` \\nis a value between 0 and 1. For example: `The sky was a crisp (blue:0.3) and (green:0.8)`\\nwould convey a sky that was blue and green, but more green than blue.\"\n                  },\n                  \"seed\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 4294967294,\n                    \"default\": 0,\n                    \"description\": \"A specific value that is used to guide the 'randomness' of the generation. (Omit this parameter or pass `0` to use a random seed.)\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"png\", \"jpeg\", \"webp\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  }\n                },\n                \"required\": [\"image\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Outpainting was successful.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. 
To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. \\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a 
result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful 
generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"422\": {\n            \"description\": \"Your request was well-formed, but rejected. See the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                },\n                \"examples\": {\n                  \"Invalid Language\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"invalid_language\",\n                      \"errors\": [\n                        \"English is the only supported language for this service.\"\n                      ]\n                    }\n                  },\n                  \"Public Figure Detected\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"public_figure\",\n                      \"errors\": [\n                        \"Our system detected the likeness of a public figure in your image. To comply with our guidelines, this request cannot be processed. Please upload a different image.\"\n                      ]\n                    }\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/edit/search-and-replace\": {\n      \"post\": {\n        \"tags\": [\"Edit\"],\n        \"summary\": \"Search and Replace\",\n        \"description\": \"The Search and Replace service is a specific version of inpainting that does not require a mask. \\nInstead, users can leverage a `search_prompt` to identify an object in simple language to be replaced. \\nThe service will automatically segment the object and replace it with the object requested in the prompt.\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_Image_API_Public.ipynb#scrollTo=0lDpGa2jAmAs)\\n\\n### How to use\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. The body of the request must be\\n`multipart/form-data`, and the `accept` header should be set to one of the following:\\n  - `image/*` to receive the image in the format specified by the `output_format` parameter.\\n  - `application/json` to receive the image encoded as base64 in a JSON response.\\n\\nThe body of the request should include:\\n- `image`\\n- `prompt`\\n- `search_prompt`\\n\\nThe body may optionally include:\\n- `seed`\\n- `negative_prompt`\\n- `output_format`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Output\\nThe resolution of the generated image will be 4 megapixels.\\n\\n### Credits\\nFlat rate of 4 credits per successful generation.  
You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/stable-image/edit/search-and-replace\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n        \\\"accept\\\": \\\"image/*\\\"\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./husky-in-a-field.png\\\", \\\"rb\\\")\\n    },\\n    data={\\n        \\\"prompt\\\": \\\"golden retriever in a field\\\",\\n        \\\"search_prompt\\\": \\\"dog\\\",\\n        \\\"output_format\\\": \\\"webp\\\",\\n    },\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./golden-retriever-in-a-field.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst payload = {\\n  image: fs.createReadStream(\\\"./husky-in-a-field.png\\\"),\\n  prompt: \\\"golden retriever standing in a field\\\",\\n  search_prompt: \\\"dog\\\",\\n  output_format: \\\"webp\\\"\\n};\\n\\nconst response = await axios.postForm(\\n  `https://api.stability.ai/v2beta/stable-image/edit/search-and-replace`,\\n  axios.toFormData(payload, new FormData()),\\n  {\\n    validateStatus: undefined,\\n    responseType: \\\"arraybuffer\\\",\\n    headers: { \\n      Authorization: `Bearer sk-MYAPIKEY`, \\n      Accept: \\\"image/*\\\"\\n    },\\n  },\\n);\\n\\nif(response.status === 200) {\\n  fs.writeFileSync(\\\"./golden-retriever-in-a-field.webp\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/stable-image/edit/search-and-replace\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -H \\\"accept: image/*\\\" \\\\\\n  -F image=@\\\"./husky-in-a-field.png\\\" \\\\\\n  -F prompt=\\\"golden retriever in a field\\\" \\\\\\n  -F search_prompt=\\\"dog\\\" \\\\\\n  -F output_format=\\\"webp\\\" \\\\\\n  -o \\\"./golden-retriever-in-a-field.webp\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. 
Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to receive the bytes of the image directly. Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"An image containing content you wish to replace.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 9,437,184 pixels\\n- The aspect ratio must be between 1:2.5 and 2.5:1\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"prompt\": {\n                    \"type\": \"string\",\n                    \"minLength\": 1,\n                    \"maxLength\": 10000,\n                    \"description\": \"What you wish to see in the output image. A strong, descriptive prompt that clearly defines \\nelements, colors, and subjects will lead to better results. \\n\\nTo control the weight of a given word use the format `(word:weight)`, \\nwhere `word` is the word you'd like to control the weight of and `weight` \\nis a value between 0 and 1. For example: `The sky was a crisp (blue:0.3) and (green:0.8)`\\nwould convey a sky that was blue and green, but more green than blue.\"\n                  },\n                  \"search_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"Short description of what to inpaint in the `image`.\",\n                    \"example\": \"glasses\"\n                  },\n                  \"negative_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"A blurb of text describing what you **do not** wish to see in the output image.  
\\nThis is an advanced feature.\"\n                  },\n                  \"grow_mask\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 20,\n                    \"default\": 3,\n                    \"description\": \"Grows the edges of the mask outward in all directions by the specified number of pixels. The expanded area around the mask will be blurred, which can help smooth the transition between inpainted content and the original image.\\n\\nTry this parameter if you notice seams or rough edges around the inpainted content.\\n\\n> Note: Excessive growth may obscure fine details in the mask and/or merge nearby masked regions.\"\n                  },\n                  \"seed\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 4294967294,\n                    \"default\": 0,\n                    \"description\": \"A specific value that is used to guide the 'randomness' of the generation. (Omit this parameter or pass `0` to use a random seed.)\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"jpeg\", \"png\", \"webp\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  }\n                },\n                \"required\": [\"image\", \"prompt\", \"search_prompt\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Search-and-Replace was successful.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. 
To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  },\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. \\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a 
result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- 
`CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"422\": {\n            \"description\": \"Your request was well-formed, but rejected. See the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                },\n                \"examples\": {\n                  \"Invalid Language\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"invalid_language\",\n                      \"errors\": [\n                        \"English is the only supported language for this service.\"\n                      ]\n                    }\n                  },\n                  \"Public Figure Detected\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"public_figure\",\n                      \"errors\": [\n                        \"Our system detected the likeness of a public figure in your image. To comply with our guidelines, this request cannot be processed. Please upload a different image.\"\n                      ]\n                    }\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/edit/search-and-recolor\": {\n      \"post\": {\n        \"tags\": [\"Edit\"],\n        \"summary\": \"Search and Recolor\",\n        \"description\": \"The Search and Recolor service provides the ability to change the color of a specific object in an image using a prompt.\\nThis service is a specific version of inpainting that does not require a mask. The Search and Recolor \\nservice will automatically segment the object and recolor it using the colors requested in the prompt.\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_Image_API_Public.ipynb#scrollTo=mtgSh4Stj3l)\\n\\n### How to use\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. The body of the request must be\\n`multipart/form-data`, and the `accept` header should be set to one of the following:\\n  - `image/*` to receive the image in the format specified by the `output_format` parameter.\\n  - `application/json` to receive the image encoded as base64 in a JSON response.\\n\\nThe body of the request should include:\\n- `image`\\n- `prompt`\\n- `select_prompt`\\n\\nThe body may optionally include:\\n- `grow_mask`\\n- `seed`\\n- `negative_prompt`\\n- `output_format`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Output\\nThe resolution of the generated image will match the resolution of the input image.\\n\\n### Credits\\nFlat rate of 5 credits per successful generation.  
You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/stable-image/edit/search-and-recolor\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n        \\\"accept\\\": \\\"image/*\\\"\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./red-car.png\\\", \\\"rb\\\")\\n    },\\n    data={\\n        \\\"prompt\\\": \\\"a yellow car\\\",\\n        \\\"select_prompt\\\": \\\"car\\\",\\n        \\\"output_format\\\": \\\"webp\\\",\\n    },\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./yellow-car.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst payload = {\\n  image: fs.createReadStream(\\\"./red-car.png\\\"),\\n  prompt: \\\"a yellow car\\\",\\n  select_prompt: \\\"car\\\",\\n  output_format: \\\"webp\\\"\\n};\\n\\nconst response = await axios.postForm(\\n  `https://api.stability.ai/v2beta/stable-image/edit/search-and-recolor`,\\n  axios.toFormData(payload, new FormData()),\\n  {\\n    validateStatus: undefined,\\n    responseType: \\\"arraybuffer\\\",\\n    headers: { \\n      Authorization: `Bearer sk-MYAPIKEY`, \\n      Accept: \\\"image/*\\\"\\n    },\\n  },\\n);\\n\\nif(response.status === 200) {\\n  fs.writeFileSync(\\\"./yellow-car.webp\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/stable-image/edit/search-and-recolor\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -H \\\"accept: image/*\\\" \\\\\\n  -F image=@\\\"./red-car.png\\\" \\\\\\n  -F prompt=\\\"a yellow car\\\" \\\\\\n  -F select_prompt=\\\"car\\\" \\\\\\n  -F output_format=\\\"webp\\\" \\\\\\n  -o \\\"./yellow-car.webp\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. 
Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to receive the bytes of the image directly. Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"An image containing content you wish to recolor.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 9,437,184 pixels\\n- The aspect ratio must be between 1:2.5 and 2.5:1\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"prompt\": {\n                    \"type\": \"string\",\n                    \"minLength\": 1,\n                    \"maxLength\": 10000,\n                    \"description\": \"What you wish to see in the output image. A strong, descriptive prompt that clearly defines \\nelements, colors, and subjects will lead to better results. \\n\\nTo control the weight of a given word use the format `(word:weight)`, \\nwhere `word` is the word you'd like to control the weight of and `weight` \\nis a value between 0 and 1. For example: `The sky was a crisp (blue:0.3) and (green:0.8)`\\nwould convey a sky that was blue and green, but more green than blue.\"\n                  },\n                  \"select_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"Short description of what to search for in the `image`.\",\n                    \"example\": \"glasses\"\n                  },\n                  \"negative_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"A blurb of text describing what you **do not** wish to see in the output image.  
\\nThis is an advanced feature.\"\n                  },\n                  \"grow_mask\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 20,\n                    \"default\": 3,\n                    \"description\": \"Grows the edges of the mask outward in all directions by the specified number of pixels. The expanded area around the mask will be blurred, which can help smooth the transition between inpainted content and the original image.\\n\\nTry this parameter if you notice seams or rough edges around the inpainted content.\\n\\n> Note: Excessive growth may obscure fine details in the mask and/or merge nearby masked regions.\"\n                  },\n                  \"seed\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 4294967294,\n                    \"default\": 0,\n                    \"description\": \"A specific value that is used to guide the 'randomness' of the generation. (Omit this parameter or pass `0` to use a random seed.)\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"jpeg\", \"png\", \"webp\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  }\n                },\n                \"required\": [\"image\", \"prompt\", \"select_prompt\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Search-and-Recolor was successful.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. 
To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  },\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. \\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a 
result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- 
`CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"422\": {\n            \"description\": \"Your request was well-formed, but rejected. See the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                },\n                \"examples\": {\n                  \"Invalid Language\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"invalid_language\",\n                      \"errors\": [\n                        \"English is the only supported language for this service.\"\n                      ]\n                    }\n                  },\n                  \"Public Figure Detected\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"public_figure\",\n                      \"errors\": [\n                        \"Our system detected the likeness of a public figure in your image. To comply with our guidelines, this request cannot be processed. Please upload a different image.\"\n                      ]\n                    }\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/edit/remove-background\": {\n      \"post\": {\n        \"tags\": [\"Edit\"],\n        \"summary\": \"Remove Background\",\n        \"description\": \"The Remove Background service accurately segments the foreground from an image and implements \\nand removes the background.\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_Image_API_Public.ipynb#scrollTo=VHofb3LAVmqi)\\n\\n\\n### How to use\\n\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. The body of the request must be\\n`multipart/form-data`, and the `accept` header should be set to one of the following:\\n  - `image/*` to receive the image in the format specified by the `output_format` parameter.\\n  - `application/json` to receive the image encoded as base64 in a JSON response.\\n  \\nThe body of the request must include:\\n- `image`\\n\\nOptionally, the body of the request may also include:\\n- `output_format`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Credits\\nFlat rate of 2 credits per successful generation.  
You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/stable-image/edit/remove-background\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n        \\\"accept\\\": \\\"image/*\\\"\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./husky-in-a-field.png\\\", \\\"rb\\\")\\n    },\\n    data={\\n        \\\"output_format\\\": \\\"webp\\\"\\n    },\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./husky.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst payload = {\\n  image: fs.createReadStream(\\\"./husky-in-a-field.png\\\"),\\n  output_format: \\\"webp\\\"\\n};\\n\\nconst response = await axios.postForm(\\n  `https://api.stability.ai/v2beta/stable-image/edit/remove-background`,\\n  axios.toFormData(payload, new FormData()),\\n  {\\n    validateStatus: undefined,\\n    responseType: \\\"arraybuffer\\\",\\n    headers: { \\n      Authorization: `Bearer sk-MYAPIKEY`, \\n      Accept: \\\"image/*\\\" \\n    },\\n  },\\n);\\n\\nif(response.status === 200) {\\n  fs.writeFileSync(\\\"./husky.webp\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/stable-image/edit/remove-background\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -H \\\"accept: image/*\\\" \\\\\\n  -F image=@\\\"./husky-in-a-field.png\\\" \\\\\\n  -F output_format=\\\"webp\\\" \\\\\\n  -o \\\"./husky.webp\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to receive the bytes of the image directly. 
Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"The image whose background you wish to remove.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 4,194,304 pixels\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"png\", \"webp\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  }\n                },\n                \"required\": [\"image\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Background successfully removed.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. 
To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. \\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed 
used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/edit/replace-background-and-relight\": {\n      \"post\": {\n        \"tags\": [\"Edit\"],\n        \"summary\": \"Replace Background and Relight (async)\",\n        \"description\": \"The Replace Background and Relight edit service lets users swap backgrounds with\\nAI-generated or uploaded images while adjusting lighting to match the subject. This\\nnew API provides a streamlined image editing solution and can serve e-commerce, real\\nestate, photography, and creative projects.\\n\\nSome of the things you can do include:\\n  - Background Replacement: Remove existing background and add new ones.\\n  - AI Background Generation: Create new backgrounds using AI generated images based on prompts.\\n  - Relighting: Adjust lighting in images that are under or overexposed.\\n  - Flexible Inputs: Use your own background image or generate one.\\n  - Lighting Adjustments: Modify light reference, direction, and strength.\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_Image_API_Public.ipynb#scrollTo=mtgSh4Stj3l)\\n\\n### How to use\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. 
The body of the request must be\\n`multipart/form-data`.\\n\\nThe body of the request should include:\\n- `subject_image`\\n- `background_prompt` and/or `background_reference`\\n\\nThe body may optionally include:\\n- `light_reference` or `light_source_direction`\\n- `light_source_strength` (requires `light_reference` or `light_source_direction`)\\n- `foreground_prompt`\\n- `negative_prompt`\\n- `preserve_original_subject`\\n- `original_background_depth`\\n- `keep_original_background`\\n- `seed`\\n- `output_format`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Results\\nAfter invoking this endpoint with the required parameters, use the `id` in the response to poll for results at the\\n[results/{id} endpoint](#tag/Results/paths/~1v2beta~1results~1%7Bid%7D/get).  Rate-limiting or other errors may occur if you poll more than once every 10 seconds.\\n\\n### Credits\\nFlat rate of 8 credits per successful generation. You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/stable-image/edit/replace-background-and-relight\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n        \\\"accept\\\": \\\"image/*\\\"\\n    },\\n    files={\\n        \\\"subject_image\\\": open(\\\"./husky-in-a-field.png\\\", \\\"rb\\\")\\n    },\\n    data={\\n        \\\"background_prompt\\\": \\\"cinematic lighting\\\",\\n        \\\"output_format\\\": \\\"webp\\\",\\n    },\\n)\\n\\nprint(\\\"Generation ID:\\\", response.json().get('id'))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst payload = {\\n  subject_image: fs.createReadStream(\\\"./husky-in-a-field.png\\\"),\\n  background_prompt: \\\"cinematic lighting\\\",\\n  output_format: \\\"webp\\\"\\n};\\n\\nconst response = await axios.postForm(\\n  `https://api.stability.ai/v2beta/stable-image/edit/replace-background-and-relight`,\\n  axios.toFormData(payload, new FormData()),\\n  {\\n    validateStatus: undefined,\\n    headers: { \\n      Authorization: `Bearer sk-MYAPIKEY`, \\n    },\\n  },\\n);\\n\\nconsole.log(\\\"Generation ID:\\\", response.data.id);\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/stable-image/edit/replace-background-and-relight\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -H \\\"accept: image/*\\\" \\\\\\n  -F subject_image=@\\\"./husky-in-a-field.png\\\" \\\\\\n  -F background_prompt=\\\"cinematic lighting\\\" \\\\\\n  -F output_format=\\\"webp\\\" \\\\\\n  -o \\\"./output.json\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. 
Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"subject_image\": {\n                    \"type\": \"string\",\n                    \"description\": \"An image containing the subject whose background you wish to replace and relight.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 9,437,184 pixels\\n- The aspect ratio must be between 1:2.5 and 2.5:1\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"background_reference\": {\n                    \"type\": \"string\",\n                    \"description\": \"An image whose style you wish to use in the background. Similar to the Control: Style API,\\nstylistic elements from this image are added to the background.\\n\\n> **Important:** either `background_reference` or `background_prompt` must be provided.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 9,437,184 pixels\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"background_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"What you wish to see in the background of the output image. 
This could be a description\\nof the desired background scene, or just a description of the lighting if modifying the\\nlight source through `light_source_direction` or `light_reference`.\\n\\n> **Important:** either `background_reference` or `background_prompt` must be provided.\"\n                  },\n                  \"foreground_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"Description of the subject. Use this to prevent elements of the background from\\nbleeding into the subject. For example, if you find your subject is turning \\ngreen with a forest in the background, try putting a short description of the \\nsubject in this field.\"\n                  },\n                  \"negative_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"A blurb of text describing what you **do not** wish to see in the output image.  \\nThis is an advanced feature.\"\n                  },\n                  \"preserve_original_subject\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 1,\n                    \"default\": 0.6,\n                    \"description\": \"How much to overlay the original subject to exactly match the original image. A \\n1.0 is an exact pixel match for the subject, and 0.0 is a close match but will \\nhave new lighting qualities. This is an advanced feature.\"\n                  },\n                  \"original_background_depth\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 1,\n                    \"default\": 0.5,\n                    \"description\": \"Controls the generated background to have the same depth as the original subject image. This is an advanced feature.\"\n                  },\n                  \"keep_original_background\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"true\", \"false\"],\n                    \"default\": \"false\",\n                    \"description\": \"Whether to keep the background of the original image. When this is on, the background\\nwill have different lighting than the original image that changes based on the other\\nparameters in this API.\"\n                  },\n                  \"light_source_direction\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"left\", \"right\", \"above\", \"below\"],\n                    \"description\": \"Direction of the light source.\"\n                  },\n                  \"light_reference\": {\n                    \"type\": \"string\",\n                    \"description\": \"An image with the desired lighting. 
Lighter sections of the light_reference image will correspond to sections with brighter lighting in the output image.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 9,437,184 pixels\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"light_source_strength\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 1,\n                    \"default\": 0.3,\n                    \"description\": \"If using `light_reference` or `light_source_direction`, controls the strength \\nof the light source. 1.0 is brighter and 0.0 is dimmer. This is an advanced feature.\\n\\n> **Important:** Use of this parameter requires `light_reference` or `light_source_direction` to be provided.\"\n                  },\n                  \"seed\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 4294967294,\n                    \"default\": 0,\n                    \"description\": \"A specific value that is used to guide the 'randomness' of the generation. (Omit this parameter or pass `0` to use a random seed.)\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"jpeg\", \"png\", \"webp\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  }\n                },\n                \"required\": [\"subject_image\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Replace Background and Relight was started.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"$ref\": \"#/components/schemas/GenerationID\"\n                    }\n                  },\n                  \"required\": [\"id\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"422\": {\n            \"description\": \"Your request was well-formed, but rejected. See the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                },\n                \"examples\": {\n                  \"Invalid Language\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"invalid_language\",\n                      \"errors\": [\n                        \"English is the only supported language for this service.\"\n                      ]\n                    }\n                  },\n                  \"Public Figure Detected\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"public_figure\",\n                      \"errors\": [\n                        \"Our system detected the likeness of a public figure in your image. To comply with our guidelines, this request cannot be processed. Please upload a different image.\"\n                      ]\n                    }\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/generate/ultra\": {\n      \"post\": {\n        \"tags\": [\"Generate\"],\n        \"summary\": \"Stable Image Ultra\",\n        \"description\": \"Our most advanced text to image generation service, Stable Image Ultra creates the highest quality images\\nwith unprecedented prompt understanding. Ultra excels in typography, complex compositions, dynamic lighting, \\nvibrant hues, and overall cohesion and structure of an art piece. Made from the most advanced models,\\nincluding Stable Diffusion 3.5, Ultra offers the best of the Stable Diffusion ecosystem.\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_Image_API_Public.ipynb#scrollTo=yXhs626oZdr1)\\n\\n### How to use\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. The body of the request must be\\n`multipart/form-data`.  
The accept header should be set to one of the following:\\n- `image/*` to receive the image in the format specified by the `output_format` parameter.\\n- `application/json` to receive the image in the format specified by the `output_format` parameter, but encoded to base64 in a JSON response.\\n\\nThe only required parameter is the `prompt` field, which should contain the text prompt for the image generation.\\n\\nThe body of the request should include:\\n- `prompt` - text to generate the image from\\n\\nThe body may optionally include:\\n- `image` - the image to use as the starting point for the generation\\n- `strength` - controls how much influence the `image` parameter has on the output image\\n- `aspect_ratio` - the aspect ratio of the output image\\n- `negative_prompt` - keywords of what you **do not** wish to see in the output image\\n- `seed` - the randomness seed to use for the generation\\n- `output_format` - the format of the output image\\n\\n> **Note:** for the full list of optional parameters, please see the request schema below.\\n\\n### Output\\nThe resolution of the generated image will be 1 megapixel. The default resolution is 1024x1024.\\n\\n### Credits\\nThe Ultra service uses 8 credits per successful result. You will not be charged for failed results.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/stable-image/generate/ultra\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n        \\\"accept\\\": \\\"image/*\\\"\\n    },\\n    files={\\\"none\\\": ''},\\n    data={\\n        \\\"prompt\\\": \\\"Lighthouse on a cliff overlooking the ocean\\\",\\n        \\\"output_format\\\": \\\"webp\\\",\\n    },\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./lighthouse.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst payload = {\\n  prompt: \\\"Lighthouse on a cliff overlooking the ocean\\\",\\n  output_format: \\\"webp\\\"\\n};\\n\\nconst response = await axios.postForm(\\n  `https://api.stability.ai/v2beta/stable-image/generate/ultra`,\\n  axios.toFormData(payload, new FormData()),\\n  {\\n    validateStatus: undefined,\\n    responseType: \\\"arraybuffer\\\",\\n    headers: { \\n      Authorization: `Bearer sk-MYAPIKEY`, \\n      Accept: \\\"image/*\\\" \\n    },\\n  },\\n);\\n\\nif(response.status === 200) {\\n  fs.writeFileSync(\\\"./lighthouse.webp\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/stable-image/generate/ultra\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -H \\\"accept: image/*\\\" \\\\\\n  -F prompt=\\\"Lighthouse on a cliff overlooking the ocean\\\" \\\\\\n  -F output_format=\\\"webp\\\" \\\\\\n  -o \\\"./lighthouse.webp\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": 
\"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to receive the bytes of the image directly. Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"prompt\": {\n                    \"type\": \"string\",\n                    \"minLength\": 1,\n                    \"maxLength\": 10000,\n                    \"description\": \"What you wish to see in the output image. A strong, descriptive prompt that clearly defines \\nelements, colors, and subjects will lead to better results. \\n\\nTo control the weight of a given word use the format `(word:weight)`, \\nwhere `word` is the word you'd like to control the weight of and `weight` \\nis a value between 0 and 1. For example: `The sky was a crisp (blue:0.3) and (green:0.8)`\\nwould convey a sky that was blue and green, but more green than blue.\"\n                  },\n                  \"negative_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"A blurb of text describing what you **do not** wish to see in the output image.  
\\nThis is an advanced feature.\"\n                  },\n                  \"aspect_ratio\": {\n                    \"type\": \"string\",\n                    \"enum\": [\n                      \"21:9\",\n                      \"16:9\",\n                      \"3:2\",\n                      \"5:4\",\n                      \"1:1\",\n                      \"4:5\",\n                      \"2:3\",\n                      \"9:16\",\n                      \"9:21\"\n                    ],\n                    \"default\": \"1:1\",\n                    \"description\": \"Controls the aspect ratio of the generated image.\"\n                  },\n                  \"seed\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 4294967294,\n                    \"default\": 0,\n                    \"description\": \"A specific value that is used to guide the 'randomness' of the generation. (Omit this parameter or pass `0` to use a random seed.)\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"jpeg\", \"png\", \"webp\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  },\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"The image to use as the starting point for the generation.\\n\\n> **Important:** The `strength` parameter is required when `image` is provided.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Width must be between 64 and 16,384 pixels\\n- Height must be between 64 and 16,384 pixels\\n- Total pixel count must be at least 4,096 pixels\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"strength\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 1,\n                    \"description\": \"Sometimes referred to as _denoising_, this parameter controls how much influence the \\n`image` parameter has on the generated image.  A value of 0 would yield an image that \\nis identical to the input.  A value of 1 would be as if you passed in no image at all.\\n\\n> **Important:** This parameter is required when `image` is provided.\"\n                  }\n                },\n                \"required\": [\"prompt\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Generation was successful.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. 
To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  },\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. \\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a 
result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- 
`CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"422\": {\n            \"description\": \"Your request was well-formed, but rejected. See the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                },\n                \"examples\": {\n                  \"Invalid Language\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"invalid_language\",\n                      \"errors\": [\n                        \"English is the only supported language for this service.\"\n                      ]\n                    }\n                  },\n                  \"Public Figure Detected\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"public_figure\",\n                      \"errors\": [\n                        \"Our system detected the likeness of a public figure in your image. To comply with our guidelines, this request cannot be processed. Please upload a different image.\"\n                      ]\n                    }\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/generate/core\": {\n      \"post\": {\n        \"tags\": [\"Generate\"],\n        \"summary\": \"Stable Image Core\",\n        \"description\": \"Our primary service for text-to-image generation, Stable Image Core represents the best quality achievable at high \\nspeed. No prompt engineering is required! Try asking for a style, a scene, or a character, and see what you get.\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_Image_API_Public.ipynb#scrollTo=yXhs626oZdr1)\\n\\n### How to use\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. The body of the request must be\\n`multipart/form-data`, and the `accept` header should be set to one of the following:\\n  - `image/*` to receive the image in the format specified by the `output_format` parameter.\\n  - `application/json` to receive the image encoded as base64 in a JSON response.\\n\\nThe body of the request should include:\\n- `prompt`\\n\\nThe body may optionally include:\\n- `aspect_ratio`\\n- `negative_prompt`\\n- `seed`\\n- `style_preset`\\n- `output_format`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Output\\nThe resolution of the generated image will be 1.5 megapixels.\\n\\n### Credits\\nFlat rate of 3 credits per successful generation.  
You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/stable-image/generate/core\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n        \\\"accept\\\": \\\"image/*\\\"\\n    },\\n    files={\\\"none\\\": ''},\\n    data={\\n        \\\"prompt\\\": \\\"Lighthouse on a cliff overlooking the ocean\\\",\\n        \\\"output_format\\\": \\\"webp\\\",\\n    },\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./lighthouse.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst payload = {\\n  prompt: \\\"Lighthouse on a cliff overlooking the ocean\\\",\\n  output_format: \\\"webp\\\"\\n};\\n\\nconst response = await axios.postForm(\\n  `https://api.stability.ai/v2beta/stable-image/generate/core`,\\n  axios.toFormData(payload, new FormData()),\\n  {\\n    validateStatus: undefined,\\n    responseType: \\\"arraybuffer\\\",\\n    headers: { \\n      Authorization: `Bearer sk-MYAPIKEY`, \\n      Accept: \\\"image/*\\\" \\n    },\\n  },\\n);\\n\\nif(response.status === 200) {\\n  fs.writeFileSync(\\\"./lighthouse.webp\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/stable-image/generate/core\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -H \\\"accept: image/*\\\" \\\\\\n  -F prompt=\\\"Lighthouse on a cliff overlooking the ocean\\\" \\\\\\n  -F output_format=\\\"webp\\\" \\\\\\n  -o \\\"./lighthouse.webp\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to receive the bytes of the image directly. 
Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"prompt\": {\n                    \"type\": \"string\",\n                    \"minLength\": 1,\n                    \"maxLength\": 10000,\n                    \"description\": \"What you wish to see in the output image. A strong, descriptive prompt that clearly defines \\nelements, colors, and subjects will lead to better results. \\n\\nTo control the weight of a given word use the format `(word:weight)`, \\nwhere `word` is the word you'd like to control the weight of and `weight` \\nis a value between 0 and 1. For example: `The sky was a crisp (blue:0.3) and (green:0.8)`\\nwould convey a sky that was blue and green, but more green than blue.\"\n                  },\n                  \"aspect_ratio\": {\n                    \"type\": \"string\",\n                    \"enum\": [\n                      \"21:9\",\n                      \"16:9\",\n                      \"3:2\",\n                      \"5:4\",\n                      \"1:1\",\n                      \"4:5\",\n                      \"2:3\",\n                      \"9:16\",\n                      \"9:21\"\n                    ],\n                    \"default\": \"1:1\",\n                    \"description\": \"Controls the aspect ratio of the generated image.\"\n                  },\n                  \"negative_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"A blurb of text describing what you **do not** wish to see in the output image.  \\nThis is an advanced feature.\"\n                  },\n                  \"seed\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 4294967294,\n                    \"default\": 0,\n                    \"description\": \"A specific value that is used to guide the 'randomness' of the generation. 
(Omit this parameter or pass `0` to use a random seed.)\"\n                  },\n                  \"style_preset\": {\n                    \"type\": \"string\",\n                    \"enum\": [\n                      \"enhance\",\n                      \"anime\",\n                      \"photographic\",\n                      \"digital-art\",\n                      \"comic-book\",\n                      \"fantasy-art\",\n                      \"line-art\",\n                      \"analog-film\",\n                      \"neon-punk\",\n                      \"isometric\",\n                      \"low-poly\",\n                      \"origami\",\n                      \"modeling-compound\",\n                      \"cinematic\",\n                      \"3d-model\",\n                      \"pixel-art\",\n                      \"tile-texture\"\n                    ],\n                    \"description\": \"Guides the image model towards a particular style.\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"png\", \"jpeg\", \"webp\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  }\n                },\n                \"required\": [\"prompt\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Generation was successful.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. 
\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                     
 \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"422\": {\n            \"description\": \"Your request was well-formed, but rejected. See the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                },\n                \"examples\": {\n                  \"Invalid Language\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"invalid_language\",\n                      \"errors\": [\n                        \"English is the only supported language for this service.\"\n                      ]\n                    }\n                  },\n                  \"Public Figure Detected\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"public_figure\",\n                      \"errors\": [\n                        \"Our system detected the likeness of a public figure in your image. To comply with our guidelines, this request cannot be processed. Please upload a different image.\"\n                      ]\n                    }\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/generate/sd3\": {\n      \"post\": {\n        \"tags\": [\"Generate\"],\n        \"summary\": \"Stable Diffusion 3.0 & 3.5\",\n        \"description\": \"Generate using Stable Diffusion 3.5 models, Stability AI's latest base models:\\n\\n- **Stable Diffusion 3.5 Large**: At 8 billion parameters, with superior quality and\\n  prompt adherence, this base model is the most powerful in the Stable Diffusion\\n  family. This model is ideal for professional use cases at 1 megapixel resolution.\\n\\n- **Stable Diffusion 3.5 Large Turbo**: A distilled version of Stable Diffusion 3.5 Large.\\n  SD3.5 Large Turbo generates high-quality images with exceptional prompt adherence\\n  in just 4 steps, making it considerably faster than Stable Diffusion 3.5 Large.\\n\\n- **Stable Diffusion 3.5 Medium**: With 2.5 billion parameters, the model delivers an\\n  optimal balance between prompt accuracy and image quality, making it an efficient\\n  choice for fast high-performance image generation.\\n\\nRead more about the model capabilities [here](https://stability.ai/news/introducing-stable-diffusion-3-5).\\n\\nStable Diffusion 3.0 models are also supported, powered by [Fireworks AI](https://fireworks.ai/). API status can be reviewed [here](https://readme.fireworks.ai/page/application-status).\\n\\n- **SD3 Large**: the 8 billion parameter model\\n- **SD3 Large Turbo**: the 8 billion parameter model with a faster inference time\\n- **SD3 Medium**: the 2 billion parameter model\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/SD3_API.ipynb)\\n\\n### How to use\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. The body of the request must be\\n`multipart/form-data`.  
The accept header should be set to one of the following:\\n- `image/*` to receive the image in the format specified by the `output_format` parameter.\\n- `application/json` to receive the image encoded as base64 in a JSON response.\\n\\n#### **Generating with a prompt**\\nCommonly referred to as **text-to-image**, this mode generates an image from text alone. While the only required\\nparameter is the `prompt`, it also supports an `aspect_ratio` parameter which can be used to control the\\naspect ratio of the generated image.\\n\\n#### **Generating with a prompt *and* an image**\\nCommonly referred to as **image-to-image**, this mode also generates an image from text but uses an existing image as the\\nstarting point. The required parameters are:\\n- `prompt` - text to generate the image from\\n- `image` - the image to use as the starting point for the generation\\n- `strength` - controls how much influence the `image` parameter has on the output image\\n- `mode` - must be set to `image-to-image`\\n\\n> **Note:** maximum request size is 10MiB.\\n\\n#### **Optional Parameters:**\\nBoth modes support the following optional parameters:\\n- `model` - the model to use (SD3 Large, SD3 Large Turbo, or SD3 Medium)\\n- `output_format` - the format of the output image\\n- `seed` - the randomness seed to use for the generation\\n- `negative_prompt` - keywords of what you **do not** wish to see in the output image\\n- `cfg_scale` - controls how strictly the diffusion process adheres to the prompt text\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Output\\nThe resolution of the generated image will be 1MP. The default resolution is 1024x1024.\\n\\n### Credits\\n- **SD 3.5 & 3.0 Large**: Flat rate of 6.5 credits per successful generation.\\n- **SD 3.5 & 3.0 Large Turbo**: Flat rate of 4 credits per successful generation.\\n- **SD 3.5 & 3.0 Medium**: Flat rate of 3.5 credits per successful generation.\\n\\nAs always, you will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/stable-image/generate/sd3\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n        \\\"accept\\\": \\\"image/*\\\"\\n    },\\n    files={\\\"none\\\": ''},\\n    data={\\n        \\\"prompt\\\": \\\"Lighthouse on a cliff overlooking the ocean\\\",\\n        \\\"output_format\\\": \\\"jpeg\\\",\\n    },\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./lighthouse.jpeg\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import fs from \\\"node:fs\\\";\\nimport axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\n\\nconst payload = {\\n  prompt: \\\"Lighthouse on a cliff overlooking the ocean\\\",\\n  output_format: \\\"jpeg\\\"\\n};\\n\\nconst response = await axios.postForm(\\n  `https://api.stability.ai/v2beta/stable-image/generate/sd3`,\\n  axios.toFormData(payload, new FormData()),\\n  {\\n    validateStatus: undefined,\\n    responseType: \\\"arraybuffer\\\",\\n    headers: { \\n      Authorization: `Bearer sk-MYAPIKEY`, \\n      Accept: \\\"image/*\\\" \\n    },\\n  
},\\n);\\n\\nif(response.status === 200) {\\n  fs.writeFileSync(\\\"./lighthouse.jpeg\\\", Buffer.from(response.data));\\n} else {\\n  throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/stable-image/generate/sd3\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -H \\\"accept: image/*\\\" \\\\\\n  -F prompt=\\\"Lighthouse on a cliff overlooking the ocean\\\" \\\\\\n  -F output_format=\\\"jpeg\\\" \\\\\\n  -o \\\"./lighthouse.jpeg\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to receive the bytes of the image directly. Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"prompt\": {\n                    \"type\": \"string\",\n                    \"minLength\": 1,\n                    \"maxLength\": 10000,\n                    \"description\": \"What you wish to see in the output image. 
A strong, descriptive prompt that clearly defines\\nelements, colors, and subjects will lead to better results.\"\n                  },\n                  \"mode\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"text-to-image\", \"image-to-image\"],\n                    \"default\": \"text-to-image\",\n                    \"description\": \"Controls whether this is a text-to-image or image-to-image generation, which affects which parameters are required:\\n- **text-to-image** requires only the `prompt` parameter\\n- **image-to-image** requires the `prompt`, `image`, and `strength` parameters\",\n                    \"title\": \"GenerationMode\"\n                  },\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"The image to use as the starting point for the generation.\\n\\nSupported formats:\\n  - jpeg\\n  - png\\n  - webp\\n\\nSupported dimensions:\\n  - Every side must be at least 64 pixels\\n  \\n> **Important:** This parameter is only valid for **image-to-image** requests.\",\n                    \"format\": \"binary\"\n                  },\n                  \"strength\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 1,\n                    \"description\": \"Sometimes referred to as _denoising_, this parameter controls how much influence the \\n`image` parameter has on the generated image.  A value of 0 would yield an image that \\nis identical to the input.  A value of 1 would be as if you passed in no image at all.\\n\\n> **Important:** This parameter is only valid for **image-to-image** requests.\"\n                  },\n                  \"aspect_ratio\": {\n                    \"type\": \"string\",\n                    \"enum\": [\n                      \"21:9\",\n                      \"16:9\",\n                      \"3:2\",\n                      \"5:4\",\n                      \"1:1\",\n                      \"4:5\",\n                      \"2:3\",\n                      \"9:16\",\n                      \"9:21\"\n                    ],\n                    \"default\": \"1:1\",\n                    \"description\": \"Controls the aspect ratio of the generated image. 
Defaults to 1:1.\\n\\n> **Important:** This parameter is only valid for **text-to-image** requests.\"\n                  },\n                  \"model\": {\n                    \"type\": \"string\",\n                    \"enum\": [\n                      \"sd3.5-large\",\n                      \"sd3.5-large-turbo\",\n                      \"sd3.5-medium\",\n                      \"sd3-medium\",\n                      \"sd3-large\",\n                      \"sd3-large-turbo\"\n                    ],\n                    \"default\": \"sd3.5-large\",\n                    \"description\": \"The model to use for generation.\\n\\n- `sd3.5-large` requires 6.5 credits per generation\\n- `sd3.5-large-turbo` requires 4 credits per generation\\n- `sd3.5-medium` requires 3.5 credits per generation\\n- `sd3-large` requires 6.5 credits per generation\\n- `sd3-large-turbo` requires 4 credits per generation\\n- `sd3-medium` requires 3.5 credits per generation\"\n                  },\n                  \"seed\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 4294967294,\n                    \"default\": 0,\n                    \"description\": \"A specific value that is used to guide the 'randomness' of the generation. (Omit this parameter or pass `0` to use a random seed.)\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"png\", \"jpeg\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  },\n                  \"negative_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"Keywords of what you **do not** wish to see in the output image.\\nThis is an advanced feature.\\n\\n> **Important:** This parameter does **not** work with `sd3-large-turbo`.\"\n                  },\n                  \"cfg_scale\": {\n                    \"type\": \"number\",\n                    \"minimum\": 1,\n                    \"maximum\": 10,\n                    \"description\": \"How strictly the diffusion process adheres to the prompt text (higher values keep your image closer to your prompt).\"\n                  }\n                },\n                \"required\": [\"prompt\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Generation was successful.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. 
To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. \\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed 
used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"422\": {\n            \"description\": \"Your request was well-formed, but rejected. See the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                },\n                \"examples\": {\n                  \"Invalid Language\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"invalid_language\",\n                      \"errors\": [\n                        \"English is the only supported language for this service.\"\n                      ]\n                    }\n                  },\n                  \"Public Figure Detected\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"public_figure\",\n                      \"errors\": [\n                        \"Our system detected the likeness of a public figure in your image. To comply with our guidelines, this request cannot be processed. Please upload a different image.\"\n                      ]\n                    }\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/control/sketch\": {\n      \"post\": {\n        \"tags\": [\"Control\"],\n        \"summary\": \"Sketch\",\n        \"description\": \"This service offers an ideal solution for design projects that require brainstorming and\\nfrequent iterations. It upgrades rough hand-drawn sketches to refined outputs with precise \\ncontrol. For non-sketch images, it allows detailed manipulation of the final appearance by \\nleveraging the contour lines and edges within the image.\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_Image_API_Public.ipynb#scrollTo=ZKIAqHzJzzUo)\\n\\n### How to use\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. The body of the request must be\\n`multipart/form-data`, and the `accept` header should be set to one of the following:\\n  - `image/*` to receive the image in the format specified by the `output_format` parameter.\\n  - `application/json` to receive the image encoded as base64 in a JSON response.\\n\\nThe body of the request should include:\\n- `image`\\n- `prompt`\\n\\nThe body may optionally include:\\n- `control_strength`\\n- `negative_prompt`\\n- `seed`\\n- `output_format`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Output\\nThe resolution of the generated image will match that of the input image.\\n\\n### Credits\\nFlat rate of 3 credits per successful generation. 
You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/stable-image/control/sketch\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n        \\\"accept\\\": \\\"image/*\\\"\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./sketch.png\\\", \\\"rb\\\")\\n    },\\n    data={\\n        \\\"prompt\\\": \\\"a medieval castle on a hill\\\",\\n        \\\"control_strength\\\": 0.7,\\n        \\\"output_format\\\": \\\"webp\\\"\\n    },\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./castle.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\nimport fs from \\\"node:fs\\\";\\n\\nconst payload = {\\n    image: fs.createReadStream(\\\"./sketch.png\\\"),\\n    prompt: \\\"a medieval castle on a hill\\\",\\n    control_strength: 0.6,\\n    output_format: \\\"webp\\\",\\n};\\n\\nconst response = await axios.postForm(\\n    `https://api.stability.ai/v2beta/stable-image/control/sketch`,\\n    axios.toFormData(payload, new FormData()),\\n    {\\n        validateStatus: undefined,\\n        responseType: \\\"arraybuffer\\\",\\n        headers: {\\n            Authorization: `Bearer sk-MYAPIKEY`,\\n            Accept: \\\"image/*\\\"\\n        },\\n    },\\n);\\n\\nif (response.status === 200) {\\n    fs.writeFileSync(\\\"./castle.webp\\\", Buffer.from(response.data));\\n} else {\\n    throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/stable-image/control/sketch\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -H \\\"accept: image/*\\\" \\\\\\n  -F image=@\\\"./sketch.png\\\" \\\\\\n  -F prompt=\\\"a medieval castle on a hill\\\" \\\\\\n  -F control_strength=0.7 \\\\\\n  -F output_format=\\\"webp\\\" \\\\\\n  -o \\\"./castle.webp\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. 
Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to receive the bytes of the image directly. Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"prompt\": {\n                    \"type\": \"string\",\n                    \"minLength\": 1,\n                    \"maxLength\": 10000,\n                    \"description\": \"What you wish to see in the output image. A strong, descriptive prompt that clearly defines \\nelements, colors, and subjects will lead to better results. \\n\\nTo control the weight of a given word use the format `(word:weight)`, \\nwhere `word` is the word you'd like to control the weight of and `weight` \\nis a value between 0 and 1. For example: `The sky was a crisp (blue:0.3) and (green:0.8)`\\nwould convey a sky that was blue and green, but more green than blue.\"\n                  },\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"Supported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nImage Dimensions:\\n- Every side must be at least 64 pixels\\n- The total pixel count cannot exceed 9,437,184 pixels (e.g. 3072x3072, 4096x2304, etc.)\\n\\nImage Aspect Ratio:\\n- Must be between 1:2.5 and 2.5:1 (i.e. cannot be too tall or too wide)\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"control_strength\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 1,\n                    \"default\": 0.7,\n                    \"description\": \"How much influence, or control, the `image` has on the generation. 
Represented as a float between 0 and 1, where 0 is the least influence and 1 is the maximum.\"\n                  },\n                  \"negative_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"A blurb of text describing what you **do not** wish to see in the output image.  \\nThis is an advanced feature.\"\n                  },\n                  \"seed\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 4294967294,\n                    \"default\": 0,\n                    \"description\": \"A specific value that is used to guide the 'randomness' of the generation. (Omit this parameter or pass `0` to use a random seed.)\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"png\", \"jpeg\", \"webp\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  }\n                },\n                \"required\": [\"prompt\", \"image\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Generation was successful.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. 
\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                     
 \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"422\": {\n            \"description\": \"Your request was well-formed, but rejected. See the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                },\n                \"examples\": {\n                  \"Invalid Language\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"invalid_language\",\n                      \"errors\": [\n                        \"English is the only supported language for this service.\"\n                      ]\n                    }\n                  },\n                  \"Public Figure Detected\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"public_figure\",\n                      \"errors\": [\n                        \"Our system detected the likeness of a public figure in your image. To comply with our guidelines, this request cannot be processed. Please upload a different image.\"\n                      ]\n                    }\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/control/structure\": {\n      \"post\": {\n        \"tags\": [\"Control\"],\n        \"summary\": \"Structure\",\n        \"description\": \"This service excels in generating images by maintaining the structure of an input image, \\nmaking it especially valuable for advanced content creation scenarios such as recreating \\nscenes or rendering characters from models.\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_Image_API_Public.ipynb#scrollTo=59RaZazXz0AU)\\n\\n### How to use\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. The body of the request must be\\n`multipart/form-data`, and the `accept` header should be set to one of the following:\\n  - `image/*` to receive the image in the format specified by the `output_format` parameter.\\n  - `application/json` to receive the image encoded as base64 in a JSON response.\\n\\nThe body of the request should include:\\n- `image`\\n- `prompt`\\n\\nThe body may optionally include:\\n- `control_strength`\\n- `negative_prompt`\\n- `seed`\\n- `output_format`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Output\\nThe resolution of the generated image will match that of the input image.\\n\\n### Credits\\nFlat rate of 3 credits per successful generation. 
You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/stable-image/control/structure\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n        \\\"accept\\\": \\\"image/*\\\"\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./cat-statue.png\\\", \\\"rb\\\")\\n    },\\n    data={\\n        \\\"prompt\\\": \\\"a well manicured shrub in an english garden\\\",\\n        \\\"control_strength\\\": 0.7,\\n        \\\"output_format\\\": \\\"webp\\\"\\n    },\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./shrub-in-a-garden.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\nimport fs from \\\"node:fs\\\";\\n\\nconst payload = {\\n    image: fs.createReadStream(\\\"./cat-statue.png\\\"),\\n    prompt: \\\"a well manicured shrub in an english garden\\\",\\n    control_strength: 0.6,\\n    output_format: \\\"webp\\\",\\n};\\n\\nconst response = await axios.postForm(\\n    `https://api.stability.ai/v2beta/stable-image/control/structure`,\\n    axios.toFormData(payload, new FormData()),\\n    {\\n        validateStatus: undefined,\\n        responseType: \\\"arraybuffer\\\",\\n        headers: {\\n            Authorization: `Bearer sk-MYAPIKEY`,\\n            Accept: \\\"image/*\\\"\\n        },\\n    },\\n);\\n\\nif (response.status === 200) {\\n    fs.writeFileSync(\\\"./shrub-in-a-garden.webp\\\", Buffer.from(response.data));\\n} else {\\n    throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/stable-image/control/structure\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -H \\\"accept: image/*\\\" \\\\\\n  -F image=@\\\"./cat-statue.png\\\" \\\\\\n  -F prompt=\\\"a well manicured shrub in an english garden\\\" \\\\\\n  -F control_strength=0.7 \\\\\\n  -F output_format=\\\"webp\\\" \\\\\\n  -o \\\"./shrub-in-a-garden.webp\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. 
Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to receive the bytes of the image directly. Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"prompt\": {\n                    \"type\": \"string\",\n                    \"minLength\": 1,\n                    \"maxLength\": 10000,\n                    \"description\": \"What you wish to see in the output image. A strong, descriptive prompt that clearly defines \\nelements, colors, and subjects will lead to better results. \\n\\nTo control the weight of a given word use the format `(word:weight)`, \\nwhere `word` is the word you'd like to control the weight of and `weight` \\nis a value between 0 and 1. For example: `The sky was a crisp (blue:0.3) and (green:0.8)`\\nwould convey a sky that was blue and green, but more green than blue.\"\n                  },\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"An image whose structure you wish to use as the foundation for a generation.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 9,437,184 pixels\\n- The aspect ratio must be between 1:2.5 and 2.5:1\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"control_strength\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 1,\n                    \"default\": 0.7,\n                    \"description\": \"How much influence, or control, the `image` has on the generation. 
Represented as a float between 0 and 1, where 0 is the least influence and 1 is the maximum.\"\n                  },\n                  \"negative_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"A blurb of text describing what you **do not** wish to see in the output image.  \\nThis is an advanced feature.\"\n                  },\n                  \"seed\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 4294967294,\n                    \"default\": 0,\n                    \"description\": \"A specific value that is used to guide the 'randomness' of the generation. (Omit this parameter or pass `0` to use a random seed.)\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"png\", \"jpeg\", \"webp\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  }\n                },\n                \"required\": [\"prompt\", \"image\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Generation was successful.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. 
\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                     
 \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"422\": {\n            \"description\": \"Your request was well-formed, but rejected. See the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                },\n                \"examples\": {\n                  \"Invalid Language\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"invalid_language\",\n                      \"errors\": [\n                        \"English is the only supported language for this service.\"\n                      ]\n                    }\n                  },\n                  \"Public Figure Detected\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"public_figure\",\n                      \"errors\": [\n                        \"Our system detected the likeness of a public figure in your image. To comply with our guidelines, this request cannot be processed. Please upload a different image.\"\n                      ]\n                    }\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v2beta/stable-image/control/style\": {\n      \"post\": {\n        \"tags\": [\"Control\"],\n        \"summary\": \"Style\",\n        \"description\": \"This service extracts stylistic elements from an input image (control image) and uses it to guide the creation of an output image based on the prompt. The result is a new image in the same style as the control image.\\n\\n### Try it out\\nGrab your [API key](https://platform.stability.ai/account/keys) and head over to [![Open Google Colab](https://platform.stability.ai/svg/google-colab.svg)](https://colab.research.google.com/github/stability-ai/stability-sdk/blob/main/nbs/Stable_Image_API_Public.ipynb#scrollTo=y0WKjG72RvTE)\\n\\n### How to use\\nPlease invoke this endpoint with a `POST` request.\\n\\nThe headers of the request must include an API key in the `authorization` field. The body of the request must be\\n`multipart/form-data`, and the `accept` header should be set to one of the following:\\n  - `image/*` to receive the image in the format specified by the `output_format` parameter.\\n  - `application/json` to receive the image encoded as base64 in a JSON response.\\n\\nThe body of the request should include:\\n- `image`\\n- `prompt`\\n\\nThe body may optionally include:\\n- `negative_prompt`\\n- `aspect_ratio`\\n- `fidelity`\\n- `seed`\\n- `output_format`\\n\\n> **Note:** for more details about these parameters please see the request schema below.\\n\\n### Output\\nThe resolution of the generated image will be 1MP. The default resolution is 1024x1024.\\n\\n### Credits\\nFlat rate of 4 credits per successful generation. 
You will not be charged for failed generations.\",\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"python\",\n            \"label\": \"Python\",\n            \"source\": \"import requests\\n\\nresponse = requests.post(\\n    f\\\"https://api.stability.ai/v2beta/stable-image/control/style\\\",\\n    headers={\\n        \\\"authorization\\\": f\\\"Bearer sk-MYAPIKEY\\\",\\n        \\\"accept\\\": \\\"image/*\\\"\\n    },\\n    files={\\n        \\\"image\\\": open(\\\"./cinematic-portrait.png\\\", \\\"rb\\\")\\n    },\\n    data={\\n        \\\"prompt\\\": \\\"a majestic portrait of a chicken\\\",\\n        \\\"output_format\\\": \\\"webp\\\"\\n    },\\n)\\n\\nif response.status_code == 200:\\n    with open(\\\"./chicken-portrait.webp\\\", 'wb') as file:\\n        file.write(response.content)\\nelse:\\n    raise Exception(str(response.json()))\"\n          },\n          {\n            \"lang\": \"javascript\",\n            \"label\": \"JavaScript\",\n            \"source\": \"import axios from \\\"axios\\\";\\nimport FormData from \\\"form-data\\\";\\nimport fs from \\\"node:fs\\\";\\n\\nconst payload = {\\n    image: fs.createReadStream(\\\"./cinematic-portrait.png\\\"),\\n    prompt: \\\"a majestic portrait of a chicken\\\",\\n    output_format: \\\"webp\\\",\\n};\\n\\nconst response = await axios.postForm(\\n    `https://api.stability.ai/v2beta/stable-image/control/style`,\\n    axios.toFormData(payload, new FormData()),\\n    {\\n        validateStatus: undefined,\\n        responseType: \\\"arraybuffer\\\",\\n        headers: {\\n            Authorization: `Bearer sk-MYAPIKEY`,\\n            Accept: \\\"image/*\\\"\\n        },\\n    },\\n);\\n\\nif (response.status === 200) {\\n    fs.writeFileSync(\\\"./chicken-portrait.webp\\\", Buffer.from(response.data));\\n} else {\\n    throw new Error(`${response.status}: ${response.data.toString()}`);\\n}\"\n          },\n          {\n            \"lang\": \"terminal\",\n            \"label\": \"cURL\",\n            \"source\": \"curl -f -sS \\\"https://api.stability.ai/v2beta/stable-image/control/style\\\" \\\\\\n  -H \\\"authorization: Bearer sk-MYAPIKEY\\\" \\\\\\n  -H \\\"accept: image/*\\\" \\\\\\n  -F image=@\\\"./cinematic-portrait.png\\\" \\\\\\n  -F prompt=\\\"a majestic portrait of a chicken\\\" \\\\\\n  -F output_format=\\\"webp\\\" \\\\\\n  -o \\\"./chicken-portrait.webp\\\"\"\n          }\n        ],\n        \"parameters\": [\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"description\": \"Your [Stability API key](https://platform.stability.ai/account/keys), used to authenticate your requests. Although you may have multiple keys in your account, you should use the same key for all requests to this API.\",\n              \"minLength\": 1\n            },\n            \"required\": true,\n            \"name\": \"authorization\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"minLength\": 1,\n              \"description\": \"The content type of the request body. 
Do not manually specify this header; your HTTP client library will automatically include the appropriate boundary parameter.\",\n              \"example\": \"multipart/form-data\"\n            },\n            \"required\": true,\n            \"name\": \"content-type\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"type\": \"string\",\n              \"default\": \"image/*\",\n              \"description\": \"Specify `image/*` to receive the bytes of the image directly. Otherwise specify `application/json` to receive the image as base64 encoded JSON.\",\n              \"enum\": [\"image/*\", \"application/json\"]\n            },\n            \"required\": false,\n            \"name\": \"accept\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientUserID\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-user-id\",\n            \"in\": \"header\"\n          },\n          {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/StabilityClientVersion\"\n            },\n            \"required\": false,\n            \"name\": \"stability-client-version\",\n            \"in\": \"header\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                  \"prompt\": {\n                    \"type\": \"string\",\n                    \"minLength\": 1,\n                    \"maxLength\": 10000,\n                    \"description\": \"What you wish to see in the output image. A strong, descriptive prompt that clearly defines \\nelements, colors, and subjects will lead to better results. \\n\\nTo control the weight of a given word use the format `(word:weight)`, \\nwhere `word` is the word you'd like to control the weight of and `weight` \\nis a value between 0 and 1. For example: `The sky was a crisp (blue:0.3) and (green:0.8)`\\nwould convey a sky that was blue and green, but more green than blue.\"\n                  },\n                  \"image\": {\n                    \"type\": \"string\",\n                    \"description\": \"An image whose style you wish to use as the foundation for a generation.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 9,437,184 pixels\\n- The aspect ratio must be between 1:2.5 and 2.5:1\",\n                    \"format\": \"binary\",\n                    \"example\": \"./some/image.png\"\n                  },\n                  \"negative_prompt\": {\n                    \"type\": \"string\",\n                    \"maxLength\": 10000,\n                    \"description\": \"A blurb of text describing what you **do not** wish to see in the output image.  
\\nThis is an advanced feature.\"\n                  },\n                  \"aspect_ratio\": {\n                    \"type\": \"string\",\n                    \"enum\": [\n                      \"21:9\",\n                      \"16:9\",\n                      \"3:2\",\n                      \"5:4\",\n                      \"1:1\",\n                      \"4:5\",\n                      \"2:3\",\n                      \"9:16\",\n                      \"9:21\"\n                    ],\n                    \"default\": \"1:1\",\n                    \"description\": \"Controls the aspect ratio of the generated image.\"\n                  },\n                  \"fidelity\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 1,\n                    \"default\": 0.5,\n                    \"description\": \"How closely the output image's style resembles the input image's style.\"\n                  },\n                  \"seed\": {\n                    \"type\": \"number\",\n                    \"minimum\": 0,\n                    \"maximum\": 4294967294,\n                    \"default\": 0,\n                    \"description\": \"A specific value that is used to guide the 'randomness' of the generation. (Omit this parameter or pass `0` to use a random seed.)\"\n                  },\n                  \"output_format\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"png\", \"jpeg\", \"webp\"],\n                    \"default\": \"png\",\n                    \"description\": \"Dictates the `content-type` of the generated image.\"\n                  }\n                },\n                \"required\": [\"prompt\", \"image\"]\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"description\": \"Generation was successful.\",\n            \"headers\": {\n              \"x-request-id\": {\n                \"description\": \"A unique identifier for this request.\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"content-type\": {\n                \"description\": \"The format of the generated image.\\n\\n To receive the bytes of the image directly, specify `image/*` in the accept header. 
To receive the bytes base64 encoded inside of a JSON payload, specify `application/json`.\",\n                \"examples\": {\n                  \"png\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/png\"\n                  },\n                  \"pngJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/png\"\n                  },\n                  \"jpeg\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/jpeg\"\n                  },\n                  \"jpegJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/jpeg\"\n                  },\n                  \"webp\": {\n                    \"description\": \"raw bytes\",\n                    \"value\": \"image/webp\"\n                  },\n                  \"webpJSON\": {\n                    \"description\": \"base64 encoded\",\n                    \"value\": \"application/json; type=image/webp\"\n                  }\n                },\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              },\n              \"finish-reason\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"]\n                },\n                \"description\": \"Indicates the reason the generation finished. \\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `finish_reason`.\"\n              },\n              \"seed\": {\n                \"description\": \"The seed used as random noise for this generation.\\n\\n> **NOTE:** This header is absent on JSON encoded responses because it is present in the body as `seed`.\",\n                \"example\": \"343940597\",\n                \"schema\": {\n                  \"type\": \"string\"\n                }\n              }\n            },\n            \"content\": {\n              \"image/png\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated png.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/png\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a 
result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated jpeg.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/jpeg\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              },\n              \"image/webp\": {\n                \"schema\": {\n                  \"type\": \"string\",\n                  \"description\": \"The bytes of the generated image.\\n\\nThe `finish-reason` and `seed` will be present as headers.\",\n                  \"format\": \"binary\"\n                },\n                \"example\": \"The bytes of the generated webp.\\n(Caution: may contain cats)\"\n              },\n              \"application/json; type=image/webp\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"image\": {\n                      \"type\": \"string\",\n                      \"description\": \"The generated image, encoded to base64.\",\n                      \"example\": \"AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1...\"\n                    },\n                    \"finish_reason\": {\n                      \"type\": \"string\",\n                      \"enum\": [\"SUCCESS\", \"CONTENT_FILTERED\"],\n                      \"description\": \"The reason the generation finished.\\n\\n- `SUCCESS` = successful 
generation.\\n- `CONTENT_FILTERED` = successful generation, however the output violated our content moderation \\npolicy and has been blurred as a result.\",\n                      \"example\": \"SUCCESS\"\n                    },\n                    \"seed\": {\n                      \"type\": \"number\",\n                      \"minimum\": 0,\n                      \"maximum\": 4294967294,\n                      \"default\": 0,\n                      \"description\": \"The seed used as random noise for this generation.\",\n                      \"example\": 343940597\n                    }\n                  },\n                  \"required\": [\"image\", \"finish_reason\"]\n                }\n              }\n            }\n          },\n          \"400\": {\n            \"description\": \"Invalid parameter(s), see the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                }\n              }\n            }\n          },\n          \"403\": {\n            \"description\": \"Your request was flagged by our content moderation system.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ContentModerationResponse\"\n                }\n              }\n            }\n          },\n          \"413\": {\n            \"description\": \"Your request was larger than 10MiB.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"4212a4b66fbe1cedca4bf2133d35dca5\",\n                    \"name\": \"payload_too_large\",\n                    \"errors\": [\n                      \"body: payloads cannot be larger than 10MiB in size\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"422\": {\n            \"description\": \"Your request was well-formed, but rejected. See the `errors` field for details.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"]\n                },\n                \"examples\": {\n                  \"Invalid Language\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"invalid_language\",\n                      \"errors\": [\n                        \"English is the only supported language for this service.\"\n                      ]\n                    }\n                  },\n                  \"Public Figure Detected\": {\n                    \"value\": {\n                      \"id\": \"ff54b236a3acdde1522cb1ba641c43ed\",\n                      \"name\": \"public_figure\",\n                      \"errors\": [\n                        \"Our system detected the likeness of a public figure in your image. To comply with our guidelines, this request cannot be processed. Please upload a different image.\"\n                      ]\n                    }\n                  }\n                }\n              }\n            }\n          },\n          \"429\": {\n            \"description\": \"You have made more than 150 requests in 10 seconds.\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"rate_limit_exceeded\",\n                    \"name\": \"rate_limit_exceeded\",\n                    \"errors\": [\n                      \"You have exceeded the rate limit of 150 requests within a 10 second period, and have been timed out for 60 seconds.\"\n                    ]\n                  }\n                }\n              }\n            }\n          },\n          \"500\": {\n            \"description\": \"An internal error occurred. If the problem persists [contact support](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"type\": \"object\",\n                  \"properties\": {\n                    \"id\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"A unique identifier associated with this error. 
Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n                      \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n                    },\n                    \"name\": {\n                      \"type\": \"string\",\n                      \"minLength\": 1,\n                      \"description\": \"Short-hand name for an error, useful for discriminating between errors with the same status code.\",\n                      \"example\": \"bad_request\"\n                    },\n                    \"errors\": {\n                      \"type\": \"array\",\n                      \"items\": {\n                        \"type\": \"string\"\n                      },\n                      \"minItems\": 1,\n                      \"description\": \"One or more error messages indicating what went wrong.\",\n                      \"example\": [\"some-field: is required\"]\n                    }\n                  },\n                  \"required\": [\"id\", \"name\", \"errors\"],\n                  \"example\": {\n                    \"id\": \"2a1b2d4eafe2bc6ab4cd4d5c6133f513\",\n                    \"name\": \"internal_error\",\n                    \"errors\": [\n                      \"An unexpected server error has occurred, please try again later.\"\n                    ]\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    },\n    \"/v1/generation/{engine_id}/text-to-image\": {\n      \"post\": {\n        \"description\": \"Generate an image from a text prompt. \\n### Using SDXL 1.0\\nUse `stable-diffusion-xl-1024-v1-0` as the `engine_id` of your request and pass in `height` & `width` as one of the following combinations:\\n- 1024x1024 (default)\\n- 1152x896\\n- 896x1152\\n- 1216x832\\n- 1344x768\\n- 768x1344\\n- 1536x640\\n- 640x1536 \\n\\n### SDXL 1.0 Pricing\\nWhen specifying 30 steps or fewer, generation costs 0.9 credits.\\n\\nWhen specifying above 30 steps, generation cost is determined using the following formula:\\n\\n `cost = 0.9 * (steps / 30)`\\n\\n### Using SD 1.6\\nSD1.6 is a flexible-resolution base model allowing you to generate non-standard aspect ratios. The model is optimized for a resolution of 512 x 512 pixels. 
To generate 1 megapixel outputs, we recommend using SDXL 1.0, which is available at the same price.\\n\\nPass in `stable-diffusion-v1-6` as the `engine_id` of your request and ensure the `height` & `width` you pass in adhere to the following restrictions:\\n- No dimension can be less than 320 pixels\\n- No dimension can be greater than 1536 pixels\\n- Height and width must be specified in increments of 64\\n- The default resolution is 512 x 512\\n\",\n        \"operationId\": \"textToImage\",\n        \"summary\": \"Text-to-image\",\n        \"tags\": [\"SDXL 1.0 & SD1.6\"],\n        \"parameters\": [\n          {\n            \"$ref\": \"#/components/parameters/engineID\"\n          },\n          {\n            \"$ref\": \"#/components/parameters/accept\"\n          },\n          {\n            \"$ref\": \"#/components/parameters/organization\"\n          },\n          {\n            \"$ref\": \"#/components/parameters/stabilityClientID\"\n          },\n          {\n            \"$ref\": \"#/components/parameters/stabilityClientVersion\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"application/json\": {\n              \"example\": {\n                \"cfg_scale\": 7,\n                \"height\": 512,\n                \"width\": 512,\n                \"sampler\": \"K_DPM_2_ANCESTRAL\",\n                \"samples\": 1,\n                \"steps\": 30,\n                \"text_prompts\": [\n                  {\n                    \"text\": \"A lighthouse on a cliff\",\n                    \"weight\": 1\n                  }\n                ]\n              },\n              \"schema\": {\n                \"$ref\": \"#/components/schemas/TextToImageRequestBody\"\n              }\n            }\n          },\n          \"required\": true\n        },\n        \"responses\": {\n          \"200\": {\n            \"$ref\": \"#/components/responses/GenerationResponse\"\n          },\n          \"400\": {\n            \"$ref\": \"#/components/responses/400FromGeneration\"\n          },\n          \"401\": {\n            \"$ref\": \"#/components/responses/401\"\n          },\n          \"403\": {\n            \"$ref\": \"#/components/responses/403\"\n          },\n          \"404\": {\n            \"$ref\": \"#/components/responses/404\"\n          },\n          \"500\": {\n            \"$ref\": \"#/components/responses/500\"\n          }\n        },\n        \"security\": [\n          {\n            \"STABILITY_API_KEY\": []\n          }\n        ],\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"Python\",\n            \"source\": \"import base64\\nimport os\\nimport requests\\n\\nengine_id = \\\"stable-diffusion-v1-6\\\"\\napi_host = os.getenv('API_HOST', 'https://api.stability.ai')\\napi_key = os.getenv(\\\"STABILITY_API_KEY\\\")\\n\\nif api_key is None:\\n    raise Exception(\\\"Missing Stability API key.\\\")\\n\\nresponse = requests.post(\\n    f\\\"{api_host}/v1/generation/{engine_id}/text-to-image\\\",\\n    headers={\\n        \\\"Content-Type\\\": \\\"application/json\\\",\\n        \\\"Accept\\\": \\\"application/json\\\",\\n        \\\"Authorization\\\": f\\\"Bearer {api_key}\\\"\\n    },\\n    json={\\n        \\\"text_prompts\\\": [\\n            {\\n                \\\"text\\\": \\\"A lighthouse on a cliff\\\"\\n            }\\n        ],\\n        \\\"cfg_scale\\\": 7,\\n        \\\"height\\\": 1024,\\n        \\\"width\\\": 1024,\\n        \\\"samples\\\": 1,\\n        \\\"steps\\\": 30,\\n    },\\n)\\n\\nif 
response.status_code != 200:\\n    raise Exception(\\\"Non-200 response: \\\" + str(response.text))\\n\\ndata = response.json()\\n\\nfor i, image in enumerate(data[\\\"artifacts\\\"]):\\n    with open(f\\\"./out/v1_txt2img_{i}.png\\\", \\\"wb\\\") as f:\\n        f.write(base64.b64decode(image[\\\"base64\\\"]))\\n\"\n          },\n          {\n            \"label\": \"TypeScript\",\n            \"lang\": \"Javascript\",\n            \"source\": \"import fetch from 'node-fetch'\\nimport fs from 'node:fs'\\n\\nconst engineId = 'stable-diffusion-v1-6'\\nconst apiHost = process.env.API_HOST ?? 'https://api.stability.ai'\\nconst apiKey = process.env.STABILITY_API_KEY\\n\\nif (!apiKey) throw new Error('Missing Stability API key.')\\n\\nconst response = await fetch(\\n  `${apiHost}/v1/generation/${engineId}/text-to-image`,\\n  {\\n    method: 'POST',\\n    headers: {\\n      'Content-Type': 'application/json',\\n      Accept: 'application/json',\\n      Authorization: `Bearer ${apiKey}`,\\n    },\\n    body: JSON.stringify({\\n      text_prompts: [\\n        {\\n          text: 'A lighthouse on a cliff',\\n        },\\n      ],\\n      cfg_scale: 7,\\n      height: 1024,\\n      width: 1024,\\n      steps: 30,\\n      samples: 1,\\n    }),\\n  }\\n)\\n\\nif (!response.ok) {\\n  throw new Error(`Non-200 response: ${await response.text()}`)\\n}\\n\\ninterface GenerationResponse {\\n  artifacts: Array<{\\n    base64: string\\n    seed: number\\n    finishReason: string\\n  }>\\n}\\n\\nconst responseJSON = (await response.json()) as GenerationResponse\\n\\nresponseJSON.artifacts.forEach((image, index) => {\\n  fs.writeFileSync(\\n    `./out/v1_txt2img_${index}.png`,\\n    Buffer.from(image.base64, 'base64')\\n  )\\n})\\n\"\n          },\n          {\n            \"lang\": \"Go\",\n            \"source\": \"package main\\n\\nimport (\\n\\t\\\"bytes\\\"\\n\\t\\\"encoding/base64\\\"\\n\\t\\\"encoding/json\\\"\\n\\t\\\"fmt\\\"\\n\\t\\\"net/http\\\"\\n\\t\\\"os\\\"\\n)\\n\\ntype TextToImageImage struct {\\n\\tBase64       string `json:\\\"base64\\\"`\\n\\tSeed         uint32 `json:\\\"seed\\\"`\\n\\tFinishReason string `json:\\\"finishReason\\\"`\\n}\\n\\ntype TextToImageResponse struct {\\n\\tImages []TextToImageImage `json:\\\"artifacts\\\"`\\n}\\n\\nfunc main() {\\n\\t// Build REST endpoint URL w/ specified engine\\n\\tengineId := \\\"stable-diffusion-v1-6\\\"\\n\\tapiHost, hasApiHost := os.LookupEnv(\\\"API_HOST\\\")\\n\\tif !hasApiHost {\\n\\t\\tapiHost = \\\"https://api.stability.ai\\\"\\n\\t}\\n\\treqUrl := apiHost + \\\"/v1/generation/\\\" + engineId + \\\"/text-to-image\\\"\\n\\n\\t// Acquire an API key from the environment\\n\\tapiKey, hasAPIKey := os.LookupEnv(\\\"STABILITY_API_KEY\\\")\\n\\tif !hasAPIKey {\\n\\t\\tpanic(\\\"Missing STABILITY_API_KEY environment variable\\\")\\n\\t}\\n\\n\\tvar data = []byte(`{\\n\\t\\t\\\"text_prompts\\\": [\\n\\t\\t  {\\n\\t\\t\\t\\\"text\\\": \\\"A lighthouse on a cliff\\\"\\n\\t\\t  }\\n\\t\\t],\\n\\t\\t\\\"cfg_scale\\\": 7,\\n\\t\\t\\\"height\\\": 1024,\\n\\t\\t\\\"width\\\": 1024,\\n\\t\\t\\\"samples\\\": 1,\\n\\t\\t\\\"steps\\\": 30\\n  \\t}`)\\n\\n\\treq, _ := http.NewRequest(\\\"POST\\\", reqUrl, bytes.NewBuffer(data))\\n\\treq.Header.Add(\\\"Content-Type\\\", \\\"application/json\\\")\\n\\treq.Header.Add(\\\"Accept\\\", \\\"application/json\\\")\\n\\treq.Header.Add(\\\"Authorization\\\", \\\"Bearer \\\"+apiKey)\\n\\n\\t// Execute the request & read all the bytes of the body\\n\\tres, _ := http.DefaultClient.Do(req)\\n\\tdefer 
res.Body.Close()\\n\\n\\tif res.StatusCode != 200 {\\n\\t\\tvar body map[string]interface{}\\n\\t\\tif err := json.NewDecoder(res.Body).Decode(&body); err != nil {\\n\\t\\t\\tpanic(err)\\n\\t\\t}\\n\\t\\tpanic(fmt.Sprintf(\\\"Non-200 response: %s\\\", body))\\n\\t}\\n\\n\\t// Decode the JSON body\\n\\tvar body TextToImageResponse\\n\\tif err := json.NewDecoder(res.Body).Decode(&body); err != nil {\\n\\t\\tpanic(err)\\n\\t}\\n\\n\\t// Write the images to disk\\n\\tfor i, image := range body.Images {\\n\\t\\toutFile := fmt.Sprintf(\\\"./out/v1_txt2img_%d.png\\\", i)\\n\\t\\tfile, err := os.Create(outFile)\\n\\t\\tif err != nil {\\n\\t\\t\\tpanic(err)\\n\\t\\t}\\n\\n\\t\\timageBytes, err := base64.StdEncoding.DecodeString(image.Base64)\\n\\t\\tif err != nil {\\n\\t\\t\\tpanic(err)\\n\\t\\t}\\n\\n\\t\\tif _, err := file.Write(imageBytes); err != nil {\\n\\t\\t\\tpanic(err)\\n\\t\\t}\\n\\n\\t\\tif err := file.Close(); err != nil {\\n\\t\\t\\tpanic(err)\\n\\t\\t}\\n\\t}\\n}\\n\"\n          },\n          {\n            \"lang\": \"cURL\",\n            \"source\": \"if [ -z \\\"$STABILITY_API_KEY\\\" ]; then\\n    echo \\\"STABILITY_API_KEY environment variable is not set\\\"\\n    exit 1\\nfi\\n\\nOUTPUT_FILE=./out/v1_txt2img.png\\nBASE_URL=${API_HOST:-https://api.stability.ai}\\nURL=\\\"$BASE_URL/v1/generation/stable-diffusion-v1-6/text-to-image\\\"\\n\\ncurl -f -sS -X POST \\\"$URL\\\" \\\\\\n  -H 'Content-Type: application/json' \\\\\\n  -H 'Accept: image/png' \\\\\\n  -H \\\"Authorization: Bearer $STABILITY_API_KEY\\\" \\\\\\n  --data-raw '{\\n    \\\"text_prompts\\\": [\\n      {\\n        \\\"text\\\": \\\"A lighthouse on a cliff\\\"\\n      }\\n    ],\\n    \\\"cfg_scale\\\": 7,\\n    \\\"height\\\": 1024,\\n    \\\"width\\\": 1024,\\n    \\\"samples\\\": 1,\\n    \\\"steps\\\": 30\\n  }' \\\\\\n  -o \\\"$OUTPUT_FILE\\\"\\n\"\n          }\n        ]\n      }\n    },\n    \"/v1/generation/{engine_id}/image-to-image\": {\n      \"post\": {\n        \"description\": \"Produce an image from an existing image using a text prompt. \\n### How to control strength of generation\\nTo preserve only roughly 35% of the initial image, pass in either `init_image_mode=IMAGE_STRENGTH` and `image_strength=0.35` or `init_image_mode=STEP_SCHEDULE` and `step_schedule_start=0.65`.  Both of these are equivalent, however `init_image_mode=STEP_SCHEDULE` also lets you pass in `step_schedule_end`, which can provide an extra level of control for those who need it.  For more details, see the specific fields below.  
\\n\\n> NOTE: Only **Version 1** engines will work with this endpoint.\",\n        \"operationId\": \"imageToImage\",\n        \"summary\": \"Image-to-image with prompt\",\n        \"tags\": [\"SDXL 1.0 & SD1.6\"],\n        \"parameters\": [\n          {\n            \"$ref\": \"#/components/parameters/engineID\"\n          },\n          {\n            \"$ref\": \"#/components/parameters/accept\"\n          },\n          {\n            \"$ref\": \"#/components/parameters/organization\"\n          },\n          {\n            \"$ref\": \"#/components/parameters/stabilityClientID\"\n          },\n          {\n            \"$ref\": \"#/components/parameters/stabilityClientVersion\"\n          }\n        ],\n        \"requestBody\": {\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"$ref\": \"#/components/schemas/ImageToImageRequestBody\"\n              },\n              \"examples\": {\n                \"IMAGE_STRENGTH\": {\n                  \"summary\": \"Using IMAGE_STRENGTH\",\n                  \"description\": \"Request using 35% image_strength\",\n                  \"value\": {\n                    \"image_strength\": 0.35,\n                    \"init_image_mode\": \"IMAGE_STRENGTH\",\n                    \"init_image\": \"<image binary>\",\n                    \"text_prompts[0][text]\": \"A dog space commander\",\n                    \"text_prompts[0][weight]\": 1,\n                    \"cfg_scale\": 7,\n                    \"sampler\": \"K_DPM_2_ANCESTRAL\",\n                    \"samples\": 3,\n                    \"steps\": 30\n                  }\n                },\n                \"STEP_SCHEDULE\": {\n                  \"summary\": \"Using STEP_SCHEDULE\",\n                  \"description\": \"Equivalent request using step_schedule_start\",\n                  \"value\": {\n                    \"step_schedule_start\": 0.65,\n                    \"init_image_mode\": \"STEP_SCHEDULE\",\n                    \"init_image\": \"<image binary>\",\n                    \"text_prompts[0][text]\": \"A dog space commander\",\n                    \"text_prompts[0][weight]\": 1,\n                    \"cfg_scale\": 7,\n                    \"sampler\": \"K_DPM_2_ANCESTRAL\",\n                    \"samples\": 3,\n                    \"steps\": 30\n                  }\n                }\n              }\n            }\n          },\n          \"required\": true\n        },\n        \"responses\": {\n          \"200\": {\n            \"$ref\": \"#/components/responses/GenerationResponse\"\n          },\n          \"400\": {\n            \"$ref\": \"#/components/responses/400FromGeneration\"\n          },\n          \"401\": {\n            \"$ref\": \"#/components/responses/401\"\n          },\n          \"403\": {\n            \"$ref\": \"#/components/responses/403\"\n          },\n          \"404\": {\n            \"$ref\": \"#/components/responses/404\"\n          },\n          \"500\": {\n            \"$ref\": \"#/components/responses/500\"\n          }\n        },\n        \"security\": [\n          {\n            \"STABILITY_API_KEY\": []\n          }\n        ],\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"Python\",\n            \"source\": \"import base64\\nimport os\\nimport requests\\n\\nengine_id = \\\"stable-diffusion-v1-6\\\"\\napi_host = os.getenv(\\\"API_HOST\\\", \\\"https://api.stability.ai\\\")\\napi_key = os.getenv(\\\"STABILITY_API_KEY\\\")\\n\\nif api_key is None:\\n    raise 
Exception(\\\"Missing Stability API key.\\\")\\n\\nresponse = requests.post(\\n    f\\\"{api_host}/v1/generation/{engine_id}/image-to-image\\\",\\n    headers={\\n        \\\"Accept\\\": \\\"application/json\\\",\\n        \\\"Authorization\\\": f\\\"Bearer {api_key}\\\"\\n    },\\n    files={\\n        \\\"init_image\\\": open(\\\"../init_image.png\\\", \\\"rb\\\")\\n    },\\n    data={\\n        \\\"image_strength\\\": 0.35,\\n        \\\"init_image_mode\\\": \\\"IMAGE_STRENGTH\\\",\\n        \\\"text_prompts[0][text]\\\": \\\"Galactic dog with a cape\\\",\\n        \\\"cfg_scale\\\": 7,\\n        \\\"samples\\\": 1,\\n        \\\"steps\\\": 30,\\n    }\\n)\\n\\nif response.status_code != 200:\\n    raise Exception(\\\"Non-200 response: \\\" + str(response.text))\\n\\ndata = response.json()\\n\\nfor i, image in enumerate(data[\\\"artifacts\\\"]):\\n    with open(f\\\"./out/v1_img2img_{i}.png\\\", \\\"wb\\\") as f:\\n        f.write(base64.b64decode(image[\\\"base64\\\"]))\\n\"\n          },\n          {\n            \"label\": \"TypeScript\",\n            \"lang\": \"Javascript\",\n            \"source\": \"import fetch from 'node-fetch'\\nimport FormData from 'form-data'\\nimport fs from 'node:fs'\\n\\nconst engineId = 'stable-diffusion-v1-6'\\nconst apiHost = process.env.API_HOST ?? 'https://api.stability.ai'\\nconst apiKey = process.env.STABILITY_API_KEY\\n\\nif (!apiKey) throw new Error('Missing Stability API key.')\\n\\n// NOTE: This example is using a NodeJS FormData library.\\n// Browsers should use their native FormData class.\\n// React Native apps should also use their native FormData class.\\nconst formData = new FormData()\\nformData.append('init_image', fs.readFileSync('../init_image.png'))\\nformData.append('init_image_mode', 'IMAGE_STRENGTH')\\nformData.append('image_strength', 0.35)\\nformData.append('text_prompts[0][text]', 'Galactic dog wearing a cape')\\nformData.append('cfg_scale', 7)\\nformData.append('samples', 1)\\nformData.append('steps', 30)\\n\\nconst response = await fetch(\\n  `${apiHost}/v1/generation/${engineId}/image-to-image`,\\n  {\\n    method: 'POST',\\n    headers: {\\n      ...formData.getHeaders(),\\n      Accept: 'application/json',\\n      Authorization: `Bearer ${apiKey}`,\\n    },\\n    body: formData,\\n  }\\n)\\n\\nif (!response.ok) {\\n  throw new Error(`Non-200 response: ${await response.text()}`)\\n}\\n\\ninterface GenerationResponse {\\n  artifacts: Array<{\\n    base64: string\\n    seed: number\\n    finishReason: string\\n  }>\\n}\\n\\nconst responseJSON = (await response.json()) as GenerationResponse\\n\\nresponseJSON.artifacts.forEach((image, index) => {\\n  fs.writeFileSync(\\n    `out/v1_img2img_${index}.png`,\\n    Buffer.from(image.base64, 'base64')\\n  )\\n})\\n\"\n          },\n          {\n            \"lang\": \"Go\",\n            \"source\": \"package main\\n\\nimport (\\n\\t\\\"bytes\\\"\\n\\t\\\"encoding/base64\\\"\\n\\t\\\"encoding/json\\\"\\n\\t\\\"fmt\\\"\\n\\t\\\"io\\\"\\n\\t\\\"mime/multipart\\\"\\n\\t\\\"net/http\\\"\\n\\t\\\"os\\\"\\n)\\n\\ntype ImageToImageImage struct {\\n\\tBase64       string `json:\\\"base64\\\"`\\n\\tSeed         uint32 `json:\\\"seed\\\"`\\n\\tFinishReason string `json:\\\"finishReason\\\"`\\n}\\n\\ntype ImageToImageResponse struct {\\n\\tImages []ImageToImageImage `json:\\\"artifacts\\\"`\\n}\\n\\nfunc main() {\\n\\tengineId := \\\"stable-diffusion-v1-6\\\"\\n\\n\\t// Build REST endpoint URL\\n\\tapiHost, hasApiHost := os.LookupEnv(\\\"API_HOST\\\")\\n\\tif !hasApiHost {\\n\\t\\tapiHost = 
\\\"https://api.stability.ai\\\"\\n\\t}\\n\\treqUrl := apiHost + \\\"/v1/generation/\\\" + engineId + \\\"/image-to-image\\\"\\n\\n\\t// Acquire an API key from the environment\\n\\tapiKey, hasAPIKey := os.LookupEnv(\\\"STABILITY_API_KEY\\\")\\n\\tif !hasAPIKey {\\n\\t\\tpanic(\\\"Missing STABILITY_API_KEY environment variable\\\")\\n\\t}\\n\\n\\tdata := &bytes.Buffer{}\\n\\twriter := multipart.NewWriter(data)\\n\\n\\t// Write the init image to the request\\n\\tinitImageWriter, _ := writer.CreateFormField(\\\"init_image\\\")\\n\\tinitImageFile, initImageErr := os.Open(\\\"../init_image.png\\\")\\n\\tif initImageErr != nil {\\n\\t\\tpanic(\\\"Could not open init_image.png\\\")\\n\\t}\\n\\t_, _ = io.Copy(initImageWriter, initImageFile)\\n\\n\\t// Write the options to the request\\n\\t_ = writer.WriteField(\\\"init_image_mode\\\", \\\"IMAGE_STRENGTH\\\")\\n\\t_ = writer.WriteField(\\\"image_strength\\\", \\\"0.35\\\")\\n\\t_ = writer.WriteField(\\\"text_prompts[0][text]\\\", \\\"Galactic dog with a cape\\\")\\n\\t_ = writer.WriteField(\\\"cfg_scale\\\", \\\"7\\\")\\n\\t_ = writer.WriteField(\\\"samples\\\", \\\"1\\\")\\n\\t_ = writer.WriteField(\\\"steps\\\", \\\"30\\\")\\n\\twriter.Close()\\n\\n\\t// Execute the request\\n\\tpayload := bytes.NewReader(data.Bytes())\\n\\treq, _ := http.NewRequest(\\\"POST\\\", reqUrl, payload)\\n\\treq.Header.Add(\\\"Content-Type\\\", writer.FormDataContentType())\\n\\treq.Header.Add(\\\"Accept\\\", \\\"application/json\\\")\\n\\treq.Header.Add(\\\"Authorization\\\", \\\"Bearer \\\"+apiKey)\\n\\tres, _ := http.DefaultClient.Do(req)\\n\\tdefer res.Body.Close()\\n\\n\\tif res.StatusCode != 200 {\\n\\t\\tvar body map[string]interface{}\\n\\t\\tif err := json.NewDecoder(res.Body).Decode(&body); err != nil {\\n\\t\\t\\tpanic(err)\\n\\t\\t}\\n\\t\\tpanic(fmt.Sprintf(\\\"Non-200 response: %s\\\", body))\\n\\t}\\n\\n\\t// Decode the JSON body\\n\\tvar body ImageToImageResponse\\n\\tif err := json.NewDecoder(res.Body).Decode(&body); err != nil {\\n\\t\\tpanic(err)\\n\\t}\\n\\n\\t// Write the images to disk\\n\\tfor i, image := range body.Images {\\n\\t\\toutFile := fmt.Sprintf(\\\"./out/v1_img2img_%d.png\\\", i)\\n\\t\\tfile, err := os.Create(outFile)\\n\\t\\tif err != nil {\\n\\t\\t\\tpanic(err)\\n\\t\\t}\\n\\n\\t\\timageBytes, err := base64.StdEncoding.DecodeString(image.Base64)\\n\\t\\tif err != nil {\\n\\t\\t\\tpanic(err)\\n\\t\\t}\\n\\n\\t\\tif _, err := file.Write(imageBytes); err != nil {\\n\\t\\t\\tpanic(err)\\n\\t\\t}\\n\\n\\t\\tif err := file.Close(); err != nil {\\n\\t\\t\\tpanic(err)\\n\\t\\t}\\n\\t}\\n}\\n\"\n          },\n          {\n            \"lang\": \"cURL\",\n            \"source\": \"if [ -z \\\"$STABILITY_API_KEY\\\" ]; then\\n    echo \\\"STABILITY_API_KEY environment variable is not set\\\"\\n    exit 1\\nfi\\n\\nOUTPUT_FILE=./out/v1_img2img.png\\nBASE_URL=${API_HOST:-https://api.stability.ai}\\nURL=\\\"$BASE_URL/v1/generation/stable-diffusion-v1-6/image-to-image\\\"\\n\\ncurl -f -sS -X POST \\\"$URL\\\" \\\\\\n  -H 'Content-Type: multipart/form-data' \\\\\\n  -H 'Accept: image/png' \\\\\\n  -H \\\"Authorization: Bearer $STABILITY_API_KEY\\\" \\\\\\n  -F 'init_image=@\\\"../init_image.png\\\"' \\\\\\n  -F 'init_image_mode=IMAGE_STRENGTH' \\\\\\n  -F 'image_strength=0.35' \\\\\\n  -F 'text_prompts[0][text]=A galactic dog in space' \\\\\\n  -F 'cfg_scale=7' \\\\\\n  -F 'samples=1' \\\\\\n  -F 'steps=30' \\\\\\n  -o \\\"$OUTPUT_FILE\\\"\\n\"\n          }\n        ]\n      }\n    },\n    \"/v1/generation/{engine_id}/image-to-image/masking\": {\n  
    \"post\": {\n        \"description\": \"Selectively modify portions of an image using a mask. The `mask` must be the same shape and size as the init image. This endpoint also supports `image` parameters with alpha channels.  See below for more details. \\n\\n> NOTE: Only **Version 1** engines will work with this endpoint.\",\n        \"operationId\": \"masking\",\n        \"summary\": \"Image-to-image with a mask\",\n        \"tags\": [\"SDXL 1.0 & SD1.6\"],\n        \"parameters\": [\n          {\n            \"example\": \"stable-diffusion-xl-1024-v1-0\",\n            \"in\": \"path\",\n            \"name\": \"engine_id\",\n            \"required\": true,\n            \"schema\": {\n              \"type\": \"string\"\n            }\n          },\n          {\n            \"$ref\": \"#/components/parameters/accept\"\n          },\n          {\n            \"$ref\": \"#/components/parameters/organization\"\n          },\n          {\n            \"$ref\": \"#/components/parameters/stabilityClientID\"\n          },\n          {\n            \"$ref\": \"#/components/parameters/stabilityClientVersion\"\n          }\n        ],\n        \"requestBody\": {\n          \"required\": true,\n          \"content\": {\n            \"multipart/form-data\": {\n              \"schema\": {\n                \"$ref\": \"#/components/schemas/MaskingRequestBody\"\n              },\n              \"examples\": {\n                \"MASK_IMAGE_BLACK\": {\n                  \"value\": {\n                    \"mask_source\": \"MASK_IMAGE_BLACK\",\n                    \"init_image\": \"<image binary>\",\n                    \"mask_image\": \"<image binary>\",\n                    \"text_prompts[0][text]\": \"A dog space commander\",\n                    \"text_prompts[0][weight]\": 1,\n                    \"cfg_scale\": 7,\n                    \"sampler\": \"K_DPM_2_ANCESTRAL\",\n                    \"samples\": 3,\n                    \"steps\": 30\n                  }\n                },\n                \"MASK_IMAGE_WHITE\": {\n                  \"value\": {\n                    \"mask_source\": \"MASK_IMAGE_WHITE\",\n                    \"init_image\": \"<image binary>\",\n                    \"mask_image\": \"<image binary>\",\n                    \"text_prompts[0][text]\": \"A dog space commander\",\n                    \"text_prompts[0][weight]\": 1,\n                    \"cfg_scale\": 7,\n                    \"sampler\": \"K_DPM_2_ANCESTRAL\",\n                    \"samples\": 3,\n                    \"steps\": 30\n                  }\n                },\n                \"INIT_IMAGE_ALPHA\": {\n                  \"value\": {\n                    \"mask_source\": \"INIT_IMAGE_ALPHA\",\n                    \"init_image\": \"<image binary>\",\n                    \"text_prompts[0][text]\": \"A dog space commander\",\n                    \"text_prompts[0][weight]\": 1,\n                    \"cfg_scale\": 7,\n                    \"sampler\": \"K_DPM_2_ANCESTRAL\",\n                    \"samples\": 3,\n                    \"steps\": 30\n                  }\n                }\n              }\n            }\n          }\n        },\n        \"responses\": {\n          \"200\": {\n            \"$ref\": \"#/components/responses/GenerationResponse\"\n          },\n          \"400\": {\n            \"$ref\": \"#/components/responses/400FromGeneration\"\n          },\n          \"401\": {\n            \"$ref\": \"#/components/responses/401\"\n          },\n          \"403\": {\n            \"$ref\": 
\"#/components/responses/403\"\n          },\n          \"404\": {\n            \"$ref\": \"#/components/responses/404\"\n          },\n          \"500\": {\n            \"$ref\": \"#/components/responses/500\"\n          }\n        },\n        \"security\": [\n          {\n            \"STABILITY_API_KEY\": []\n          }\n        ],\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"Python\",\n            \"source\": \"import base64\\nimport os\\nimport requests\\n\\nengine_id = \\\"stable-diffusion-v1-6\\\"\\napi_host = os.getenv('API_HOST', 'https://api.stability.ai')\\napi_key = os.getenv(\\\"STABILITY_API_KEY\\\")\\n\\nif api_key is None:\\n    raise Exception(\\\"Missing Stability API key.\\\")\\n\\nresponse = requests.post(\\n    f\\\"{api_host}/v1/generation/{engine_id}/image-to-image/masking\\\",\\n    headers={\\n        \\\"Accept\\\": 'application/json',\\n        \\\"Authorization\\\": f\\\"Bearer {api_key}\\\"\\n    },\\n    files={\\n        'init_image': open(\\\"../init_image.png\\\", 'rb'),\\n        'mask_image': open(\\\"../mask_image_black.png\\\", 'rb'),\\n    },\\n    data={\\n        \\\"mask_source\\\": \\\"MASK_IMAGE_BLACK\\\",\\n        \\\"text_prompts[0][text]\\\": \\\"A large spiral galaxy with a bright central bulge and a ring of stars around it\\\",\\n        \\\"cfg_scale\\\": 7,\\n        \\\"clip_guidance_preset\\\": \\\"FAST_BLUE\\\",\\n        \\\"samples\\\": 1,\\n        \\\"steps\\\": 30,\\n    }\\n)\\n\\nif response.status_code != 200:\\n    raise Exception(\\\"Non-200 response: \\\" + str(response.text))\\n\\ndata = response.json()\\n\\nfor i, image in enumerate(data[\\\"artifacts\\\"]):\\n    with open(f\\\"./out/v1_img2img_masking_{i}.png\\\", \\\"wb\\\") as f:\\n        f.write(base64.b64decode(image[\\\"base64\\\"]))\\n\"\n          },\n          {\n            \"label\": \"TypeScript\",\n            \"lang\": \"Javascript\",\n            \"source\": \"import fetch from 'node-fetch'\\nimport FormData from 'form-data'\\nimport fs from 'node:fs'\\n\\nconst engineId = 'stable-diffusion-v1-6'\\nconst apiHost = process.env.API_HOST ?? 'https://api.stability.ai'\\nconst apiKey = process.env.STABILITY_API_KEY\\n\\nif (!apiKey) throw new Error('Missing Stability API key.')\\n\\n// NOTE: This example is using a NodeJS FormData library. Browser\\n// implementations should use their native FormData class. 
React Native\\n// implementations should also use their native FormData class.\\nconst formData = new FormData()\\nformData.append('init_image', fs.readFileSync('../init_image.png'))\\nformData.append('mask_image', fs.readFileSync('../mask_image_black.png'))\\nformData.append('mask_source', 'MASK_IMAGE_BLACK')\\nformData.append(\\n  'text_prompts[0][text]',\\n  'A large spiral galaxy with a bright central bulge and a ring of stars around it'\\n)\\nformData.append('cfg_scale', '7')\\nformData.append('clip_guidance_preset', 'FAST_BLUE')\\nformData.append('samples', 1)\\nformData.append('steps', 30)\\n\\nconst response = await fetch(\\n  `${apiHost}/v1/generation/${engineId}/image-to-image/masking`,\\n  {\\n    method: 'POST',\\n    headers: {\\n      ...formData.getHeaders(),\\n      Accept: 'application/json',\\n      Authorization: `Bearer ${apiKey}`,\\n    },\\n    body: formData,\\n  }\\n)\\n\\nif (!response.ok) {\\n  throw new Error(`Non-200 response: ${await response.text()}`)\\n}\\n\\ninterface GenerationResponse {\\n  artifacts: Array<{\\n    base64: string\\n    seed: number\\n    finishReason: string\\n  }>\\n}\\n\\nconst responseJSON = (await response.json()) as GenerationResponse\\n\\nresponseJSON.artifacts.forEach((image, index) => {\\n  fs.writeFileSync(\\n    `out/v1_img2img_masking_${index}.png`,\\n    Buffer.from(image.base64, 'base64')\\n  )\\n})\\n\"\n          },\n          {\n            \"lang\": \"Go\",\n            \"source\": \"package main\\n\\nimport (\\n\\t\\\"bytes\\\"\\n\\t\\\"encoding/base64\\\"\\n\\t\\\"encoding/json\\\"\\n\\t\\\"fmt\\\"\\n\\t\\\"io\\\"\\n\\t\\\"mime/multipart\\\"\\n\\t\\\"net/http\\\"\\n\\t\\\"os\\\"\\n)\\n\\ntype MaskingImage struct {\\n\\tBase64       string `json:\\\"base64\\\"`\\n\\tSeed         uint32 `json:\\\"seed\\\"`\\n\\tFinishReason string `json:\\\"finishReason\\\"`\\n}\\n\\ntype MaskingResponse struct {\\n\\tImages []MaskingImage `json:\\\"artifacts\\\"`\\n}\\n\\nfunc main() {\\n\\tengineId := \\\"stable-diffusion-v1-6\\\"\\n\\n\\t// Build REST endpoint URL\\n\\tapiHost, hasApiHost := os.LookupEnv(\\\"API_HOST\\\")\\n\\tif !hasApiHost {\\n\\t\\tapiHost = \\\"https://api.stability.ai\\\"\\n\\t}\\n\\treqUrl := apiHost + \\\"/v1/generation/\\\" + engineId + \\\"/image-to-image/masking\\\"\\n\\n\\t// Acquire an API key from the environment\\n\\tapiKey, hasAPIKey := os.LookupEnv(\\\"STABILITY_API_KEY\\\")\\n\\tif !hasAPIKey {\\n\\t\\tpanic(\\\"Missing STABILITY_API_KEY environment variable\\\")\\n\\t}\\n\\n\\tdata := &bytes.Buffer{}\\n\\twriter := multipart.NewWriter(data)\\n\\n\\t// Write the init image to the request\\n\\tinitImageWriter, _ := writer.CreateFormField(\\\"init_image\\\")\\n\\tinitImageFile, initImageErr := os.Open(\\\"../init_image.png\\\")\\n\\tif initImageErr != nil {\\n\\t\\tpanic(\\\"Could not open init_image.png\\\")\\n\\t}\\n\\t_, _ = io.Copy(initImageWriter, initImageFile)\\n\\n\\t// Write the mask image to the request\\n\\tmaskImageWriter, _ := writer.CreateFormField(\\\"mask_image\\\")\\n\\tmaskImageFile, maskImageErr := os.Open(\\\"../mask_image_black.png\\\")\\n\\tif maskImageErr != nil {\\n\\t\\tpanic(\\\"Could not open mask_image_black.png\\\")\\n\\t}\\n\\t_, _ = io.Copy(maskImageWriter, maskImageFile)\\n\\n\\t// Write the options to the request\\n\\t_ = writer.WriteField(\\\"mask_source\\\", \\\"MASK_IMAGE_BLACK\\\")\\n\\t_ = writer.WriteField(\\\"text_prompts[0][text]\\\", \\\"A large spiral galaxy with a bright central bulge and a ring of stars around it\\\")\\n\\t_ = 
writer.WriteField(\\\"cfg_scale\\\", \\\"7\\\")\\n\\t_ = writer.WriteField(\\\"clip_guidance_preset\\\", \\\"FAST_BLUE\\\")\\n\\t_ = writer.WriteField(\\\"samples\\\", \\\"1\\\")\\n\\t_ = writer.WriteField(\\\"steps\\\", \\\"30\\\")\\n\\twriter.Close()\\n\\n\\t// Execute the request & read all the bytes of the response\\n\\tpayload := bytes.NewReader(data.Bytes())\\n\\treq, _ := http.NewRequest(\\\"POST\\\", reqUrl, payload)\\n\\treq.Header.Add(\\\"Content-Type\\\", writer.FormDataContentType())\\n\\treq.Header.Add(\\\"Accept\\\", \\\"application/json\\\")\\n\\treq.Header.Add(\\\"Authorization\\\", \\\"Bearer \\\"+apiKey)\\n\\tres, _ := http.DefaultClient.Do(req)\\n\\tdefer res.Body.Close()\\n\\n\\tif res.StatusCode != 200 {\\n\\t\\tvar body map[string]interface{}\\n\\t\\tif err := json.NewDecoder(res.Body).Decode(&body); err != nil {\\n\\t\\t\\tpanic(err)\\n\\t\\t}\\n\\t\\tpanic(fmt.Sprintf(\\\"Non-200 response: %s\\\", body))\\n\\t}\\n\\n\\t// Decode the JSON body\\n\\tvar body MaskingResponse\\n\\tif err := json.NewDecoder(res.Body).Decode(&body); err != nil {\\n\\t\\tpanic(err)\\n\\t}\\n\\n\\t// Write the images to disk\\n\\tfor i, image := range body.Images {\\n\\t\\toutFile := fmt.Sprintf(\\\"./out/v1_img2img_masking_%d.png\\\", i)\\n\\t\\tfile, err := os.Create(outFile)\\n\\t\\tif err != nil {\\n\\t\\t\\tpanic(err)\\n\\t\\t}\\n\\n\\t\\timageBytes, err := base64.StdEncoding.DecodeString(image.Base64)\\n\\t\\tif err != nil {\\n\\t\\t\\tpanic(err)\\n\\t\\t}\\n\\n\\t\\tif _, err := file.Write(imageBytes); err != nil {\\n\\t\\t\\tpanic(err)\\n\\t\\t}\\n\\n\\t\\tif err := file.Close(); err != nil {\\n\\t\\t\\tpanic(err)\\n\\t\\t}\\n\\t}\\n}\\n\"\n          },\n          {\n            \"lang\": \"cURL\",\n            \"source\": \"#!/bin/sh\\n\\nset -e\\n\\nif [ -z \\\"$STABILITY_API_KEY\\\" ]; then\\n    echo \\\"STABILITY_API_KEY environment variable is not set\\\"\\n    exit 1\\nfi\\n\\nOUTPUT_FILE=./out/v1_img2img_masking.png\\nBASE_URL=${API_HOST:-https://api.stability.ai}\\nURL=\\\"$BASE_URL/v1/generation/stable-diffusion-v1-6/image-to-image/masking\\\"\\n\\ncurl -f -sS -X POST \\\"$URL\\\" \\\\\\n  -H 'Content-Type: multipart/form-data' \\\\\\n  -H 'Accept: image/png' \\\\\\n  -H \\\"Authorization: Bearer $STABILITY_API_KEY\\\" \\\\\\n  -F 'init_image=@\\\"../init_image.png\\\"' \\\\\\n  -F 'mask_image=@\\\"../mask_image_black.png\\\"' \\\\\\n  -F 'mask_source=MASK_IMAGE_BLACK' \\\\\\n  -F 'text_prompts[0][text]=A large spiral galaxy with a bright central bulge and a ring of stars around it' \\\\\\n  -F 'cfg_scale=7' \\\\\\n  -F 'clip_guidance_preset=FAST_BLUE' \\\\\\n  -F 'samples=1' \\\\\\n  -F 'steps=30' \\\\\\n  -o \\\"$OUTPUT_FILE\\\"\\n\"\n          }\n        ]\n      }\n    },\n    \"/v1/engines/list\": {\n      \"get\": {\n        \"description\": \"List all engines available to your organization/user\",\n        \"operationId\": \"listEngines\",\n        \"summary\": \"List engines\",\n        \"tags\": [\"Engines\"],\n        \"parameters\": [\n          {\n            \"$ref\": \"#/components/parameters/organization\"\n          },\n          {\n            \"$ref\": \"#/components/parameters/stabilityClientID\"\n          },\n          {\n            \"$ref\": \"#/components/parameters/stabilityClientVersion\"\n          }\n        ],\n        \"responses\": {\n          \"200\": {\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/ListEnginesResponseBody\"\n              
  }\n              }\n            },\n            \"description\": \"OK response.\"\n          },\n          \"401\": {\n            \"$ref\": \"#/components/responses/401\"\n          },\n          \"500\": {\n            \"$ref\": \"#/components/responses/500\"\n          }\n        },\n        \"security\": [\n          {\n            \"STABILITY_API_KEY\": []\n          }\n        ],\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"Python\",\n            \"source\": \"import os\\nimport requests\\n\\napi_host = os.getenv('API_HOST', 'https://api.stability.ai')\\nurl = f\\\"{api_host}/v1/engines/list\\\"\\n\\napi_key = os.getenv(\\\"STABILITY_API_KEY\\\")\\nif api_key is None:\\n    raise Exception(\\\"Missing Stability API key.\\\")\\n\\nresponse = requests.get(url, headers={\\n    \\\"Authorization\\\": f\\\"Bearer {api_key}\\\"\\n})\\n\\nif response.status_code != 200:\\n    raise Exception(\\\"Non-200 response: \\\" + str(response.text))\\n\\n# Do something with the payload...\\npayload = response.json()\\n\\n\"\n          },\n          {\n            \"label\": \"TypeScript\",\n            \"lang\": \"Javascript\",\n            \"source\": \"import fetch from 'node-fetch'\\n\\nconst apiHost = process.env.API_HOST ?? 'https://api.stability.ai'\\nconst url = `${apiHost}/v1/engines/list`\\n\\nconst apiKey = process.env.STABILITY_API_KEY\\nif (!apiKey) throw new Error('Missing Stability API key.')\\n\\nconst response = await fetch(url, {\\n  method: 'GET',\\n  headers: {\\n    Authorization: `Bearer ${apiKey}`,\\n  },\\n})\\n\\nif (!response.ok) {\\n  throw new Error(`Non-200 response: ${await response.text()}`)\\n}\\n\\ninterface Payload {\\n  engines: Array<{\\n    id: string\\n    name: string\\n    description: string\\n    type: string\\n  }>\\n}\\n\\n// Do something with the payload...\\nconst payload = (await response.json()) as Payload\\n\"\n          },\n          {\n            \"lang\": \"Go\",\n            \"source\": \"package main\\n\\nimport (\\n\\t\\\"io\\\"\\n\\t\\\"net/http\\\"\\n\\t\\\"os\\\"\\n)\\n\\nfunc main() {\\n\\t// Build REST endpoint URL\\n\\tapiHost, hasApiHost := os.LookupEnv(\\\"API_HOST\\\")\\n\\tif !hasApiHost {\\n\\t\\tapiHost = \\\"https://api.stability.ai\\\"\\n\\t}\\n\\treqUrl := apiHost + \\\"/v1/engines/list\\\"\\n\\n\\t// Acquire an API key from the environment\\n\\tapiKey, hasAPIKey := os.LookupEnv(\\\"STABILITY_API_KEY\\\")\\n\\tif !hasAPIKey {\\n\\t\\tpanic(\\\"Missing STABILITY_API_KEY environment variable\\\")\\n\\t}\\n\\n\\t// Execute the request & read all the bytes of the response\\n\\treq, _ := http.NewRequest(\\\"GET\\\", reqUrl, nil)\\n\\treq.Header.Add(\\\"Authorization\\\", \\\"Bearer \\\"+apiKey)\\n\\tres, _ := http.DefaultClient.Do(req)\\n\\tdefer res.Body.Close()\\n\\tbody, _ := io.ReadAll(res.Body)\\n\\n\\tif res.StatusCode != 200 {\\n\\t\\tpanic(\\\"Non-200 response: \\\" + string(body))\\n\\t}\\n\\n\\t// Do something with the payload...\\n\\t// payload := string(body)\\n}\\n\"\n          },\n          {\n            \"lang\": \"cURL\",\n            \"source\": \"if [ -z \\\"$STABILITY_API_KEY\\\" ]; then\\n    echo \\\"STABILITY_API_KEY environment variable is not set\\\"\\n    exit 1\\nfi\\n\\nBASE_URL=${API_HOST:-https://api.stability.ai}\\nURL=\\\"$BASE_URL/v1/engines/list\\\"\\n\\ncurl -f -sS \\\"$URL\\\" \\\\\\n  -H 'Accept: application/json' \\\\\\n  -H \\\"Authorization: Bearer $STABILITY_API_KEY\\\"\\n\"\n          }\n        ]\n      }\n    },\n    \"/v1/user/account\": {\n      \"get\": {\n        
\"description\": \"Get information about the account associated with the provided API key\",\n        \"operationId\": \"userAccount\",\n        \"summary\": \"Account details\",\n        \"tags\": [\"User\"],\n        \"responses\": {\n          \"200\": {\n            \"content\": {\n              \"application/json\": {\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/AccountResponseBody\"\n                }\n              }\n            },\n            \"description\": \"OK response.\"\n          },\n          \"401\": {\n            \"$ref\": \"#/components/responses/401\"\n          },\n          \"500\": {\n            \"$ref\": \"#/components/responses/500\"\n          }\n        },\n        \"security\": [\n          {\n            \"STABILITY_API_KEY\": []\n          }\n        ],\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"Python\",\n            \"source\": \"import os\\nimport requests\\n\\napi_host = os.getenv('API_HOST', 'https://api.stability.ai')\\nurl = f\\\"{api_host}/v1/user/account\\\"\\n\\napi_key = os.getenv(\\\"STABILITY_API_KEY\\\")\\nif api_key is None:\\n    raise Exception(\\\"Missing Stability API key.\\\")\\n\\nresponse = requests.get(url, headers={\\n    \\\"Authorization\\\": f\\\"Bearer {api_key}\\\"\\n})\\n\\nif response.status_code != 200:\\n    raise Exception(\\\"Non-200 response: \\\" + str(response.text))\\n\\n# Do something with the payload...\\npayload = response.json()\\n\\n\"\n          },\n          {\n            \"label\": \"TypeScript\",\n            \"lang\": \"Javascript\",\n            \"source\": \"import fetch from 'node-fetch'\\n\\nconst apiHost = process.env.API_HOST ?? 'https://api.stability.ai'\\nconst url = `${apiHost}/v1/user/account`\\n\\nconst apiKey = process.env.STABILITY_API_KEY\\nif (!apiKey) throw new Error('Missing Stability API key.')\\n\\nconst response = await fetch(url, {\\n  method: 'GET',\\n  headers: {\\n    Authorization: `Bearer ${apiKey}`,\\n  },\\n})\\n\\nif (!response.ok) {\\n  throw new Error(`Non-200 response: ${await response.text()}`)\\n}\\n\\ninterface User {\\n  id: string\\n  profile_picture: string\\n  email: string\\n  organizations?: Array<{\\n    id: string\\n    name: string\\n    role: string\\n    is_default: boolean\\n  }>\\n}\\n\\n// Do something with the user...\\nconst user = (await response.json()) as User\\n\"\n          },\n          {\n            \"lang\": \"Go\",\n            \"source\": \"package main\\n\\nimport (\\n\\t\\\"io\\\"\\n\\t\\\"net/http\\\"\\n\\t\\\"os\\\"\\n)\\n\\nfunc main() {\\n\\t// Build REST endpoint URL\\n\\tapiHost, hasApiHost := os.LookupEnv(\\\"API_HOST\\\")\\n\\tif !hasApiHost {\\n\\t\\tapiHost = \\\"https://api.stability.ai\\\"\\n\\t}\\n\\treqUrl := apiHost + \\\"/v1/user/account\\\"\\n\\n\\t// Acquire an API key from the environment\\n\\tapiKey, hasAPIKey := os.LookupEnv(\\\"STABILITY_API_KEY\\\")\\n\\tif !hasAPIKey {\\n\\t\\tpanic(\\\"Missing STABILITY_API_KEY environment variable\\\")\\n\\t}\\n\\n\\t// Build the request\\n\\treq, _ := http.NewRequest(\\\"GET\\\", reqUrl, nil)\\n\\treq.Header.Add(\\\"Authorization\\\", \\\"Bearer \\\"+apiKey)\\n\\n\\t// Execute the request\\n\\tres, _ := http.DefaultClient.Do(req)\\n\\tdefer res.Body.Close()\\n\\tbody, _ := io.ReadAll(res.Body)\\n\\n\\tif res.StatusCode != 200 {\\n\\t\\tpanic(\\\"Non-200 response: \\\" + string(body))\\n\\t}\\n\\n\\t// Do something with the payload...\\n\\t// payload := string(body)\\n}\\n\"\n          },\n          {\n            \"lang\": 
\"cURL\",\n            \"source\": \"if [ -z \\\"$STABILITY_API_KEY\\\" ]; then\\n    echo \\\"STABILITY_API_KEY environment variable is not set\\\"\\n    exit 1\\nfi\\n\\n# Determine the URL to use for the request\\nBASE_URL=${API_HOST:-https://api.stability.ai}\\nURL=\\\"$BASE_URL/v1/user/account\\\"\\n\\ncurl -f -sS \\\"$URL\\\" \\\\\\n  -H 'Accept: application/json' \\\\\\n  -H \\\"Authorization: Bearer $STABILITY_API_KEY\\\"\\n\"\n          }\n        ]\n      }\n    },\n    \"/v1/user/balance\": {\n      \"get\": {\n        \"description\": \"Get the credit balance of the account/organization associated with the API key\",\n        \"operationId\": \"userBalance\",\n        \"summary\": \"Account balance\",\n        \"tags\": [\"User\"],\n        \"parameters\": [\n          {\n            \"$ref\": \"#/components/parameters/organization\"\n          },\n          {\n            \"$ref\": \"#/components/parameters/stabilityClientID\"\n          },\n          {\n            \"$ref\": \"#/components/parameters/stabilityClientVersion\"\n          }\n        ],\n        \"responses\": {\n          \"200\": {\n            \"content\": {\n              \"application/json\": {\n                \"example\": {\n                  \"credits\": 0.6336833840314097\n                },\n                \"schema\": {\n                  \"$ref\": \"#/components/schemas/BalanceResponseBody\"\n                }\n              }\n            },\n            \"description\": \"OK response.\"\n          },\n          \"401\": {\n            \"$ref\": \"#/components/responses/401\"\n          },\n          \"500\": {\n            \"$ref\": \"#/components/responses/500\"\n          }\n        },\n        \"security\": [\n          {\n            \"STABILITY_API_KEY\": []\n          }\n        ],\n        \"x-codeSamples\": [\n          {\n            \"lang\": \"Python\",\n            \"source\": \"import os\\nimport requests\\n\\napi_host = os.getenv('API_HOST', 'https://api.stability.ai')\\nurl = f\\\"{api_host}/v1/user/balance\\\"\\n\\napi_key = os.getenv(\\\"STABILITY_API_KEY\\\")\\nif api_key is None:\\n    raise Exception(\\\"Missing Stability API key.\\\")\\n\\nresponse = requests.get(url, headers={\\n    \\\"Authorization\\\": f\\\"Bearer {api_key}\\\"\\n})\\n\\nif response.status_code != 200:\\n    raise Exception(\\\"Non-200 response: \\\" + str(response.text))\\n\\n# Do something with the payload...\\npayload = response.json()\\n\\n\"\n          },\n          {\n            \"label\": \"TypeScript\",\n            \"lang\": \"Javascript\",\n            \"source\": \"import fetch from 'node-fetch'\\n\\nconst apiHost = process.env.API_HOST ?? 
'https://api.stability.ai'\\nconst url = `${apiHost}/v1/user/balance`\\n\\nconst apiKey = process.env.STABILITY_API_KEY\\nif (!apiKey) throw new Error('Missing Stability API key.')\\n\\nconst response = await fetch(url, {\\n  method: 'GET',\\n  headers: {\\n    Authorization: `Bearer ${apiKey}`,\\n  },\\n})\\n\\nif (!response.ok) {\\n  throw new Error(`Non-200 response: ${await response.text()}`)\\n}\\n\\ninterface Balance {\\n  credits: number\\n}\\n\\n// Do something with the balance...\\nconst balance = (await response.json()) as Balance\\n\"\n          },\n          {\n            \"lang\": \"Go\",\n            \"source\": \"package main\\n\\nimport (\\n\\t\\\"io\\\"\\n\\t\\\"net/http\\\"\\n\\t\\\"os\\\"\\n)\\n\\nfunc main() {\\n\\t// Build REST endpoint URL\\n\\tapiHost, hasApiHost := os.LookupEnv(\\\"API_HOST\\\")\\n\\tif !hasApiHost {\\n\\t\\tapiHost = \\\"https://api.stability.ai\\\"\\n\\t}\\n\\treqUrl := apiHost + \\\"/v1/user/balance\\\"\\n\\n\\t// Acquire an API key from the environment\\n\\tapiKey, hasAPIKey := os.LookupEnv(\\\"STABILITY_API_KEY\\\")\\n\\tif !hasAPIKey {\\n\\t\\tpanic(\\\"Missing STABILITY_API_KEY environment variable\\\")\\n\\t}\\n\\n\\t// Build the request\\n\\treq, _ := http.NewRequest(\\\"GET\\\", reqUrl, nil)\\n\\treq.Header.Add(\\\"Authorization\\\", \\\"Bearer \\\"+apiKey)\\n\\n\\t// Execute the request\\n\\tres, _ := http.DefaultClient.Do(req)\\n\\tdefer res.Body.Close()\\n\\tbody, _ := io.ReadAll(res.Body)\\n\\n\\tif res.StatusCode != 200 {\\n\\t\\tpanic(\\\"Non-200 response: \\\" + string(body))\\n\\t}\\n\\n\\t// Do something with the payload...\\n\\t// payload := string(body)\\n}\\n\"\n          },\n          {\n            \"lang\": \"cURL\",\n            \"source\": \"if [ -z \\\"$STABILITY_API_KEY\\\" ]; then\\n    echo \\\"STABILITY_API_KEY environment variable is not set\\\"\\n    exit 1\\nfi\\n\\n# Determine the URL to use for the request\\nBASE_URL=${API_HOST:-https://api.stability.ai}\\nURL=\\\"$BASE_URL/v1/user/balance\\\"\\n\\ncurl -f -sS \\\"$URL\\\" \\\\\\n  -H 'Content-Type: application/json' \\\\\\n  -H \\\"Authorization: Bearer $STABILITY_API_KEY\\\"\\n\"\n          }\n        ]\n      }\n    }\n  },\n  \"components\": {\n    \"schemas\": {\n      \"GenerationID\": {\n        \"type\": \"string\",\n        \"minLength\": 64,\n        \"maxLength\": 64,\n        \"description\": \"The `id` of a generation, typically used for async generations, that can be used to check the status of the generation or retrieve the result.\",\n        \"example\": \"a6dc6c6e20acda010fe14d71f180658f2896ed9b4ec25aa99a6ff06c796987c4\"\n      },\n      \"ImageToVideoRequest\": {\n        \"type\": \"object\",\n        \"properties\": {\n          \"image\": {\n            \"type\": \"string\",\n            \"description\": \"The source image used in the video generation process.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n\\nSupported Dimensions:\\n- 1024x576\\n- 576x1024\\n- 768x768\",\n            \"format\": \"binary\",\n            \"example\": \"./some/image.png\"\n          },\n          \"seed\": {\n            \"type\": \"number\",\n            \"minimum\": 0,\n            \"maximum\": 4294967294,\n            \"default\": 0,\n            \"description\": \"A specific value that is used to guide the 'randomness' of the generation. 
(Omit this parameter or pass `0` to use a random seed.)\"\n          },\n          \"cfg_scale\": {\n            \"type\": \"number\",\n            \"minimum\": 0,\n            \"maximum\": 10,\n            \"default\": 1.8,\n            \"description\": \"How strongly the video sticks to the original image. Use lower values to allow the model more freedom to make changes and higher values to correct motion distortions.\"\n          },\n          \"motion_bucket_id\": {\n            \"type\": \"number\",\n            \"minimum\": 1,\n            \"maximum\": 255,\n            \"default\": 127,\n            \"description\": \"Lower values generally result in less motion in the output video, while higher values generally result in more motion. This parameter corresponds to the motion_bucket_id parameter from the [paper](https://static1.squarespace.com/static/6213c340453c3f502425776e/t/655ce779b9d47d342a93c890/1700587395994/stable_video_diffusion.pdf).\"\n          }\n        },\n        \"required\": [\"image\"]\n      },\n      \"ContentModerationResponse\": {\n        \"type\": \"object\",\n        \"properties\": {\n          \"id\": {\n            \"type\": \"string\",\n            \"minLength\": 1,\n            \"description\": \"A unique identifier associated with this error. Please include this in any [support tickets](https://stabilityplatform.freshdesk.com/support/tickets/new) \\nyou file, as it will greatly assist us in diagnosing the root cause of the problem.\",\n            \"example\": \"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4\"\n          },\n          \"name\": {\n            \"type\": \"string\",\n            \"minLength\": 1,\n            \"description\": \"Our content moderation system has flagged some part of your request and subsequently denied it.  You were not charged for this request.  While this may at times be frustrating, it is necessary to maintain the integrity of our platform and ensure a safe experience for all users.\\n\\nIf you would like to provide feedback, please use the [Support Form](https://stabilityplatform.freshdesk.com/support/tickets/new).\",\n            \"enum\": [\"content_moderation\"]\n          },\n          \"errors\": {\n            \"type\": \"array\",\n            \"items\": {\n              \"type\": \"string\"\n            },\n            \"minItems\": 1,\n            \"description\": \"One or more error messages indicating what went wrong.\",\n            \"example\": [\"some-field: is required\"]\n          }\n        },\n        \"required\": [\"id\", \"name\", \"errors\"],\n        \"description\": \"Your request was flagged by our content moderation system.\",\n        \"example\": {\n          \"id\": \"ed14db44362126aab3cbd25cca51ffe3\",\n          \"name\": \"content_moderation\",\n          \"errors\": [\n            \"Your request was flagged by our content moderation system, as a result your request was denied and you were not charged.\"\n          ]\n        }\n      },\n      \"InpaintingSearchModeRequestBody\": {\n        \"type\": \"object\",\n        \"properties\": {\n          \"mode\": {\n            \"type\": \"string\",\n            \"enum\": [\"search\"],\n            \"description\": \"Controls how the model decides which areas to inpaint and which areas to leave alone.  
\\n\\nSpecifying `mask` requires:\\n  - Provide an explicit mask image in the `mask` parameter\\n  - Use the alpha channel of the `image` parameter as the mask\\n  \\nSpecifying `search` requires:\\n  - Provide a small description of what to inpaint in the `search_prompt` parameter\"\n          },\n          \"search_prompt\": {\n            \"type\": \"string\",\n            \"description\": \"Short description of what to inpaint in the `image`.\",\n            \"example\": \"glasses\"\n          },\n          \"image\": {\n            \"type\": \"string\",\n            \"description\": \"The image you wish to inpaint.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 9,437,184 pixels\",\n            \"format\": \"binary\",\n            \"example\": \"./some/image.png\"\n          },\n          \"prompt\": {\n            \"type\": \"string\",\n            \"minLength\": 1,\n            \"maxLength\": 10000,\n            \"description\": \"What you wish to see in the output image. A strong, descriptive prompt that clearly defines \\nelements, colors, and subjects will lead to better results. \\n\\nTo control the weight of a given word use the format `(word:weight)`, \\nwhere `word` is the word you'd like to control the weight of and `weight` \\nis a value between 0 and 1. For example: `The sky was a crisp (blue:0.3) and (green:0.8)`\\nwould convey a sky that was blue and green, but more green than blue.\"\n          },\n          \"negative_prompt\": {\n            \"type\": \"string\",\n            \"maxLength\": 10000,\n            \"description\": \"A blurb of text describing what you **do not** wish to see in the output image.  \\nThis is an advanced feature.\"\n          },\n          \"seed\": {\n            \"type\": \"number\",\n            \"minimum\": 0,\n            \"maximum\": 4294967294,\n            \"default\": 0,\n            \"description\": \"A specific value that is used to guide the 'randomness' of the generation. (Omit this parameter or pass `0` to use a random seed.)\"\n          },\n          \"output_format\": {\n            \"type\": \"string\",\n            \"enum\": [\"jpeg\", \"png\", \"webp\"],\n            \"default\": \"png\",\n            \"description\": \"Dictates the `content-type` of the generated image.\"\n          }\n        },\n        \"required\": [\"image\", \"prompt\", \"mode\", \"search_prompt\"]\n      },\n      \"InpaintingMaskingModeRequestBody\": {\n        \"type\": \"object\",\n        \"properties\": {\n          \"mode\": {\n            \"type\": \"string\",\n            \"enum\": [\"mask\"],\n            \"description\": \"Controls how the model decides which areas to inpaint and which areas to leave alone.  
\\n\\nSpecifying `mask` requires:\\n  - Provide an explicit mask image in the `mask` parameter\\n  - Use the alpha channel of the `image` parameter as the mask\\n  \\nSpecifying `search` requires:\\n  - Provide a small description of what to inpaint in the `search_prompt` parameter\"\n          },\n          \"mask\": {\n            \"type\": \"string\",\n            \"description\": \"Controls the strength of the inpainting process on a per-pixel basis, either via a \\nsecond image (passed into this parameter) or via the alpha channel of the `image` parameter.\\n\\n**Passing in a Mask**  \\n\\nThe image passed to this parameter should be a black and white image that represents, \\nat any pixel, the strength of inpainting based on how dark or light the given pixel is. \\nCompletely black pixels represent no inpainting strength while completely white pixels \\nrepresent maximum strength.\\n\\nIn the event the mask is a different size than the `image` parameter, it will be automatically resized.\\n\\n**Alpha Channel Support**\\n\\nIf you don't provide an explicit mask, one will be derived from the alpha channel of the `image` parameter.\\nTransparent pixels will be inpainted while opaque pixels will be preserved.\\n\\nIn the event an `image` with an alpha channel is provided along with a `mask`, the `mask` will take precedence.\",\n            \"format\": \"binary\",\n            \"example\": \"./some/image.png\"\n          },\n          \"image\": {\n            \"type\": \"string\",\n            \"description\": \"The image you wish to inpaint.\\n\\nSupported Formats:\\n- jpeg\\n- png\\n- webp\\n\\nValidation Rules:\\n- Every side must be at least 64 pixels\\n- Total pixel count must be between 4,096 and 9,437,184 pixels\",\n            \"format\": \"binary\",\n            \"example\": \"./some/image.png\"\n          },\n          \"prompt\": {\n            \"type\": \"string\",\n            \"minLength\": 1,\n            \"maxLength\": 10000,\n            \"description\": \"What you wish to see in the output image. A strong, descriptive prompt that clearly defines \\nelements, colors, and subjects will lead to better results. \\n\\nTo control the weight of a given word use the format `(word:weight)`, \\nwhere `word` is the word you'd like to control the weight of and `weight` \\nis a value between 0 and 1. For example: `The sky was a crisp (blue:0.3) and (green:0.8)`\\nwould convey a sky that was blue and green, but more green than blue.\"\n          },\n          \"negative_prompt\": {\n            \"type\": \"string\",\n            \"maxLength\": 10000,\n            \"description\": \"A blurb of text describing what you **do not** wish to see in the output image.  \\nThis is an advanced feature.\"\n          },\n          \"seed\": {\n            \"type\": \"number\",\n            \"minimum\": 0,\n            \"maximum\": 4294967294,\n            \"default\": 0,\n            \"description\": \"A specific value that is used to guide the 'randomness' of the generation. 
(Omit this parameter or pass `0` to use a random seed.)\"\n          },\n          \"output_format\": {\n            \"type\": \"string\",\n            \"enum\": [\"jpeg\", \"png\", \"webp\"],\n            \"default\": \"png\",\n            \"description\": \"Dictates the `content-type` of the generated image.\"\n          }\n        },\n        \"required\": [\"image\", \"prompt\", \"mode\"]\n      },\n      \"StabilityClientID\": {\n        \"type\": \"string\",\n        \"maxLength\": 256,\n        \"description\": \"The name of your application, used to help us communicate app-specific debugging or moderation issues to you.\",\n        \"example\": \"my-awesome-app\"\n      },\n      \"StabilityClientUserID\": {\n        \"type\": \"string\",\n        \"maxLength\": 256,\n        \"description\": \"A unique identifier for your end user. Used to help us communicate user-specific debugging or moderation issues to you. Feel free to obfuscate this value to protect user privacy.\",\n        \"example\": \"DiscordUser#9999\"\n      },\n      \"StabilityClientVersion\": {\n        \"type\": \"string\",\n        \"maxLength\": 256,\n        \"description\": \"The version of your application, used to help us communicate version-specific debugging or moderation issues to you.\",\n        \"example\": \"1.2.1\"\n      },\n      \"Creativity\": {\n        \"type\": \"number\",\n        \"minimum\": 0.2,\n        \"maximum\": 0.5,\n        \"default\": 0.35,\n        \"description\": \"Controls the likelihood of creating additional details not heavily conditioned by the init image.\"\n      },\n      \"Engine\": {\n        \"type\": \"object\",\n        \"properties\": {\n          \"description\": {\n            \"type\": \"string\"\n          },\n          \"id\": {\n            \"type\": \"string\",\n            \"x-go-name\": \"ID\",\n            \"description\": \"Unique identifier for the engine\",\n            \"example\": \"stable-diffusion-v1-6\"\n          },\n          \"name\": {\n            \"type\": \"string\",\n            \"description\": \"Name of the engine\",\n            \"example\": \"Stable Diffusion XL v1.0\"\n          },\n          \"type\": {\n            \"type\": \"string\",\n            \"description\": \"The type of content this engine produces\",\n            \"example\": \"PICTURE\",\n            \"enum\": [\n              \"AUDIO\",\n              \"CLASSIFICATION\",\n              \"PICTURE\",\n              \"STORAGE\",\n              \"TEXT\",\n              \"VIDEO\"\n            ]\n          }\n        },\n        \"required\": [\"id\", \"name\", \"description\", \"type\"]\n      },\n      \"Error\": {\n        \"type\": \"object\",\n        \"x-go-name\": \"RESTError\",\n        \"properties\": {\n          \"id\": {\n            \"x-go-name\": \"ID\",\n            \"type\": \"string\",\n            \"description\": \"A unique identifier for this particular occurrence of the problem.\",\n            \"example\": \"296a972f-666a-44a1-a3df-c9c28a1f56c0\"\n          },\n          \"name\": {\n            \"type\": \"string\",\n            \"description\": \"The short-name of this class of errors e.g. 
`bad_request`.\",\n            \"example\": \"bad_request\"\n          },\n          \"message\": {\n            \"type\": \"string\",\n            \"description\": \"A human-readable explanation specific to this occurrence of the problem.\",\n            \"example\": \"Header parameter Authorization is required, but not found\"\n          }\n        },\n        \"required\": [\"name\", \"id\", \"message\", \"status\"]\n      },\n      \"CfgScale\": {\n        \"type\": \"number\",\n        \"description\": \"How strictly the diffusion process adheres to the prompt text (higher values keep your image closer to your prompt)\",\n        \"default\": 7,\n        \"example\": 7,\n        \"minimum\": 0,\n        \"maximum\": 35\n      },\n      \"ClipGuidancePreset\": {\n        \"type\": \"string\",\n        \"default\": \"NONE\",\n        \"example\": \"FAST_BLUE\",\n        \"enum\": [\n          \"FAST_BLUE\",\n          \"FAST_GREEN\",\n          \"NONE\",\n          \"SIMPLE\",\n          \"SLOW\",\n          \"SLOWER\",\n          \"SLOWEST\"\n        ]\n      },\n      \"UpscaleImageHeight\": {\n        \"x-go-type\": \"uint64\",\n        \"type\": \"integer\",\n        \"description\": \"Desired height of the output image.  Only one of `width` or `height` may be specified.\",\n        \"minimum\": 512\n      },\n      \"UpscaleImageWidth\": {\n        \"x-go-type\": \"uint64\",\n        \"type\": \"integer\",\n        \"description\": \"Desired width of the output image.  Only one of `width` or `height` may be specified.\",\n        \"minimum\": 512\n      },\n      \"DiffuseImageHeight\": {\n        \"x-go-type\": \"uint64\",\n        \"type\": \"integer\",\n        \"description\": \"Height of the image to generate, in pixels, in an increment divisible by 64.\",\n        \"multipleOf\": 64,\n        \"default\": 512,\n        \"example\": 512,\n        \"minimum\": 128\n      },\n      \"DiffuseImageWidth\": {\n        \"x-go-type\": \"uint64\",\n        \"type\": \"integer\",\n        \"description\": \"Width of the image to generate, in pixels, in an increment divisible by 64.\",\n        \"multipleOf\": 64,\n        \"default\": 512,\n        \"example\": 512,\n        \"minimum\": 128\n      },\n      \"Sampler\": {\n        \"type\": \"string\",\n        \"description\": \"Which sampler to use for the diffusion process. 
If this value is omitted we'll automatically select an appropriate sampler for you.\",\n        \"example\": \"K_DPM_2_ANCESTRAL\",\n        \"enum\": [\n          \"DDIM\",\n          \"DDPM\",\n          \"K_DPMPP_2M\",\n          \"K_DPMPP_2S_ANCESTRAL\",\n          \"K_DPM_2\",\n          \"K_DPM_2_ANCESTRAL\",\n          \"K_EULER\",\n          \"K_EULER_ANCESTRAL\",\n          \"K_HEUN\",\n          \"K_LMS\"\n        ]\n      },\n      \"Samples\": {\n        \"x-go-type\": \"uint64\",\n        \"type\": \"integer\",\n        \"description\": \"Number of images to generate\",\n        \"default\": 1,\n        \"example\": 1,\n        \"minimum\": 1,\n        \"maximum\": 10\n      },\n      \"Seed\": {\n        \"type\": \"integer\",\n        \"x-go-type\": \"uint32\",\n        \"description\": \"Random noise seed (omit this option or use `0` for a random seed)\",\n        \"default\": 0,\n        \"example\": 0,\n        \"minimum\": 0,\n        \"maximum\": 4294967295\n      },\n      \"Steps\": {\n        \"x-go-type\": \"uint64\",\n        \"type\": \"integer\",\n        \"description\": \"Number of diffusion steps to run.\",\n        \"default\": 30,\n        \"example\": 50,\n        \"minimum\": 10,\n        \"maximum\": 50\n      },\n      \"Extras\": {\n        \"type\": \"object\",\n        \"description\": \"Extra parameters passed to the engine.\\nThese parameters are used for in-development or experimental features and may change\\nwithout warning, so please use with caution.\"\n      },\n      \"StylePreset\": {\n        \"type\": \"string\",\n        \"enum\": [\n          \"enhance\",\n          \"anime\",\n          \"photographic\",\n          \"digital-art\",\n          \"comic-book\",\n          \"fantasy-art\",\n          \"line-art\",\n          \"analog-film\",\n          \"neon-punk\",\n          \"isometric\",\n          \"low-poly\",\n          \"origami\",\n          \"modeling-compound\",\n          \"cinematic\",\n          \"3d-model\",\n          \"pixel-art\",\n          \"tile-texture\"\n        ],\n        \"description\": \"Pass in a style preset to guide the image model towards a particular style.\\nThis list of style presets is subject to change.\"\n      },\n      \"TextPrompt\": {\n        \"type\": \"object\",\n        \"properties\": {\n          \"text\": {\n            \"type\": \"string\",\n            \"description\": \"The prompt itself\",\n            \"example\": \"A lighthouse on a cliff\",\n            \"maxLength\": 2000\n          },\n          \"weight\": {\n            \"type\": \"number\",\n            \"description\": \"Weight of the prompt (use negative numbers for negative prompts)\",\n            \"example\": 0.8167237,\n            \"format\": \"float\"\n          }\n        },\n        \"description\": \"Text prompt for image generation\",\n        \"required\": [\"text\"]\n      },\n      \"TextPromptsForTextToImage\": {\n        \"title\": \"TextPrompts\",\n        \"type\": \"array\",\n        \"items\": {\n          \"$ref\": \"#/components/schemas/TextPrompt\"\n        },\n        \"minItems\": 1,\n        \"description\": \"An array of text prompts to use for generation.\\n\\nGiven a text prompt with the text `A lighthouse on a cliff` and a weight of `0.5`, it would be represented as:\\n\\n```\\n\\\"text_prompts\\\": [\\n  {\\n    \\\"text\\\": \\\"A lighthouse on a cliff\\\",\\n    \\\"weight\\\": 0.5\\n  }\\n]\\n```\"\n      },\n      \"TextPrompts\": {\n        \"description\": \"An array of text prompts to use 
for generation.\\n\\nDue to how arrays are represented in `multipart/form-data` requests, prompts must adhere to the format `text_prompts[index][text|weight]`,\\nwhere `index` is some integer used to tie the text and weight together.  While `index` does not have to be sequential, duplicate entries \\nwill override previous entries, so it is recommended to use sequential indices.\\n\\nGiven a text prompt with the text `A lighthouse on a cliff` and a weight of `0.5`, it would be represented as:\\n```\\ntext_prompts[0][text]: \\\"A lighthouse on a cliff\\\"\\ntext_prompts[0][weight]: 0.5\\n```\\n\\nTo add another prompt to that request simply provide the values under a new `index`:\\n\\n```\\ntext_prompts[0][text]: \\\"A lighthouse on a cliff\\\"\\ntext_prompts[0][weight]: 0.5\\ntext_prompts[1][text]: \\\"land, ground, dirt, grass\\\"\\ntext_prompts[1][weight]: -0.9\\n```\",\n        \"type\": \"array\",\n        \"items\": {\n          \"$ref\": \"#/components/schemas/TextPrompt\"\n        },\n        \"minItems\": 1\n      },\n      \"InputImage\": {\n        \"x-go-type\": \"[]byte\",\n        \"type\": \"string\",\n        \"description\": \"The image to upscale using ESRGAN.\",\n        \"example\": \"<image binary>\",\n        \"format\": \"binary\"\n      },\n      \"InitImage\": {\n        \"x-go-type\": \"[]byte\",\n        \"type\": \"string\",\n        \"description\": \"Image used to initialize the diffusion process, in lieu of random noise.\",\n        \"example\": \"<image binary>\",\n        \"format\": \"binary\"\n      },\n      \"InitImageStrength\": {\n        \"type\": \"number\",\n        \"description\": \"How much influence the `init_image` has on the diffusion process. Values close to `1` will yield images very similar to the `init_image` while values close to `0` will yield images wildly different than the `init_image`. The behavior of this is meant to mirror DreamStudio's \\\"Image Strength\\\" slider.  <br/> <br/> This parameter is just an alternate way to set `step_schedule_start`, which is done via the calculation `1 - image_strength`. For example, passing in an Image Strength of 35% (`0.35`) would result in a `step_schedule_start` of `0.65`.\\n\",\n        \"example\": 0.4,\n        \"minimum\": 0,\n        \"maximum\": 1,\n        \"format\": \"float\",\n        \"default\": 0.35\n      },\n      \"InitImageMode\": {\n        \"type\": \"string\",\n        \"description\": \"Whether to use `image_strength` or `step_schedule_*` to control how much influence the `init_image` has on the result.\",\n        \"enum\": [\"IMAGE_STRENGTH\", \"STEP_SCHEDULE\"],\n        \"default\": \"IMAGE_STRENGTH\"\n      },\n      \"StepScheduleStart\": {\n        \"type\": \"number\",\n        \"description\": \"Skips a proportion of the start of the diffusion steps, allowing the init_image to influence the final generated image.  Lower values will result in more influence from the init_image, while higher values will result in more influence from the diffusion steps.  (e.g. a value of `0` would simply return you the init_image, where a value of `1` would return you a completely different image.)\",\n        \"default\": 0.65,\n        \"example\": 0.4,\n        \"minimum\": 0,\n        \"maximum\": 1\n      },\n      \"StepScheduleEnd\": {\n        \"type\": \"number\",\n        \"description\": \"Skips a proportion of the end of the diffusion steps, allowing the init_image to influence the final generated image.  
Lower values will result in more influence from the init_image, while higher values will result in more influence from the diffusion steps.\",\n        \"example\": 0.01,\n        \"minimum\": 0,\n        \"maximum\": 1\n      },\n      \"MaskImage\": {\n        \"x-go-type\": \"[]byte\",\n        \"type\": \"string\",\n        \"description\": \"Optional grayscale mask that allows for influence over which pixels are eligible for diffusion and at what strength. Must be the same dimensions as the `init_image`. Use the `mask_source` option to specify whether the white or black pixels should be inpainted.\",\n        \"example\": \"<image binary>\",\n        \"format\": \"binary\"\n      },\n      \"MaskSource\": {\n        \"type\": \"string\",\n        \"description\": \"For any given pixel, the mask determines the strength of generation on a linear scale.  This parameter determines where to source the mask from:\\n- `MASK_IMAGE_WHITE` will use the white pixels of the mask_image as the mask, where white pixels are completely replaced and black pixels are unchanged\\n- `MASK_IMAGE_BLACK` will use the black pixels of the mask_image as the mask, where black pixels are completely replaced and white pixels are unchanged\\n- `INIT_IMAGE_ALPHA` will use the alpha channel of the init_image as the mask, where fully transparent pixels are completely replaced and fully opaque pixels are unchanged\"\n      },\n      \"GenerationRequestOptionalParams\": {\n        \"type\": \"object\",\n        \"description\": \"Represents the optional parameters that can be passed to any generation request.\",\n        \"properties\": {\n          \"cfg_scale\": {\n            \"$ref\": \"#/components/schemas/CfgScale\"\n          },\n          \"clip_guidance_preset\": {\n            \"$ref\": \"#/components/schemas/ClipGuidancePreset\"\n          },\n          \"sampler\": {\n            \"$ref\": \"#/components/schemas/Sampler\"\n          },\n          \"samples\": {\n            \"$ref\": \"#/components/schemas/Samples\"\n          },\n          \"seed\": {\n            \"$ref\": \"#/components/schemas/Seed\"\n          },\n          \"steps\": {\n            \"$ref\": \"#/components/schemas/Steps\"\n          },\n          \"style_preset\": {\n            \"$ref\": \"#/components/schemas/StylePreset\"\n          },\n          \"extras\": {\n            \"$ref\": \"#/components/schemas/Extras\"\n          }\n        }\n      },\n      \"RealESRGANUpscaleRequestBody\": {\n        \"type\": \"object\",\n        \"properties\": {\n          \"image\": {\n            \"$ref\": \"#/components/schemas/InputImage\"\n          },\n          \"width\": {\n            \"$ref\": \"#/components/schemas/UpscaleImageWidth\"\n          },\n          \"height\": {\n            \"$ref\": \"#/components/schemas/UpscaleImageHeight\"\n          }\n        },\n        \"required\": [\"image\"]\n      },\n      \"ImageToImageRequestBody\": {\n        \"type\": \"object\",\n        \"properties\": {\n          \"text_prompts\": {\n            \"$ref\": \"#/components/schemas/TextPrompts\"\n          },\n          \"init_image\": {\n            \"$ref\": \"#/components/schemas/InitImage\"\n          },\n          \"init_image_mode\": {\n            \"$ref\": \"#/components/schemas/InitImageMode\"\n          },\n          \"image_strength\": {\n            \"$ref\": \"#/components/schemas/InitImageStrength\"\n          },\n          \"step_schedule_start\": {\n            \"$ref\": \"#/components/schemas/StepScheduleStart\"\n          
},\n          \"step_schedule_end\": {\n            \"$ref\": \"#/components/schemas/StepScheduleEnd\"\n          },\n          \"cfg_scale\": {\n            \"$ref\": \"#/components/schemas/CfgScale\"\n          },\n          \"clip_guidance_preset\": {\n            \"$ref\": \"#/components/schemas/ClipGuidancePreset\"\n          },\n          \"sampler\": {\n            \"$ref\": \"#/components/schemas/Sampler\"\n          },\n          \"samples\": {\n            \"$ref\": \"#/components/schemas/Samples\"\n          },\n          \"seed\": {\n            \"$ref\": \"#/components/schemas/Seed\"\n          },\n          \"steps\": {\n            \"$ref\": \"#/components/schemas/Steps\"\n          },\n          \"style_preset\": {\n            \"$ref\": \"#/components/schemas/StylePreset\"\n          },\n          \"extras\": {\n            \"$ref\": \"#/components/schemas/Extras\"\n          }\n        },\n        \"required\": [\"text_prompts\", \"init_image\"],\n        \"discriminator\": {\n          \"propertyName\": \"init_image_mode\",\n          \"mapping\": {\n            \"IMAGE_STRENGTH\": \"#/components/schemas/ImageToImageUsingImageStrengthRequestBody\",\n            \"STEP_SCHEDULE\": \"#/components/schemas/ImageToImageUsingStepScheduleRequestBody\"\n          }\n        }\n      },\n      \"ImageToImageUsingImageStrengthRequestBody\": {\n        \"allOf\": [\n          {\n            \"type\": \"object\",\n            \"properties\": {\n              \"text_prompts\": {\n                \"$ref\": \"#/components/schemas/TextPrompts\"\n              },\n              \"init_image\": {\n                \"$ref\": \"#/components/schemas/InitImage\"\n              },\n              \"init_image_mode\": {\n                \"$ref\": \"#/components/schemas/InitImageMode\"\n              },\n              \"image_strength\": {\n                \"$ref\": \"#/components/schemas/InitImageStrength\"\n              }\n            },\n            \"required\": [\"text_prompts\", \"init_image\"]\n          },\n          {\n            \"$ref\": \"#/components/schemas/GenerationRequestOptionalParams\"\n          }\n        ]\n      },\n      \"ImageToImageUsingStepScheduleRequestBody\": {\n        \"allOf\": [\n          {\n            \"type\": \"object\",\n            \"properties\": {\n              \"text_prompts\": {\n                \"$ref\": \"#/components/schemas/TextPrompts\"\n              },\n              \"init_image\": {\n                \"$ref\": \"#/components/schemas/InitImage\"\n              },\n              \"init_image_mode\": {\n                \"$ref\": \"#/components/schemas/InitImageMode\"\n              },\n              \"step_schedule_start\": {\n                \"$ref\": \"#/components/schemas/StepScheduleStart\"\n              },\n              \"step_schedule_end\": {\n                \"$ref\": \"#/components/schemas/StepScheduleEnd\"\n              }\n            },\n            \"required\": [\"text_prompts\", \"init_image\"]\n          },\n          {\n            \"$ref\": \"#/components/schemas/GenerationRequestOptionalParams\"\n          }\n        ]\n      },\n      \"MaskingRequestBody\": {\n        \"type\": \"object\",\n        \"properties\": {\n          \"init_image\": {\n            \"$ref\": \"#/components/schemas/InitImage\"\n          },\n          \"mask_source\": {\n            \"$ref\": \"#/components/schemas/MaskSource\"\n          },\n          \"mask_image\": {\n            \"$ref\": \"#/components/schemas/MaskImage\"\n          },\n      
    \"text_prompts\": {\n            \"$ref\": \"#/components/schemas/TextPrompts\"\n          },\n          \"cfg_scale\": {\n            \"$ref\": \"#/components/schemas/CfgScale\"\n          },\n          \"clip_guidance_preset\": {\n            \"$ref\": \"#/components/schemas/ClipGuidancePreset\"\n          },\n          \"sampler\": {\n            \"$ref\": \"#/components/schemas/Sampler\"\n          },\n          \"samples\": {\n            \"$ref\": \"#/components/schemas/Samples\"\n          },\n          \"seed\": {\n            \"$ref\": \"#/components/schemas/Seed\"\n          },\n          \"steps\": {\n            \"$ref\": \"#/components/schemas/Steps\"\n          },\n          \"style_preset\": {\n            \"$ref\": \"#/components/schemas/StylePreset\"\n          },\n          \"extras\": {\n            \"$ref\": \"#/components/schemas/Extras\"\n          }\n        },\n        \"required\": [\"text_prompts\", \"init_image\", \"mask_source\"],\n        \"discriminator\": {\n          \"propertyName\": \"mask_source\",\n          \"mapping\": {\n            \"MASK_IMAGE_BLACK\": \"#/components/schemas/MaskingUsingMaskImageRequestBody\",\n            \"MASK_IMAGE_WHITE\": \"#/components/schemas/MaskingUsingMaskImageRequestBody\",\n            \"INIT_IMAGE_ALPHA\": \"#/components/schemas/MaskingUsingInitImageAlphaRequestBody\"\n          }\n        }\n      },\n      \"MaskingUsingMaskImageRequestBody\": {\n        \"allOf\": [\n          {\n            \"type\": \"object\",\n            \"properties\": {\n              \"text_prompts\": {\n                \"$ref\": \"#/components/schemas/TextPrompts\"\n              },\n              \"init_image\": {\n                \"$ref\": \"#/components/schemas/InitImage\"\n              },\n              \"mask_source\": {\n                \"$ref\": \"#/components/schemas/MaskSource\"\n              },\n              \"mask_image\": {\n                \"$ref\": \"#/components/schemas/MaskImage\"\n              }\n            },\n            \"required\": [\n              \"init_image\",\n              \"mask_image\",\n              \"text_prompts\",\n              \"mask_source\"\n            ]\n          },\n          {\n            \"$ref\": \"#/components/schemas/GenerationRequestOptionalParams\"\n          }\n        ]\n      },\n      \"MaskingUsingInitImageAlphaRequestBody\": {\n        \"allOf\": [\n          {\n            \"type\": \"object\",\n            \"properties\": {\n              \"text_prompts\": {\n                \"$ref\": \"#/components/schemas/TextPrompts\"\n              },\n              \"init_image\": {\n                \"$ref\": \"#/components/schemas/InitImage\"\n              },\n              \"mask_source\": {\n                \"$ref\": \"#/components/schemas/MaskSource\"\n              }\n            },\n            \"required\": [\"init_image\", \"text_prompts\", \"mask_source\"]\n          },\n          {\n            \"$ref\": \"#/components/schemas/GenerationRequestOptionalParams\"\n          }\n        ]\n      },\n      \"TextToImageRequestBody\": {\n        \"type\": \"object\",\n        \"allOf\": [\n          {\n            \"type\": \"object\",\n            \"properties\": {\n              \"height\": {\n                \"$ref\": \"#/components/schemas/DiffuseImageHeight\"\n              },\n              \"width\": {\n                \"$ref\": \"#/components/schemas/DiffuseImageWidth\"\n              },\n              \"text_prompts\": {\n                \"$ref\": 
\"#/components/schemas/TextPromptsForTextToImage\"\n              }\n            },\n            \"required\": [\"text_prompts\"]\n          },\n          {\n            \"$ref\": \"#/components/schemas/GenerationRequestOptionalParams\"\n          }\n        ],\n        \"example\": {\n          \"cfg_scale\": 7,\n          \"height\": 512,\n          \"width\": 512,\n          \"sampler\": \"K_DPM_2_ANCESTRAL\",\n          \"samples\": 1,\n          \"seed\": 0,\n          \"steps\": 30,\n          \"text_prompts\": [\n            {\n              \"text\": \"A lighthouse on a cliff\",\n              \"weight\": 1\n            }\n          ]\n        },\n        \"required\": [\"text_prompts\"]\n      },\n      \"AccountResponseBody\": {\n        \"type\": \"object\",\n        \"properties\": {\n          \"email\": {\n            \"type\": \"string\",\n            \"description\": \"The user's email\",\n            \"example\": \"example@stability.ai\",\n            \"format\": \"email\"\n          },\n          \"id\": {\n            \"type\": \"string\",\n            \"description\": \"The user's ID\",\n            \"example\": \"user-1234\",\n            \"x-go-name\": \"ID\"\n          },\n          \"organizations\": {\n            \"type\": \"array\",\n            \"example\": [\n              {\n                \"id\": \"org-5678\",\n                \"name\": \"Another Organization\",\n                \"role\": \"MEMBER\",\n                \"is_default\": true\n              },\n              {\n                \"id\": \"org-1234\",\n                \"name\": \"My Organization\",\n                \"role\": \"MEMBER\",\n                \"is_default\": false\n              }\n            ],\n            \"items\": {\n              \"$ref\": \"#/components/schemas/OrganizationMembership\"\n            },\n            \"description\": \"The user's organizations\"\n          },\n          \"profile_picture\": {\n            \"type\": \"string\",\n            \"description\": \"The user's profile picture\",\n            \"example\": \"https://api.stability.ai/example.png\",\n            \"format\": \"uri\"\n          }\n        },\n        \"required\": [\"id\", \"email\", \"organizations\"]\n      },\n      \"BalanceResponseBody\": {\n        \"type\": \"object\",\n        \"properties\": {\n          \"credits\": {\n            \"type\": \"number\",\n            \"description\": \"The balance of the account/organization associated with the API key\",\n            \"example\": 0.41122252265928866,\n            \"format\": \"double\"\n          }\n        },\n        \"example\": {\n          \"credits\": 0.07903292496944721\n        },\n        \"required\": [\"credits\"]\n      },\n      \"ListEnginesResponseBody\": {\n        \"type\": \"array\",\n        \"description\": \"The engines available to your user/organization\",\n        \"items\": {\n          \"$ref\": \"#/components/schemas/Engine\"\n        },\n        \"example\": [\n          {\n            \"description\": \"Stability-AI Stable Diffusion v1.6\",\n            \"id\": \"stable-diffusion-v1-6\",\n            \"name\": \"Stable Diffusion v1.6\",\n            \"type\": \"PICTURE\"\n          },\n          {\n            \"description\": \"Stability-AI Stable Diffusion XL v1.0\",\n            \"id\": \"stable-diffusion-xl-1024-v1-0\",\n            \"name\": \"Stable Diffusion XL v1.0\",\n            \"type\": \"PICTURE\"\n          }\n        ]\n      },\n      \"FinishReason\": {\n        \"type\": \"string\",\n        
\"description\": \"The result of the generation process.\\n- `SUCCESS` indicates success\\n- `ERROR` indicates an error\\n- `CONTENT_FILTERED` indicates the result affected by the content filter and may be blurred.\\n\\nThis header is only present when the `Accept` is set to `image/png`.  Otherwise it is returned in the response body.\",\n        \"enum\": [\"SUCCESS\", \"ERROR\", \"CONTENT_FILTERED\"]\n      },\n      \"Image\": {\n        \"type\": \"object\",\n        \"properties\": {\n          \"base64\": {\n            \"type\": \"string\",\n            \"x-go-type-skip-optional-pointer\": true,\n            \"description\": \"Image encoded in base64\"\n          },\n          \"finishReason\": {\n            \"type\": \"string\",\n            \"x-go-type-skip-optional-pointer\": true,\n            \"example\": \"CONTENT_FILTERED\",\n            \"enum\": [\"SUCCESS\", \"ERROR\", \"CONTENT_FILTERED\"]\n          },\n          \"seed\": {\n            \"type\": \"number\",\n            \"x-go-type-skip-optional-pointer\": true,\n            \"description\": \"The seed associated with this image\",\n            \"example\": 1229191277\n          }\n        },\n        \"example\": [\n          {\n            \"base64\": \"...very long string...\",\n            \"finishReason\": \"SUCCESS\",\n            \"seed\": 1050625087\n          },\n          {\n            \"base64\": \"...very long string...\",\n            \"finishReason\": \"CONTENT_FILTERED\",\n            \"seed\": 1229191277\n          }\n        ]\n      },\n      \"OrganizationMembership\": {\n        \"type\": \"object\",\n        \"properties\": {\n          \"id\": {\n            \"type\": \"string\",\n            \"example\": \"org-123456\",\n            \"x-go-name\": \"ID\"\n          },\n          \"is_default\": {\n            \"type\": \"boolean\",\n            \"example\": false\n          },\n          \"name\": {\n            \"type\": \"string\",\n            \"example\": \"My Organization\"\n          },\n          \"role\": {\n            \"type\": \"string\",\n            \"example\": \"MEMBER\"\n          }\n        },\n        \"required\": [\"id\", \"name\", \"role\", \"is_default\"]\n      }\n    },\n    \"parameters\": {\n      \"upscaleEngineID\": {\n        \"in\": \"path\",\n        \"name\": \"engine_id\",\n        \"required\": true,\n        \"schema\": {\n          \"type\": \"string\"\n        },\n        \"examples\": {\n          \"ESRGAN_X2_PLUS\": {\n            \"description\": \"ESRGAN x2 Upscaler\",\n            \"value\": \"esrgan-v1-x2plus\"\n          }\n        }\n      },\n      \"engineID\": {\n        \"examples\": {\n          \"default\": {\n            \"value\": \"stable-diffusion-v1-6\",\n            \"description\": \"Stable Diffusion v1.6\"\n          },\n          \"stable-diffusion-xl-1024-v1-0\": {\n            \"value\": \"stable-diffusion-xl-1024-v1-0\",\n            \"description\": \"Stable Diffusion XL v1.0\"\n          }\n        },\n        \"in\": \"path\",\n        \"name\": \"engine_id\",\n        \"required\": true,\n        \"schema\": {\n          \"type\": \"string\"\n        }\n      },\n      \"organization\": {\n        \"allowEmptyValue\": false,\n        \"description\": \"Allows for requests to be scoped to an organization other than the user's default.  
If not provided, the user's default organization will be used.\",\n        \"example\": \"org-123456\",\n        \"in\": \"header\",\n        \"name\": \"Organization\",\n        \"x-go-name\": \"OrganizationID\",\n        \"schema\": {\n          \"type\": \"string\"\n        }\n      },\n      \"stabilityClientID\": {\n        \"allowEmptyValue\": false,\n        \"description\": \"Used to identify the source of requests, such as the client application or sub-organization. Optional, but recommended for organizational clarity.\",\n        \"example\": \"my-great-plugin\",\n        \"in\": \"header\",\n        \"name\": \"Stability-Client-ID\",\n        \"schema\": {\n          \"type\": \"string\"\n        }\n      },\n      \"stabilityClientVersion\": {\n        \"allowEmptyValue\": false,\n        \"description\": \"Used to identify the version of the application or service making the requests. Optional, but recommended for organizational clarity.\",\n        \"example\": \"1.2.1\",\n        \"in\": \"header\",\n        \"name\": \"Stability-Client-Version\",\n        \"schema\": {\n          \"type\": \"string\"\n        }\n      },\n      \"accept\": {\n        \"allowEmptyValue\": false,\n        \"in\": \"header\",\n        \"name\": \"Accept\",\n        \"description\": \"The format of the response.  Leave blank for JSON, or set to 'image/png' for a PNG image.\",\n        \"schema\": {\n          \"default\": \"application/json\",\n          \"enum\": [\"application/json\", \"image/png\"],\n          \"type\": \"string\"\n        }\n      }\n    },\n    \"securitySchemes\": {\n      \"STABILITY_API_KEY\": {\n        \"type\": \"apiKey\",\n        \"scheme\": \"bearer\",\n        \"name\": \"authorization\",\n        \"in\": \"header\",\n        \"description\": \"Use your [Stability API key](https://platform.stability.ai/account/keys) to authenticate requests to this App.\"\n      }\n    },\n    \"responses\": {\n      \"401\": {\n        \"description\": \"unauthorized: API key missing or invalid\",\n        \"content\": {\n          \"application/json\": {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/Error\"\n            },\n            \"example\": {\n              \"id\": \"9160aa70-222f-4a36-9eb7-475e2668362a\",\n              \"name\": \"unauthorized\",\n              \"message\": \"missing authorization header\"\n            }\n          }\n        }\n      },\n      \"403\": {\n        \"description\": \"permission_denied: You lack the necessary permissions to perform this action\",\n        \"content\": {\n          \"application/json\": {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/Error\"\n            },\n            \"example\": {\n              \"id\": \"5cf19777-d17f-49fe-9bd9-39ff0ec6bb50\",\n              \"name\": \"permission_denied\",\n              \"message\": \"You do not have permission to access this resource\"\n            }\n          }\n        }\n      },\n      \"404\": {\n        \"description\": \"not_found: The requested resource/engine was not found\",\n        \"content\": {\n          \"application/json\": {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/Error\"\n            },\n            \"example\": {\n              \"id\": \"92b19e7f-22a2-4e71-a821-90edda229293\",\n              \"name\": \"not_found\",\n              \"message\": \"The specified engine (ID some-fake-engine) was not found.\"\n            }\n          }\n        }\n      },\n      
\"500\": {\n        \"description\": \"server_error: Some unexpected server error occurred\",\n        \"content\": {\n          \"application/json\": {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/Error\"\n            },\n            \"example\": {\n              \"id\": \"f81964d6-619b-453e-97bc-9fd7ac3f04e7\",\n              \"name\": \"server_error\",\n              \"message\": \"An unexpected server error occurred, please try again.\"\n            }\n          }\n        }\n      },\n      \"GenerationResponse\": {\n        \"description\": \"Generation successful.\",\n        \"content\": {\n          \"application/json\": {\n            \"schema\": {\n              \"description\": \"An array of results from the generation request, where each image is a base64 encoded PNG.\",\n              \"type\": \"object\",\n              \"properties\": {\n                \"artifacts\": {\n                  \"type\": \"array\",\n                  \"x-go-type-skip-optional-pointer\": true,\n                  \"items\": {\n                    \"$ref\": \"#/components/schemas/Image\"\n                  }\n                }\n              }\n            }\n          },\n          \"image/png\": {\n            \"example\": \"The bytes of the generated image, what did you expect?\",\n            \"schema\": {\n              \"description\": \"The bytes of the generated PNG image\",\n              \"format\": \"binary\",\n              \"type\": \"string\"\n            }\n          }\n        },\n        \"headers\": {\n          \"Content-Length\": {\n            \"$ref\": \"#/components/headers/Content-Length\"\n          },\n          \"Content-Type\": {\n            \"$ref\": \"#/components/headers/Content-Type\"\n          },\n          \"Finish-Reason\": {\n            \"$ref\": \"#/components/headers/Finish-Reason\"\n          },\n          \"Seed\": {\n            \"$ref\": \"#/components/headers/Seed\"\n          }\n        }\n      },\n      \"400FromGeneration\": {\n        \"description\": \"bad_request: one or more parameters were invalid.\",\n        \"content\": {\n          \"application/json\": {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/Error\"\n            },\n            \"example\": {\n              \"id\": \"296a972f-666a-44a1-a3df-c9c28a1f56c0\",\n              \"name\": \"bad_request\",\n              \"message\": \"init_image: is required\"\n            }\n          }\n        }\n      },\n      \"400FromUpscale\": {\n        \"description\": \"One or more parameters were invalid.\",\n        \"content\": {\n          \"application/json\": {\n            \"schema\": {\n              \"$ref\": \"#/components/schemas/Error\"\n            },\n            \"example\": {\n              \"id\": \"296a972f-666a-44a1-a3df-c9c28a1f56c0\",\n              \"name\": \"bad_request\",\n              \"message\": \"image: is required\"\n            }\n          }\n        }\n      }\n    },\n    \"headers\": {\n      \"Content-Length\": {\n        \"required\": true,\n        \"schema\": {\n          \"type\": \"integer\"\n        }\n      },\n      \"Content-Type\": {\n        \"required\": true,\n        \"schema\": {\n          \"enum\": [\"application/json\", \"image/png\"],\n          \"type\": \"string\"\n        }\n      },\n      \"Finish-Reason\": {\n        \"schema\": {\n          \"$ref\": \"#/components/schemas/FinishReason\"\n        }\n      },\n      \"Seed\": {\n        \"example\": 3817857576,\n        
\"schema\": {\n          \"example\": 787078103,\n          \"type\": \"integer\"\n        },\n        \"description\": \"The seed used to generate the image.  This header is only present when the `Accept` is set to `image/png`.  Otherwise it is returned in the response body.\"\n      }\n    }\n  },\n  \"x-tagGroups\": [\n    {\n      \"name\": \"Stable Image\",\n      \"tags\": [\"Generate\", \"Upscale\", \"Edit\", \"Control\", \"Results\"]\n    },\n    {\n      \"name\": \"3D & Video\",\n      \"tags\": [\"3D\", \"Image-to-Video\"]\n    },\n    {\n      \"name\": \"Version 1\",\n      \"tags\": [\"SDXL 1.0 & SD1.6\", \"Engines\", \"User\"]\n    }\n  ]\n}\n"
  },
  {
    "path": "packages/backend/server.py",
    "content": "from app.log_config import root_logger\nimport sys\nimport os\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\n\nfrom app.flask.socketio_init import flask_app, socketio\nimport app.flask.sockets\nimport app.flask.routes\nimport app.tasks.single_thread_tasks.browser.async_browser_task\n\n\nif __name__ == \"__main__\":\n    host = os.getenv(\"HOST\", \"127.0.0.1\")\n    port = int(os.getenv(\"PORT\", 8000))\n\n    root_logger.info(f\"Starting application on {host}:{port}...\")\n    root_logger.info(\"You can stop the application by pressing Ctrl+C at any time.\")\n\n    # If we're running in a PyInstaller bundle\n    if getattr(sys, \"frozen\", False) and hasattr(sys, \"_MEIPASS\"):\n        os.environ[\"PLAYWRIGHT_BROWSERS_PATH\"] = os.path.join(\n            sys._MEIPASS, \"ms-playwright\"\n        )\n\n    root_logger.warning(\"Protocol set to HTTP\")\n    socketio.run(flask_app, port=port, host=host)\n"
  },
  {
    "path": "packages/backend/tests/unit/test_processor_factory.py",
    "content": "import unittest\nfrom app.processors.factory.processor_factory_iter_modules import (\n    ProcessorFactoryIterModules,\n)\nfrom app.processors.components.processor import BasicProcessor, ContextAwareProcessor\n\n\nclass DummyProcessor(BasicProcessor):\n    processor_type = \"dummy_processor\"\n\n    def process(self):\n        pass\n\n    def cancel(self):\n        pass\n\n\nclass APIDummyProcessor(ContextAwareProcessor):\n    processor_type = \"api_dummy_processor\"\n\n    def __init__(self, config, context=None):\n        super().__init__(config)\n        self._processor_context = context\n\n    def process(self):\n        pass\n\n    def cancel(self):\n        pass\n\n\nclass TestProcessorFactory(unittest.TestCase):\n    def setUp(self):\n        self.factory = ProcessorFactoryIterModules()\n\n    def test_register_and_create_simple_processor(self):\n        self.factory.register_processor(DummyProcessor.processor_type, DummyProcessor)\n        processor = self.factory.create_processor(\n            {\"processorType\": \"dummy_processor\", \"name\": \"dummy_processor\"}\n        )\n        self.assertIsInstance(processor, DummyProcessor)\n        self.assertIsInstance(processor, BasicProcessor)\n\n    def test_create_unknown_processor_raises_exception(self):\n        with self.assertRaises(ValueError):\n            self.factory.create_processor(\n                {\"processorType\": \"unknown_processor\", \"name\": \"unknown_processor\"}\n            )\n\n    def test_create_processor_with_api_context_data(self):\n        self.factory.register_processor(\n            APIDummyProcessor.processor_type, APIDummyProcessor\n        )\n        processor = self.factory.create_processor(\n            {\"processorType\": \"api_dummy_processor\", \"name\": \"api_dummy_processor\"},\n            context_data=\"api_data\",\n        )\n        self.assertIsInstance(processor, APIDummyProcessor)\n        self.assertIsInstance(processor, ContextAwareProcessor)\n        self.assertEqual(processor._processor_context, \"api_data\")\n"
  },
  {
    "path": "packages/backend/tests/unit/test_processor_launcher.py",
    "content": "import unittest\nfrom unittest.mock import MagicMock, Mock, patch, mock_open\n\nfrom app.processors.launcher.basic_processor_launcher import BasicProcessorLauncher\nfrom app.processors.factory.processor_factory_iter_modules import (\n    ProcessorFactoryIterModules,\n)\n\nfrom dotenv import load_dotenv\n\nfrom app.tasks.single_thread_tasks.browser.browser_task import (\n    stop_browser_thread,\n)\n\nload_dotenv()\n\nfrom app.flask.socketio_init import socketio\n\n\nclass NoInputMock(MagicMock):\n    def __getattr__(self, name):\n        if name == \"inputs\":\n            raise AttributeError(\n                f\"'{type(self).__name__}' object has no attribute 'inputs'\"\n            )\n        return super().__getattr__(name)\n\n\nclass TestProcessorLauncher(unittest.TestCase):\n    def test_load_config_data_valid_file(self):\n        factory = ProcessorFactoryIterModules()\n        launcher = BasicProcessorLauncher(factory, None)\n\n        m = mock_open(read_data='{\"key\": \"value\"}')\n        with patch(\"builtins.open\", m):\n            data = launcher._load_config_data(\"fake_file_path\")\n            self.assertEqual(data, {\"key\": \"value\"})\n\n    def test_link_processors_valid(self):\n        factory = ProcessorFactoryIterModules()\n        launcher = BasicProcessorLauncher(factory, None)\n\n        processor1 = Mock()\n        processor1.name = \"processor1\"\n        processor1.inputs = [{\"inputNode\": \"processor2\"}]\n\n        processor2 = NoInputMock()\n        processor2.name = \"processor2\"\n\n        processors = {\n            \"processor1\": processor1,\n            \"processor2\": processor2,\n        }\n\n        launcher._link_processors(processors)\n        processor1.add_input_processor.assert_called_once_with(processor2)\n\n        stop_browser_thread()\n"
  },
  {
    "path": "packages/backend/tests/unit/test_stable_diffusion_stabilityai_prompt_processor.py",
    "content": "import unittest\nfrom unittest.mock import ANY, patch, Mock\nimport re\nfrom app.processors.components.core.stable_diffusion_stabilityai_prompt_processor import (\n    StableDiffusionStabilityAIPromptProcessor,\n)\nfrom app.storage.local_storage_strategy import LocalStorageStrategy\nfrom app.processors.components.core.input_processor import InputProcessor\nfrom tests.utils.processor_context_mock import ProcessorContextMock\n\n\nclass TestStableDiffusionStabilityAIPromptProcessor(unittest.TestCase):\n    @staticmethod\n    def get_default_valid_config():\n        return {\n            \"inputs\": [{\"inputNode\": \"0rhbnaol3#gpt-no-context-prompt\"}],\n            \"name\": \"vd6up8r0m#stable-diffusion-stabilityai-prompt\",\n            \"processorType\": \"stable-diffusion-stabilityai-prompt\",\n            \"size\": \"1024x1024\",\n            \"x\": -1961.3132869508825,\n            \"y\": -73.26855714327525,\n        }\n\n    @patch(\"requests.post\")\n    def test_process_returns_valid_image_url_on_successful_api_response(\n        self, mock_post\n    ):\n        # Arrange\n        API_KEY = \"FAKE\"\n        mock_response = Mock()\n        mock_response.status_code = 200\n        mock_response.json.return_value = {\n            \"artifacts\": [{\"base64\": \"R0lGODlhAQABAAAAACw=\"}]\n        }\n        mock_post.return_value = mock_response\n\n        url_pattern = re.compile(r\"http?://[^\\s]+\")\n\n        config = self.get_default_valid_config()\n        clean_name = config[\"name\"].replace(\"#\", \"\")\n        api_context_data = ProcessorContextMock(API_KEY)\n\n        processor = StableDiffusionStabilityAIPromptProcessor(config, api_context_data)\n        processor.set_storage_strategy(LocalStorageStrategy())\n\n        # Act\n        url = processor.process()\n\n        # Assert\n        self.assertTrue(url_pattern.match(url))\n        self.assertIn(clean_name, url)\n        self.assertTrue(url.endswith(\".png\"))\n\n        mock_post.assert_called_once_with(\n            ANY,\n            headers=ANY,\n            json=ANY,\n        )\n\n    @patch(\"requests.post\")\n    def test_process_transmit_prompt_to_api(self, mock_post):\n        # Arrange\n        API_KEY = \"FAKE\"\n        mock_response = Mock()\n        mock_response.status_code = 200\n        mock_response.json.return_value = {\n            \"artifacts\": [{\"base64\": \"R0lGODlhAQABAAAAACw=\"}]\n        }\n        mock_post.return_value = mock_response\n\n        config = self.get_default_valid_config()\n        expected_prompt = \"Expected prompt\"\n        config[\"prompt\"] = expected_prompt\n\n        api_context_data = ProcessorContextMock(API_KEY)\n        processor = StableDiffusionStabilityAIPromptProcessor(config, api_context_data)\n        processor.set_storage_strategy(LocalStorageStrategy())\n\n        # Act\n        processor.process()\n\n        # Assert\n        called_args = mock_post.call_args[1]\n        sent_json = called_args.get(\"json\")\n        transmitted_prompts = sent_json[\"text_prompts\"]\n        self.assertTrue(\n            any(prompt[\"text\"] == expected_prompt for prompt in transmitted_prompts)\n        )\n\n    @patch(\"requests.post\")\n    def test_when_linked_to_input_node_transmits_input_node_output_to_the_api(\n        self, mock_post\n    ):\n        # Arrange\n        API_KEY = \"FAKE\"\n        expected_prompt = \"Expected prompt\"\n\n        input_processor = InputProcessor(\n            {\n                \"inputs\": [],\n                \"name\": 
\"p00tklq5w#input-text\",\n                \"processorType\": \"input-text\",\n                \"inputText\": expected_prompt,\n            }\n        )\n\n        input_processor.set_output(expected_prompt)\n\n        mock_response = Mock()\n        mock_response.status_code = 200\n        mock_response.json.return_value = {\n            \"artifacts\": [{\"base64\": \"R0lGODlhAQABAAAAACw=\"}]\n        }\n        mock_post.return_value = mock_response\n\n        config = self.get_default_valid_config()\n\n        api_context_data = ProcessorContextMock(API_KEY)\n        processor = StableDiffusionStabilityAIPromptProcessor(config, api_context_data)\n        processor.set_storage_strategy(LocalStorageStrategy())\n        processor.add_input_processor(input_processor)\n\n        # Act\n        processor.process()\n\n        # Assert\n        called_args = mock_post.call_args[1]\n        sent_json = called_args.get(\"json\")\n        transmitted_prompts = sent_json[\"text_prompts\"]\n        self.assertTrue(\n            any(prompt[\"text\"] == expected_prompt for prompt in transmitted_prompts)\n        )\n"
  },
  {
    "path": "packages/backend/tests/utils/openai_mock_utils.py",
    "content": "from unittest.mock import Mock\n\n\ndef create_mocked_openai_response(\n    model=\"gpt-4\", api_key=\"000000000\", response_content=\"Mocked Response\"\n):\n    \"\"\"\n    Create a mocked response for OpenAI.\n\n    :param model: The model to be used.\n    :param api_key: The API key to be used.\n    :param response_content: The content for the mocked response.\n    :return: A mocked response for OpenAI.\n    \"\"\"\n    mock_message = Mock()\n    mock_message.content = response_content\n\n    mock_choice = Mock()\n    mock_choice.message = mock_message\n\n    mock_response = Mock()\n    mock_response.choices = [mock_choice]\n\n    return mock_response\n"
  },
  {
    "path": "packages/backend/tests/utils/processor_context_mock.py",
    "content": "from typing import List\nfrom app.processors.context.processor_context import ProcessorContext\nfrom typing import List\n\n\nclass ProcessorContextMock(ProcessorContext):\n    def __init__(self, api_key, user_id=0, session_id=0) -> None:\n        super().__init__()\n        self.api_key = api_key\n        self.user_id = user_id\n        self.session_id = session_id\n\n    def get_context(self):\n        return self.api_key\n\n    def get_current_user_id(self):\n        return self.user_id\n\n    def get_session_id(self):\n        return self.user_id\n\n    def get_parameter_names(self) -> List[str]:\n        return super().get_parameter_names()\n\n    def get_value(self, name):\n        if \"api_key\" in name:\n            return self.api_key\n        return super().get_value(name)\n\n    def is_using_personal_keys(self, source_name):\n        return False\n"
  },
  {
    "path": "packages/backend/tests/utils/processor_factory_mock.py",
    "content": "import logging\nimport random\nimport eventlet\nimport time\n\nfrom injector import singleton\nfrom unittest.mock import MagicMock\nfrom app.processors.factory.processor_factory_iter_modules import (\n    ProcessorFactoryIterModules,\n)\n\nfrom app.processors.components.core.processor_type_name_utils import (\n    ProcessorType,\n)\nfrom .processor_context_mock import ProcessorContextMock\n\n\n@singleton\nclass ProcessorFactoryMock(ProcessorFactoryIterModules):\n    MIN_DELAY = 0.1\n    MAX_DELAY = 1\n\n    NON_MOCKED_PROCESSORS = [\n        ProcessorType.INPUT_TEXT.value,\n        ProcessorType.INPUT_IMAGE.value,\n        ProcessorType.URL_INPUT.value,\n        ProcessorType.DISPLAY.value,\n        ProcessorType.TRANSITION.value,\n    ]\n\n    def __init__(\n        self,\n        fake_text_output=None,\n        fake_img_output=None,\n        fake_multiple_output=None,\n        with_delay=False,\n        sleep_duration=None,\n    ):\n        super().__init__()\n        self._mock_processors = {}\n        self.fake_text_output = fake_text_output\n        self.fake_img_output = fake_img_output\n        self.fake_multiple_output = fake_multiple_output\n        self.with_delay = with_delay\n\n    def create_mock_processor(\n        self, config, processor_type: ProcessorType, processor_class: str\n    ):\n        mock_processor = MagicMock(spec=processor_class)\n\n        mock_processor.name = config.get(\"name\", \"default_processor_name\")\n        mock_processor.processor_type = processor_type\n        mock_processor.input_processors = []\n        mock_processor._processor_context = ProcessorContextMock(\"\")\n\n        if config.get(\"inputs\") is not None and config.get(\"inputs\") != []:\n            mock_processor.inputs = config.get(\"inputs\")\n\n        def fake_process(*args, **kwargs):\n            if self.with_delay:\n                sleep_duration = random.uniform(\n                    ProcessorFactoryMock.MIN_DELAY, ProcessorFactoryMock.MAX_DELAY\n                )\n                eventlet.sleep(sleep_duration)\n\n            if config.get(\"sleepDuration\") is not None:\n                sleep_duration = config.get(\"sleepDuration\")\n                logging.info(f\"Sleeping for {sleep_duration} seconds\")\n                eventlet.sleep(sleep_duration)\n                logging.info(\"Awake\")\n\n            if mock_processor.processor_type in [\n                ProcessorType.DALLE_PROMPT.value,\n                ProcessorType.STABLE_DIFFUSION_STABILITYAI_PROMPT.value,\n            ]:\n                output = (\n                    [self.fake_img_output]\n                    if self.fake_img_output is not None\n                    else [\n                        \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/v0.4.0-sample-1.png\"\n                    ]\n                )\n            elif mock_processor.processor_type in [\n                ProcessorType.AI_DATA_SPLITTER.value\n            ]:\n                output = (\n                    self.fake_multiple_output\n                    if self.fake_multiple_output is not None\n                    else [\"Lorem Ipsum\", \"Lorem Ipsum\"]\n                )\n            else:\n                output = (\n                    self.fake_text_output\n                    if self.fake_text_output is not None\n                    else \"Lorem Ipsum\"\n                )\n            mock_processor.set_output(output)\n            mock_processor.is_finished = True\n            return output\n\n        def 
fake_process_raise_error(*args, **kwargs):\n            logging.error(\"MockProcessor - Fake Error\")\n            raise Exception(\"Mock Processor error\")\n\n        def fake_add_input_processor(input_processor):\n            mock_processor.input_processors.append(input_processor)\n\n        def get_input_processors():\n            return mock_processor.input_processors\n\n        def fake_has_dynamic_behavior():\n            return False\n\n        def fake_get_input_by_name(input_name, default_value=\"\"):\n            return default_value\n\n        mock_processor.process_and_update = (\n            fake_process\n            if config.get(\"raiseError\", False) == False\n            else fake_process_raise_error\n        )\n        mock_processor.add_input_processor = fake_add_input_processor\n        mock_processor.get_input_processors = get_input_processors\n        mock_processor.has_dynamic_behavior = fake_has_dynamic_behavior\n        mock_processor.get_input_by_name = fake_get_input_by_name\n\n        self._mock_processors[processor_type] = mock_processor\n\n        return mock_processor\n\n    def create_processor(self, config, context=None, storage_strategy=None):\n        processor_type = config[\"processorType\"]\n        processor_class = self._processors.get(processor_type)\n        if not processor_class:\n            raise ValueError(f\"Processor type '{processor_type}' not supported\")\n\n        if (\n            processor_type in ProcessorFactoryMock.NON_MOCKED_PROCESSORS\n            and config.get(\"raiseError\", False) == False\n        ):\n            processor = processor_class(config=config)\n        else:\n            processor = self.create_mock_processor(\n                config, processor_type, processor_class\n            )\n\n        return processor\n"
  },
  {
    "path": "packages/ui/.gitignore",
    "content": "# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.\n\n# dependencies\n/node_modules\n/.pnp\n.pnp.js\n\n# testing\n/coverage\n\n# production\n/build\n\n# misc\n.DS_Store\n.env.local\n.env.development.local\n.env.test.local\n.env.production.local\n\nnpm-debug.log*\nyarn-debug.log*\nyarn-error.log*\n\n#amplify-do-not-edit-begin\namplify/\\#current-cloud-backend\namplify/.config/local-*\namplify/logs\namplify/mock-data\namplify/mock-api-resources\namplify/backend/amplify-meta.json\namplify/backend/.temp\nbuild/\ndist/\nnode_modules/\naws-exports.js\nawsconfiguration.json\namplifyconfiguration.json\namplifyconfiguration.dart\namplify-build-config.json\namplify-gradle-config.json\namplifytools.xcconfig\n.secret-*\n**.sample\n#amplify-do-not-edit-end\n"
  },
  {
    "path": "packages/ui/.prettierignore",
    "content": "node_modules\n\n# Ignore artifacts:\nbuild\ncoverage"
  },
  {
    "path": "packages/ui/Dockerfile",
    "content": "FROM node:21 as build\n\nWORKDIR /app\n\nARG VITE_APP_WS_HOST\nARG VITE_APP_WS_PORT\nARG VITE_APP_API_REST_PORT\nARG VITE_APP_USE_HTTPS\nARG VITE_APP_VERSION\n\nENV VITE_APP_WS_HOST=$VITE_APP_WS_HOST\nENV VITE_APP_WS_PORT=$VITE_APP_WS_PORT\nENV VITE_APP_API_REST_PORT=$VITE_APP_API_REST_PORT\nENV VITE_APP_USE_HTTPS=$VITE_APP_USE_HTTPS\nENV VITE_APP_VERSION=$VITE_APP_VERSION\n\nCOPY package.json package-lock.json /app/\n\nRUN npm ci\n\nCOPY . /app/\n\nRUN npm run build\nRUN ls -al /app\n\nFROM nginx:1.21\n\nCOPY --from=build ./app/build /usr/share/nginx/html\n\nCOPY nginx.conf /etc/nginx/conf.d/default.conf\n\nEXPOSE 80\n\nCMD [\"nginx\", \"-g\", \"daemon off;\"]\n"
  },
  {
    "path": "packages/ui/README.md",
    "content": "# Getting Started with Create React App\n\nThis project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app).\n\n## Available Scripts\n\nIn the project directory, you can run:\n\n### `npm start`\n\nRuns the app in the development mode.\\\nOpen [http://localhost:3000](http://localhost:3000) to view it in the browser.\n\nThe page will reload if you make edits.\\\nYou will also see any lint errors in the console.\n\n### `npm test`\n\nLaunches the test runner in the interactive watch mode.\\\nSee the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information.\n\n### `npm run build`\n\nBuilds the app for production to the `build` folder.\\\nIt correctly bundles React in production mode and optimizes the build for the best performance.\n\nThe build is minified and the filenames include the hashes.\\\nYour app is ready to be deployed!\n\nSee the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information.\n\n### `npm run eject`\n\n**Note: this is a one-way operation. Once you `eject`, you can’t go back!**\n\nIf you aren’t satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project.\n\nInstead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.\n\nYou don’t have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.\n\n## Learn More\n\nYou can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started).\n\nTo learn React, check out the [React documentation](https://reactjs.org/).\n"
  },
  {
    "path": "packages/ui/index.html",
    "content": "<!DOCTYPE html>\n<html lang=\"en\">\n\n<head>\n  <meta charset=\"utf-8\" />\n  <link rel=\"apple-touch-icon\" sizes=\"180x180\" href=\"/apple-touch-icon.png\" />\n  <link rel=\"icon\" type=\"image/png\" sizes=\"32x32\" href=\"/favicon-32x32.png\" />\n  <link rel=\"icon\" type=\"image/png\" sizes=\"16x16\" href=\"/favicon-16x16.png\" />\n  <link rel=\"manifest\" href=\"/site.webmanifest\" />\n  <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\" />\n  <meta name=\"theme-color\" content=\"#000000\" />\n  <meta name=\"description\" content=\"AI-FLow App\" />\n  <link rel=\"apple-touch-icon\" href=\"/logo192.png\" />\n  <link rel=\"manifest\" href=\"/manifest.json\" />\n  <link href=\"https://fonts.googleapis.com/css2?family=Roboto:wght@400;500;700&display=swap\" rel=\"stylesheet\">\n  <title>AI Flow</title>\n</head>\n\n<body>\n  <noscript>You need to enable JavaScript to run this app.</noscript>\n  <div id=\"root\"></div>\n  <script type=\"module\" src=\"/src/index.tsx\"></script>\n</body>\n\n</html>"
  },
  {
    "path": "packages/ui/jest.config.ts",
    "content": "import type { Config } from \"@jest/types\";\n\nconst config: Config.InitialOptions = {\n  verbose: true,\n  preset: \"ts-jest\",\n  testEnvironment: \"node\",\n  testMatch: [\"**/test/**/*.ts\", \"**/?(*.)+(spec|test).ts\"],\n};\n\nexport default config;\n"
  },
  {
    "path": "packages/ui/nginx.conf",
    "content": "server {\n    listen 80;\n    server_name localhost;\n    root /usr/share/nginx/html;\n    index index.html;\n\n    location / {\n        try_files $uri $uri/ /index.html;\n        add_header Content-Security-Policy \"script-src 'self' 'unsafe-inline';\";\n    }\n}"
  },
  {
    "path": "packages/ui/package.json",
    "content": "{\n  \"name\": \"ai-flow-front\",\n  \"version\": \"0.11.3\",\n  \"private\": true,\n  \"dependencies\": {\n    \"@headlessui/react\": \"^1.7.17\",\n    \"@mantine/core\": \"^7.7.1\",\n    \"@mantine/hooks\": \"^7.7.1\",\n    \"@testing-library/jest-dom\": \"^5.16.5\",\n    \"@testing-library/react\": \"^13.4.0\",\n    \"@testing-library/user-event\": \"^13.5.0\",\n    \"@types/chai\": \"^4.3.12\",\n    \"@types/lodash.debounce\": \"^4.0.8\",\n    \"@types/node\": \"^20.12.12\",\n    \"@types/three\": \"^0.164.0\",\n    \"@vitejs/plugin-react-swc\": \"^3.6.0\",\n    \"autoprefixer\": \"^10.4.14\",\n    \"axios\": \"^1.6.2\",\n    \"dotenv\": \"^16.0.3\",\n    \"framer-motion\": \"^10.16.4\",\n    \"github-markdown-css\": \"^5.4.0\",\n    \"i18next\": \"^22.5.0\",\n    \"i18next-browser-languagedetector\": \"^7.0.1\",\n    \"i18next-http-backend\": \"^2.2.1\",\n    \"lodash.debounce\": \"^4.0.8\",\n    \"polished\": \"^4.2.2\",\n    \"rdndmb-html5-to-touch\": \"^8.0.3\",\n    \"react\": \"^18.2.0\",\n    \"react-copy-to-clipboard\": \"^5.1.0\",\n    \"react-dnd\": \"^16.0.1\",\n    \"react-dnd-html5-backend\": \"^16.0.1\",\n    \"react-dnd-multi-backend\": \"^8.0.3\",\n    \"react-dnd-touch-backend\": \"^16.0.1\",\n    \"react-dom\": \"^18.2.0\",\n    \"react-dropzone\": \"^14.2.3\",\n    \"react-grid-layout\": \"^1.4.4\",\n    \"react-i18next\": \"^12.3.1\",\n    \"react-icons\": \"^4.11.0\",\n    \"react-joyride\": \"^2.7.2\",\n    \"react-markdown\": \"^9.0.0\",\n    \"react-resizable\": \"^3.0.5\",\n    \"react-sketch-canvas\": \"^6.2.0\",\n    \"react-switch\": \"^7.0.0\",\n    \"react-syntax-highlighter\": \"^15.6.1\",\n    \"react-toastify\": \"^9.1.3\",\n    \"react-tooltip\": \"^5.13.1\",\n    \"react18-json-view\": \"^0.2.9\",\n    \"reactflow\": \"^11.7.2\",\n    \"remark-gfm\": \"^4.0.0\",\n    \"socket.io-client\": \"^4.6.1\",\n    \"styled-components\": \"^5.3.10\",\n    \"tailwindcss\": \"^3.3.2\",\n    \"three\": \"^0.164.1\",\n    \"typescript\": \"^4.9.5\",\n    \"typescript-json-schema\": \"^0.63.0\",\n    \"video.js\": \"^8.12.0\",\n    \"videojs-wavesurfer\": \"^3.10.0\",\n    \"vite\": \"^5.2.11\",\n    \"vite-plugin-svgr\": \"^4.2.0\",\n    \"web-vitals\": \"^2.1.4\"\n  },\n  \"scripts\": {\n    \"start\": \"vite --host\",\n    \"build\": \"tsc && vite build\",\n    \"serve\": \"vite preview\",\n    \"test\": \"vitest\",\n    \"test:e2e\": \"playwright test\"\n  },\n  \"eslintConfig\": {\n    \"extends\": [\n      \"react-app\",\n      \"react-app/jest\"\n    ]\n  },\n  \"browserslist\": {\n    \"production\": [\n      \">0.2%\",\n      \"not dead\",\n      \"not op_mini all\"\n    ],\n    \"development\": [\n      \"last 1 chrome version\",\n      \"last 1 firefox version\",\n      \"last 1 safari version\"\n    ]\n  },\n  \"devDependencies\": {\n    \"@playwright/test\": \"^1.45.1\",\n    \"@types/dompurify\": \"^3.0.2\",\n    \"@types/jest\": \"^29.5.12\",\n    \"@types/react\": \"^18.3.3\",\n    \"@types/react-copy-to-clipboard\": \"^5.0.7\",\n    \"@types/react-dom\": \"^18.3.0\",\n    \"@types/react-grid-layout\": \"^1.3.5\",\n    \"@types/react-syntax-highlighter\": \"^15.5.13\",\n    \"@types/styled-components\": \"^5.1.26\",\n    \"@types/video.js\": \"^7.3.58\",\n    \"@vitest/coverage-v8\": \"^1.6.0\",\n    \"jsdom\": \"^24.0.0\",\n    \"postcss\": \"^8.4.38\",\n    \"postcss-preset-mantine\": \"^1.13.0\",\n    \"postcss-simple-vars\": \"^7.0.1\",\n    \"prettier\": \"^3.2.4\",\n    \"prettier-plugin-tailwindcss\": \"^0.5.11\",\n    
\"ts-jest\": \"^29.1.2\",\n    \"vite-bundle-visualizer\": \"^1.2.1\",\n    \"vitest\": \"^1.6.0\"\n  }\n}\n"
  },
  {
    "path": "packages/ui/postcss.config.cjs",
    "content": "module.exports = {\n  plugins: {\n    \"postcss-preset-mantine\": {},\n    \"postcss-simple-vars\": {\n      variables: {\n        \"mantine-breakpoint-xs\": \"36em\",\n        \"mantine-breakpoint-sm\": \"48em\",\n        \"mantine-breakpoint-md\": \"62em\",\n        \"mantine-breakpoint-lg\": \"75em\",\n        \"mantine-breakpoint-xl\": \"88em\",\n      },\n    },\n  },\n};\n"
  },
  {
    "path": "packages/ui/postcss.config.js",
    "content": "module.exports = {\n  plugins: {\n    tailwindcss: {},\n    autoprefixer: {},\n  },\n}\n"
  },
  {
    "path": "packages/ui/prettier.config.js",
    "content": "module.exports = {\n  plugins: ['prettier-plugin-tailwindcss'],\n}"
  },
  {
    "path": "packages/ui/public/health",
    "content": "OK"
  },
  {
    "path": "packages/ui/public/locales/en/aiActions.json",
    "content": "{\n    \"Summary\": \"Summary\",\n    \"SpellCheck\": \"Spell Check\",\n    \"VisualPrompt\": \"Visual Prompt\",\n    \"ConstructiveCritique\": \"Constructive Critique\",\n    \"SimpleExplanation\": \"Simple Explanation\",\n    \"Paraphrase\": \"Paraphrase\",\n    \"SentimentAnalysis\": \"Sentiment Analysis\",\n    \"TextExtension\": \"Text Extension\",\n    \"ClickToShowOutput\": \"Click to show output\"\n}"
  },
  {
    "path": "packages/ui/public/locales/en/config.json",
    "content": "{\n  \"configurationTitle\": \"Configuration\",\n  \"apiKeyDisclaimer\": \"We do not use or store your API keys.\",\n  \"openSourceDisclaimer\": \"The application code is open source.\",\n  \"apiKeyRevokeReminder\": \"Remember, you can revoke your keys at any time and generate new ones.\",\n  \"closeButtonLabel\": \"Close\",\n  \"validateButtonLabel\": \"Validate\",\n  \"likeProjectPrompt\": \"If you like this project, you can add a star on:\",\n  \"supportProjectPrompt\": \"You can support the future of the project and contact us through\",\n  \"Logout\": \"Logout\",\n  \"sections.core\": \"Base parameters\",\n  \"parameters.core.openai_api_key\": \"OpenAI API Key\",\n  \"parameters.core.stabilityai_api_key\": \"StabilityAI API Key\",\n  \"parameters.core.replicate_api_key\": \"Replicate API Key\",\n  \"parameters.extension.anthropic_api_key\": \"Anthropic API Key\",\n  \"sections.extension\": \"Extensions\",\n  \"userTabLabel\": \"User parameters\",\n  \"appParametersLabel\": \"App parameters\",\n  \"displayTabLabel\": \"Display parameters\",\n  \"nodesDisplayed\": \"Nodes enabled\",\n  \"configUpdated\": \"Configuration updated successfully\",\n  \"ShowMinimap\": \"Show minimap\",\n  \"UI\": \"User Interface\",\n  \"input\": \"Inputs\",\n  \"models\": \"Models\",\n  \"tools\": \"Tools\",\n  \"parameters.extension.deepseek_api_key\": \"DeepSeek API Key\",\n  \"parameters.extension.openrouter_api_key\": \"OpenRouter API Key\",\n  \"incompleteLoadingPleaseRestart\": \"The application failed to load all data. Please restart the app.\"\n}\n"
  },
  {
    "path": "packages/ui/public/locales/en/dialogs.json",
    "content": "{\n    \"attachNodeTitle\": \"Attach Node\",\n    \"attachNodeAction\": \"Attach\"\n}"
  },
  {
    "path": "packages/ui/public/locales/en/flow.json",
    "content": "{\n  \"Flow\": \"Flow\",\n  \"AddTab\": \"Add Tab\",\n  \"ShowOnlyOutputs\": \"Show only outputs\",\n  \"ShowOnlyParams\": \"Show only params\",\n  \"ApiKeyRequiredMessage\": \"Please provide your API Key in the configuration settings to run your flow successfully.\",\n  \"Prompt\": \"Prompt\",\n  \"ClickToShowOutput\": \"Click to show output\",\n  \"RoleInitPrompt\": \"Indicate the role you wish the AI to adopt in the upcoming interactions. For example, 'Behave as a Literary Critic'.\",\n  \"EnterURL\": \"Web Extractor\",\n  \"YoutubeTranscriptNodeName\": \"Youtube Transcript\",\n  \"URLPlaceholder\": \"Input a URL\",\n  \"Input\": \"Inputs\",\n  \"InputImage\": \"Image\",\n  \"InputPlaceholder\": \"Enter your text here to serve as input for other nodes.\",\n  \"InputImagePlaceholder\": \"Enter the URL of the image you want to use.\",\n  \"DALLE\": \"DALL-E\",\n  \"ClickToShowImageOutput\": \"Click to show image output\",\n  \"JsonView\": \"JSON View\",\n  \"TopologicalView\": \"Outputs\",\n  \"currentNodeView\": \"Current Node\",\n  \"Upload\": \"Upload\",\n  \"Download\": \"Download\",\n  \"Text\": \"Text\",\n  \"URL\": \"URL\",\n  \"YoutubeVideo\": \"Youtube Transcript\",\n  \"Models\": \"Models\",\n  \"GPT\": \"GPT Model\",\n  \"GPTPrompt\": \"GPT Prompt\",\n  \"DataSplitter\": \"Data Splitter\",\n  \"ReplicateModel\": \"Replicate\",\n  \"NoContextPrompt\": \"GPT Prompt\",\n  \"PromptPlaceholder\": \"Enter your prompt here, for example 'Create a Twitter thread based on the data I sent you.'\",\n  \"MergePromptPlaceholder\": \"Enter your text for the merge, using ${input-1} and ${input-2} as placeholders. You can add additional text, for example: \\n Answer to ${input-1} by considering ${input-2}.\",\n  \"VisionPromptPlaceholder\": \"Please enter your prompt here to start the image analysis. For example, you could write 'Describe this image'\",\n  \"VisionImageURLPlaceholder\": \"Enter the URL of the image you want to use.\",\n  \"DallEPromptPlaceholder\": \"Enter your prompt here, for example 'A dog and a cat playing in the desert'\",\n  \"ImageGeneration\": \"Image Generation\",\n  \"AdvancedSection\": \"Advanced\",\n  \"AiAction\": \"AI Action\",\n  \"LLMPrompt\": \"GPT\",\n  \"AiDataSplitter\": \"Data Splitter\",\n  \"MergerNode\": \"Merge Text\",\n  \"inputHelp\": \"This node is used to input text.\",\n  \"inputImageHelp\": \"This node is used to input an image URL.\",\n  \"urlInputHelp\": \"Enter a valid URL and the node will retrieve the data from this URL.\",\n  \"youtubeTranscriptHelp\": \"This node retrieves the subtitles of a YouTube video from its URL.\",\n  \"gptHelp\": \"This node allows you to configure a GPT model, specify its role, and the data it will use to respond.\",\n  \"gptPromptHelp\": \"This node allows you to query a GPT model. It shares its context with other nodes connected to the model.\",\n  \"noContextPromptHelp\": \"This node allows you to query GPT without context, just from input data and a prompt. 
You don't have to connect it to a GPT Model.\",\n  \"dallePromptHelp\": \"This node uses the DALL-E model to generate images from a textual description.\",\n  \"stableDiffusionPromptHelp\": \"This node uses the Stable Diffusion model to generate images from a textual description.\",\n  \"stableVideoDiffusionPromptHelp\": \"This node uses Replicate to launch Stable Video Diffusion.\",\n  \"aiActionPromptHelp\": \"This node uses GPT-4 to perform simple actions, without having to write a prompt.\",\n  \"llmPromtHelp\": \"This node allows you to send prompts to GPT-3.5 or GPT-4\",\n  \"replicateHelp\": \"This node uses Replicate to give access to a large number of models.\",\n  \"mergerPromptHelp\": \"This node allows you to merge 2 inputs.\",\n  \"gptVisionPromptHelp\": \"This node uses GPT-4 Vision and takes an image URL as input.\",\n  \"dataSplitterHelp\": \"This node is used to split data into several parts. You can specify upstream how many parts you want to create. You can also run it individually so that it finds the exact number of outputs to generate.\",\n  \"socketConnectionLost\": \"The connection has been lost.\",\n  \"ClickToSelectModel\": \"Click to select model\",\n  \"Or\": \"OR\",\n  \"EnterModelNameDirectly\": \"Enter model name directly\",\n  \"Load\": \"Load\",\n  \"LoadMore\": \"Load more\",\n  \"SpotlightModels\": \"Spotlight Models\",\n  \"AllModels\": \"All Models\",\n  \"EdgeType\": \"Edge type\",\n  \"CannotChangeTabWhileRunning\": \"Cannot change tab while running\",\n  \"EnterUrlToDesiredFile\": \"Enter file URL\",\n  \"Transition\": \"Transition\",\n  \"transitionHelp\": \"This node can be used to organize the flow.\",\n  \"MissingFieldsMessage\": \"Required fields are missing\",\n  \"Node\": \"Node\",\n  \"MissingFields\": \"Missing fields\",\n  \"CannotDeleteLastFlow\": \"Cannot delete the last flow\",\n  \"HideSidebar\": \"Hide sidebar\",\n  \"ShowSidebar\": \"Show sidebar\",\n  \"fileUploadHelp\": \"This node can be used to load a file.\",\n  \"llmPromptHelp\": \"This node allows you to send prompts to GPT-3.5 or GPT-4\",\n  \"Output\": \"Output\",\n  \"Inputs\": \"Inputs\",\n  \"Parameters\": \"Parameters\",\n  \"Duplicate\": \"Duplicate\",\n  \"OpeninSidepane\": \"Open in sidepane\",\n  \"ClearOutput\": \"Clear Output\",\n  \"RemoveNode\": \"Remove Node\",\n  \"ExpiredURL\": \"Expired URL\",\n  \"NoNodeSelected\": \"No node selected yet.\",\n  \"ClickOnNodeToSelectIt\": \"Click on any node to select it.\",\n  \"Field\": \"Field\",\n  \"DragAndDropNodes\": \"Drag and drop nodes onto the canvas to add them.\",\n  \"CopiedToClipboard\": \"Copied to clipboard.\",\n  \"DocumentToText\": \"Document-to-Text\",\n  \"documentToTextHelp\": \"Convert .pdf .txt .csv .json .html files to simple text\",\n  \"TextToSpeech\": \"Text-to-Speech\",\n  \"textToSpeechHelp\": \"Convert text to an audio file using the OpenAI TTS model\",\n  \"error.upload_failed\": \"Upload failed. Please check your configuration to enable file upload.\",\n  \"InputTextPlaceholder\": \"Enter your text here\",\n  \"DownloadFile\": \"Download File\",\n  \"FileUploaded\": \"File uploaded\",\n  \"GenericPromptPlaceholder\": \"Enter your prompt here\",\n  \"GenericNegativePromptPlaceholder\": \"Enter your negative prompt here\",\n  \"EnterCustomName\": \"Enter custom name\",\n  \"NodeColor\": \"Change node color\",\n  \"ChangeName\": \"Change name\",\n  \"RemoveFlow\": \"Remove flow\",\n  \"HideHint\": \"Hide\",\n  \"TextDocumentHint\": \"Please note that this node only provides files as URLs. To use a document (.pdf, .txt) in text format, consider using the 'Document-to-Text' node.\",\n  \"Display\": \"Display\",\n  \"displayHelp\": \"This resizable node allows you to display content.\",\n  \"Validate\": \"Validate\",\n  \"AI\": \"AI\",\n  \"Separator\": \"Separator\",\n  \"ClaudeAnthropic\": \"Claude\",\n  \"claudeAnthropichHelp\": \"This node uses Claude from Anthropic to generate text.\",\n  \"noDataAvailableForThisNode\": \"No data available for this node\",\n  \"learnMore\": \"Learn more:\",\n  \"Help\": \"Help\",\n  \"cookiesConsentLabelPlaceholder\": \"Agree to all\",\n  \"cookiesConsentLabelHelp\": \"For some pages, we need to click on the cookie consent button to access the data. This instruction helps locate the button.\",\n  \"EditTextContent\": \"Edit text content\",\n  \"ShowCoordinates\": \"Show Coordinates\",\n  \"ShowNodesConfig\": \"Show Nodes Config\",\n  \"DeleteAll\": \"Delete All\",\n  \"DeleteOutputs\": \"Delete Outputs\",\n  \"ReplaceText\": \"Replace Text\",\n  \"ReplaceTextInputPlaceholder\": \"Enter the full text where the term will be replaced.\",\n  \"ReplaceTextSearchPlaceholder\": \"Enter the term or regex pattern to be replaced.\",\n  \"ReplaceTextReplacePlaceholder\": \"Enter the replacement term.\",\n  \"replaceTextNodeHelp\": \"Use this node to find and replace specific text or patterns within the input.\",\n  \"openaio1Help\": \"Advanced language models trained for complex reasoning, excelling in scientific, mathematical, and programming challenges.\",\n  \"ContextPlaceholder\": \"Additional context that will be used to answer your prompt.\",\n  \"deepSeekHelp\": \"Access DeepSeek LLMs through this node.\",\n  \"openRouterHelp\": \"OpenRouter provides completion API to multiple models & providers. This node requires you to provide an API Key.\",\n  \"Generate Number\": \"Generate Number\",\n  \"generateNumberHelp\": \"Generate a random number\",\n  \"httpGetProcessorURLPlaceholder\": \"Enter the URL to request\",\n  \"httpGetProcessorURLDescription\": \"The URL that the HTTP GET request will be sent to.\",\n  \"httpGetProcessorHeadersPlaceholder\": \"Enter headers in JSON format\",\n  \"httpGetProcessorHeadersDescription\": \"The headers to include in the HTTP GET request.\",\n  \"httpGetProcessorHelp\": \"Send an HTTP GET request with the specified headers.\",\n  \"gptImageHelp\": \"Generate or Edit an image using GPT Image\",\n  \"gptImageMaskDescription\": \"You can provide a mask to indicate where the image should be edited. You can use the prompt to describe the full new image, not just the erased area. If you provide multiple input images, the mask will be applied to the first image.\",\n  \"dallEDeprecated\": \"Most recent OpenAI models are now available via the new GPT Image node and are superior to DALL-E. DALL-E remains available if needed.\",\n  \"TTSInstructionPlaceholder\": \"Ex: Speak in a cheerful and positive tone.\",\n  \"TTSInstructionDescription\": \" Prompt the model to control aspects of speech (accent, emotional range, intonation, speed, tone, ...)\",\n  \"PopularModels\": \"Popular Models\",\n  \"removeBackgroundDescription\": \"Remove the background from an image using the StabilityAI API.\",\n  \"upscaleFastDescription\": \"Upscale an image using StabilityAI API\",\n  \"fluxDescription\": \"Generate an image using the FLUX model.\",\n  \"fluxKontextDescription\": \"A state-of-the-art text-based image editing model that delivers high-quality outputs with excellent prompt following and consistent results for transforming images through natural language\",\n  \"faceswapDescription\": \"Seamlessly swap faces between images, allowing for realistic and precise facial replacements.\",\n  \"removeBgDescription\": \"Remove the background from an image using lucataco/remove-bg from Replicate API.\",\n  \"upscaleDescription\": \"Real-ESRGAN with optional face correction and adjustable upscale\",\n  \"GoogleUpscaleDescription\": \"Upscale an image using Google's model.\",\n  \"moondreamDescription\": \"Moondream is a vision model that responds to prompts about a given image.\",\n  \"llamaDescription\": \"Meta's flagship 405 billion parameter language model, fine-tuned for chat completions.\",\n  \"imagenDescription\": \"Imagen produces stunning, detailed images with precision.\",\n  \"recraftSVGDescription\": \"Recraft SVG offers advanced vector image generation, enabling scalable and creative SVG designs.\",\n  \"recraftDescription\": \"Recraft V3 is a text-to-image model with the ability to generate long texts, and images in a wide list of styles. \",\n  \"video01Description\": \"Video-01 provides dynamic video generation.\",\n  \"video01LiveDescription\": \"Video-01-Live provides dynamic video generation, ideal for 2D animation.\",\n  \"klingDescription\": \"Kling delivers robust video generation and animation solutions, available in both professional and standard versions.\",\n  \"veo3Description\": \"Google’s flagship Veo 3 text-to-video model, with audio\"\n}\n"
  },
  {
    "path": "packages/ui/public/locales/en/nodeHelp.json",
    "content": "{\n  \"input-text\": {\n    \"description\": \"Text Node can be used to transfer text input to other nodes.\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/docs/nodes-presentation/Input-nodes\",\n        \"label\": \"Add an input in AI-FLOW\"\n      }\n    ]\n  },\n  \"url_input\": {\n    \"description\": \"Retrieve textual content from an URL.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/web-extractor-demo.gif\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/docs/nodes-presentation/Input-nodes#url\",\n        \"label\": \"Add an input in AI-FLOW\"\n      }\n    ]\n  },\n  \"llm-prompt\": {\n    \"description\": \"Processes inputs using GPT models by OpenAI, which can understand and generate responses based on the context provided by the user.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/gpt-demo.gif\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/blog/summarize-doc-post\",\n        \"label\": \"How to Summarize Documents or Ask Questions Using AI-FLOW\"\n      },\n      {\n        \"url\": \"https://docs.ai-flow.net/blog/summarize-ytb-post\",\n        \"label\": \"How to Summarize a YouTube Video Using AI-FLOW\"\n      }\n    ]\n  },\n  \"gpt-vision\": {\n    \"description\": \"Use GPT-4-o to analyze a picture.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/vision-demo.gif\",\n    \"docUrls\": []\n  },\n  \"youtube_transcript_input\": {\n    \"description\": \"Captures transcripts directly from YouTube API, allowing for further processing like translation, summarization, or keyword extraction.\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/blog/summarize-ytb-post\",\n        \"label\": \"How to Summarize a YouTube Video Using AI-FLOW\"\n      }\n    ]\n  },\n  \"dalle-prompt\": {\n    \"description\": \"Allows users to create detailed prompts to generate images using the DALL-E 3 model, combining creativity and AI for custom visual content. Please note that OpenAI only allow 5-7 images per minute. Don't forget to save your file; OpenAI host files for 1H.\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/docs/nodes-presentation/text-to-image-processing\",\n        \"label\": \"Image Generation in AI-FLOW\"\n      }\n    ]\n  },\n  \"stable-diffusion-stabilityai-prompt\": {\n    \"description\": \"Stable Diffusion SDXL model by Stability AI, offering quick and low cost image generation. Don't forget to save your file; files are available for 12 hours.\",\n    \"docUrls\": []\n  },\n  \"merger-prompt\": {\n    \"description\": \"Used for combining two outputs. Each output need to be used with his specific identifier that will be replaced dynamically e.g ${input-1} and ${input-2}. 
Use the buttons at the top of the node to insert the identifier automatically !\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/merge-demo.gif\",\n    \"docUrls\": []\n  },\n  \"claude-anthropic-processor\": {\n    \"description\": \"Processes inputs using Claude 3 by Anthropic, which can understand and generate responses based on the context provided by the user.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/claude-demo.gif\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/blog/anthropic-claude-api\",\n        \"label\": \"Access Claude 3 from Anthropic API through AI-FLOW\"\n      }\n    ]\n  },\n  \"document-to-text-processor\": {\n    \"description\": \"Converts various document formats into plain text, enabling text extraction for processing and analysis. This nodes supports .pdf, .txt, .json, .html, .csv.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/document-to-text-demo.gif\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/blog/summarize-doc-post\",\n        \"label\": \"How to Summarize Documents or Ask Questions Using AI-FLOW\"\n      }\n    ]\n  },\n  \"openai-text-to-speech-processor\": {\n    \"description\": \"Converts text to natural-sounding speech using OpenAI's advanced text-to-speech models, facilitating accessibility and multimedia applications. Don't forget to save your file, OpenAI host the file for 1H.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/tts-demo.gif\",\n    \"docUrls\": []\n  },\n  \"stabilityai-generic-processor\": {\n    \"description\": \"A versatile node capable of interfacing with StabilityAI API to perform various task such as remove background, search and replace and more. Don't forget to save your file; files are available for 12 hours.  \",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/stabilityai-demo.gif\",\n    \"docUrls\": []\n  },\n  \"stabilityai-stable-diffusion-3-processor\": {\n    \"description\": \"Integrates the latest Stable Diffusion 3 capabilities for high-quality image generation. Don't forget to save your file; files are available for 12 hours.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/sd3-demo.gif\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/blog/stable-diffusion-3-api\",\n        \"label\": \"Access Stable Diffusion 3 API through AI-FLOW\"\n      }\n    ]\n  },\n  \"file\": {\n    \"description\": \"Handles the uploading, storage, and retrieval of files, supporting various file types for use within the system. This node does not extract file content. Files are available 12H.\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/blog/summarize-doc-post\",\n        \"label\": \"How to Summarize Documents or Ask Questions Using AI-FLOW\"\n      }\n    ]\n  },\n  \"ai-data-splitter\": {\n    \"description\": \"Split an input into multiple outputs using two available modes: AI mode and Manual mode. In Manual mode, you must specify a separator. This can be useful for generating content based on a list of ideas or concepts. 
You can specify an estimated number of outputs to prepare your flow accordingly.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/splitter-demo.gif\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/docs/nodes-presentation/split-input\",\n        \"label\": \"Split input with AI\"\n      }\n    ]\n  },\n  \"replicate\": {\n    \"description\": \"A versatile node capable of interfacing with the Replicate API. Explore various model for text, image, audio, 3d model generation ! Replicate cost vary per model, and per time usage. Please not that less used models needs a warmup time, for this reason, first launch can be longer. Don't forget to save your file; files are available for 12 hours.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/replicate-demo.gif\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/docs/nodes-presentation/replicate-node\",\n        \"label\": \"Access Diverse AI Models through Replicate\"\n      }\n    ]\n  },\n  \"transition\": {\n    \"description\": \"Use this node to organize your flow. The transition node only transfer output to another node.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/transition-demo.gif\",\n    \"docUrls\": []\n  },\n  \"display\": {\n    \"description\": \"This resizeable node can be use to display every output at the size you wish. You can also use it as an intermidiary node.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/display-demo.gif\",\n    \"docUrls\": []\n  },\n  \"deepseek-processor\": {\n    \"description\": \"The DeepSeek node is designed to interact with the DeepSeek API, enabling you to access various V3 and R1 models.\"\n  },\n  \"generate-number-processor\": {\n    \"description\": \"The Generate Number node is designed to generate a random number within a specified range.\"\n  },\n  \"openrouter-processor\": {\n    \"description\": \"OpenRouter provides an OpenAI-compatible completion API to multiple models & providers. \\n\\nThis node is provided as a User Integration, to use it, please provide your OpenRouter API Key in the Secure Store.\"\n  },\n  \"http-get-processor\": {\n    \"description\": \"Send a HTTP GET request with the specified headers.\"\n  },\n  \"gpt-image-processor\": {\n    \"description\": \"The GPT Image node is designed to interact with the GPT Image API, enabling you to generate and edit images based on a prompt and reference images.\"\n  }\n}\n"
  },
  {
    "path": "packages/ui/public/locales/en/tips.json",
    "content": "{\n  \"tips\": [\n    {\n      \"title\": \"Getting started with AI-Flow\",\n      \"description\": \"This guide will help you get started with AI-Flow, adding nodes, connecting them, customizing workspace.\",\n      \"url\": \"https://docs.ai-flow.net/blog/getting-started-with-ai-flow/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/blog-card-images/app-overview-r.png\",\n      \"newFeature\": true,\n      \"timeEstimated\": \"4min read\"\n    },\n    {\n      \"title\": \"Replicate Node Usage\",\n      \"description\": \"Seamlessly Integrate Replicate API with AI-FLOW for AI workflow automation.\",\n      \"url\": \"https://docs.ai-flow.net/blog/replicate-node/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/page-images/replicate-node/model-popup.png\",\n      \"newFeature\": true,\n      \"timeEstimated\": \"2min read\"\n    },\n    {\n      \"title\": \"How to Use Subflows\",\n      \"description\": \"This feature allows you to create custom nodes based on your flows. \",\n      \"url\": \"https://docs.ai-flow.net/docs/pro-features/api-builder/subflow/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/page-images/api-builder/subflow-preview-3.png\",\n      \"newFeature\": true,\n      \"timeEstimated\": \"2min read\"\n    },\n    {\n      \"title\": \"How to Create Loops\",\n      \"description\": \"This feature allows you loop on a Subflow. \",\n      \"url\": \"https://docs.ai-flow.net/docs/pro-features/api-builder/subflow-loop/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/page-images/api-builder/subflow-loop-4.png\",\n      \"newFeature\": true,\n      \"timeEstimated\": \"4min read\"\n    },\n    {\n      \"title\": \"StabilityAI API with AI-FLOW\",\n      \"description\": \"This integration offer a versatile range of image processing capabilities.\",\n      \"url\": \"https://docs.ai-flow.net/blog/stabilityai-api/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/blog-images/stabilityai.png\",\n      \"timeEstimated\": \"2min read\"\n    },\n\n    {\n      \"title\": \"Accessing the API Builder View\",\n      \"description\": \"This view allows you to monitor the current state of the API, learn how to use your API, and more.\",\n      \"url\": \"https://docs.ai-flow.net/docs/pro-features/api-builder/builder-view/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/blog-card-images/blog-api-builder-1.png\",\n      \"newFeature\": true,\n      \"timeEstimated\": \"3min read\"\n    },\n    {\n      \"title\": \"Add Webhooks to your Flow\",\n      \"description\": \"The Webhook Node is a powerful tool that allows you to send outputs as webhooks. 
\",\n      \"url\": \"https://docs.ai-flow.net/docs/pro-features/api-builder/webhooks/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/blog-card-images/blog-api-builder-1.png\",\n      \"newFeature\": true,\n      \"timeEstimated\": \"3min read\"\n    },\n    {\n      \"title\": \"Run Flow through API\",\n      \"description\": \"Discover how to create and manage an API around a given Flow to integrate it seamlessly to other tools.\",\n      \"url\": \"https://docs.ai-flow.net/docs/category/api-builder/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/blog-card-images/blog-api-builder-1.png\",\n      \"newFeature\": true,\n      \"timeEstimated\": \"2min read\"\n    },\n    {\n      \"title\": \"Full Documentation\",\n      \"description\": \"The main page of AI-FLOW Documentation, accessible through https://docs.ai-flow.net/\",\n      \"url\": \"https://docs.ai-flow.net/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/ai-flow-social-card.png\"\n    }\n  ],\n  \"docAvailable\": \"Full documentation is available here : \",\n  \"tipsSection\": \"Tips\"\n}\n"
  },
  {
    "path": "packages/ui/public/locales/en/tour.json",
    "content": "{\n    \"firstTimeHere\": \"First time here?\",\n    \"discoverApp\": \"Unlock tips to make the most of our app in just 15 seconds!\",\n    \"iKnowTheApp\": \"I know the app\",\n    \"letsStart\": \"Let's start!\",\n    \"welcomeToAIFLOW\": \"Welcome to AI-FLOW\",\n    \"addNodesWithDragAndDrop\": \"Easily add nodes to your canvas with a simple drag & drop.\",\n    \"dragAndDrop\": \"Drag and Drop\",\n    \"addingNodes\": \"Adding Nodes\",\n    \"runningANode\": \"Running a Node\",\n    \"connectingNodes\": \"Connecting Nodes\",\n    \"runEverything\": \"Run Everything\",\n    \"exploringMoreModels\": \"Exploring More Models\",\n    \"youveGotTheBasics\": \"You've got the basics!\",\n    \"executeSingleNode\": \"You can execute a single node by clicking the run button.\",\n    \"runNode\": \"Run Node\",\n    \"handlesExplanation\": \"Blue handles are for inputs, and orange handles are for outputs. For GPT Nodes, inputs add context to your prompts.\",\n    \"connectNodes\": \"Connect Nodes\",\n    \"executeAllNodesDescription\": \"This button executes all nodes in your flow, overwriting previous outputs.\",\n    \"replicateNodeDescription\": \"Expand your capabilities with the Replicate Node, providing access to a wide range of models for advanced use-cases.\",\n    \"replicateNode\": \"Replicate Node\",\n    \"checkHelpForAdvanced\": \"For advanced use-cases, check the Help section at the bottom left.\",\n    \"configDescription\": \"Here you can add your API Keys to be able to use the app.\",\n    \"config\": \"Config\"\n  }"
  },
  {
    "path": "packages/ui/public/locales/en/version.json",
    "content": "{\n  \"versionInfo\": {\n    \"versionNumber\": \"v0.7.3\",\n    \"description\": \"Discover the latest features added in version 0.7.3\"\n  },\n  \"features\": [\n    {\n      \"title\": \"Improved Web Extractor\",\n      \"description\": \"You can now customized how data is extracted.\"\n    },\n    {\n      \"title\": \"New Action: Help\",\n      \"description\": \"Each node now includes a 'Help' action that enables you to learn how to use it.\"\n    }\n  ],\n  \"articles\": [\n    {\n      \"title\": \"Generate Consistent Characters Using AI - Part 1\",\n      \"url\": \"https://docs.ai-flow.net/blog/generate-consistent-characters-ai/\"\n    },\n    {\n      \"title\": \"How to automate story and image creation using AI - Part 2\",\n      \"url\": \"https://docs.ai-flow.net/blog/automate-story-creation-2/\"\n    },\n    {\n      \"title\": \"How to use Documents in AI-FLOW\",\n      \"url\": \"https://docs.ai-flow.net/blog/summarize-doc-post/\"\n    }\n  ],\n  \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/gif-v0.7.3.gif\",\n  \"newVersionAvailable\": \"A new version is now available !\",\n  \"newVersionDefaultMessage\": \"New features and bug fixes are available. To access them, please refresh your page.\",\n  \"refresh\": \"Refresh\"\n}\n"
  },
  {
    "path": "packages/ui/public/locales/fr/aiActions.json",
    "content": "{\n    \"Summary\": \"Résumé\",\n    \"SpellCheck\": \"Vérification Orthographique\",\n    \"VisualPrompt\": \"Prompt Visuelle\",\n    \"ConstructiveCritique\": \"Critique Constructive\",\n    \"SimpleExplanation\": \"Explication Simple\",\n    \"Paraphrase\": \"Paraphrase\",\n    \"SentimentAnalysis\": \"Analyse de Sentiment\",\n    \"TextExtension\": \"Extension du Texte\",\n    \"ClickToShowOutput\": \"Cliquez pour afficher le résultat\"\n}"
  },
  {
    "path": "packages/ui/public/locales/fr/config.json",
    "content": "{\n  \"configurationTitle\": \"Configuration\",\n  \"apiKeyDisclaimer\": \"Nous n'utilisons ni ne stockons vos clés API.\",\n  \"openSourceDisclaimer\": \"Ce projet est open source.\",\n  \"apiKeyRevokeReminder\": \"N'oubliez pas, vous pouvez révoquer vos clés à tout moment et en générer de nouvelles.\",\n  \"closeButtonLabel\": \"Fermer\",\n  \"validateButtonLabel\": \"Valider\",\n  \"likeProjectPrompt\": \"Si vous aimez ce projet, vous pouvez ajouter une étoile sur:\",\n  \"supportProjectPrompt\": \"Vous pouvez soutenir l'avenir du projet et nous contacter via\",\n  \"Logout\": \"Se déconnecter\",\n  \"sections.core\": \"Paramètres de base\",\n  \"parameters.core.openai_api_key\": \"Clé API OpenAI\",\n  \"parameters.core.stabilityai_api_key\": \"Clé API StabilityAI\",\n  \"parameters.core.replicate_api_key\": \"Clé API Replicate\",\n  \"parameters.extension.anthropic_api_key\": \"Clé API Anthropic\",\n  \"sections.extension\": \"Extensions\",\n  \"userTabLabel\": \"Paramètres utilisateur\",\n  \"appParametersLabel\": \"Paramètres de l'application\",\n  \"displayTabLabel\": \"Paramètres d'affichage\",\n  \"nodesDisplayed\": \"Nœuds activés\",\n  \"configUpdated\": \"Configuration mise à jour avec succès\",\n  \"ShowMinimap\": \"Afficher la minimap\",\n  \"UI\": \"Interface Utilisateur\",\n  \"input\": \"Entrées\",\n  \"models\": \"Modèles\",\n  \"tools\": \"Outils\",\n  \"parameters.extension.deepseek_api_key\": \"Clé API DeepSeek\",\n  \"parameters.extension.openrouter_api_key\": \"Clé API OpenRouter\",\n  \"incompleteLoadingPleaseRestart\": \"L’application n’a pas pu charger toutes les données. Veuillez redémarrer l’application.\"\n}\n"
  },
  {
    "path": "packages/ui/public/locales/fr/dialogs.json",
    "content": "{\n    \"attachNodeTitle\": \"Attacher un noeud\",\n    \"attachNodeAction\": \"Attacher\"\n}"
  },
  {
    "path": "packages/ui/public/locales/fr/flow.json",
    "content": "{\n  \"Flow\": \"Flow\",\n  \"AddTab\": \"Ajouter un Flow\",\n  \"ShowOnlyOutputs\": \"Afficher uniquement les résultats\",\n  \"ShowOnlyParams\": \"Afficher uniquement les paramètres\",\n  \"Prompt\": \"Prompt\",\n  \"ApiKeyRequiredMessage\": \"Veuillez fournir votre clé API dans les paramètres de configuration pour exécuter correctement votre flow.\",\n  \"ClickToShowOutput\": \"Cliquez pour afficher le résultat\",\n  \"RoleInitPrompt\": \"Indiquez le rôle que vous souhaitez que l'IA adopte lors de vos prochaines interactions. Par exemple, 'Comporte toi comme un Critique Littéraire'.\",\n  \"EnterURL\": \"Extracteur Web\",\n  \"YoutubeTranscriptNodeName\": \"Transcription Youtube\",\n  \"URLPlaceholder\": \"Saisissez une URL\",\n  \"Input\": \"Entrées\",\n  \"InputImage\": \"Image\",\n  \"InputPlaceholder\": \"Saisissez votre texte ici pour qu'il serve d'entrée à d'autres nœuds\",\n  \"InputImagePlaceholder\": \"Saisissez l'URL vers l'image que vous souhaitez utiliser\",\n  \"DALLE\": \"DALL-E\",\n  \"ClickToShowImageOutput\": \"Cliquez pour afficher la sortie d'image\",\n  \"JsonView\": \"Vue JSON\",\n  \"TopologicalView\": \"Résultats\",\n  \"currentNodeView\": \"Noeud selectionné\",\n  \"Upload\": \"Importer\",\n  \"Download\": \"Télécharger\",\n  \"Text\": \"Texte\",\n  \"URL\": \"URL\",\n  \"YoutubeVideo\": \"Transcription Youtube\",\n  \"Models\": \"Modèles\",\n  \"GPT\": \"Modèle GPT\",\n  \"GPTPrompt\": \"GPT Prompt\",\n  \"NoContextPrompt\": \"GPT Prompt\",\n  \"PromptPlaceholder\": \"Entrez votre prompt ici, par exemple 'Crée un thread Twitter basé sur les données que je t'ai envoyées.'\",\n  \"MergePromptPlaceholder\": \"Entrez votre texte pour la fusion en utilisant les constantes ${input-1} et ${input-2}. Vous pouvez ajouter du texte supplémentaire, par exemple : \\n Répond à ${input-1} en tenant compte de ${input-2}.\",\n  \"VisionPromptPlaceholder\": \"Entrez votre prompt ici pour analyser l'image en entrée. Par exemple 'Crée une description de cette image'\",\n  \"VisionImageURLPlaceholder\": \"Saisissez l'URL vers l'image que vous souhaitez utiliser\",\n  \"DallEPromptPlaceholder\": \"Entrez votre prompt ici, par exemple 'Un chien et un chat jouant dans le désert'\",\n  \"DataSplitter\": \"Data Splitter\",\n  \"ImageGeneration\": \"Génération d'image\",\n  \"AdvancedSection\": \"Avancé\",\n  \"AiAction\": \"Action IA\",\n  \"LLMPrompt\": \"GPT\",\n  \"AiDataSplitter\": \"Data Splitter\",\n  \"MergerNode\": \"Fusion de Texte\",\n  \"ReplicateModel\": \"Replicate\",\n  \"inputHelp\": \"Ce noeud sert à saisir du texte.\",\n  \"inputImageHelp\": \"Ce noeud permet de visualiser une image via une URL.\",\n  \"urlInputHelp\": \"Saisissez une URL valide et le noeud récupérera les données de cette URL.\",\n  \"youtubeTranscriptHelp\": \"Ce noeud récupère les sous titres d'une vidéo Youtube à partir de son URL\",\n  \"gptHelp\": \"Ce noeud permet de configurer un modèle GPT, de lui préciser son rôle, et les données sur lesquelles il se basera pour répondre.\",\n  \"gptPromptHelp\": \"Ce noeud permet d'interroger un modèle GPT. Il partage son contexte avec les autres noeuds connectés au modèle.\",\n  \"noContextPromptHelp\": \"Ce noeud permet d'interroger GPT sans contexte, juste à partir de données d'entrées et d'un prompt. 
Pas besoin de le connecter à un modèle.\",\n  \"stableDiffusionPromptHelp\": \"Ce noeud utilise le modèle Stable Diffusion pour générer des images à partir d'une description textuelle.\",\n  \"stableVideoDiffusionPromptHelp\": \"Ce noeud utilise Replicate pour lancer le modèle Stable Video Diffusion.\",\n  \"aiActionPromptHelp\": \"Ce noeud permet de réaliser des actions simples avec GPT-4 sans préciser de prompt.\",\n  \"llmPromtHelp\": \"Ce noeud permet de traiter une prompt avec GPT-3.5 ou GPT-4\",\n  \"dataSplitterHelp\": \"Ce noeud sert à diviser les données en plusieurs parties. Vous pouvez spécifier en amont combien de parties vous voulez créer. Vous pouvez également le lancer unitairement pour qu'il trouve le nombre exact de sorties à générer.\",\n  \"replicateHelp\": \"Ce nœud utilise Replicate pour donner accès à un grand nombre de modèles.\",\n  \"mergerPromptHelp\": \"Ce nœud vous permet de fusionner 2 entrées.\",\n  \"gptVisionPromptHelp\": \"Ce nœud utilise GPT-4 Vision et prend une URL d'image comme entrée.\",\n  \"socketConnectionLost\": \"La connexion a été perdue. \\n\\n Vous pouvez réessayer plus tard ou installer l'application localement pour éviter ce type de problème à l'avenir.\",\n  \"ClickToSelectModel\": \"Cliquez pour sélectionner un modèle\",\n  \"Or\": \"OU\",\n  \"EnterModelNameDirectly\": \"Entrez directement le nom du modèle\",\n  \"Load\": \"Charger\",\n  \"LoadMore\": \"Charger plus\",\n  \"SpotlightModels\": \"Modèles Vedettes\",\n  \"AllModels\": \"Tous les Modèles\",\n  \"EdgeType\": \"Type d'arête\",\n  \"CannotChangeTabWhileRunning\": \"Vous ne pouvez pas changer d'onglet quand un lancement est en cours.\",\n  \"Transition\": \"Transition\",\n  \"transitionHelp\": \"Ce noeud sert uniquement à organiser le flow.\",\n  \"MissingFieldsMessage\": \"Des champs nécessaires sont manquants\",\n  \"Node\": \"Noeud\",\n  \"MissingFields\": \"Champ manquant\",\n  \"CannotDeleteLastFlow\": \"Impossible de supprimer le dernier Flow\",\n  \"HideSidebar\": \"Cacher la barre\",\n  \"ShowSidebar\": \"Afficher la barre\",\n  \"File\": \"Fichier\",\n  \"EnterUrlToDesiredFile\": \"Entrez l'URL\",\n  \"fileUploadHelp\": \"Utilisez ce noeud pour charger un fichier\",\n  \"llmPromptHelp\": \"Ce noeud permet d'envoyer des prompts à GPT-3.5 ou GPT-4\",\n  \"Output\": \"Résultat\",\n  \"Inputs\": \"Entrées\",\n  \"Parameters\": \"Paramètres\",\n  \"Duplicate\": \"Dupliquer\",\n  \"OpeninSidepane\": \"Ouvrir dans le bandeau\",\n  \"ClearOutput\": \"Supprimer le résultat\",\n  \"RemoveNode\": \"Supprimer le noeud\",\n  \"ExpiredURL\": \"URL expirée\",\n  \"NoNodeSelected\": \"Aucun nœud sélectionné pour le moment.\",\n  \"ClickOnNodeToSelectIt\": \"Veuillez cliquer sur un nœud pour le sélectionner.\",\n  \"Field\": \"Champ\",\n  \"DragAndDropNodes\": \"Glissez et déposez les nœuds pour les ajouter.\",\n  \"CopiedToClipboard\": \"Copié dans le presse-papiers.\",\n  \"DocumentToText\": \"Document vers Texte\",\n  \"documentToTextHelp\": \"Convertir un fichier .pdf, .txt, .csv, .json, .html en texte simple\",\n  \"TextToSpeech\": \"Texte vers Audio\",\n  \"textToSpeechHelp\": \"Convertir un texte en fichier audio en utilisant le modèle tts d'OpenAI\",\n  \"error.upload_failed\": \"Echec de l'upload. 
Vérifiez votre configuration pour pouvoir activer l'upload.\",\n  \"InputTextPlaceholder\": \"Entrez votre texte ici\",\n  \"DownloadFile\": \"Télécharger le fichier\",\n  \"FileUploaded\": \"Fichier hébergé\",\n  \"GenericPromptPlaceholder\": \"Entrez vos instructions ici\",\n  \"GenericNegativePromptPlaceholder\": \"Entrez vos instructions négatives ici\",\n  \"EnterCustomName\": \"Nom personalisé :\",\n  \"NodeColor\": \"Couleur du noeud\",\n  \"ChangeName\": \"Changer le nom\",\n  \"RemoveFlow\": \"Supprimer le flow\",\n  \"HideHint\": \"Cacher\",\n  \"TextDocumentHint\": \"Veuillez noter que ce nœud fournit les fichiers uniquement sous forme d'URLs. Pour utiliser un document (.pdf, .txt) en format texte, envisagez d'utiliser le nœud 'Document-en-Texte'.\",\n  \"Display\": \"Affichage\",\n  \"displayHelp\": \"Ce noeud redimensionable permet d'afficher du contenu.\",\n  \"Validate\": \"Valider\",\n  \"AI\": \"IA\",\n  \"Separator\": \"Séparateur\",\n  \"ClaudeAnthropic\": \"Claude\",\n  \"claudeAnthropichHelp\": \"Ce noeud utilise le modèle Claude d'Anthropic pour traiter des instructions textuelles.\",\n  \"noDataAvailableForThisNode\": \"Pas de données disponibles pour ce noeud.\",\n  \"learnMore\": \"En apprendre plus :\",\n  \"Help\": \"Aide\",\n  \"cookiesConsentLabelPlaceholder\": \"Accepter tout\",\n  \"cookiesConsentLabelHelp\": \"Pour certaines pages, nous devons cliquer sur le bouton de consentement aux cookies pour accéder aux données. Cette instruction aide à localiser le bouton.\",\n  \"EditTextContent\": \"Editer le texte\",\n  \"ShowCoordinates\": \"Afficher les coordonnées\",\n  \"ShowNodesConfig\": \"Afficher les configurations des noeuds\",\n  \"DeleteAll\": \"Tout supprimer\",\n  \"DeleteOutputs\": \"Supprimer les résultats\",\n  \"ReplaceText\": \"Remplacer Texte\",\n  \"ReplaceTextInputPlaceholder\": \"Entrez le texte complet où le terme sera remplacé.\",\n  \"ReplaceTextSearchPlaceholder\": \"Entrez le terme ou le motif regex à remplacer.\",\n  \"ReplaceTextReplacePlaceholder\": \"Entrez le terme de remplacement.\",\n  \"replaceTextNodeHelp\": \"Utilisez ce nœud pour rechercher et remplacer un texte spécifique ou des motifs dans l'entrée.\",\n  \"openaio1Help\": \"Des modèles de langage avancés formés pour le raisonnement complexe, excellant dans les défis scientifiques, mathématiques et de programmation.\",\n  \"ContextPlaceholder\": \"Contexte additionnel qui sera utilisé pour répondre au prompt\",\n  \"deepSeekHelp\": \"Accédez aux LLMs DeepSeek via ce noeud.\",\n  \"openRouterHelp\": \"OpenRouter donne accès à plusieurs LLMs et providers. Ce noeud nécessite de fournir une clé API.\",\n  \"Generate Number\": \"Générer un nombre\",\n  \"generateNumberHelp\": \"Génère un nombre aléatoire\",\n  \"httpGetProcessorURLPlaceholder\": \"Enter the URL to request\",\n  \"httpGetProcessorURLDescription\": \"The URL that the HTTP GET request will be sent to.\",\n  \"httpGetProcessorHeadersPlaceholder\": \"Enter headers in JSON format\",\n  \"httpGetProcessorHeadersDescription\": \"The headers to include in the HTTP GET request.\",\n  \"httpGetProcessorHelp\": \"Send an HTTP GET request with the specified headers.\",\n  \"gptImageHelp\": \"Generate or Edit an image using GPT Image\",\n  \"gptImageMaskDescription\": \"You can provide a mask to indicate where the image should be edited. You can use the prompt to describe the full new image, not just the erased area. 
If you provide multiple input images, the mask will be applied to the first image.\",\n  \"dallEDeprecated\": \"Most recent OpenAI models are now available via the new GPT Image node and are superior to DALL-E. DALL-E remains available if needed.\",\n  \"TTSInstructionPlaceholder\": \"Ex : Parlez avec un ton joyeux et positif.\",\n  \"TTSInstructionDescription\": \"Invitez le modèle à contrôler les aspects de la parole (accent, émotion, intonation, vitesse, ton, ...)\",\n  \"PopularModels\": \"Modèles Populaires\",\n  \"removeBackgroundDescription\": \"Supprimez l'arrière-plan d'une image en utilisant l'API StabilityAI.\",\n  \"upscaleFastDescription\": \"Augmenter la résolution d'une image avec l'API StabilityAI\",\n  \"fluxDescription\": \"Générez une image en utilisant le modèle FLUX.\",\n  \"fluxKontextDescription\": \"Un modèle d’édition d’image basé sur le texte à la pointe de la technologie, offrant des résultats de haute qualité, respectant fidèlement les instructions des prompts et garantissant une transformation cohérente des images via le langage naturel.\",\n  \"faceswapDescription\": \"Échangez des visages entre des images de manière transparente, permettant des remplacements faciaux réalistes et précis.\",\n  \"removeBgDescription\": \"Supprimez le fond d'une image avec le modèle lucataco/remove-bg sur Replicate\",\n  \"upscaleDescription\": \"Modèle Real-ESRGAN pour augmenter la résolution d'une image\",\n  \"moondreamDescription\": \"Moondream est un modèle de vision qui répond aux commandes concernant une image donnée.\",\n  \"llamaDescription\": \"Le modèle de langage phare de Meta, doté de 405 milliards de paramètres, affiné pour la complétion des discussions.\",\n  \"imagenDescription\": \"Google Imagen produit des images époustouflantes, détaillées et précises.\",\n  \"recraftSVGDescription\": \"Recraft SVG propose une génération d'images vectorielles avancée.\",\n  \"recraftDescription\": \"Recraft V3  produit des images époustouflantes, détaillées et précises.\",\n  \"video01Description\": \"Video-01 offre une génération dynamique de vidéos.\",\n  \"video01LiveDescription\": \"Video-01-Live offre une génération dynamique de vidéos, idéale pour l'animation 2D.\",\n  \"klingDescription\": \"Kling propose des solutions robustes de génération vidéo et d'animation, disponibles en versions professionnelle et standard.\",\n  \"veo3Description\": \"Modèle phare de Google, Veo 3, de texte à vidéo, avec audio\"\n}\n"
  },
  {
    "path": "packages/ui/public/locales/fr/nodeHelp.json",
    "content": "{\n  \"input-text\": {\n    \"description\": \"Le nœud de texte peut être utilisé pour transférer une entrée de texte à d'autres nœuds.\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/docs/nodes-presentation/Input-nodes\",\n        \"label\": \"Ajouter une entrée dans AI-FLOW\"\n      }\n    ]\n  },\n  \"url_input\": {\n    \"description\": \"Récupère le contenu textuel d'une page web à partir d'une URL.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/web-extractor-demo.gif\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/docs/nodes-presentation/Input-nodes#url\",\n        \"label\": \"Ajouter une entrée dans AI-FLOW\"\n      }\n    ]\n  },\n  \"llm-prompt\": {\n    \"description\": \"Traite les entrées en utilisant les modèles GPT d'OpenAI, qui peuvent comprendre et générer des réponses en fonction du contexte fourni par l'utilisateur.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/gpt-demo.gif\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/fr/blog/summarize-doc-post\",\n        \"label\": \"Comment Résumer des Documents ou Poser des Questions en Utilisant AI-FLOW\"\n      },\n      {\n        \"url\": \"https://docs.ai-flow.net/fr/blog/summarize-ytb-post\",\n        \"label\": \"Comment Résumer une Vidéo YouTube en Utilisant AI-FLOW\"\n      }\n    ]\n  },\n  \"gpt-vision\": {\n    \"description\": \"Utilise GPT-4 Vision pour analyser une image.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/vision-demo.gif\",\n    \"docUrls\": []\n  },\n  \"youtube_transcript_input\": {\n    \"description\": \"Capture les transcriptions directement depuis l'API de YouTube, permettant un traitement ultérieur tel que la traduction, la résumé, ou l'extraction de mots clés.\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/fr/blog/summarize-ytb-post\",\n        \"label\": \"Comment Résumer une Vidéo YouTube en Utilisant AI-FLOW\"\n      }\n    ]\n  },\n  \"dalle-prompt\": {\n    \"description\": \"Permet aux utilisateurs de créer des descriptions détaillées pour générer des images à l'aide du modèle DALL-E 3, combinant créativité et IA pour un contenu visuel personnalisé. Veuillez noter qu'OpenAI ne permet que 5-7 images par minute. N'oubliez pas de sauvegarder votre fichier; OpenAI héberge les fichiers pendant 1H.\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/docs/nodes-presentation/text-to-image-processing\",\n        \"label\": \"Génération d'image dans AI-FLOW\"\n      }\n    ]\n  },\n  \"stable-diffusion-stabilityai-prompt\": {\n    \"description\": \"Modèle Stable Diffusion SDXL par Stability AI, offrant une génération d'images rapide et à faible coût. N'oubliez pas de sauvegarder votre fichier ; les fichiers sont disponibles pendant 12 heures.\",\n    \"docUrls\": []\n  },\n  \"merger-prompt\": {\n    \"description\": \"Utilisé pour combiner deux sorties. Chaque sortie doit être utilisée avec son identifiant spécifique qui sera remplacé dynamiquement, par exemple ${input-1} et ${input-2}. 
Utilisez les boutons en haut du nœud pour insérer automatiquement les identifiants.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/merge-demo.gif\",\n    \"docUrls\": []\n  },\n  \"claude-anthropic-processor\": {\n    \"description\": \"Traite les entrées en utilisant Claude 3 par Anthropic, qui peut comprendre et générer des réponses en fonction du contexte fourni par l'utilisateur.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/claude-demo.gif\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/fr/blog/anthropic-claude-api\",\n        \"label\": \"Accédez à Claude 3 via l'API d'Anthropic grâce à AI-FLOW\"\n      }\n    ]\n  },\n  \"document-to-text-processor\": {\n    \"description\": \"Convertit divers formats de documents en texte brut, permettant l'extraction de texte pour le traitement et l'analyse. Ce nœud prend en charge les formats .pdf, .txt, .json, .html, .csv.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/document-to-text-demo.gif\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/fr/blog/summarize-doc-post\",\n        \"label\": \"Comment Résumer des Documents ou Poser des Questions en Utilisant AI-FLOW\"\n      }\n    ]\n  },\n  \"openai-text-to-speech-processor\": {\n    \"description\": \"Convertit le texte en parole naturelle à l'aide des modèles avancés de synthèse vocale d'OpenAI, facilitant l'accessibilité et les applications multimédia. N'oubliez pas de sauvegarder votre fichier, OpenAI héberge le fichier pendant 1H.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/tts-demo.gif\",\n    \"docUrls\": []\n  },\n  \"stabilityai-generic-processor\": {\n    \"description\": \"Un nœud polyvalent capable d'interfacer avec l'API de StabilityAI pour effectuer diverses tâches telles que supprimer l'arrière-plan, rechercher et remplacer, et plus encore. N'oubliez pas de sauvegarder votre fichier ; les fichiers sont disponibles pendant 12 heures.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/stabilityai-demo.gif\",\n    \"docUrls\": []\n  },\n  \"stabilityai-stable-diffusion-3-processor\": {\n    \"description\": \"Intègre les dernières capacités de Stable Diffusion 3 pour une génération d'images de haute qualité. N'oubliez pas de sauvegarder votre fichier ; les fichiers sont disponibles pendant 12 heures.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/sd3-demo.gif\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/fr/blog/stable-diffusion-3-api\",\n        \"label\": \"Access Stable Diffusion 3 API through AI-FLOW\"\n      }\n    ]\n  },\n  \"file\": {\n    \"description\": \"Permet d'héberger un fichier et de retourner une URL permettant d'y accéder. Supporte divers types de fichiers pour une utilisation au sein du système. Ce nœud n'extrait pas le contenu des fichiers. Les fichiers sont disponibles pendant 12H.\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/fr/blog/summarize-doc-post\",\n        \"label\": \"Comment Résumer des Documents ou Poser des Questions en Utilisant AI-FLOW\"\n      }\n    ]\n  },\n  \"ai-data-splitter\": {\n    \"description\": \"Divise une entrée en plusieurs sorties en utilisant deux modes disponibles : mode AI et mode Manuel. 
En mode Manuel, vous devez spécifier un séparateur. Cela peut être utile pour générer du contenu basé sur une liste d'idées ou de concepts. Vous pouvez spécifier un nombre estimé de sorties pour préparer votre flux en conséquence.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/splitter-demo.gif\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/docs/nodes-presentation/split-input\",\n        \"label\": \"Séparer une entrée avec l'IA\"\n      }\n    ]\n  },\n  \"replicate\": {\n    \"description\": \"Un nœud polyvalent capable d'interfacer avec l'API Replicate. Explorez divers modèles pour la génération de texte, d'images, d'audio, de modèles 3D. Les fichiers en sortie sont accessibles 12H.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/replicate-demo.gif\",\n    \"docUrls\": [\n      {\n        \"url\": \"https://docs.ai-flow.net/docs/nodes-presentation/replicate-node\",\n        \"label\": \"Accéder a divers modèles IA via Replicate\"\n      }\n    ]\n  },\n  \"transition\": {\n    \"description\": \"Utilisez ce nœud pour organiser votre flux. Le nœud de transition ne transfère la sortie qu'à un autre nœud.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/transition-demo.gif\",\n    \"docUrls\": []\n  },\n  \"display\": {\n    \"description\": \"Ce nœud redimensionnable peut être utilisé pour afficher chaque sortie à la taille que vous souhaitez. Vous pouvez également l'utiliser comme un nœud intermédiaire.\",\n    \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/node-help-img/display-demo.gif\",\n    \"docUrls\": []\n  },\n  \"deepseek-processor\": {\n    \"description\": \"Le nœud DeepSeek est conçu pour interagir avec l'API DeepSeek, vous permettant d'accéder à différents modèles tels que V3 et R1.\"\n  },\n  \"openrouter-processor\": {\n    \"description\": \"OpenRouter donne accès à plusieurs LLMs et providers. Ce noeud nécessite de fournir une clé API.\"\n  },\n  \"generate-number-processor\": {\n    \"description\": \"Le nœud « Générer un nombre » est conçu pour générer un nombre aléatoire dans une plage spécifiée.\"\n  },\n  \"http-get-processor\": {\n    \"description\": \"Ce noeud permet d'envoyer une requête HTTP GET sur l'URL spécifiée, avec les headers souhaités.\"\n  },\n  \"gpt-image-processor\": {\n    \"description\": \"Le noeud GPT Image vous permet de générer et de modifier des images à partir d'une instruction et d'images de référence. \"\n  }\n}\n"
  },
  {
    "path": "packages/ui/public/locales/fr/tips.json",
    "content": "{\n  \"tips\": [\n    {\n      \"title\": \"Bien débuter sur AI-Flow\",\n      \"description\": \"Ce guide vous montrera l'essentiel pour bien débuter.\",\n      \"url\": \"https://docs.ai-flow.net/fr/blog/getting-started-with-ai-flow/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/blog-card-images/app-overview-r.png\",\n      \"newFeature\": true,\n      \"timeEstimated\": \"4min de lecture\"\n    },\n    {\n      \"title\": \"Utilisation du noeud Replicate\",\n      \"description\": \"Intégrez l'API Replicate avec AI-FLOW.\",\n      \"url\": \"https://docs.ai-flow.net/fr/blog/replicate-node/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/page-images/replicate-node/model-popup.png\",\n      \"newFeature\": true,\n      \"timeEstimated\": \"2min de lecture\"\n    },\n    {\n      \"title\": \"Comment utiliser les sous-flow\",\n      \"description\": \"Cette fonctionnalité vous permet de créer des nœuds personnalisés basés sur vos flows.\",\n      \"url\": \"https://docs.ai-flow.net/docs/pro-features/api-builder/subflow/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/page-images/api-builder/subflow-preview-3.png\",\n      \"newFeature\": true,\n      \"timeEstimated\": \"2min de lecture\"\n    },\n    {\n      \"title\": \"Comment créer des boucles\",\n      \"description\": \"Cette fonctionnalité vous permet d'itérer sur un sous-flow.\",\n      \"url\": \"https://docs.ai-flow.net/docs/pro-features/api-builder/subflow-loop/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/page-images/api-builder/subflow-loop-4.png\",\n      \"newFeature\": true,\n      \"timeEstimated\": \"4min read\"\n    },\n    {\n      \"title\": \"StabilityAI avec AI-FLOW\",\n      \"description\": \"Cette intégration offre une gamme polyvalente de capacités de traitement d'image.\",\n      \"url\": \"https://docs.ai-flow.net/fr/blog/stabilityai-api/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/blog-card-images/blog-sd3.png\",\n      \"timeEstimated\": \"3min de lecture\"\n    },\n    {\n      \"title\": \"Vue API Builder\",\n      \"description\": \"Cette vue vous permet de surveiller l'état actuel de l'API, d'apprendre à utiliser votre API, et plus encore.\",\n      \"url\": \"https://docs.ai-flow.net/docs/pro-features/api-builder/builder-view/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/blog-card-images/blog-api-builder-1.png\",\n      \"newFeature\": true,\n      \"timeEstimated\": \"3min de lecture\"\n    },\n    {\n      \"title\": \"Ajouter des Webhooks à vos Flows\",\n      \"description\": \"Le nœud Webhook est un outil puissant qui vous permet d'envoyer des sorties sous forme de webhooks.\",\n      \"url\": \"https://docs.ai-flow.net/docs/pro-features/api-builder/webhooks/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/blog-card-images/blog-api-builder-1.png\",\n      \"newFeature\": true,\n      \"timeEstimated\": \"2min de lecture\"\n    },\n    {\n      \"title\": \"Lancer un Flow via l'API\",\n      \"description\": \"Découvrez comment créer et gérer une API autour d'un Flow donné pour l'intégrer parfaitement à d'autres outils.\",\n      \"url\": \"https://docs.ai-flow.net/docs/category/api-builder/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/blog-card-images/blog-api-builder-1.png\",\n      \"newFeature\": true,\n      \"timeEstimated\": \"2min de lecture\"\n    },\n    {\n      \"title\": \"Documentation Complète\",\n      \"description\": \"La page principale de la documentation AI-FLOW, accessible via https://docs.ai-flow.net/\",\n  
    \"url\": \"https://docs.ai-flow.net/\",\n      \"imgUrl\": \"https://docs.ai-flow.net/img/ai-flow-social-card.png\"\n    }\n  ],\n  \"docAvailable\": \"La documentation complète est disponible ici : \",\n  \"tipsSection\": \"Astuces\"\n}\n"
  },
  {
    "path": "packages/ui/public/locales/fr/tour.json",
    "content": "{\n    \"firstTimeHere\": \"Première visite ?\",\n    \"discoverApp\": \"Découvrez des astuces pour profiter pleinement de notre application en moins de 15 secondes !\",\n    \"iKnowTheApp\": \"Je connais l'application\",\n    \"letsStart\": \"Commençons !\",\n    \"welcomeToAIFLOW\": \"Bienvenue sur AI-FLOW\",\n    \"addNodesWithDragAndDrop\": \"Ajoutez facilement des nœuds à votre canevas avec un simple glisser-déposer.\",\n    \"dragAndDrop\": \"Glisser-Déposer\",\n    \"addingNodes\": \"Ajouter un Nœud\",\n    \"runningANode\": \"Exécuter un Nœud\",\n    \"connectingNodes\": \"Connecter des Nœuds\",\n    \"runEverything\": \"Tout Exécuter\",\n    \"exploringMoreModels\": \"Explorer Plus de Modèles\",\n    \"youveGotTheBasics\": \"Vous avez les bases !\",\n    \"executeSingleNode\": \"Vous pouvez exécuter un seul nœud en cliquant sur le bouton d'exécution.\",\n    \"runNode\": \"Exécuter un Nœud\",\n    \"handlesExplanation\": \"Les poignées bleues sont pour les entrées, et les poignées oranges pour les sorties. Pour les Nœuds GPT, les entrées ajoutent du contexte à vos invites.\",\n    \"connectNodes\": \"Connecter les Nœuds\",\n    \"executeAllNodesDescription\": \"Ce bouton exécute tous les nœuds dans votre flux, en écrasant les sorties précédentes.\",\n    \"replicateNodeDescription\": \"Étendez vos capacités avec le Nœud Replicate, offrant un accès à une large gamme de modèles pour des cas d'usage avancés.\",\n    \"replicateNode\": \"Replicate\",\n    \"checkHelpForAdvanced\": \"Pour des cas d'usage avancés, consultez la section Aide en bas à gauche.\",\n    \"configDescription\": \"Vous pouvez ajouter vos clés APIs via ce menu pour utiliser l'application.\",\n    \"config\": \"Configuration\"\n  }\n  "
  },
  {
    "path": "packages/ui/public/locales/fr/version.json",
    "content": "{\n  \"versionInfo\": {\n    \"versionNumber\": \"v0.7.3\",\n    \"description\": \"Voici les nouveautés de la v0.7.3\"\n  },\n  \"features\": [\n    {\n      \"title\": \"Amélioration de l'extracteur Web\",\n      \"description\": \"Vous pouvez désormais mieux cibler l'extraction de données.\"\n    },\n    {\n      \"title\": \"Nouvelle action : Aide\",\n      \"description\": \"Chaque noeud possède désormais une action 'Aide' qui vous permet de découvrir comment l'utiliser.\"\n    }\n  ],\n  \"articles\": [\n    {\n      \"title\": \"Générer des Personnages Cohérents avec l'IA - Partie 1\",\n      \"url\": \"https://docs.ai-flow.net/fr/blog/generate-consistent-characters-ai/\"\n    },\n    {\n      \"title\": \"Comment automatiser la création d'histoires et d'images à l'aide de l'IA - Partie 2\",\n      \"url\": \"https://docs.ai-flow.net/fr/blog/automate-story-creation-2/\"\n    },\n    {\n      \"title\": \"Comment Utiliser des Documents dans AI-FLOW\",\n      \"url\": \"https://docs.ai-flow.net/fr/blog/summarize-doc-post/\"\n    }\n  ],\n  \"imageUrl\": \"https://ai-flow-public-assets.s3.eu-west-3.amazonaws.com/gif-v0.7.3.gif\",\n  \"newVersionAvailable\": \"Une nouvelle version est maintenant disponible !\",\n  \"newVersionDefaultMessage\": \"De nouvelles fonctionnalités et des corrections de bugs sont disponibles, pour y accéder, veuillez rafraîchir votre page.\",\n  \"refresh\": \"Rafraîchir\"\n}\n"
  },
  {
    "path": "packages/ui/public/robots.txt",
    "content": "# https://www.robotstxt.org/robotstxt.html\nUser-agent: *\nDisallow:\n"
  },
  {
    "path": "packages/ui/public/samples/intro.json",
    "content": "[\n  {\n    \"inputs\": [],\n    \"name\": \"3jexlwros#llm-prompt\",\n    \"processorType\": \"llm-prompt\",\n    \"model\": \"gpt-4o\",\n    \"x\": -1130.048690482733,\n    \"y\": -885.266525660136\n  }\n]"
  },
  {
    "path": "packages/ui/public/site.webmanifest",
    "content": "{\"name\":\"\",\"short_name\":\"\",\"icons\":[{\"src\":\"/android-chrome-192x192.png\",\"sizes\":\"192x192\",\"type\":\"image/png\"},{\"src\":\"/android-chrome-512x512.png\",\"sizes\":\"512x512\",\"type\":\"image/png\"}],\"theme_color\":\"#ffffff\",\"background_color\":\"#ffffff\",\"display\":\"standalone\"}"
  },
  {
    "path": "packages/ui/src/App.tsx",
    "content": "import { useContext, useEffect, useMemo, useState } from \"react\";\nimport FlowTabs, { FlowTab } from \"./layout/main-layout/AppLayout\";\nimport { ThemeContext } from \"./providers/ThemeProvider\";\nimport { DndProvider } from \"react-dnd\";\nimport { MultiBackend } from \"react-dnd-multi-backend\";\nimport { HTML5toTouch } from \"rdndmb-html5-to-touch\";\nimport { AppTour } from \"./components/tour/AppTour\";\nimport { VisibilityProvider } from \"./providers/VisibilityProvider\";\nimport { Tooltip } from \"react-tooltip\";\nimport { loadExtensions } from \"./nodes-configuration/nodeConfig\";\nimport { loadAllNodesTypes } from \"./utils/mappings\";\nimport { loadParameters } from \"./components/popups/config-popup/parameters\";\nimport { SocketProvider } from \"./providers/SocketProvider\";\nimport { getAllTabs } from \"./services/tabStorage\";\nimport { convertJsonToFlow } from \"./utils/flowUtils\";\nimport { UserMessage } from \"./components/popups/UserMessagePopup\";\nimport { useTranslation } from \"react-i18next\";\n\ninterface AppProps {\n  onLoadingComplete: () => void;\n}\nconst App = ({ onLoadingComplete }: AppProps) => {\n  const { dark } = useContext(ThemeContext);\n  const [runTour, setRunTour] = useState(false);\n  const [configLoaded, setConfigLoaded] = useState(false);\n  const [showApp, setShowApp] = useState(false);\n  const [allTabs, setAllTabs] = useState<FlowTab[]>([]);\n  const [fixedAlert, setFixedAlert] = useState<UserMessage | null>(null);\n  const { t } = useTranslation(\"config\");\n\n  const [appMounted, setComponentsMounted] = useState(false);\n\n  useEffect(() => {\n    if (dark) {\n      document.body.classList.add(\"dark-theme\");\n    } else {\n      document.body.classList.remove(\"dark-theme\");\n    }\n  }, [dark]);\n\n  useEffect(() => {\n    loadAppData();\n  }, []);\n\n  useEffect(() => {\n    if (showApp) {\n      setComponentsMounted(true);\n    }\n  }, [showApp]);\n\n  const loadIntroFile = async () => {\n    const firstVisit = localStorage.getItem(\"firstVisit\") !== \"false\";\n    const savedFlowTabs = localStorage.getItem(\"flowTabs\");\n\n    if (firstVisit && !savedFlowTabs) {\n      try {\n        const response = await fetch(\"/samples/intro.json\");\n        if (!response.ok) {\n          throw new Error(\"Failed to fetch intro file\");\n        }\n        const jsonData = await response.json();\n        const defaultTab: FlowTab = convertJsonToFlow(jsonData);\n\n        localStorage.setItem(\"firstVisit\", \"false\");\n\n        return [defaultTab];\n      } catch (error) {\n        console.error(\"Cannot load sample file :\", error);\n      }\n    }\n\n    return [];\n  };\n\n  async function loadAppData() {\n    try {\n      await loadParameters();\n      await loadExtensions();\n      const defaultTabs = await loadIntroFile();\n      const allTabs = await getAllTabs();\n      if (allTabs.length === 0) {\n        allTabs.push(...defaultTabs);\n      }\n      setAllTabs(allTabs);\n    } catch (error) {\n      console.error(\"Failed to load app data:\", error);\n      console.error(\"Default parameters will be loaded\");\n      setFixedAlert({\n        content: t(\"incompleteLoadingPleaseRestart\"),\n      });\n    } finally {\n      loadAllNodesTypes();\n      setConfigLoaded(true);\n      setShowApp(true);\n      onLoadingComplete();\n    }\n  }\n\n  return (\n    <>\n      {configLoaded && (\n        <div\n          className={`${showApp ? 
\"opacity-100\" : \"opacity-0\"} transition-opacity duration-300 ease-in-out`}\n          id=\"main-content\"\n        >\n          <VisibilityProvider>\n            {fixedAlert && (\n              <div className=\"absolute bottom-0 mb-5 flex w-full justify-center\">\n                <div className=\"rounded border-l-4 border-yellow-500 bg-yellow-100 px-4 py-2 text-yellow-800\">\n                  {fixedAlert.content}\n                </div>\n              </div>\n            )}\n            <DndProvider backend={MultiBackend} options={HTML5toTouch}>\n              <SocketProvider>\n                <FlowTabs tabs={allTabs} />\n              </SocketProvider>\n\n              <Tooltip\n                id={`app-tooltip`}\n                style={{ zIndex: 100 }}\n                delayShow={500}\n              />\n              {appMounted && runTour && (\n                <AppTour run={runTour} setRun={setRunTour} />\n              )}\n            </DndProvider>\n          </VisibilityProvider>\n        </div>\n      )}\n    </>\n  );\n};\n\nexport default App;\n"
  },
  {
    "path": "packages/ui/src/Main.tsx",
    "content": "import React, { useState } from \"react\";\nimport App from \"./App\";\nimport LoadingScreen from \"./components/LoadingScreen\";\n\nconst Main = () => {\n  const [initialLoading, setInitialLoading] = useState(true);\n\n  const handleLoadingComplete = () => {\n    setInitialLoading(false);\n  };\n\n  return (\n    <>\n      {initialLoading && <LoadingScreen />}\n      <App onLoadingComplete={handleLoadingComplete} />\n    </>\n  );\n};\n\nexport default Main;\n"
  },
  {
    "path": "packages/ui/src/api/cache/cacheManager.ts",
    "content": "import { isCacheEnabled } from \"../../config/config\";\n\ninterface CacheItem<T> {\n  data: T;\n  ttl?: number;\n  timestamp: number;\n}\n\nconst DEFAULT_TTL = 3600 * 1000; // 1 hour\nconst DEFAULT_NB_ELEMENTS_TO_REMOVE = 5;\nconst DISPENSABLE_CACHE_PREFIX = \"dispensable_cache\";\n\nexport function generateCacheKey(functionName: string, ...args: any[]): string {\n  const argsKey = JSON.stringify(args);\n  return `${functionName}:${argsKey}`;\n}\n\nexport function setCache(key: string, data: any, ttl?: number) {\n  if (!isCacheEnabled()) return;\n  const item = {\n    data,\n    ttl,\n    timestamp: Date.now(),\n  };\n  try {\n    localStorage.setItem(key, JSON.stringify(item));\n  } catch (err: any) {\n    if (err.code == 22 || err.code == 1014) {\n      clearOldCacheItems();\n      localStorage.setItem(key, JSON.stringify(item));\n    } else {\n      throw new Error(err.message);\n    }\n  }\n}\n\nexport function getCache<T>(key: string): T | undefined {\n  if (!isCacheEnabled()) return;\n\n  const itemStr = localStorage.getItem(key);\n  if (!itemStr) return;\n\n  const item = JSON.parse(itemStr) as CacheItem<T>;\n  const now = Date.now();\n\n  const ttl = item.ttl ?? DEFAULT_TTL;\n\n  if (now - item.timestamp > ttl) {\n    localStorage.removeItem(key);\n    return;\n  }\n\n  return item.data;\n}\n\nfunction clearOldCacheItems() {\n  const keys = Object.keys(localStorage);\n  const items = keys\n    .filter((key) => key.includes(DISPENSABLE_CACHE_PREFIX))\n    .map((key) => ({\n      key,\n      data: JSON.parse(localStorage.getItem(key) ?? \"\"),\n    }))\n    .sort((a, b) => a.data.timestamp - b.data.timestamp);\n\n  items.forEach((item, index) => {\n    if (index <= DEFAULT_NB_ELEMENTS_TO_REMOVE) {\n      localStorage.removeItem(item.key);\n    }\n  });\n}\n"
  },
  {
    "path": "packages/ui/src/api/cache/withCache.ts",
    "content": "import { generateCacheKey, getCache, setCache } from \"./cacheManager\";\n\ntype AsyncFunction<T extends any[], N> = (...args: T) => Promise<N>;\n\ntype Params<T> = T extends (...args: infer U) => any ? U : never;\n\ninterface CacheOptions {\n  ttl: number;\n  key?: string;\n}\n\nasync function withCache<T extends any[], N>(\n  fn: AsyncFunction<T, N>,\n  options: CacheOptions,\n  ...args: Params<AsyncFunction<T, N>>\n): Promise<N>;\n\nasync function withCache<T extends any[], N>(\n  fn: AsyncFunction<T, N>,\n  ...args: Params<AsyncFunction<T, N>>\n): Promise<N>;\n\nasync function withCache<T extends any[], N>(\n  fn: AsyncFunction<T, N>,\n  ...args:\n    | Params<AsyncFunction<T, N>>\n    | [CacheOptions, ...Params<AsyncFunction<T, N>>]\n): Promise<N> {\n  let options: CacheOptions | undefined = undefined;\n  let parameters: Params<AsyncFunction<T, N>>;\n\n  if (args.length > 0 && typeof args[0] === \"object\" && \"ttl\" in args[0]) {\n    options = args.shift() as CacheOptions;\n    parameters = args as Params<AsyncFunction<T, N>>;\n  } else {\n    parameters = args as Params<AsyncFunction<T, N>>;\n  }\n\n  let cacheKey = options?.key;\n\n  if (cacheKey === undefined) {\n    cacheKey = generateCacheKey(fn.name, ...parameters);\n  }\n\n  let cachedResult = getCache<N>(cacheKey);\n\n  if (cachedResult !== undefined) {\n    return cachedResult;\n  }\n\n  const result = await fn(...parameters);\n  setCache(cacheKey, result);\n  return result;\n}\n\nexport default withCache;\n"
  },
  {
    "path": "packages/ui/src/api/client.ts",
    "content": "import axios from \"axios\";\nimport { getRestApiUrl } from \"../config/config\";\n\nconst apiClient = axios.create({\n  baseURL: getRestApiUrl(),\n  headers: {\n    \"Content-type\": \"application/json\",\n  },\n});\n\nexport default apiClient;\n"
  },
  {
    "path": "packages/ui/src/api/nodes.ts",
    "content": "import client from \"./client\";\n\nexport async function getNodeExtensions() {\n  let response;\n  try {\n    response = await client.get(`/node/extensions`);\n  } catch (error) {\n    console.error(\"Error fetching configuration:\", error);\n    throw error;\n  }\n  return response.data?.extensions;\n}\n\nexport async function getDynamicConfig(processorType: string, data: any) {\n  let response;\n  const dataToSend = {\n    processorType,\n    data,\n  };\n  try {\n    response = await client.post(`/node/extensions/dynamic`, dataToSend);\n  } catch (error) {\n    console.error(\"Error fetching configuration:\", error);\n    throw error;\n  }\n  return response.data;\n}\n\nexport async function getModels(providerName: string) {\n  let response;\n  try {\n    response = await client.get(`/node/openapi/${providerName}/models`);\n  } catch (error) {\n    console.error(\"Error fetching configuration:\", error);\n    throw error;\n  }\n  return response.data;\n}\n\nexport async function getModelConfig(providerName: string, id: string) {\n  let response;\n  try {\n    response = await client.get(`/node/openapi/${providerName}/config/${id}`);\n    return response.data;\n  } catch (error) {\n    console.error(\"Error fetching configuration:\", error);\n    throw error;\n  }\n}\n"
  },
  {
    "path": "packages/ui/src/api/parameters.ts",
    "content": "import client from \"./client\";\n\nexport async function getParameters() {\n  let response;\n  try {\n    response = await client.get(`/parameters`);\n  } catch (error) {\n    console.error(\"Error fetching configuration:\", error);\n    throw error;\n  }\n  return response.data;\n}\n"
  },
  {
    "path": "packages/ui/src/api/replicateModels.ts",
    "content": "import { Config } from \"../utils/openAPIUtils\";\nimport client from \"./client\";\n\ninterface GetCollectionModelsResponse {\n  models: any;\n  cursor: string;\n}\n\nexport async function getCollections() {\n  let response;\n  try {\n    response = await client.get(\"/node/collections\");\n  } catch (error) {\n    console.error(\"Error fetching configuration:\", error);\n    throw error;\n  }\n  return response.data.results;\n}\n\nexport async function getPublicModels(cursor?: string) {\n  let response;\n  try {\n    response = await client.get(\"/node/models\", {\n      params: {\n        cursor: cursor,\n      },\n    });\n  } catch (error) {\n    console.error(\"Error fetching configuration:\", error);\n    throw error;\n  }\n  const newCursor = response.data?.public.next\n    ? response.data?.public.next.split(\"?cursor=\")[1]\n    : \"\";\n\n  return {\n    models: response.data.public.results,\n    cursor: newCursor,\n  } as GetCollectionModelsResponse;\n}\n\nexport async function getHighlightedModels() {\n  let response;\n  try {\n    response = await client.get(\"/node/models\");\n  } catch (error) {\n    console.error(\"Error fetching configuration:\", error);\n    throw error;\n  }\n  return response.data.highlighted;\n}\n\nexport async function getCollectionModels(\n  collectionName: string,\n  cursor?: string,\n) {\n  let response;\n  try {\n    response = await client.get(`/node/collections/${collectionName}`, {\n      params: {\n        cursor: cursor,\n      },\n    });\n  } catch (error) {\n    console.error(\"Error fetching configuration:\", error);\n    throw error;\n  }\n\n  const models = response.data.models;\n  const newCursor = response.data?.next\n    ? response.data?.next.split(\"?cursor=\")[1]\n    : \"\";\n\n  return { models, cursor: newCursor } as GetCollectionModelsResponse;\n}\n\nexport async function getModelConfig(model: string, processorType: string) {\n  let response;\n  try {\n    response = await client.get(`/node/replicate/config/${model}`, {\n      params: {\n        processorType: processorType,\n      },\n    });\n    return response.data as Config;\n  } catch (error) {\n    console.error(\"Error fetching configuration:\", error);\n    throw error;\n  }\n}\n"
  },
  {
    "path": "packages/ui/src/api/uploadFile.ts",
    "content": "import axios, { AxiosProgressEvent } from \"axios\";\nimport client from \"./client\";\n\nexport async function getUploadAndDownloadUrl(filename?: string) {\n  try {\n    const data = { filename };\n    const response = await client.get(\"/upload\", { params: data });\n    return response.data;\n  } catch (error) {\n    console.error(\"Error while trying to get upload link :\", error);\n    throw error;\n  }\n}\n\nexport async function uploadWithS3Link(s3UploadData: any, file: File) {\n  const config = {\n    onUploadProgress: (progressEvent: AxiosProgressEvent) => {\n      if (!progressEvent.total) return;\n\n      const percentCompleted = Math.round(\n        (progressEvent.loaded * 100) / progressEvent.total,\n      );\n\n      console.log(`Upload progress: ${percentCompleted}%`);\n    },\n  };\n\n  try {\n    const url = s3UploadData.url;\n    const fields = s3UploadData.fields;\n\n    const formData = new FormData();\n\n    Object.keys(fields).forEach((key) => {\n      formData.append(key, fields[key]);\n    });\n\n    formData.append(\"file\", file);\n\n    await axios.post(url, formData, config);\n  } catch (error) {\n    console.error(\"Error uploading file :\", error);\n    throw error;\n  }\n}\n"
  },
  {
    "path": "packages/ui/src/components/Flow.tsx",
    "content": "import {\n  useState,\n  useCallback,\n  useMemo,\n  useEffect,\n  useRef,\n  useImperativeHandle,\n  Ref,\n  forwardRef,\n} from \"react\";\nimport {\n  Node,\n  Edge,\n  OnNodesChange,\n  OnEdgesChange,\n  OnConnect,\n  applyNodeChanges,\n  applyEdgeChanges,\n  addEdge,\n  Connection,\n  ReactFlowInstance,\n} from \"reactflow\";\nimport \"reactflow/dist/style.css\";\nimport SideBar from \"./bars/Sidebar\";\nimport { NodeProvider } from \"../providers/NodeProvider\";\nimport { MiniMapStyled, ReactFlowStyled } from \"./nodes/Node.styles\";\nimport UserMessagePopup, {\n  MessageType,\n  UserMessage,\n} from \"./popups/UserMessagePopup\";\nimport { getAllNodeWithEaseOut } from \"../utils/mappings\";\nimport { useDrop } from \"react-dnd\";\nimport { useSocketListeners } from \"../hooks/useFlowSocketListeners\";\nimport ButtonEdge from \"./edges/buttonEdge\";\nimport { createNewNode } from \"../utils/nodeUtils\";\nimport {\n  FlowOnCurrentNodeRunningEventData,\n  FlowOnErrorEventData,\n  FlowOnProgressEventData,\n} from \"../sockets/flowEventTypes\";\nimport { useVisibility } from \"../providers/VisibilityProvider\";\nimport { FlowMetadata } from \"../layout/main-layout/AppLayout\";\n\nexport interface FlowProps {\n  nodes: Node[];\n  edges: Edge[];\n  metadata: FlowMetadata;\n  onFlowChange: (nodes: Node[], edges: Edge[], metadata: FlowMetadata) => void;\n  onUpdateMetadata?: (metadata: FlowMetadata) => void;\n  showOnlyOutput?: boolean;\n  isRunning: boolean;\n  onRunChange: (isRunning: boolean) => void;\n  onLoaded: () => void;\n}\n\nconst Flow = forwardRef((props: FlowProps, ref) => {\n  const reactFlowWrapper = useRef(null);\n\n  function getAllEdgeTypes() {\n    return { buttonedge: ButtonEdge };\n  }\n  const nodeTypes = useMemo(() => getAllNodeWithEaseOut(), []);\n  const edgeTypes = useMemo(() => getAllEdgeTypes(), []);\n\n  const [reactFlowInstance, setReactFlowInstance] = useState<\n    ReactFlowInstance | undefined\n  >(undefined);\n  const [nodes, setNodes] = useState<Node[]>(props.nodes);\n  const [edges, setEdges] = useState<Edge[]>(props.edges);\n\n  const [isPopupOpen, setIsPopupOpen] = useState<boolean>(false);\n  const [currentUserMessage, setCurrentUserMessage] = useState<UserMessage>({\n    content: \"\",\n  });\n  const [currentNodesRunning, setCurrentNodesRunning] = useState<string[]>([]);\n  const [errorCount, setErrorCount] = useState<number>(0);\n\n  const { getElement } = useVisibility();\n  const minimap = getElement(\"minimap\");\n\n  useEffect(() => {\n    const areNodesRunning = currentNodesRunning.length > 0;\n    if (props.isRunning !== areNodesRunning) {\n      props.onRunChange(areNodesRunning);\n    }\n  }, [currentNodesRunning]);\n\n  const [{ isOver }, dropRef] = useDrop({\n    accept: \"NODE\",\n    drop: (item, monitor) => {\n      onDrop(item, monitor);\n    },\n    collect: (monitor) => ({\n      isOver: monitor.isOver(),\n    }),\n  });\n\n  const onInit = (reactFlowInstance: ReactFlowInstance) => {\n    setReactFlowInstance(reactFlowInstance);\n  };\n\n  const addNode = (type: string, data?: any) => {\n    const reactFlowBounds = (\n      reactFlowWrapper.current as any\n    ).getBoundingClientRect();\n\n    const additionnalData = data?.additionnalData;\n    const additionnalConfig = data?.additionnalConfig;\n\n    if (typeof type === \"undefined\" || !type) {\n      return;\n    }\n\n    const position = (reactFlowInstance as any).project({\n      x: reactFlowBounds.width / 2 - 100,\n      y: reactFlowBounds.height / 2 - 100,\n    
});\n\n    const newNode = createNewNode(\n      type,\n      position,\n      additionnalData,\n      additionnalConfig,\n    );\n\n    setNodes((nds) => nds.concat(newNode));\n  };\n\n  useImperativeHandle(ref, () => ({\n    addNode,\n  }));\n\n  useSocketListeners<\n    FlowOnProgressEventData,\n    FlowOnErrorEventData,\n    FlowOnProgressEventData\n  >(onProgress, onError, () => {}, onCurrentNodeRunning);\n\n  function onProgress(data: FlowOnProgressEventData) {\n    const nodeToUpdate = data.instanceName;\n    const output = data.output;\n\n    setCurrentNodesRunning((previous) => {\n      return previous.filter((node) => node != nodeToUpdate);\n    });\n\n    if (nodeToUpdate) {\n      setNodes((prevNodes) => {\n        return [\n          ...prevNodes.map((node: Node) => {\n            if (node.data.name == nodeToUpdate) {\n              node.data = {\n                ...node.data,\n                outputData: output,\n                lastRun: new Date(),\n                isDone: data.isDone,\n              };\n            }\n\n            return node;\n          }),\n        ];\n      });\n    }\n  }\n\n  function onError(data: FlowOnErrorEventData) {\n    setCurrentNodesRunning((previous) => {\n      return previous.filter((node) => node != data.instanceName);\n    });\n    setCurrentUserMessage({\n      content: data.error,\n      nodeId: data.instanceName ?? data.nodeName,\n      type: MessageType.Error,\n    });\n    setErrorCount((prevErrorCount) => prevErrorCount + 1);\n    setIsPopupOpen(true);\n  }\n\n  function onCurrentNodeRunning(data: FlowOnCurrentNodeRunningEventData) {\n    setCurrentNodesRunning((previous) => {\n      return [...previous, data.instanceName];\n    });\n  }\n\n  useEffect(() => {\n    if (props.onFlowChange) {\n      props.onFlowChange(nodes, edges, props.metadata);\n    }\n  }, [nodes, edges]);\n\n  const onNodesChange: OnNodesChange = useCallback(\n    (changes) => setNodes((nds) => applyNodeChanges(changes, nds)),\n    [setNodes],\n  );\n  const onEdgesChange: OnEdgesChange = useCallback(\n    (changes) => setEdges((eds) => applyEdgeChanges(changes, eds)),\n    [setEdges],\n  );\n\n  const onConnect: OnConnect = useCallback(\n    (connection) =>\n      setEdges((eds) => {\n        if (\n          isHandleAlreadyTargeted(connection, eds) ||\n          isSameNodeTargeted(connection)\n        ) {\n          return eds;\n        }\n        return addEdge(\n          {\n            ...connection,\n            type: \"buttonedge\",\n            markerEnd: \"arrowClosed\",\n          },\n          eds,\n        );\n      }),\n    [setEdges],\n  );\n\n  const onDragOver = useCallback((event: any) => {\n    event.preventDefault();\n    if (!!event.dataTransfert) {\n      event.dataTransfer.dropEffect = \"move\";\n    }\n  }, []);\n\n  const onDrop = useCallback(\n    (item: any, monitor?: any) => {\n      if (\n        !!reactFlowWrapper &&\n        !!reactFlowInstance &&\n        !!reactFlowWrapper.current\n      ) {\n        const reactFlowBounds = (\n          reactFlowWrapper.current as any\n        ).getBoundingClientRect();\n        const type = item.nodeType;\n        const additionnalData = item.additionnalData;\n        const additionnalConfig = item.additionnalConfig;\n\n        // check if the dropped element is valid\n        if (typeof type === \"undefined\" || !type) {\n          return;\n        }\n\n        const { x, y } = monitor.getClientOffset();\n\n        const position = (reactFlowInstance as any).project({\n          x: x - 
reactFlowBounds.left,\n          y: y - reactFlowBounds.top,\n        });\n\n        const newNode = createNewNode(\n          type,\n          position,\n          additionnalData,\n          additionnalConfig,\n        );\n        setNodes((nds) => nds.concat(newNode));\n      }\n    },\n    [reactFlowInstance],\n  );\n\n  const isHandleAlreadyTargeted = (connection: Connection, eds: Edge[]) => {\n    if (\n      eds.filter(\n        (edge) =>\n          edge.targetHandle === connection.targetHandle &&\n          edge.target === connection.target,\n      ).length > 0\n    ) {\n      return true;\n    }\n    return false;\n  };\n\n  const isSameNodeTargeted = (connection: Connection) => {\n    if (connection.source === connection.target) {\n      return true;\n    }\n    return false;\n  };\n\n  const handlePopupClose = useCallback(() => {\n    setIsPopupOpen(false);\n  }, []);\n\n  function handleChangeFlow(nodes: Node[], edges: Edge[]): void {\n    setNodes(nodes);\n    setEdges(edges);\n  }\n\n  const handleUpdateNodeData = (nodeId: string, data: any) => {\n    const updatedNodes = nodes.map((node) => {\n      if (node.id === nodeId) {\n        return { ...node, data };\n      }\n      return node;\n    });\n    setNodes(updatedNodes);\n  };\n\n  const handleUpdateNodes = (updatedNodes: Node[], updatesEdges: Edge[]) => {\n    setNodes(updatedNodes);\n    setEdges(updatesEdges);\n  };\n\n  return (\n    <NodeProvider\n      nodes={nodes}\n      edges={edges}\n      metadata={props.metadata}\n      showOnlyOutput={props.showOnlyOutput}\n      isRunning={props.isRunning}\n      currentNodesRunning={currentNodesRunning}\n      errorCount={errorCount}\n      onUpdateNodeData={handleUpdateNodeData}\n      onUpdateNodes={handleUpdateNodes}\n    >\n      <div className=\"h-full w-full\" ref={dropRef}>\n        <div className=\"reactflow-wrapper h-full w-full\" ref={reactFlowWrapper}>\n          <ReactFlowStyled\n            nodes={nodes}\n            nodeTypes={nodeTypes}\n            edgeTypes={edgeTypes}\n            onNodesChange={onNodesChange}\n            edges={edges}\n            onEdgesChange={onEdgesChange}\n            onConnect={onConnect}\n            onDrop={onDrop}\n            onDragOver={onDragOver}\n            onTouchEnd={onDragOver}\n            onInit={onInit}\n            fitView\n            fitViewOptions={{\n              maxZoom: 0.5,\n            }}\n            minZoom={0.2}\n            maxZoom={1.5}\n            onLoad={props.onLoaded}\n          >\n            {minimap.isVisible && <MiniMapStyled style={{ right: \"4vw\" }} />}\n          </ReactFlowStyled>\n        </div>\n        <SideBar nodes={nodes} edges={edges} onChangeFlow={handleChangeFlow} />\n        <UserMessagePopup\n          isOpen={isPopupOpen}\n          onClose={handlePopupClose}\n          message={currentUserMessage}\n        />\n      </div>\n    </NodeProvider>\n  );\n});\n\nexport default Flow;\n"
  },
  {
    "path": "packages/ui/src/components/LoadingScreen.tsx",
    "content": "import { LoadingScreenSpinner } from \"./nodes/Node.styles\";\n\nconst LoadingScreen = () => {\n  return (\n    <div\n      className=\"fixed left-0 top-0 flex h-screen w-screen items-center justify-center\"\n      style={{ zIndex: 1000 }}\n      id=\"loading-screen\"\n    >\n      <div className=\"flex flex-col items-center justify-center space-y-5\">\n        <img src=\"./logo.svg\" className=\"w-1/2\" />\n        <LoadingScreenSpinner className=\"h-8 w-8\" />\n      </div>\n    </div>\n  );\n};\n\nexport default LoadingScreen;\n"
  },
  {
    "path": "packages/ui/src/components/bars/Sidebar.tsx",
    "content": "import React, { useContext } from \"react\";\nimport { Edge, Node } from \"reactflow\";\nimport JSONView from \"../side-views/JSONView\";\nimport styled, { css } from \"styled-components\";\nimport { useTranslation } from \"react-i18next\";\nimport { useVisibility } from \"../../providers/VisibilityProvider\";\nimport CurrentNodeView from \"../side-views/CurrentNodeView\";\nimport ButtonRunAll from \"../buttons/ButtonRunAll\";\nimport { NodeContext } from \"../../providers/NodeProvider\";\nimport { Tabs, rem } from \"@mantine/core\";\nimport { FaFile } from \"react-icons/fa\";\nimport { MdCenterFocusStrong } from \"react-icons/md\";\nimport { FiChevronsLeft, FiChevronsRight } from \"react-icons/fi\";\n\ninterface SidebarProps {\n  nodes: Node[];\n  edges: Edge[];\n  onChangeFlow: (nodes: Node[], edges: Edge[]) => void;\n}\n\nconst Sidebar: React.FC<SidebarProps> = ({ nodes, edges, onChangeFlow }) => {\n  const { t } = useTranslation(\"flow\");\n  const { runAllNodes, currentNodesRunning } = useContext(NodeContext);\n  const { getElement, sidepaneActiveTab, setSidepaneActiveTab } =\n    useVisibility();\n\n  const sidebar = getElement(\"sidebar\");\n  const show = sidebar.isVisible;\n  const toggleShow = () => sidebar.toggle();\n\n  const iconStyle = { width: rem(12), height: rem(12) };\n\n  return (\n    <>\n      <SidebarToggle show={show} onClick={toggleShow}>\n        <ToggleIcon>\n          {show ? <FiChevronsRight /> : <FiChevronsLeft />}\n        </ToggleIcon>\n      </SidebarToggle>\n      <ButtonsContainer\n        show={show}\n        className={`absolute  flex flex-col space-y-3 bg-red-500 ${show ? \"z-50 opacity-100\" : \"pointer-events-none -z-50 opacity-0\"} transition-all duration-300 ease-out`}\n      >\n        <ButtonRunAll\n          small\n          onClick={show ? 
runAllNodes : () => {}}\n          isRunning={currentNodesRunning?.length > 0}\n        />\n      </ButtonsContainer>\n\n      <SidebarContainer\n        show={show}\n        key={sidepaneActiveTab}\n        className=\"rounded-l-3xl\"\n      >\n        <Tabs\n          defaultValue={sidepaneActiveTab}\n          color=\"cyan\"\n          variant=\"pills\"\n          keepMounted={false}\n        >\n          <Tabs.List grow>\n            <Tabs.Tab\n              value=\"json\"\n              leftSection={<FaFile style={iconStyle} />}\n              onClick={() => setSidepaneActiveTab(\"json\")}\n            >\n              {t(\"JsonView\")}\n            </Tabs.Tab>\n            <Tabs.Tab\n              value=\"current_node\"\n              leftSection={<MdCenterFocusStrong style={iconStyle} />}\n              onClick={() => setSidepaneActiveTab(\"current_node\")}\n            >\n              {t(\"currentNodeView\")}\n            </Tabs.Tab>\n          </Tabs.List>\n\n          <Tabs.Panel value=\"json\">\n            <JSONView nodes={nodes} edges={edges} onChangeFlow={onChangeFlow} />\n          </Tabs.Panel>\n\n          <Tabs.Panel value=\"current_node\">\n            <CurrentNodeView />\n          </Tabs.Panel>\n        </Tabs>\n      </SidebarContainer>\n      {!show && <div className=\"sidebar-overlay\" onClick={toggleShow} />}\n    </>\n  );\n};\n\nconst SidebarContainer = styled.div<{ show: boolean }>`\n  position: fixed;\n  right: 0;\n  top: 0;\n  bottom: 0;\n  width: 30%;\n  color: ${({ theme }) => theme.text};\n  background-color: ${({ theme }) => theme.bg};\n  box-shadow: -3px 0 3px rgba(0, 0, 0, 0.2);\n  overflow-y: auto;\n  transform: translateX(100%);\n  transition: transform 0.2s ease-in-out;\n  z-index: 9999;\n\n  ${({ show }) =>\n    show &&\n    css`\n      transform: translateX(0);\n    `}\n`;\n\nconst SidebarToggle = styled.div<{ show: boolean }>`\n  position: fixed;\n  right: 0;\n  top: 50%;\n  transform: translateY(-50%);\n  width: 20px;\n  height: 80px;\n  background-color: #110a0e;\n  border-top-left-radius: 10px;\n  border-bottom-left-radius: 10px;\n  transition: width 0.2s ease-in-out;\n  z-index: 1;\n\n  ${({ show }) =>\n    show &&\n    css`\n      width: 31.5%;\n    `}\n\n  @media screen and (max-width: 768px) {\n    display: none;\n  }\n`;\n\nconst ButtonsContainer = styled.div<{ show: boolean }>`\n  position: fixed;\n  right: 0;\n  top: 3%;\n  transform: translateY(-50%);\n  transition: width 0.2s ease-in-out;\n  z-index: 1000000;\n\n  ${({ show }) =>\n    show &&\n    css`\n      right: 31%;\n    `}\n\n  @media screen and (max-width: 768px) {\n    display: none;\n  }\n`;\n\nconst ToggleIcon = styled.div`\n  color: #a4a4a4d1;\n  font-size: 1.5em;\n  position: absolute;\n  top: 50%;\n  transform: translateY(-50%);\n\n  :hover {\n    color: #ffffff;\n  }\n`;\n\nexport default Sidebar;\n"
  },
  {
    "path": "packages/ui/src/components/bars/dnd-sidebar/DnDSidebar.tsx",
    "content": "import { useTranslation } from \"react-i18next\";\nimport styled from \"styled-components\";\nimport {\n  DnDNode,\n  getSections,\n} from \"../../../nodes-configuration/sectionConfig\";\nimport { memo, useEffect, useState } from \"react\";\nimport DraggableNode from \"./DraggableNode\";\nimport {\n  FiChevronDown,\n  FiChevronLeft,\n  FiChevronRight,\n  FiSearch,\n} from \"react-icons/fi\";\nimport { useVisibility } from \"../../../providers/VisibilityProvider\";\nimport useIsTouchDevice from \"../../../hooks/useIsTouchDevice\";\nimport Section from \"./Section\";\nimport { DraggableNodeAdditionnalData } from \"./types\";\nimport { TextInput, Chip, Group } from \"@mantine/core\";\nimport { SubnodeData } from \"../../../nodes-configuration/types\";\nimport DraggableNodeWithSubnodes from \"./DraggableNodeWithSubnodes\";\n\nconst HIDE_SIDEBAR_ANIMATION_DURATION = 300;\n\ninterface DnDSidebarProps {\n  addNodeFromExt?: (\n    type: string,\n    data: DraggableNodeAdditionnalData & Record<string, unknown>,\n  ) => void;\n}\n\nconst DnDSidebar = ({ addNodeFromExt }: DnDSidebarProps) => {\n  const { t } = useTranslation(\"flow\");\n  const { getElement } = useVisibility();\n  const sidebar = getElement(\"dragAndDropSidebar\");\n\n  const [contentVisible, setContentVisible] = useState(sidebar?.isVisible);\n  const [sections, setSections] = useState(getSections());\n  const [searchQuery, setSearchQuery] = useState(\"\");\n\n  const isTouchDevice = useIsTouchDevice();\n\n  // Update sidebar content visibility\n  useEffect(() => {\n    let timeoutId: NodeJS.Timeout;\n    if (sidebar.isVisible) {\n      setContentVisible(true);\n    } else {\n      timeoutId = setTimeout(\n        () => setContentVisible(false),\n        HIDE_SIDEBAR_ANIMATION_DURATION,\n      );\n    }\n    return () => clearTimeout(timeoutId);\n  }, [sidebar]);\n\n  // Update sections when hidden list changes\n  useEffect(() => {\n    const handleHiddenListChanged = (e: any) => {\n      setSections(getSections());\n    };\n\n    window.addEventListener(\"nodesHiddenListChanged\", handleHiddenListChanged);\n    return () => {\n      window.removeEventListener(\n        \"nodesHiddenListChanged\",\n        handleHiddenListChanged,\n      );\n    };\n  }, []);\n\n  function nodeMatchesSearch(node: DnDNode, query: string): boolean {\n    if (!query) return true;\n    const lowerQuery = query.toLowerCase();\n    return !!node.label.toLowerCase().includes(lowerQuery);\n  }\n\n  function subnodeMatchesSearch(subnode: SubnodeData, query: string): boolean {\n    if (!query) return true;\n    const lowerQuery = query.toLowerCase();\n    return !!subnode.label.toLowerCase().includes(lowerQuery);\n  }\n\n  function filterSubnodes(\n    subnodes: SubnodeData[],\n    searchQuery: string,\n  ): SubnodeData[] {\n    const subnodesMatches = subnodes.filter((subnode) =>\n      subnodeMatchesSearch(subnode, searchQuery),\n    );\n\n    const isFilterEnabled = !!searchQuery;\n\n    if (!isFilterEnabled && subnodesMatches.length > 7) {\n      return subnodesMatches.slice(0, 8);\n    }\n\n    return subnodesMatches;\n  }\n\n  /**\n   * Returns the node if it matches the search AND category filter,\n   * OR if any of its subnodes match (in which case, only the matching subnodes are retained).\n   */\n  function filterNode(node: DnDNode, searchQuery: string): DnDNode | null {\n    const nodeMatchesOverall = nodeMatchesSearch(node, searchQuery);\n    let filteredSubnodes: SubnodeData[] = [];\n    if (node.subnodesShortcutConfig && 
node.subnodesShortcutConfig.length > 0) {\n      filteredSubnodes = filterSubnodes(\n        node.subnodesShortcutConfig,\n        searchQuery,\n      );\n    }\n    if (nodeMatchesOverall || filteredSubnodes.length > 0) {\n      return {\n        ...node,\n        // If some subnodes match, update them; otherwise, leave them unchanged.\n        subnodesShortcutConfig:\n          filteredSubnodes.length > 0\n            ? filteredSubnodes\n            : node.subnodesShortcutConfig,\n      };\n    }\n    return null;\n  }\n\n  function renderNodeWithSubnode(nodeIndex: number, node: DnDNode) {\n    const subNodeLabel = node.subnodesShortcutStyle?.title\n      ? t(node.subnodesShortcutStyle?.title)\n      : t(\"PopularModels\");\n    return (\n      <DraggableNodeWithSubnodes\n        key={`${nodeIndex}-${node.label}`}\n        nodeIndex={nodeIndex}\n        node={node}\n        subNodeLabel={subNodeLabel}\n        subNodesData={node.subnodesShortcutConfig ?? []}\n        addNodeFromExt={addNodeFromExt}\n      />\n    );\n  }\n\n  const sectionsToRender = sections\n    .map((section) => {\n      if (!section.nodes) return null;\n      const filteredNodes = section.nodes\n        .map((node) => filterNode(node, searchQuery))\n        .filter((node): node is DnDNode => node !== null);\n      return { ...section, nodes: filteredNodes };\n    })\n    .filter(\n      (section) =>\n        section !== null &&\n        section.nodes !== undefined &&\n        section.nodes.length > 0,\n    );\n\n  return (\n    <div\n      className={`relative flex w-fit max-w-[35vw] transform text-xs transition-transform md:text-base duration-${HIDE_SIDEBAR_ANIMATION_DURATION} ease-in-out ${\n        !sidebar.isVisible ? \"-translate-x-full\" : \"translate-x-0\"\n      }`}\n    >\n      <div\n        className={`absolute left-full top-1/2 z-50 flex translate-x-2 transform cursor-pointer rounded-2xl text-2xl font-bold text-slate-300 hover:font-extrabold hover:text-slate-100`}\n        onClick={sidebar.toggle}\n      >\n        {!sidebar.isVisible ? <FiChevronRight /> : <FiChevronLeft />}\n      </div>\n      {contentVisible && (\n        <DnDSidebarContainer\n          id=\"dnd-sidebar\"\n          className={`font-sm md:font-md flex flex-col rounded-r-xl bg-zinc-950/10 px-3 py-2 shadow-md backdrop-blur-md ${\n            isTouchDevice\n              ? \"overflow-y-auto\"\n              : \"overflow-hidden hover:overflow-y-auto\"\n          } ${!sidebar.isVisible ? \"opacity-0\" : \"\"} transition-opacity duration-${HIDE_SIDEBAR_ANIMATION_DURATION} ease-in-out`}\n        >\n          {/* Search bar */}\n          <div className=\"mb-3\">\n            <TextInput\n              placeholder={t(\"Search nodes\") ?? 
\"Search nodes\"}\n              value={searchQuery}\n              onChange={(e) => setSearchQuery(e.currentTarget.value)}\n              leftSection={<FiSearch />}\n              size=\"xs\"\n            />\n          </div>\n\n          {/* Render sections (filtered by search query and category) */}\n          {sectionsToRender.map((section, index) => {\n            if (!section || !section.nodes || section.nodes.length === 0) {\n              return null;\n            }\n            return (\n              <Section key={index} index={index} section={section}>\n                {section.nodes?.map((node, nodeIndex) => {\n                  if (!node) return null;\n                  if (\n                    node.subnodesShortcutConfig &&\n                    node.subnodesShortcutConfig?.length > 0\n                  ) {\n                    return renderNodeWithSubnode(nodeIndex, node);\n                  }\n                  return (\n                    <DraggableNode\n                      key={nodeIndex}\n                      node={node}\n                      additionnalConfig={\n                        node?.additionnalData?.additionnalConfig\n                      }\n                      additionnalData={node?.additionnalData?.additionnalData}\n                    />\n                  );\n                })}\n              </Section>\n            );\n          })}\n        </DnDSidebarContainer>\n      )}\n    </div>\n  );\n};\n\nconst DnDSidebarContainer = styled.div``;\n\nexport default memo(DnDSidebar);\n"
  },
  {
    "path": "packages/ui/src/components/bars/dnd-sidebar/DraggableNode.tsx",
    "content": "import { useDrag } from \"react-dnd\";\nimport { useTranslation } from \"react-i18next\";\nimport { DnDNode } from \"../../../nodes-configuration/sectionConfig\";\nimport { ReactNode, memo } from \"react\";\nimport styled from \"styled-components\";\nimport { toastCustomIconInfoMessage } from \"../../../utils/toastUtils\";\nimport { FiMenu, FiMove } from \"react-icons/fi\";\nimport { darken, lighten } from \"polished\";\nimport { Tooltip } from \"@mantine/core\";\nimport { GripIcon } from \"./GripIcon\";\nimport { DraggableNodeAdditionnalData } from \"./types\";\n\ninterface DraggableNodeProps extends DraggableNodeAdditionnalData {\n  node: DnDNode;\n  id?: string;\n}\n\ninterface NodeBadgeProps {\n  children?: ReactNode;\n  color?: string;\n}\nconst NodeBadge = ({ children, color = \"#0369a1\" }: NodeBadgeProps) => (\n  <div\n    className={`absolute left-3 top-3 translate-x-[-50%] translate-y-[-50%] -rotate-45 transform px-5 text-xs text-white`}\n    style={{ backgroundColor: color }}\n  >\n    {children}\n  </div>\n);\n\nconst DraggableNode = (props: DraggableNodeProps) => {\n  const { t } = useTranslation(\"flow\");\n\n  const [{ isDragging }, drag] = useDrag({\n    type: \"NODE\",\n    item: {\n      nodeType: props.node.type,\n      additionnalData: props.additionnalData,\n      additionnalConfig: props.additionnalConfig,\n    },\n    collect: (monitor) => {\n      const result = {\n        isDragging: monitor.isDragging(),\n      };\n      return result;\n    },\n  });\n\n  function showDragAndDropHelper() {\n    if (localStorage.getItem(\"AIFLOW_didShowDragDropHelper\") === \"true\") {\n      return;\n    }\n    toastCustomIconInfoMessage(\n      \"Drag and drop nodes onto the canvas to add them.\",\n      FiMove,\n    );\n    localStorage.setItem(\"AIFLOW_didShowDragDropHelper\", \"true\");\n  }\n\n  return (\n    <Tooltip\n      label={t(props.node.helpMessage ?? \"\")}\n      color=\"gray\"\n      openDelay={300}\n    >\n      <Node\n        ref={drag}\n        id={props.id ?? props.node.type}\n        onClick={(e) => {\n          e.stopPropagation();\n        }}\n        onTouchEnd={(e) => {\n          e.stopPropagation();\n        }}\n        onDoubleClick={(e) => {\n          showDragAndDropHelper();\n        }}\n        bandColor={props.node.color}\n        className={`sidebar-dnd-node text-md text-af-text-element hover:ring-af-text-element/10 group group relative \n                  flex\n                  h-auto\n                  w-full cursor-grab flex-row\n                  items-center justify-between gap-x-1 overflow-hidden\n                  rounded-md py-2 text-center\n                  font-medium\n                  shadow-md transition-all duration-200 \n                  ease-in-out hover:ring-2 \n                  ${isDragging ? \"opacity-10\" : \"\"}`}\n      >\n        <div className=\"flex w-full flex-row items-center justify-between space-x-1 px-2 text-center\">\n          <p className=\"flex-grow truncate \">{t(props.node.label)}</p>\n          <GripIcon className=\"text-af-text-description/60 group-hover:text-af-text-element/60 h-4 w-4 transition-colors duration-75 ease-in-out\" />\n        </div>\n\n        {props.node.isBeta && <NodeBadge>Beta</NodeBadge>}\n        {props.node.isNew && <NodeBadge color=\"#166e4c\">New</NodeBadge>}\n      </Node>\n    </Tooltip>\n  );\n};\n\nexport const Node = styled.div<{ bandColor?: string }>`\n  background:\n    linear-gradient(\n        120deg,\n        ${({ bandColor }) => (bandColor ? 
lighten(0.05, bandColor) : \"#84fab0\")}\n          0%,\n        ${({ bandColor }) => (bandColor ? darken(0.1, bandColor) : \"#8fd3f4\")}\n          100%\n      )\n      left / 2% no-repeat,\n    ${({ theme }) => theme.bg};\n  user-select: none;\n  -webkit-user-select: none;\n  -moz-user-select: none;\n  -ms-user-select: none;\n`;\n\nexport default memo(DraggableNode);\n"
  },
  {
    "path": "packages/ui/src/components/bars/dnd-sidebar/DraggableNodeWithSubnodes.tsx",
    "content": "import React, { useState } from \"react\";\nimport { FiChevronDown, FiChevronRight, FiGrid } from \"react-icons/fi\";\nimport DraggableNode from \"./DraggableNode\";\nimport { DnDNode } from \"../../../nodes-configuration/sectionConfig\";\nimport { SubnodeData } from \"../../../nodes-configuration/types\";\nimport { useTranslation } from \"react-i18next\";\nimport { DraggableNodeAdditionnalData } from \"./types\";\n\ninterface DraggableNodeWithSubnodesProps {\n  nodeIndex: number;\n  node: DnDNode;\n  subNodeLabel: string;\n  subNodesData: SubnodeData[];\n  selectSubnodeComponent?: (props: {\n    show: boolean;\n    onClose: () => void;\n    onValidate: (data?: any) => void;\n  }) => JSX.Element;\n  addNodeFromExt?: (\n    type: string,\n    data: DraggableNodeAdditionnalData & Record<string, unknown>,\n  ) => void;\n}\n\nconst DraggableNodeWithSubnodes: React.FC<DraggableNodeWithSubnodesProps> = ({\n  nodeIndex,\n  node,\n  subNodeLabel,\n  subNodesData,\n  selectSubnodeComponent,\n  addNodeFromExt,\n}) => {\n  const { t } = useTranslation(\"flow\");\n  const [isExpanded, setIsExpanded] = useState(true);\n  const [showMore, setShowMore] = useState(false);\n\n  const toggleSubnodes = () => {\n    setIsExpanded(!isExpanded);\n  };\n\n  const subNodes = subNodesData.map(\n    (subnodeData: SubnodeData, index: number) => {\n      const subNode = { ...node };\n      subNode.label = subnodeData.label;\n      subNode.isBeta = subnodeData.isBeta;\n      subNode.isNew = subnodeData.isNew;\n      subNode.helpMessage = subnodeData.description ?? node.helpMessage;\n\n      return (\n        <div\n          className=\"w-full\"\n          key={`${nodeIndex}-${index}-${subnodeData.label}`}\n        >\n          <div className=\"relative ml-auto w-[80%] text-xs\">\n            <div className=\"absolute -top-1/2 left-[-16px] h-full w-px bg-gray-600\"></div>\n            {index !== subNodesData.length - 1 && (\n              <div className=\"absolute left-[-16px] top-1 h-full w-px bg-gray-600\"></div>\n            )}\n            <div className=\"absolute left-[-16px] top-1/2 h-px w-4 bg-gray-600\"></div>\n            <DraggableNode\n              key={`${nodeIndex}-${index}-${subnodeData.label}`}\n              node={subNode}\n              id={`subnode-${subNode.type}-${index}`}\n              additionnalConfig={subnodeData.configData}\n              additionnalData={subnodeData.data}\n            />\n          </div>\n        </div>\n      );\n    },\n  );\n\n  return (\n    <div className=\"flex w-full flex-col\">\n      <div className=\"relative\">\n        <DraggableNode key={nodeIndex} node={node} />\n      </div>\n      <div className=\"flex flex-col space-y-1 overflow-hidden\">\n        <span\n          className=\"mt-2 flex cursor-pointer flex-row items-center space-x-2 text-xs md:text-sm\"\n          onClick={toggleSubnodes}\n        >\n          <p>{subNodeLabel}</p>\n          <span>\n            {isExpanded ? 
(\n              <FiChevronDown className=\"transition-colors duration-100 ease-in-out hover:text-slate-100\" />\n            ) : (\n              <FiChevronRight className=\"transition-colors duration-100 ease-in-out hover:text-slate-100\" />\n            )}\n          </span>\n        </span>\n        {isExpanded && (\n          <>\n            <div className=\"flex flex-col space-y-1 overflow-hidden\">\n              {subNodes}\n            </div>\n            {/* Only show the \"Show More\" button if the component hasn't been displayed yet */}\n            {!!selectSubnodeComponent && !showMore && (\n              <button\n                type=\"button\"\n                className=\"bg-af-bg-2/30 hover:bg-af-bg-1 dark:bg-af-bg-1/70 dark:hover:bg-af-bg-1 ml-auto flex w-[75%] cursor-pointer items-center justify-center rounded p-2 text-xs font-semibold transition-colors duration-300 ease-in-out md:text-sm\"\n                onClick={() => setShowMore(true)}\n              >\n                <FiGrid className=\"mr-1\" />\n                {t(\"More Models\")}\n              </button>\n            )}\n            {/* Render the component returned by selectSubnodeComponent when showMore is true */}\n            {!!selectSubnodeComponent && showMore && (\n              <div className=\"mt-2 flex items-center justify-center\">\n                {selectSubnodeComponent({\n                  show: true,\n                  onClose: () => setShowMore(false),\n                  onValidate: (data?: any) => {\n                    if (!!data) {\n                      if (!!addNodeFromExt) {\n                        addNodeFromExt(node.type, {\n                          ...data,\n                          generateNow: true,\n                        });\n                      }\n                    }\n                    setShowMore(false);\n                  },\n                })}\n              </div>\n            )}\n          </>\n        )}\n      </div>\n    </div>\n  );\n};\n\nexport default DraggableNodeWithSubnodes;\n"
  },
  {
    "path": "packages/ui/src/components/bars/dnd-sidebar/GripIcon.tsx",
    "content": "import { ComponentProps } from \"react\";\n\nexport function GripIcon(props: ComponentProps<\"svg\">) {\n  return (\n    <svg\n      xmlns=\"http://www.w3.org/2000/svg\"\n      width={24}\n      height={24}\n      viewBox=\"0 0 24 24\"\n      fill=\"none\"\n      stroke=\"currentColor\"\n      strokeWidth={2}\n      strokeLinecap=\"round\"\n      strokeLinejoin=\"round\"\n      {...props}\n    >\n      <circle cx={9} cy={12} r={1} />\n      <circle cx={9} cy={5} r={1} />\n      <circle cx={9} cy={19} r={1} />\n      <circle cx={15} cy={12} r={1} />\n      <circle cx={15} cy={5} r={1} />\n      <circle cx={15} cy={19} r={1} />\n    </svg>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/bars/dnd-sidebar/Section.tsx",
    "content": "import { useTranslation } from \"react-i18next\";\nimport { NodeSection } from \"../../../nodes-configuration/sectionConfig\";\nimport {\n  FiArrowDown,\n  FiChevronDown,\n  FiChevronRight,\n  FiChevronUp,\n} from \"react-icons/fi\";\nimport { useState } from \"react\";\n\ninterface SidebarSectionProps {\n  section: NodeSection;\n  index: number;\n  children: React.ReactNode;\n}\n\nfunction SidebarSection({ section, index, children }: SidebarSectionProps) {\n  const { t } = useTranslation(\"flow\");\n  const [show, setShow] = useState<boolean>(true);\n\n  function toggleShow() {\n    setShow((prev) => !prev);\n  }\n  return (\n    <div key={index} className={`mb-5 flex flex-col gap-y-2`}>\n      <div className=\"flex flex-row items-center justify-between\">\n        <h2 className=\"text-md ml-1 flex flex-grow flex-row items-center gap-x-2 border-b-2 border-b-slate-500/20 py-1 text-slate-300\">\n          {section.icon && <section.icon />}\n          {t(section.label)}\n        </h2>\n\n        {show ? (\n          <FiChevronDown\n            onClick={toggleShow}\n            className=\"transition-colors duration-100 ease-in-out hover:text-slate-100\"\n          />\n        ) : (\n          <FiChevronRight\n            onClick={toggleShow}\n            className=\"transition-colors duration-100 ease-in-out hover:text-slate-100\"\n          />\n        )}\n      </div>\n\n      {show && children}\n    </div>\n  );\n}\n\nexport default SidebarSection;\n"
  },
  {
    "path": "packages/ui/src/components/bars/dnd-sidebar/types.ts",
    "content": "export interface DraggableNodeAdditionnalData {\n  additionnalData?: any;\n  additionnalConfig?: any;\n}\n"
  },
  {
    "path": "packages/ui/src/components/buttons/ButtonRunAll.tsx",
    "content": "import styled, { keyframes } from \"styled-components\";\nimport { FaPlay, FaSpinner } from \"react-icons/fa\";\nimport { memo } from \"react\";\nimport TapScale from \"../shared/motions/TapScale\";\nimport { Tooltip } from \"react-tooltip\";\n\ninterface ButtonRunAllProps {\n  onClick: () => void;\n  isRunning: boolean;\n  small?: boolean;\n}\nconst ButtonRunAll: React.FC<ButtonRunAllProps> = ({\n  onClick,\n  isRunning,\n  small,\n}) => {\n  return (\n    <TapScale>\n      <button\n        id=\"run-all-button\"\n        className={`flex flex-row items-center justify-center gap-x-2 \n                ${\n                  isRunning\n                    ? \"bg-[#86D8F0] text-slate-200\"\n                    : \"bg-slate-800 text-[#86D8F0] ring-2 ring-sky-800\"\n                } \n                z-50\n                cursor-pointer\n                rounded-md\n                px-2 py-2 transition-all hover:text-sky-100 hover:ring-sky-500`}\n        onClick={onClick}\n      >\n        {isRunning ? <Spinner className=\"text-xl \" /> : <FaPlay />}\n        {!isRunning && !small && <div className=\"hidden md:flex\">RUN ALL</div>}\n      </button>\n    </TapScale>\n  );\n};\n\nexport default memo(ButtonRunAll);\n\nconst spin = keyframes`\n  0% { transform: rotate(0deg); }\n  100% { transform: rotate(360deg); }\n`;\n\nconst Spinner = styled(FaSpinner)`\n  animation: ${spin} 1s linear infinite;\n`;\n"
  },
  {
    "path": "packages/ui/src/components/buttons/ConfigurationButton.tsx",
    "content": "import React, { memo } from \"react\";\nimport { FiSettings } from \"react-icons/fi\";\nimport styled from \"styled-components\";\n\ninterface RightButtonProps {\n  onClick: () => void;\n  color?: string;\n  icon?: React.ReactNode;\n  text?: string;\n  bottom?: string;\n}\n\nconst RightIconButton: React.FC<RightButtonProps> = ({\n  onClick,\n  color = \"#808080\",\n  icon = <FiSettings />,\n  bottom = \"30px\",\n}) => {\n  return (\n    <StyledRightButton\n      className=\"config-button fixed right-0 z-20 mx-auto w-11 items-center rounded-l-lg py-1 pl-1 transition-all duration-150 ease-linear hover:bg-slate-700\"\n      color={color}\n      bottom={bottom}\n      onClick={onClick}\n    >\n      <div className=\"fon align-middle text-xl text-slate-200\">{icon}</div>\n    </StyledRightButton>\n  );\n};\n\nconst StyledRightButton = styled.div<{ color: string; bottom: string }>`\n  bottom: ${(props) => props.bottom};\n  background-color: ${(props) => props.color};\n`;\n\nexport default memo(RightIconButton);\n"
  },
  {
    "path": "packages/ui/src/components/edges/buttonEdge.tsx",
    "content": "import {\n  BaseEdge,\n  EdgeLabelRenderer,\n  EdgeProps,\n  getBezierPath,\n  getSmoothStepPath,\n  getStraightPath,\n  useReactFlow,\n} from \"reactflow\";\n\nexport default function ButtonEdge({\n  id,\n  sourceX,\n  sourceY,\n  targetX,\n  targetY,\n  sourcePosition,\n  targetPosition,\n  style = {},\n  markerEnd,\n  data,\n}: EdgeProps) {\n  const { setEdges } = useReactFlow();\n\n  const pathType = data?.pathType || \"bezier\";\n\n  let pathData = [];\n  switch (pathType) {\n    case \"bezier\":\n      pathData = getBezierPath({\n        sourceX,\n        sourceY,\n        sourcePosition,\n        targetX,\n        targetY,\n        targetPosition,\n      });\n      break;\n\n    case \"smoothstep\":\n      pathData = getSmoothStepPath({\n        sourceX,\n        sourceY,\n        sourcePosition,\n        targetX,\n        targetY,\n        targetPosition,\n      });\n      break;\n\n    case \"step\":\n      pathData = getSmoothStepPath({\n        sourceX,\n        sourceY,\n        sourcePosition,\n        targetX,\n        targetY,\n        targetPosition,\n        borderRadius: 0,\n      });\n      break;\n\n    case \"straight\":\n      pathData = getStraightPath({\n        sourceX,\n        sourceY,\n        targetX,\n        targetY,\n      });\n      break;\n\n    default:\n      pathData = getBezierPath({\n        sourceX,\n        sourceY,\n        sourcePosition,\n        targetX,\n        targetY,\n        targetPosition,\n      });\n  }\n\n  const edgePath = pathData[0];\n  const labelX = pathData[1];\n  const labelY = pathData[2];\n\n  const onEdgeClick = () => {\n    setEdges((edges) => edges.filter((edge) => edge.id !== id));\n  };\n\n  return (\n    <>\n      <BaseEdge path={edgePath} markerEnd={markerEnd} style={style} />\n      <EdgeLabelRenderer>\n        <div\n          style={{\n            position: \"absolute\",\n            transform: `translate(-50%, -50%) translate(${labelX}px,${labelY}px)`,\n            fontSize: 12,\n            // everything inside EdgeLabelRenderer has no pointer events by default\n            // if you have an interactive element, set pointer-events: all\n            pointerEvents: \"all\",\n          }}\n          className=\"nodrag nopan \"\n        >\n          <button\n            className=\"flex h-6 w-6 cursor-pointer \n                    items-center justify-center\n                    rounded-full border-slate-300\n                    bg-slate-400 text-xl\n                    leading-none text-slate-900/80 transition-all\n                    duration-100 ease-in-out hover:h-7 hover:w-7\n                    hover:bg-slate-300 hover:text-3xl hover:text-red-500 \"\n            onClick={onEdgeClick}\n            onTouchEnd={onEdgeClick}\n          >\n            ×\n          </button>\n        </div>\n      </EdgeLabelRenderer>\n    </>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/handles/HandleWrapper.tsx",
    "content": "import ReactDOM from \"react-dom\";\nimport styled, { CSSProperties } from \"styled-components\";\nimport { InputHandle, OutputHandle } from \"../nodes/Node.styles\";\nimport { useMemo, useRef, useState } from \"react\";\nimport { Position } from \"reactflow\";\nimport React from \"react\";\n\nexport type LinkedHandlePositions = {\n    [key in Position]: string[];\n};\n\ntype HandleWrapperProps = {\n    id: string;\n    position: Position;\n    linkedHandlePositions?: LinkedHandlePositions;\n    isOutput?: boolean;\n    onChangeHandlePosition: (newPosition: Position, id: string) => void;\n};\n\nconst HandleWrapper: React.FC<HandleWrapperProps> = ({ id, position, onChangeHandlePosition, isOutput, linkedHandlePositions }) => {\n\n    const HANDLE_DEFAULT_OFFSET = 20;\n    const POPUP_DEFAULT_TOP_OFFSET = 10;\n\n    const [showPopup, setShowPopup] = useState(false);\n    const [currentPosition, setCurrentPosition] = useState<Position>(position)\n    const [popupCoords, setPopupCoords] = useState<{ x: number; y: number } | null>(null);\n    const ref = useRef<HTMLDivElement | null>(null);\n    const closeTimeout = useRef<NodeJS.Timeout | null>(null);\n    const openTimeout = useRef<NodeJS.Timeout | null>(null);\n    const isHoveredRef = useRef(false);\n\n    const handleMouseEnter = (event: React.MouseEvent) => {\n        if (ref.current) {\n            const rect = ref.current.getBoundingClientRect();\n            setPopupCoords({ x: rect.left + rect.width / 2, y: rect.top - POPUP_DEFAULT_TOP_OFFSET });\n            isHoveredRef.current = true;\n\n            openTimeout.current = setTimeout(() => {\n                if (!isHoveredRef.current) return;\n                setShowPopup(true);\n            }, 1000)\n        }\n    };\n\n    const cancelClose = () => {\n        if (closeTimeout.current) {\n            clearTimeout(closeTimeout.current);\n            closeTimeout.current = null;\n        }\n    };\n\n    const startClose = () => {\n        isHoveredRef.current = false;\n        closeTimeout.current = setTimeout(() => {\n            setShowPopup(false);\n        }, 500);\n    };\n\n    const changePosition = (newPosition: Position) => {\n        setCurrentPosition(newPosition);\n        onChangeHandlePosition(newPosition, id);\n    }\n\n    const adjustPositionByIndex = (): CSSProperties => {\n        if (linkedHandlePositions == null) return {}\n\n        const handleIndex = !!linkedHandlePositions[currentPosition] ? linkedHandlePositions[currentPosition].indexOf(id) : 0;\n\n        switch (currentPosition) {\n            case Position.Left:\n            case Position.Right:\n                return { marginTop: `${handleIndex * HANDLE_DEFAULT_OFFSET}px` };\n            case Position.Top:\n            case Position.Bottom:\n                return { marginLeft: `${handleIndex * HANDLE_DEFAULT_OFFSET}px` };\n            default:\n                return {}\n        }\n    };\n\n    return (\n        <>\n            {isOutput\n                ? 
<OutputHandle ref={ref} className=\"handle-out\" type=\"source\" id={id} position={currentPosition}\n                    style={adjustPositionByIndex()}\n                    onMouseEnter={handleMouseEnter}\n                    onMouseLeave={startClose} />\n                : <InputHandle ref={ref} className=\"handle\" type=\"target\" id={id} position={currentPosition}\n                    style={adjustPositionByIndex()}\n                    onMouseEnter={handleMouseEnter}\n                    onMouseLeave={startClose} />\n            }\n            {showPopup && popupCoords &&\n                <Popup\n                    currentPosition={currentPosition}\n                    coords={popupCoords}\n                    onCancelClose={cancelClose}\n                    onStartClose={startClose}\n                    onSelect={changePosition}\n                    isOutput={isOutput}\n                />}\n        </>\n    );\n};\n\ntype PopupProps = {\n    currentPosition: Position;\n    onSelect: (position: Position) => void;\n    coords: { x: number; y: number };\n    onCancelClose: () => void;\n    onStartClose: () => void;\n    isOutput?: boolean;\n};\n\nconst Popup: React.FC<PopupProps> = ({ currentPosition, onSelect, coords, onCancelClose, onStartClose, isOutput }) => {\n\n    const handles = useMemo(() => [\n        { src: `./handle-left${isOutput ? '-out' : ''}.svg`, position: Position.Left },\n        { src: `./handle-right${isOutput ? '-out' : ''}.svg`, position: Position.Right },\n        { src: `./handle-top${isOutput ? '-out' : ''}.svg`, position: Position.Top },\n        { src: `./handle-bottom${isOutput ? '-out' : ''}.svg`, position: Position.Bottom },\n    ], [isOutput]);\n\n    const popupContent = (\n        <StyledPopup\n            className=\"fixed flex flex-col justify-center items-center text-center text-xs  text-slate-200 bg-slate-700 rounded-md\"\n            onMouseEnter={onCancelClose}\n            onMouseLeave={onStartClose}\n            style={{ top: `${coords.y}px`, left: `${coords.x}px` }}\n        >\n            <div className=\"flex flex-row\">\n                {handles.map((handle, index) => (\n                    <img\n                        key={`${handle.position}`}\n                        src={handle.src}\n                        className={`w-14 cursor-pointer ${handle.position === currentPosition ? 'opacity-100' : 'opacity-40'}`}\n                        onClick={() => onSelect(handle.position)}\n                        alt=\"\"\n                    />\n                ))}\n            </div>\n        </StyledPopup>\n    );\n\n    return ReactDOM.createPortal(popupContent, document.body);\n};\n\nconst StyledPopup = styled.div`\n    transform: translate(-50%, -100%);\n`;\n\nexport default HandleWrapper;"
  },
  {
    "path": "packages/ui/src/components/inputs/InputWithButton.tsx",
    "content": "import NodeTextField from \"../nodes/node-input/NodeTextField\";\n\ninterface InputWithButtonProps {\n  buttonText: string;\n  onInputChange: (value: string) => void;\n  onButtonClick: () => void;\n  value: string;\n  inputPlaceholder?: string;\n  inputClassName?: string;\n  buttonClassName?: string;\n}\n\nconst InputWithButton = ({\n  inputPlaceholder,\n  buttonText,\n  value,\n  onInputChange,\n  onButtonClick,\n  inputClassName = \"\",\n  buttonClassName = \"\",\n}: InputWithButtonProps) => {\n  return (\n    <div className=\"flex w-full flex-col items-center justify-center px-2 pb-4\">\n      <div className=\"flex w-full flex-row space-x-2\">\n        <NodeTextField\n          // className={` ${inputClassName ? inputClassName : \"text-center\"} `}\n          placeholder={inputPlaceholder}\n          onChange={(event) => onInputChange(event.target.value)}\n          value={value}\n        />\n        <button\n          className={`${buttonClassName ? buttonClassName : \"rounded-lg bg-sky-500 p-2 hover:bg-sky-400\"}`}\n          onClick={onButtonClick}\n        >\n          {buttonText}\n        </button>\n      </div>\n    </div>\n  );\n};\n\nexport default InputWithButton;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/AIDataSplitterNode.tsx",
    "content": "import React, { useContext, useEffect } from \"react\";\nimport { Position, NodeProps, useUpdateNodeInternals } from \"reactflow\";\nimport styled from \"styled-components\";\nimport { NodeContext } from \"../../providers/NodeProvider\";\nimport NodePlayButton from \"./node-button/NodePlayButton\";\nimport { generateIdForHandle } from \"../../utils/flowUtils\";\nimport { InputHandle, OutputHandle } from \"./Node.styles\";\nimport { useIsPlaying } from \"../../hooks/useIsPlaying\";\nimport { GenericNodeData } from \"./types/node\";\nimport SelectAutocomplete, {\n  SelectItem,\n} from \"../selectors/SelectAutocomplete\";\nimport NodeTextField from \"./node-input/NodeTextField\";\nimport { Switch, Tooltip } from \"@mantine/core\";\nimport { useTranslation } from \"react-i18next\";\n\ninterface AIDataSplitterNodeData extends GenericNodeData {\n  id: string;\n  name: string;\n  processorType: string;\n  nbOutput: number;\n  input: string;\n  input_key: string;\n  outputData?: string[];\n  lastRun: string;\n}\n\ninterface AIDataSplitterNodeProps extends NodeProps {\n  data: AIDataSplitterNodeData;\n}\n\nconst separatorOptions: SelectItem<string>[] = [\n  {\n    value: \",\",\n    name: \",\",\n  },\n  {\n    value: \";\",\n    name: \";\",\n  },\n  {\n    value: \"\\\\t\",\n    name: \"\\\\t\",\n  },\n  {\n    value: \" \",\n    name: \"Space\",\n  },\n  {\n    value: \"\\\\n\",\n    name: \"\\\\n\",\n  },\n  {\n    value: \"\\\\r\",\n    name: \"\\\\r\",\n  },\n];\n\nconst AIDataSplitterNode: React.FC<AIDataSplitterNodeProps> = React.memo(\n  ({ data, id, selected }) => {\n    const { t } = useTranslation(\"flow\");\n    const updateNodeInternals = useUpdateNodeInternals();\n\n    const [isPlaying, setIsPlaying] = useIsPlaying();\n\n    const { onUpdateNodeData } = useContext(NodeContext);\n\n    const modeOptions: SelectItem<string>[] = [\n      {\n        value: \"ai\",\n        name: t(\"AI\"),\n      },\n      {\n        value: \"manual\",\n        name: t(\"Separator\"),\n      },\n    ];\n\n    useEffect(() => {\n      const newNbOutput = data.outputData ? data.outputData.length : 0;\n      if (!data.nbOutput || newNbOutput > data.nbOutput) {\n        onUpdateNodeData(id, {\n          ...data,\n          nbOutput: newNbOutput,\n        });\n      }\n      setIsPlaying(false);\n    }, [data.outputData]);\n\n    useEffect(() => {\n      updateNodeInternals(id);\n    }, [data.nbOutput]);\n\n    const handlePlayClick = () => {\n      setIsPlaying(true);\n    };\n\n    const handleForceNbOutputChange = (\n      event: React.ChangeEvent<HTMLInputElement>,\n    ) => {\n      const forcedNbOutput = Number(event.target.value);\n\n      onUpdateNodeData(id, {\n        ...data,\n        nbOutput: forcedNbOutput,\n      });\n    };\n\n    const handleChangeField = (field: string, value: any) => {\n      onUpdateNodeData(id, {\n        ...data,\n        [field]: value,\n      });\n    };\n\n    return (\n      <DataSplitterNodeContainer\n        nbOutput={data.nbOutput}\n        key={id}\n        style={{\n          borderColor: data?.appearance?.color\n            ? 
data?.appearance?.color\n            : \"rgb(34 197 94)\",\n        }}\n        className={`flex flex-col items-center justify-center rounded-lg  border  bg-gray-800 p-5 hover:bg-gray-700/50`}\n      >\n        <div className=\"flex flex-col items-center justify-center space-y-1\">\n          <div className=\"mt-3 flex\">\n            <NodePlayButton\n              isPlaying={isPlaying}\n              nodeName={data.name}\n              onClick={handlePlayClick}\n              size=\"medium\"\n            />\n          </div>\n          {\n            <div\n              className=\"flex flex-col items-center justify-center space-y-2\"\n              onDoubleClick={(e) => e.stopPropagation()}\n              onClick={(e) => e.stopPropagation()}\n              onTouchStart={(e) => e.stopPropagation()}\n            >\n              <p className=\"ml-8 w-full text-left font-mono\"> mode </p>\n              <div className=\"flex w-5/6 flex-col  space-y-2\">\n                <SelectAutocomplete\n                  values={modeOptions}\n                  selectedValue={data?.mode ?? \"ai\"}\n                  onChange={(value) => handleChangeField(\"mode\", value)}\n                />\n                {data[\"mode\"] === \"manual\" && (\n                  <>\n                    <p className=\"w-full text-left font-mono\">\n                      {\" \"}\n                      custom_separator{\" \"}\n                    </p>\n                    <Switch\n                      checked={data?.customSeparator}\n                      onChange={(e) =>\n                        handleChangeField(\"customSeparator\", e.target.checked)\n                      }\n                    />\n                    <p className=\"w-full text-left font-mono\"> separator * </p>\n                    {!!data[\"customSeparator\"] ? (\n                      <NodeTextField\n                        value={data?.separator}\n                        onChange={(e) =>\n                          handleChangeField(\"separator\", e.target.value)\n                        }\n                      />\n                    ) : (\n                      <SelectAutocomplete\n                        values={separatorOptions}\n                        selectedValue={data?.separator}\n                        onChange={(value) =>\n                          handleChangeField(\"separator\", value)\n                        }\n                      />\n                    )}\n                  </>\n                )}\n                <p className=\"w-full text-left font-mono\"> nb_output </p>\n                <div className=\"flex flex-row items-center justify-center space-x-2\">\n                  <span className=\"h-3 w-3 bg-orange-400/40\" />\n                  <ForceNbOutputInput\n                    className=\"w-full border border-slate-200/20 bg-gray-800\"\n                    id=\"nbOutput\"\n                    value={data.nbOutput}\n                    onChange={handleForceNbOutputChange}\n                  />\n                </div>\n              </div>\n            </div>\n          }\n        </div>\n        <InputHandle\n          className=\"handle\"\n          type=\"target\"\n          position={Position.Left}\n        />\n        <div>\n          {!!data.nbOutput &&\n            Array.from(Array(data.nbOutput)).map((_, index) => (\n              <Tooltip label={data.outputData ? 
data.outputData[index] : \"\"}>\n                <OutputHandle\n                  key={generateIdForHandle(index, true)}\n                  type=\"source\"\n                  id={generateIdForHandle(index, true)}\n                  position={Position.Right}\n                  style={{\n                    background: data?.outputData\n                      ? data.outputData[index]\n                        ? \"rgb(224, 166, 79)\"\n                        : \"#ddd\"\n                      : \"#ddd\",\n                    top: `${data.nbOutput === 1 ? 50 : (index / (data.nbOutput - 1)) * 80 + 10}%`,\n                  }}\n                />\n              </Tooltip>\n            ))}\n        </div>\n      </DataSplitterNodeContainer>\n    );\n  },\n);\n\nconst DataSplitterNodeContainer = styled.div<{\n  nbOutput: number;\n}>`\n  min-height: ${(props) => props.nbOutput * 30 + 100}px;\n  width: 200px;\n  transition: all 0.3s ease-in-out;\n`;\n\nconst ForceNbOutputInput = styled.input`\n  font-size: 0.9em;\n  color: ${({ theme }) => theme.text};\n  padding: 5px;\n  border-radius: 5px;\n`;\n\nexport default AIDataSplitterNode;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/DisplayNode.tsx",
    "content": "import React, { useContext, useEffect, useMemo, useState } from \"react\";\nimport {\n  Position,\n  NodeProps,\n  useUpdateNodeInternals,\n  ResizeParams,\n  NodeResizeControl,\n} from \"reactflow\";\nimport { generateIdForHandle } from \"../../utils/flowUtils\";\nimport { NodeContext } from \"../../providers/NodeProvider\";\nimport { useIsPlaying } from \"../../hooks/useIsPlaying\";\nimport NodePlayButton from \"./node-button/NodePlayButton\";\nimport HandleWrapper from \"../handles/HandleWrapper\";\nimport useHandlePositions from \"../../hooks/useHandlePositions\";\nimport { GenericNodeData } from \"./types/node\";\nimport { NodeBand, NodeHeader, NodeIcon, NodeTitle } from \"./Node.styles\";\nimport OutputDisplay from \"./node-output/OutputDisplay\";\nimport { useTranslation } from \"react-i18next\";\nimport { FaTv } from \"react-icons/fa\";\n\ninterface DisplayNodeData extends GenericNodeData {\n  handles: any;\n  id: string;\n  name: string;\n  processorType: string;\n  nbOutput: number;\n  input: string;\n  input_key: string;\n  outputData?: string[];\n  lastRun: string;\n}\n\ninterface DisplayNodeProps extends NodeProps {\n  data: DisplayNodeData;\n}\n\ninterface Dimensions {\n  width: number;\n  height: number;\n}\n\nfunction ResizeIcon() {\n  return (\n    <svg\n      xmlns=\"http://www.w3.org/2000/svg\"\n      width=\"30\"\n      height=\"30\"\n      viewBox=\"0 0 24 24\"\n      strokeWidth=\"2\"\n      stroke=\"#F36788\"\n      fill=\"none\"\n      strokeLinecap=\"round\"\n      strokeLinejoin=\"round\"\n      style={{ position: \"absolute\", right: -20, bottom: -20 }}\n    >\n      <path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\" />\n      <polyline points=\"16 20 20 20 20 16\" />\n      <line x1=\"14\" y1=\"14\" x2=\"20\" y2=\"20\" />\n      <polyline points=\"8 4 4 4 4 8\" />\n      <line x1=\"4\" y1=\"4\" x2=\"10\" y2=\"10\" />\n    </svg>\n  );\n}\n\nconst DisplayNode: React.FC<DisplayNodeProps> = React.memo(\n  ({ data, id, selected }) => {\n    const { t } = useTranslation(\"flow\");\n    const { onUpdateNodeData } = useContext(NodeContext);\n    const [nodeId, setNodeId] = useState<string>(`${data.name}-${Date.now()}`);\n    const [dimensions, setDimensions] = useState<Dimensions>({\n      width: data.nodeDimensions?.width ?? 450,\n      height: data.nodeDimensions?.height ?? 
200,\n    });\n    const [reloadDisplay, setReloadDisplay] = useState<number>(0);\n    const [isPlaying, setIsPlaying] = useIsPlaying();\n    const updateNodeInternals = useUpdateNodeInternals();\n\n    const inputHandleId = useMemo(() => generateIdForHandle(0), []);\n    const outputHandleId = useMemo(() => generateIdForHandle(0, true), []);\n    const { allHandlePositions } = useHandlePositions(data, 1, [\n      outputHandleId,\n    ]);\n\n    useEffect(() => {\n      setNodeId(`${data.name}-${Date.now()}`);\n      setIsPlaying(false);\n      updateNodeInternals(id);\n    }, [data.lastRun]);\n\n    const handlePlayClick = () => {\n      setIsPlaying(true);\n    };\n\n    const handleChangeHandlePosition = (\n      newPosition: Position,\n      handleId: string,\n    ) => {\n      onUpdateNodeData(id, {\n        ...data,\n        handles: {\n          ...data.handles,\n          [handleId]: newPosition,\n        },\n      });\n      updateNodeInternals(id);\n    };\n\n    const handleReloadDisplay = () => {\n      setReloadDisplay(reloadDisplay + 1);\n    };\n\n    const handleSaveDimensions = (params: ResizeParams) => {\n      setDimensions({\n        width: params.width,\n        height: params.height,\n      });\n    };\n\n    return (\n      <div\n        key={id}\n        className={`flex h-full flex-col rounded-lg bg-zinc-900 `}\n        style={{\n          width: \"100%\",\n          minWidth: \"300px\",\n        }}\n      >\n        {selected && (\n          <NodeResizeControl\n            minWidth={300}\n            minHeight={100}\n            onResizeEnd={(event, params) => {\n              handleReloadDisplay();\n              handleSaveDimensions(params);\n            }}\n            style={{\n              backgroundColor: \"transparent\",\n              border: \"none\",\n            }}\n          >\n            <ResizeIcon />\n          </NodeResizeControl>\n        )}\n\n        <NodeHeader>\n          <NodeIcon>\n            <FaTv />\n          </NodeIcon>\n          <NodeTitle>{data.appearance?.customName ?? t(\"Display\")}</NodeTitle>\n          <NodePlayButton\n            isPlaying={isPlaying}\n            hasRun={!!data.lastRun}\n            onClick={handlePlayClick}\n            nodeName={data.name}\n          />\n        </NodeHeader>\n        <NodeBand\n          selected={selected}\n          color={data.appearance?.color}\n          className={`${selected ? \"animate-pulse\" : \"\"}`}\n        />\n        <HandleWrapper\n          id={inputHandleId}\n          position={\n            !!data?.handles && data.handles[inputHandleId]\n              ? data.handles[inputHandleId]\n              : Position.Left\n          }\n          linkedHandlePositions={allHandlePositions}\n          onChangeHandlePosition={handleChangeHandlePosition}\n        />\n\n        <HandleWrapper\n          id={outputHandleId}\n          position={\n            !!data?.handles && data.handles[outputHandleId]\n              ? data.handles[outputHandleId]\n              : Position.Right\n          }\n          linkedHandlePositions={allHandlePositions}\n          onChangeHandlePosition={handleChangeHandlePosition}\n          isOutput\n        />\n\n        <div className=\"nodrag nowheel flex h-full w-full overflow-auto\">\n          {data.outputData != null ? (\n            <OutputDisplay key={reloadDisplay} data={data} />\n          ) : (\n            <div className=\"h-10\" />\n          )}\n        </div>\n      </div>\n    );\n  },\n);\nexport default DisplayNode;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/FileUploadNode.tsx",
    "content": "import { useContext, useEffect, useState } from \"react\";\nimport { FaFileAlt, FaLink } from \"react-icons/fa\";\nimport { NodeProps, Position, useUpdateNodeInternals } from \"reactflow\";\nimport HandleWrapper from \"../handles/HandleWrapper\";\nimport { generateIdForHandle } from \"../../utils/flowUtils\";\nimport { NodeContext } from \"../../providers/NodeProvider\";\nimport { useIsPlaying } from \"../../hooks/useIsPlaying\";\nimport NodePlayButton from \"./node-button/NodePlayButton\";\nimport {\n  LoadingSpinner,\n  NodeBand,\n  NodeContainer,\n  NodeHeader,\n  NodeIcon,\n  NodeTitle,\n} from \"./Node.styles\";\nimport { useTranslation } from \"react-i18next\";\nimport { toastErrorMessage } from \"../../utils/toastUtils\";\nimport { GenericNodeData } from \"./types/node\";\nimport { getOutputExtension } from \"./node-output/outputUtils\";\nimport NodeOutput from \"./node-output/NodeOutput\";\nimport OptionSelector, { Option } from \"../selectors/OptionSelector\";\nimport InputWithButton from \"../inputs/InputWithButton\";\nimport FileDropZone from \"../selectors/FileDropZone\";\nimport {\n  getUploadAndDownloadUrl,\n  uploadWithS3Link,\n} from \"../../api/uploadFile\";\nimport { useLoading } from \"../../hooks/useLoading\";\nimport HintComponent from \"./utils/HintComponent\";\n\ninterface GenericNodeProps extends NodeProps {\n  data: GenericNodeData;\n  id: string;\n  selected: boolean;\n}\n\ntype FileChoice = \"url\" | \"upload\";\n\nconst fileChoices: Option<FileChoice>[] = [\n  {\n    name: \"URL\",\n    icon: <FaLink />,\n    value: \"url\",\n  },\n  {\n    name: \"Upload\",\n    icon: <FaFileAlt />,\n    value: \"upload\",\n  },\n];\n\nconst accept = {\n  \"video/mp4\": [\".mp4\"],\n  \"audio/mpeg\": [\".mp3\"],\n  \"image/png\": [\".png\"],\n  \"image/jpeg\": [\".jpg\", \".jpeg\"],\n  \"image/gif\": [\".gif\"],\n  \"text/plain\": [\".txt\"],\n  \"application/pdf\": [\".pdf\"],\n  \"model/gltf-binary\": [\".glb\"],\n  \"model/gltf+json\": [\".gltf\"],\n  \"model/gltf\": [\".gltf\"],\n  \"model/obj\": [\".obj\"],\n};\n\nconst FileUploadNode = ({ data, id }: GenericNodeProps) => {\n  const { onUpdateNodeData } = useContext(NodeContext);\n\n  const { t } = useTranslation(\"flow\");\n  const [files, setFiles] = useState<File[] | null>(null);\n  const updateNodeInternals = useUpdateNodeInternals();\n  const [isPlaying, setIsPlaying] = useIsPlaying();\n  const [collapsed, setCollapsed] = useState<boolean>(\n    data.outputData ? true : false,\n  );\n  const [showLogs, setShowLogs] = useState<boolean>(\n    data.outputData ? 
true : false,\n  );\n  const [url, setUrl] = useState<string>(\"\");\n\n  const [fileChoiceSelected, setFileChoiceSelected] =\n    useState<FileChoice | null>(data?.fileChoiceSelected);\n\n  const [isLoading, startLoadingWith] = useLoading();\n\n  useEffect(() => {\n    if (data.isDone) {\n      setIsPlaying(false);\n    }\n    updateNodeInternals(id);\n  }, [data.lastRun, data.outputData]);\n\n  async function uploadFile(files: File[]) {\n    const filename = files[0].name;\n    const urls = await getUploadAndDownloadUrl(filename);\n    const uploadData = urls.upload_data;\n    await uploadWithS3Link(uploadData, files[0]);\n    return urls;\n  }\n\n  async function processFiles(files: File[]) {\n    if (!files || files.length === 0) return;\n\n    let urls: any;\n    let uploadError: boolean = false;\n\n    try {\n      urls = await startLoadingWith(uploadFile, files);\n    } catch (error) {\n      toastErrorMessage(t(\"error.upload_failed\"));\n      uploadError = true;\n      setFiles(null);\n    } finally {\n      if (uploadError) return;\n    }\n\n    const outputType = getOutputExtension(files[0].name);\n\n    onUpdateNodeData(id, {\n      ...data,\n      fileUrl: urls.download_link,\n      outputData: urls.download_link,\n      lastRun: new Date(),\n      config: {\n        ...data.config,\n        outputType,\n      },\n    });\n\n    setShowLogs(true);\n    setCollapsed(true);\n  }\n\n  const handleAcceptFiles = (files: File[]) => {\n    if (files) {\n      setFiles(files);\n      processFiles(files);\n    }\n  };\n\n  const handleChangeHandlePosition = (\n    newPosition: Position,\n    handleId: string,\n  ) => {\n    onUpdateNodeData(id, {\n      ...data,\n      handles: {\n        ...data.handles,\n        [handleId]: newPosition,\n      },\n    });\n    updateNodeInternals(id);\n  };\n\n  const handlePlayClick = () => {\n    setIsPlaying(true);\n  };\n\n  const toggleCollapsed = () => {\n    setCollapsed(!collapsed);\n  };\n\n  const handleSetFileViaURL = () => {\n    if (!url) return;\n\n    const outputType = getOutputExtension(url);\n    onUpdateNodeData(id, {\n      ...data,\n      fileUrl: url,\n      outputData: url,\n      lastRun: new Date(),\n      config: {\n        ...data.config,\n        outputType,\n      },\n    });\n\n    setShowLogs(true);\n    setCollapsed(true);\n  };\n\n  function handleFileChoiceSelected(choice: FileChoice | null) {\n    setFileChoiceSelected(choice);\n    onUpdateNodeData(id, {\n      ...data,\n      fileChoiceSelected: choice,\n    });\n  }\n\n  const hideFields = isLoading || collapsed;\n\n  return (\n    <NodeContainer>\n      <NodeHeader onDoubleClick={toggleCollapsed}>\n        <NodeIcon>\n          <FaFileAlt />\n        </NodeIcon>\n        <NodeTitle>{data.appearance?.customName ?? t(\"File\")}</NodeTitle>\n        <HandleWrapper\n          id={generateIdForHandle(0)}\n          position={\n            !!data?.handles && data.handles[id]\n              ? 
data.handles[id]\n              : Position.Right\n          }\n          isOutput\n          onChangeHandlePosition={handleChangeHandlePosition}\n        />\n        <NodePlayButton\n          isPlaying={isPlaying}\n          hasRun={!!data.lastRun}\n          onClick={handlePlayClick}\n          nodeName={data.name}\n        />\n      </NodeHeader>\n      <NodeBand color={data.appearance?.color} />\n\n      {!hideFields && (\n        <div className=\"p-2 text-3xl\">\n          <OptionSelector\n            onSelectOption={(option) => handleFileChoiceSelected(option.value)}\n            options={fileChoices}\n            selectedOption={fileChoiceSelected}\n          />\n        </div>\n      )}\n\n      {isLoading && (\n        <div className=\"my-2 flex w-full items-center justify-center p-2 text-center text-2xl text-teal-400\">\n          <LoadingSpinner />\n        </div>\n      )}\n\n      {!hideFields && fileChoiceSelected === \"upload\" && (\n        <div className=\"px-5 py-3\">\n          <FileDropZone\n            accept={accept}\n            onAcceptFile={handleAcceptFiles}\n            selectedFiles={files}\n            oneFile\n          />\n        </div>\n      )}\n\n      {!hideFields && fileChoiceSelected === \"url\" && (\n        <div className=\"text-slate-200\">\n          <InputWithButton\n            buttonText={t(\"Load\") ?? \"\"}\n            inputPlaceholder={t(\"EnterUrlToDesiredFile\") ?? \"\"}\n            onInputChange={setUrl}\n            value={url}\n            onButtonClick={handleSetFileViaURL}\n            inputClassName=\"text-center\"\n            buttonClassName=\"rounded-lg bg-sky-500 p-2 hover:bg-sky-400\"\n          />\n        </div>\n      )}\n      <HintComponent hintId=\"file-upload\" textVar=\"TextDocumentHint\" />\n      <NodeOutput\n        showLogs={showLogs}\n        onClickOutput={() => setShowLogs(!showLogs)}\n        data={data}\n      />\n    </NodeContainer>\n  );\n};\n\nexport default FileUploadNode;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/GenericNode.tsx",
    "content": "import React, { useState, useEffect, useContext, useMemo, FC } from \"react\";\nimport { Position, NodeProps, useUpdateNodeInternals } from \"reactflow\";\nimport {\n  NodeContainer,\n  NodeHeader,\n  NodeIcon,\n  NodeTitle,\n  NodeContent,\n  NodeForm,\n  NodeBand,\n} from \"./Node.styles\";\nimport useHandleShowOutput from \"../../hooks/useHandleShowOutput\";\nimport {\n  generateIdForHandles,\n  getKeyFromHandleName,\n  getTargetHandleKey,\n} from \"../../utils/flowUtils\";\nimport { getIconComponent } from \"./utils/NodeIcons\";\nimport {\n  Field,\n  NodeConfig,\n  NodeSubConfig,\n} from \"../../nodes-configuration/types\";\nimport { NodeContext } from \"../../providers/NodeProvider\";\nimport NodePlayButton from \"./node-button/NodePlayButton\";\nimport { useTranslation } from \"react-i18next\";\nimport { useIsPlaying } from \"../../hooks/useIsPlaying\";\nimport { GenericNodeData, NodeData } from \"./types/node\";\nimport HandleWrapper from \"../handles/HandleWrapper\";\nimport useHandlePositions from \"../../hooks/useHandlePositions\";\nimport { useFormFields } from \"../../hooks/useFormFields\";\nimport NodeOutput from \"./node-output/NodeOutput\";\nimport { getDynamicConfig } from \"../../api/nodes\";\nimport {\n  getAdequateConfigFromDiscriminators,\n  getDefaultOptions,\n  getNbInputs,\n  getNbOutputs,\n  hasDiscriminatorChanged,\n} from \"../../utils/nodeConfigurationUtils\";\nimport { evaluateCondition } from \"../../utils/evaluateConditions\";\n\ninterface GenericNodeProps extends NodeProps {\n  data: GenericNodeData;\n  id: string;\n  selected: boolean;\n  nodeFields?: Field[];\n  iconComponent?: FC;\n}\n\nconst GenericNode: React.FC<GenericNodeProps> = React.memo(\n  ({ data, id, selected, nodeFields, iconComponent }) => {\n    const { t } = useTranslation(\"flow\");\n\n    const {\n      hasParent,\n      showOnlyOutput,\n      onUpdateNodeData,\n      getIncomingEdges,\n      overrideConfigForNode,\n      findNode,\n      removeEdgesByIds,\n    } = useContext(NodeContext);\n\n    const updateNodeInternals = useUpdateNodeInternals();\n\n    const nbOutput = getNbOutputs(data);\n\n    const [collapsed, setCollapsed] = useState<boolean>(false);\n\n    const [showLogs, setShowLogs] = useState<boolean>(\n      data.config?.defaultHideOutput == null\n        ? true\n        : !data.config.defaultHideOutput,\n    );\n    const [fields, setFields] = useState<Field[]>(\n      !!data.config?.fields\n        ? data.config.fields\n        : !!nodeFields\n          ? 
nodeFields\n          : [],\n    );\n\n    useEffect(() => {\n      if (data.isDone) setIsPlaying(false);\n\n      if (!data.config.defaultHideOutput) {\n        setShowLogs(true);\n      } else {\n        setShowLogs(false);\n      }\n    }, [data.lastRun, data.outputData]);\n\n    useEffect(() => {\n      if (!data.config?.fields?.some((field) => field.hasHandle)) return;\n\n      const fieldsToNullify: any = {};\n\n      const edgesKeys = getIncomingEdges(id)?.map((edge) =>\n        getTargetHandleKey(edge),\n      );\n\n      const fieldsWithValidCondition = fields.filter((field) => {\n        if (field?.condition) {\n          const condition = field.condition;\n          return evaluateCondition(condition, data);\n        }\n        return true;\n      });\n\n      edgesKeys?.forEach((key) => {\n        fieldsToNullify[fieldsWithValidCondition[key]?.name] = undefined;\n      });\n\n      const fieldsUpdated = fields.map((field) => {\n        if (field.name in fieldsToNullify) {\n          field.isLinked = true;\n        } else {\n          field.isLinked = false;\n        }\n        return field;\n      });\n\n      const currentNodeData = findNode(id)?.data;\n\n      if (!!currentNodeData) {\n        onUpdateNodeData(id, {\n          ...currentNodeData,\n          ...fieldsToNullify,\n          config: {\n            ...currentNodeData.config,\n            fields: fieldsUpdated,\n            inputNames: fieldsWithValidCondition.map((field) => field.name),\n          },\n        });\n      }\n    }, [getIncomingEdges(id)?.length]);\n\n    const outputHandleIds = useMemo(\n      () => generateIdForHandles(nbOutput, true),\n      [nbOutput],\n    );\n\n    const nbInput = useMemo(\n      () => getNbInputs(data, fields),\n      [data?.config?.inputNames],\n    );\n\n    useEffect(() => {\n      const incomingEdges = getIncomingEdges(id) || [];\n      const incomingEdgeKeys = incomingEdges.map((edge) =>\n        getKeyFromHandleName(edge.targetHandle ?? \"\"),\n      );\n\n      const keysToRemove = incomingEdgeKeys.filter((key) => +key >= nbInput);\n\n      const edgesIdToRemove = incomingEdges\n        .filter((edge) =>\n          keysToRemove.includes(getKeyFromHandleName(edge.targetHandle ?? 
\"\")),\n        )\n        .map((edge) => edge.id);\n\n      if (edgesIdToRemove.length) removeEdgesByIds(edgesIdToRemove);\n    }, [data?.config?.inputNames]);\n\n    const [isPlaying, setIsPlaying] = useIsPlaying();\n\n    useHandleShowOutput({\n      showOnlyOutput,\n      setCollapsed: setCollapsed,\n      setShowLogs: setShowLogs,\n    });\n\n    const formFields = useFormFields(\n      data,\n      id,\n      handleNodeFieldChange,\n      setDefaultOptions,\n      hasParent,\n      {\n        showHandles: data.config.showHandlesNames,\n        showLabels: data.config.showHandlesNames,\n        showOnlyConnectedFields: collapsed,\n      },\n      handleNodeDataChange,\n    );\n\n    const { allInputHandleIds, allHandlePositions } = useHandlePositions(\n      data,\n      nbInput,\n      outputHandleIds,\n    );\n\n    const toggleCollapsed = () => {\n      setCollapsed(!collapsed);\n    };\n\n    const handlePlayClick = () => {\n      setIsPlaying(true);\n    };\n\n    function handleNodeDataChange(data: GenericNodeData) {\n      onUpdateNodeData(id, data);\n      updateNodeInternals(id);\n      if (data.config.fields) {\n        setFields(data.config.fields);\n      }\n    }\n\n    function handleNodeFieldChange(\n      fieldName: string,\n      value: any,\n      target?: any,\n    ) {\n      const selectionStart = target?.selectionStart;\n      const selectionEnd = target?.selectionEnd;\n\n      const newNodeData = {\n        ...data,\n        [fieldName]: value,\n      };\n\n      onUpdateNodeData(id, newNodeData);\n\n      if (hasDiscriminatorChanged(fieldName, newNodeData)) {\n        updateConfigWithDiscriminator(newNodeData);\n      }\n\n      if (!!target) {\n        requestAnimationFrame(() => {\n          target.selectionStart = selectionStart;\n          target.selectionEnd = selectionEnd;\n        });\n      }\n\n      if (fieldName === \"config\") {\n        updateNodeInternals(id);\n      }\n    }\n\n    function updateConfigWithDiscriminator(nodeData: NodeData) {\n      const newConfig = getAdequateConfigFromDiscriminators(nodeData)?.config;\n      if (!newConfig) return;\n\n      if (!!newConfig) {\n        overrideConfigForNode(id, newConfig, nodeData);\n        setFields(newConfig.fields);\n      }\n    }\n\n    function setDefaultOptions() {\n      const defaultOptions: any = getDefaultOptions(data.config.fields, data);\n\n      onUpdateNodeData(id, {\n        ...data,\n        ...defaultOptions,\n      });\n    }\n\n    function handleChangeHandlePosition(\n      newPosition: Position,\n      handleId: string,\n    ) {\n      onUpdateNodeData(id, {\n        ...data,\n        handles: {\n          ...data.handles,\n          [handleId]: newPosition,\n        },\n      });\n      updateNodeInternals(id);\n    }\n\n    function updateConfig(config: NodeConfig) {\n      const defaultOptions: any = getDefaultOptions(config.fields, data);\n\n      onUpdateNodeData(id, {\n        ...data,\n        ...defaultOptions,\n        config: {\n          ...config,\n          isDynamicallyGenerated: false,\n        },\n      });\n\n      setFields(config.fields);\n    }\n\n    function updateConfigVariant(variantConf: NodeSubConfig) {\n      const defaultConfigEnabled = variantConf.subConfigurations[0].config;\n      const discriminators = variantConf.subConfigurations[0].discriminators;\n\n      const defaultFields = defaultConfigEnabled.fields;\n      const defaultOptions: any = getDefaultOptions(defaultFields, data);\n\n      onUpdateNodeData(id, {\n        ...data,\n        
...defaultOptions,\n        ...discriminators,\n        config: {\n          ...defaultConfigEnabled,\n          isDynamicallyGenerated: false,\n        },\n        variantConfig: {\n          ...variantConf,\n        },\n      });\n\n      setFields(defaultFields);\n    }\n\n    async function handleGetDynamicConfig() {\n      if (data.config.processorType == null) return;\n\n      const newConfig = await getDynamicConfig(data.config.processorType, data);\n\n      if (newConfig.subConfigurations != null) {\n        updateConfigVariant(newConfig);\n      } else {\n        updateConfig(newConfig);\n      }\n    }\n\n    const NodeIconComponent = !!iconComponent\n      ? iconComponent\n      : getIconComponent(data.config.icon);\n\n    const displayInputs =\n      data.config.hasInputHandle && !data.config.showHandlesNames;\n\n    const hideNodeParams =\n      (hasParent(id) && data.config.hideFieldsIfParent) || collapsed;\n\n    return (\n      <NodeContainer key={id} className={`flex h-full w-full flex-col`}>\n        <NodeHeader onDoubleClick={toggleCollapsed}>\n          {displayInputs && (\n            <>\n              {allInputHandleIds.map((id) => {\n                return (\n                  <HandleWrapper\n                    key={id}\n                    id={id}\n                    position={\n                      !!data?.handles && data.handles[id]\n                        ? data.handles[id]\n                        : Position.Left\n                    }\n                    linkedHandlePositions={allHandlePositions}\n                    onChangeHandlePosition={handleChangeHandlePosition}\n                  />\n                );\n              })}\n            </>\n          )}\n          <NodeIcon>{NodeIconComponent && <NodeIconComponent />}</NodeIcon>\n          <NodeTitle>\n            {data.appearance?.customName ?? t(data.config.nodeName)}\n          </NodeTitle>\n          {outputHandleIds.map((id, index) => {\n            return (\n              <HandleWrapper\n                key={id}\n                id={id}\n                position={\n                  !!data?.handles && data.handles[id]\n                    ? data.handles[id]\n                    : Position.Right\n                }\n                linkedHandlePositions={allHandlePositions}\n                onChangeHandlePosition={handleChangeHandlePosition}\n                data-tooltip-id={`app-tooltip`}\n                data-tooltip-content={\n                  data.outputData ? data.outputData[index] : \"\"\n                }\n                isOutput\n              />\n            );\n          })}\n          <NodePlayButton\n            isPlaying={isPlaying}\n            hasRun={!!data.lastRun}\n            onClick={handlePlayClick}\n            nodeName={data.name}\n          />\n        </NodeHeader>\n        <NodeBand\n          selected={selected}\n          color={data.appearance?.color}\n          className={`${selected ? 
\"animate-pulse\" : \"\"}`}\n        />\n        {(!hideNodeParams || data.config.showHandlesNames) && (\n          <NodeContent>\n            <NodeForm>{formFields}</NodeForm>\n            {data.config.isDynamicallyGenerated && (\n              <button\n                className={`rounded-lg bg-sky-500 p-2 hover:bg-sky-400`}\n                onClick={handleGetDynamicConfig}\n              >\n                {t(\"Validate\")}\n              </button>\n            )}\n          </NodeContent>\n        )}\n\n        <NodeOutput\n          showLogs={showLogs}\n          onClickOutput={() => setShowLogs(!showLogs)}\n          data={data}\n        />\n      </NodeContainer>\n    );\n  },\n  propsAreEqual,\n);\n\nfunction propsAreEqual(\n  prevProps: GenericNodeProps,\n  nextProps: GenericNodeProps,\n) {\n  if (\n    prevProps.selected !== nextProps.selected ||\n    prevProps.id !== nextProps.id\n  ) {\n    return false;\n  }\n\n  for (let key in prevProps.data) {\n    if (\n      key !== \"x\" &&\n      key !== \"y\" &&\n      prevProps.data[key] !== nextProps.data[key]\n    ) {\n      return false;\n    }\n  }\n\n  for (let key in nextProps.data) {\n    if (\n      key !== \"x\" &&\n      key !== \"y\" &&\n      nextProps.data[key] !== prevProps.data[key]\n    ) {\n      return false;\n    }\n  }\n\n  return true;\n}\n\nexport default GenericNode;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/Node.styles.ts",
    "content": "import styled, { css, keyframes } from \"styled-components\";\nimport ReactFlow, { MiniMap, Controls, Panel, Handle } from \"reactflow\";\n\nimport { createGlobalStyle } from \"styled-components\";\nimport { darken } from \"polished\";\nimport { FiCopy } from \"react-icons/fi\";\nimport { FaSpinner } from \"react-icons/fa\";\n\nexport const GlobalStyle = createGlobalStyle`\n  body {\n    font-family: 'Roboto', sans-serif;\n  }\n`;\n\nexport const NodeHeader = styled.div`\n  display: flex;\n  align-items: center;\n  justify-content: space-between;\n  font-size: 1.4em;\n  min-height: 70px;\n  background-color: ${({ theme }) => theme.nodeBg};\n  padding-top: 0.1em;\n  padding-bottom: 0.1em;\n  padding-left: 1em;\n  padding-right: 1em;\n  border-top-left-radius: 8px;\n  border-top-right-radius: 8px;\n  cursor: pointer;\n  color: ${({ theme }) => theme.text};\n  transition: all 0.3s ease;\n`;\n\nexport const NodeBand = styled.div<{ selected?: boolean; color?: string }>`\n  padding: 2px;\n  overflow: hidden;\n  transition: height 0.2s ease-out background 0.3s ease;\n  background: ${({ theme, selected, color }) =>\n    color ? color : selected ? theme.accentSelected : theme.accent};\n`;\n\nexport const NodeTitle = styled.div`\n  font-weight: 600;\n  color: ${({ theme }) => theme.text};\n`;\n\nexport const NodeContent = styled.div.attrs({\n  className: \"flex flex-col h-auto w-full flex-grow justify-center p-4\",\n})`\n  color: ${({ theme }) => theme.text};\n`;\n\nexport const NodeForm = styled.div`\n  display: flex;\n  height: 100%;\n  width: 100%;\n  flex-direction: column;\n  gap: 8px;\n`;\n\nexport const NodeLabel = styled.label``;\n\nexport const StyledNodeTextarea = styled.textarea<{ withMinHeight?: boolean }>`\n  padding: 12px 24px;\n  border: none;\n  border-radius: 8px;\n  font-size: 1.1em;\n  line-height: 1.5em;\n  background-color: ${({ theme }) => theme.nodeInputBg};\n  box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);\n  color: ${({ theme }) => theme.text};\n  resize: vertical;\n  min-height: ${({ withMinHeight }) => (withMinHeight ? \"8rem\" : undefined)};\n  width: 100%;\n  height: auto;\n  transition:\n    box-shadow 0.3s ease-in-out,\n    background-color 0.3s ease;\n\n  &:hover,\n  &:focus {\n    box-shadow: 0 4px 6px rgba(0, 0, 0, 0.2);\n  }\n`;\n\nexport const NodeIcon = styled.div`\n  display: flex;\n  justify-content: center;\n  align-items: center;\n  height: 100%;\n  color: ${({ theme }) => theme.text};\n  max-width: 1.3em;\n  font-size: 1.3em;\n`;\n\nexport const NodeContainer = styled.div<{ width?: number }>`\n  width: 35em;\n\n  background: ${({ theme }) => theme.nodeGradientBg};\n  background-color: ${({ theme }) => theme.bg};\n  box-shadow: ${({ theme }) => theme.boxShadow};\n  border-radius: 8px;\n  transition: all 0.3s ease;\n`;\n\nexport const NodeLogsText = styled.p`\n  font-size: 1em;\n  margin: 0;\n  color: ${({ theme }) => theme.text};\n`;\n\nexport const NodeLogs = styled.div<{ showLogs: boolean; noPadding?: boolean }>`\n  border-radius: 0 0 8px 8px;\n  font-size: 1.1em;\n  line-height: 1.4em;\n  padding: ${({ noPadding }) => (noPadding ? 
\"0px\" : \"10px 16px\")};\n  overflow: hidden;\n  word-break: break-word;\n  transition: height 0.2s ease-out background 0.3s ease;\n  background: ${({ theme }) => theme.outputBg};\n  color: ${({ theme }) => theme.accentText};\n  cursor: pointer;\n  max-height: 700px;\n  overflow-y: auto;\n  overflow-wrap: break-word;\n`;\n\nexport const OptionSelector = styled.div`\n  display: flex;\n  flex-direction: row;\n  justify-content: space-around;\n  align-items: center;\n  width: 100%;\n  height: fit-content;\n  border: 2px solid ${({ theme }) => theme.accent};\n  border-radius: 4px;\n  overflow: hidden;\n  background-color: ${({ theme }) => theme.bg};\n  box-shadow: 0px 0px 0px 1px rgba(255, 255, 255, 0.1);\n  padding: 3px;\n  gap: 5px;\n`;\n\nexport const OptionButton = styled.button<{ selected: boolean }>`\n  flex-grow: 1;\n  padding: 10px 10px;\n  font-size: 1.1rem;\n  background: ${({ selected, theme }) =>\n    selected ? theme.optionButtonBgSelected : null};\n  color: ${({ selected, theme }) =>\n    selected ? theme.optionButtonColorSelected : theme.optionButtonColor};\n  border: none;\n  border-radius: 4px;\n  cursor: pointer;\n  transition: all 0.3s ease;\n  text-align: center;\n  font-weight: bold;\n\n  &:hover {\n    background-color: ${({ selected, theme }) =>\n      selected ? theme.optionButtonBg : darken(0.1, theme.optionButtonBg)};\n    color: ${({ theme }) => theme.optionButtonColorSelected};\n  }\n`;\n\nexport const NodeSelect = styled.select`\n  padding: 10px 16px;\n  border: none;\n  border-radius: 5px;\n  font-size: 1.1em;\n  background-color: ${({ theme }) => theme.nodeInputBg};\n  box-shadow: 0 1px 1px rgba(0, 0, 0, 0.2);\n  color: ${({ theme }) => theme.text};\n  resize: vertical;\n  height: fit-content;\n  transition: all 0.3s ease;\n`;\n\nexport const NodeSelectOption = styled.option`\n  padding: 10px 16px;\n`;\n\nexport const ReactFlowStyled = styled(ReactFlow)`\n  .react-flow__attribution {\n    background: transparent;\n  }\n`;\n\nexport const MiniMapStyled = styled(MiniMap)`\n  background-color: ${(props) => props.theme.minimapBg};\n\n  .react-flow__minimap-mask {\n    fill: ${(props) => props.theme.minimapMaskBg};\n  }\n\n  .react-flow__minimap-node {\n    fill: ${(props) => props.theme.minimapMaskBg};\n    stroke: none;\n  }\n\n  @media screen and (max-width: 768px) {\n    display: none;\n  }\n`;\n\nexport const ControlsStyled = styled(Controls)`\n  button {\n    background-color: ${(props) => props.theme.controlsBg};\n    color: ${(props) => props.theme.controlsColor};\n    border-bottom: 1px solid ${(props) => props.theme.controlsBorder};\n\n    &:hover {\n      background-color: ${(props) => props.theme.controlsBgHover};\n    }\n\n    path {\n      fill: currentColor;\n    }\n  }\n`;\n\nexport const CopyButton = styled.button`\n  background-color: transparent;\n  border: none;\n  cursor: pointer;\n  display: flex;\n  align-items: center;\n  justify-content: center;\n`;\n\nexport const CopyIcon = styled(FiCopy)`\n  color: ${(props) => props.theme.controlsColor};\n\n  :hover {\n    color: #000000;\n  }\n`;\n\nexport const InputHandle = styled(Handle)<{ required?: boolean }>`\n  z-index: 45;\n  background: ${({ required }) => (required ? \"#F09686\" : \"#72c8fa\")};\n  width: 0.75em;\n  height: 0.75em;\n\n  @media (max-width: 600px) {\n    width: 1.25em;\n    height: 1.25em;\n  }\n\n  border-radius: 50%;\n  border: none;\n  box-shadow: ${({ required }) =>\n    required\n      ? 
\"0 0 10px 2px rgba(240, 150, 134, 0.5)\"\n      : \"0 0 10px 2px rgba(114, 200, 250, 0.3)\"};\n  transition:\n    background 0.3s ease,\n    box-shadow 0.3s ease;\n\n  &:hover {\n    background: #89d0fc;\n    box-shadow: 0 0 15px 7px rgba(114, 200, 250, 0.5);\n  }\n`;\n\nexport const OutputHandle = styled(Handle)`\n  z-index: 45;\n  background: rgb(224, 166, 79);\n  width: 10px;\n  height: 10px;\n  box-shadow: 0 0 10px 2px rgba(224, 166, 79, 0.3);\n  border-radius: 0;\n  border: none;\n  transition:\n    background 0.3s ease,\n    box-shadow 0.3s ease;\n\n  @media (max-width: 600px) {\n    width: 1.25em;\n    height: 1.25em;\n  }\n\n  &:hover {\n    background: rgb(234, 176, 89);\n    box-shadow: 0 0 15px 7px rgba(224, 166, 79, 0.5);\n  }\n`;\n\nconst spin = keyframes`\n  0% { transform: rotate(0deg); }\n  100% { transform: rotate(360deg); }\n`;\n\nexport const LoadingIcon = styled(FaSpinner)`\n  animation: ${spin} 1s linear infinite;\n`;\n\nexport const LoadingSpinner = styled(FaSpinner)`\n  animation: ${spin} 1s linear infinite;\n`;\n\nexport const LoadingScreenSpinner = styled.div`\n  border: 4px solid rgba(0, 0, 0, 0.1);\n  border-radius: 50%;\n  border-left-color: rgb(132, 250, 176);\n  animation: ${spin} 1s ease infinite;\n`;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/NodeHelpPopover.tsx",
    "content": "import React from \"react\";\nimport { Popover } from \"@mantine/core\";\nimport { NodeHelp, NodeHelpData } from \"./utils/NodeHelp\";\n\ntype NodeHelpPopoverProps = {\n  children: React.ReactNode;\n  showHelp: boolean;\n  data: NodeHelpData;\n  onClose: () => void;\n};\n\nfunction NodeHelpPopover({\n  children,\n  showHelp,\n  data,\n  onClose,\n}: NodeHelpPopoverProps) {\n  return (\n    <Popover\n      width={\"35em\"}\n      opened={showHelp}\n      withArrow\n      position=\"right\"\n      arrowSize={12}\n      offset={45}\n      withinPortal\n      clickOutsideEvents={[\"mouseup\", \"touchend\"]}\n      closeOnClickOutside\n      shadow=\"md\"\n      styles={{\n        dropdown: {\n          padding: 0,\n        },\n      }}\n    >\n      <Popover.Target>{children}</Popover.Target>\n      <Popover.Dropdown>\n        {data && <NodeHelp data={data} onClose={onClose} />}\n      </Popover.Dropdown>\n    </Popover>\n  );\n}\n\nexport default NodeHelpPopover;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/NodeWrapper.tsx",
    "content": "import React, { useContext, useState } from \"react\";\nimport { NodeContext } from \"../../providers/NodeProvider\";\nimport { FaCopy, FaEraser, FaQuestionCircle, FaRegCopy } from \"react-icons/fa\";\nimport ActionGroup, { Action } from \"../selectors/ActionGroup\";\nimport { MdDelete, MdEdit, MdMenuOpen } from \"react-icons/md\";\nimport { useVisibility } from \"../../providers/VisibilityProvider\";\nimport { useTranslation } from \"react-i18next\";\nimport ColorSelector from \"../selectors/ColorSelector\";\nimport { NodeHelpData } from \"./utils/NodeHelp\";\nimport NodeHelpPopover from \"./NodeHelpPopover\";\n\ntype NodeWrapperProps = {\n  children: React.ReactNode;\n  nodeId: string;\n};\n\ntype NodeActions =\n  | \"clear\"\n  | \"duplicate\"\n  | \"ref\"\n  | \"remove\"\n  | \"sidepane\"\n  | \"color\"\n  | \"name\"\n  | \"helper\";\n\nfunction NodeWrapper({ children, nodeId }: NodeWrapperProps) {\n  const { t } = useTranslation(\"flow\");\n  const { t: tHelp } = useTranslation(\"nodeHelp\");\n  const { getElement, setSidepaneActiveTab } = useVisibility();\n\n  const {\n    findNode,\n    duplicateNode,\n    removeNode,\n    clearNodeOutput,\n    setCurrentNodeIdSelected,\n    updateNodeAppearance,\n  } = useContext(NodeContext);\n\n  const currentNode = findNode(nodeId);\n\n  const currentNodeHelp = tHelp(currentNode?.data.processorType, {\n    returnObjects: true,\n  }) as NodeHelpData;\n\n  const currentNodeColor = currentNode?.data?.appearance?.color;\n\n  const currentNodeName =\n    currentNode?.data?.appearance?.customName ??\n    t(currentNode?.data?.config?.nodeName);\n\n  const currentNodeIsMissingFields =\n    currentNode?.data?.missingFields?.length > 0;\n\n  const [showActions, setShowActions] = useState(false);\n  const [showColors, setShowColors] = useState(false);\n  const [showTextField, setShowTextField] = useState(false);\n  const [showHelp, setShowHelp] = useState(false);\n\n  let hideActionsTimeout: ReturnType<typeof setTimeout>;\n\n  const hideActionsWithDelay = () => {\n    hideActionsTimeout = setTimeout(() => {\n      setShowColors(false);\n      setShowTextField(false);\n      setShowActions(false);\n    }, 2000);\n  };\n\n  const clearHideActionsTimeout = () => {\n    if (hideActionsTimeout) {\n      clearTimeout(hideActionsTimeout);\n    }\n  };\n\n  function handleOpenSidepane(): void {\n    getElement(\"sidebar\").show();\n    setSidepaneActiveTab(\"current_node\");\n  }\n\n  function handleChangeNodeColor(color: string): void {\n    if (color === \"transparent\") {\n      updateNodeAppearance(nodeId, { color: undefined });\n    } else {\n      updateNodeAppearance(nodeId, { color });\n    }\n  }\n\n  function toggleHelp(): void {\n    setShowHelp(!showHelp);\n  }\n\n  const actions: Action<NodeActions>[] = [\n    {\n      icon: (\n        <div className=\"flex h-7 w-7 items-center justify-center\">\n          <div\n            className=\"h-6 w-6 rounded-full\"\n            style={{\n              backgroundColor: currentNodeColor,\n              border: currentNodeColor ? 
\"none\" : \"solid white 1px\",\n            }}\n          ></div>\n        </div>\n      ),\n      name: t(\"NodeColor\"),\n      value: \"color\",\n      tooltipPosition: \"left\",\n      onClick: () => {\n        setShowColors(!showColors);\n        setShowTextField(false);\n      },\n    },\n    {\n      icon: <MdEdit />,\n      name: t(\"ChangeName\"),\n      value: \"name\",\n      onClick: () => {\n        setShowTextField(!showTextField);\n        setShowColors(false);\n      },\n    },\n    {\n      icon: <FaCopy />,\n      name: t(\"Duplicate\"),\n      value: \"duplicate\",\n      onClick: () => duplicateNode(nodeId),\n    },\n    // {\n    //   icon: <FaRegCopy />,\n    //   name: t(\"CreateRef\"),\n    //   value: \"ref\",\n    //   onClick: () => createNodeRef(nodeId),\n    // },\n    {\n      icon: <MdMenuOpen />,\n      name: t(\"OpeninSidepane\"),\n      value: \"sidepane\",\n      onClick: () => handleOpenSidepane(),\n    },\n    {\n      icon: <FaEraser />,\n      name: t(\"ClearOutput\"),\n      value: \"clear\",\n      onClick: () => clearNodeOutput(nodeId),\n    },\n    {\n      icon: <FaQuestionCircle />,\n      name: t(\"Help\"),\n      value: \"helper\",\n      onClick: () => toggleHelp(),\n    },\n    {\n      icon: <MdDelete />,\n      name: t(\"RemoveNode\"),\n      value: \"remove\",\n      onClick: () => {\n        setCurrentNodeIdSelected(\"\");\n        removeNode(nodeId);\n      },\n      hoverColor: \"text-red-400\",\n    },\n  ];\n\n  return (\n    <NodeHelpPopover\n      showHelp={showHelp}\n      data={currentNodeHelp}\n      onClose={() => setShowHelp(false)}\n    >\n      <div\n        className={`group relative flex h-full w-full rounded-lg p-1 transition-all duration-300 ease-in-out\n        ${currentNodeIsMissingFields ? \"border-2 border-dashed border-red-500/80\" : \"\"}`}\n        onClick={() => {\n          setShowActions(true);\n          setCurrentNodeIdSelected(nodeId);\n        }}\n        onMouseLeave={() => {\n          hideActionsWithDelay();\n        }}\n        onMouseEnter={clearHideActionsTimeout}\n      >\n        {children}\n        <div\n          className={`nodrag absolute right-1/2 top-0 flex -translate-y-14 translate-x-1/2 transition-all duration-300 ease-in-out  ${showActions ? \"opacity-100\" : \"pointer-events-none opacity-0\"}`}\n          onMouseEnter={clearHideActionsTimeout}\n        >\n          <span className=\"text-3xl\">\n            <ActionGroup actions={actions} showIcon />\n          </span>\n          <div\n            className={`absolute flex -translate-x-1/3 -translate-y-10 items-center justify-center space-x-2 rounded-full bg-slate-200/10 p-2 ${showColors ? \"opacity-100 \" : \"pointer-events-none opacity-0\"} transition-all duration-300 ease-in-out `}\n          >\n            <ColorSelector onChangeColor={handleChangeNodeColor} />\n          </div>\n          <div\n            className={`absolute flex -translate-y-20 translate-x-5 items-center justify-center  ${showTextField ? 
\"opacity-100 \" : \"pointer-events-none opacity-0\"} transition-all duration-300 ease-in-out `}\n          >\n            <div className=\"flex flex-col items-center justify-center rounded-lg bg-slate-200/10 p-2 text-center\">\n              <p> {t(\"EnterCustomName\")}</p>\n              <input\n                className=\"bg-zinc-900/90 px-1 text-center\"\n                value={currentNodeName}\n                onChange={(e) =>\n                  updateNodeAppearance(nodeId, { customName: e.target.value })\n                }\n                onKeyDown={(e) => {\n                  if (e.key === \"Enter\") {\n                    setShowTextField(false);\n                  }\n                }}\n              />\n            </div>\n          </div>\n        </div>\n      </div>\n    </NodeHelpPopover>\n  );\n}\n\nexport default NodeWrapper;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/ReplicateNode.tsx",
    "content": "import { useContext, useEffect, useMemo, useRef, useState } from \"react\";\nimport { useTranslation } from \"react-i18next\";\nimport { NodeProps } from \"reactflow\";\nimport { Field } from \"../../nodes-configuration/types\";\nimport { LoadingSpinner, NodeContainer } from \"./Node.styles\";\nimport { NodeData } from \"./types/node\";\nimport InputWithButton from \"../inputs/InputWithButton\";\nimport { getModelConfig } from \"../../api/replicateModels\";\nimport withCache from \"../../api/cache/withCache\";\nimport { toastErrorMessage } from \"../../utils/toastUtils\";\nimport GenericNode from \"./GenericNode\";\nimport { NodeContext } from \"../../providers/NodeProvider\";\nimport { getIconComponent } from \"./utils/NodeIcons\";\nimport SelectModelPopup from \"../popups/select-model-popup/SelectModelPopup\";\nimport {\n  getSchemaFromConfig,\n  convertOpenAPISchemaToNodeConfig,\n} from \"../../utils/openAPIUtils\";\n\ninterface ReplicateNodeData extends NodeData {\n  schema: any;\n}\n\ninterface DynamicFieldsProps extends NodeProps {\n  data: ReplicateNodeData;\n}\n\nexport default function ReplicateNode({\n  data,\n  id,\n  selected,\n  isConnectable,\n  type,\n  xPos,\n  yPos,\n  zIndex,\n}: DynamicFieldsProps) {\n  const { t } = useTranslation(\"flow\");\n  const [modelInput, setModelInput] = useState<string>(\"\");\n\n  const modelRef = useRef<string | undefined>(\n    !!data.config?.nodeName ? data.config.nodeName : undefined,\n  );\n\n  const fieldsRef = useRef<Field[]>(\n    !!data.config?.fields ? data.config.fields : [],\n  );\n\n  const [showPopup, setShowPopup] = useState(false);\n\n  const { onUpdateNodeData, findNode } = useContext(NodeContext);\n\n  function formatName(name: string) {\n    return name\n      .replace(/([A-Z])/g, \" $1\")\n      .replace(/[_\\-]+/g, \" \")\n      .trim()\n      .split(\" \")\n      .map((word) => word.charAt(0).toUpperCase() + word.slice(1))\n      .join(\" \");\n  }\n\n  function arrangeOldConfig() {\n    onUpdateNodeData(id, {\n      ...data,\n      nodeLoaded: true,\n      model: data.config.nodeName,\n      config: {\n        ...data.config,\n        showHandlesNames: true,\n        nodeName: data.config?.nodeName.split(\":\")[0],\n      },\n    });\n  }\n\n  useEffect(() => {\n    if (\n      !!data?.config?.fields &&\n      !!data?.config?.nodeName &&\n      !data.nodeLoaded\n    ) {\n      arrangeOldConfig();\n    }\n  });\n\n  useEffect(() => {\n    async function configureNode() {\n      if (!modelRef.current) return;\n      let response;\n      let fields: Field[] = [];\n      try {\n        response = await withCache(\n          getModelConfig,\n          modelRef.current,\n          data.processorType,\n        );\n        const inputSchema = getSchemaFromConfig(response, \"Input\");\n        fields = convertOpenAPISchemaToNodeConfig(inputSchema, response);\n      } catch (error) {\n        toastErrorMessage(\n          `Error fetching configuration for following model : \"${modelRef.current}\". 
\\n\\n Here's a valid model name as an example : fofr/become-image `,\n        );\n      }\n      if (!response) return;\n\n      const modelId = response.modelId;\n      modelRef.current = modelRef.current + \":\" + modelId;\n\n      fieldsRef.current = fields;\n\n      let modelNameToDisplay = getModelNameToDisplay();\n\n      const newFieldData: any = getNewFieldData(fieldsRef.current);\n\n      onUpdateNodeData(id, {\n        ...data,\n        ...newFieldData,\n        model: modelRef.current,\n        config: {\n          ...data.config,\n          fields: fieldsRef.current,\n          inputNames: fields.map((field) => field.name),\n          showHandlesNames: true,\n          nodeName: modelNameToDisplay,\n        },\n        nodeLoaded: true,\n      });\n    }\n\n    if (fieldsRef.current.length > 0 || !modelRef.current) return;\n\n    configureNode();\n  }, [modelRef.current]);\n\n  useEffect(() => {\n    if (!fieldsRef.current || fieldsRef.current.length === 0) return;\n\n    const newFieldData: any = getNewFieldData(fieldsRef.current);\n\n    const currentNodeData = findNode(id)?.data;\n\n    onUpdateNodeData(id, {\n      ...currentNodeData,\n      ...newFieldData,\n      config: {\n        ...currentNodeData.config,\n        inputNames: fieldsRef.current.map((field) => field.name),\n        fields: fieldsRef.current,\n      },\n    });\n  }, [fieldsRef.current]);\n\n  function getModelNameToDisplay() {\n    let modelNameToDisplay = modelRef.current?.includes(\":\")\n      ? modelRef.current.split(\":\")[0]\n      : modelRef.current;\n\n    if (!modelNameToDisplay) return;\n\n    modelNameToDisplay = modelNameToDisplay.includes(\"/\")\n      ? formatName(modelNameToDisplay.split(\"/\")[1])\n      : modelNameToDisplay;\n\n    return modelNameToDisplay;\n  }\n\n  function getNewFieldData(fields: Field[]) {\n    const newFieldData: any = {};\n\n    fields.forEach((field) => {\n      if (field.defaultValue != null) {\n        if (data[field.name] == null && !field.isLinked) {\n          newFieldData[field.name] = field.defaultValue;\n        }\n      }\n    });\n    return newFieldData;\n  }\n\n  function handleClosePopup() {\n    setShowPopup(false);\n  }\n\n  const handleButtonClick = () => {\n    setShowPopup(!showPopup);\n  };\n\n  const handleValidate = (model: any) => {\n    modelRef.current = model;\n    setShowPopup(!showPopup);\n  };\n\n  function handleLoadModel() {\n    modelRef.current = modelInput;\n  }\n\n  const NodeIconComponent = getIconComponent(\"ReplicateLogo\");\n\n  return !data.nodeLoaded ? (\n    <NodeContainer\n      key={id}\n      className={`flex h-full w-full flex-col items-center justify-center px-4 py-5 text-slate-100`}\n    >\n      {!modelRef.current ? (\n        <div className=\"flex w-full flex-col items-center justify-center space-y-3\">\n          <div className=\"flex w-full flex-row items-center\">\n            <button\n              className=\"w-full rounded-2xl bg-slate-600 px-3 py-3 hover:bg-slate-400\"\n              onClick={handleButtonClick}\n            >\n              {t(\"ClickToSelectModel\")}\n            </button>\n            {showPopup && (\n              <SelectModelPopup\n                show={showPopup}\n                onClose={handleClosePopup}\n                onValidate={handleValidate}\n              />\n            )}\n          </div>\n          <p> {t(\"Or\")} </p>\n          <div className=\"w-full text-slate-200\">\n            <InputWithButton\n              buttonText={t(\"Load\") ?? 
\"\"}\n              inputPlaceholder={t(\"EnterModelNameDirectly\") ?? \"\"}\n              value={modelInput}\n              onInputChange={setModelInput}\n              onButtonClick={handleLoadModel}\n              inputClassName=\"text-center\"\n              buttonClassName=\"rounded-lg bg-sky-500 p-2 hover:bg-sky-400\"\n            />\n          </div>\n        </div>\n      ) : (\n        <>\n          <LoadingSpinner />\n        </>\n      )}\n    </NodeContainer>\n  ) : (\n    <GenericNode\n      data={data}\n      id={id}\n      selected={selected}\n      type={type}\n      zIndex={zIndex}\n      isConnectable={isConnectable}\n      xPos={xPos}\n      yPos={yPos}\n      dragging={false}\n      nodeFields={fieldsRef.current}\n      iconComponent={NodeIconComponent}\n    />\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/nodes/TransitionNode.tsx",
    "content": "import React, { useContext, useEffect, useMemo, useState } from \"react\";\nimport { Position, NodeProps, useUpdateNodeInternals } from \"reactflow\";\nimport { generateIdForHandle } from \"../../utils/flowUtils\";\nimport { NodeContext } from \"../../providers/NodeProvider\";\nimport { useIsPlaying } from \"../../hooks/useIsPlaying\";\nimport NodePlayButton from \"./node-button/NodePlayButton\";\nimport HandleWrapper from \"../handles/HandleWrapper\";\nimport useHandlePositions from \"../../hooks/useHandlePositions\";\nimport { GenericNodeData } from \"./types/node\";\n\ninterface TransitionNodeData extends GenericNodeData {\n  handles: any;\n  id: string;\n  name: string;\n  processorType: string;\n  nbOutput: number;\n  input: string;\n  input_key: string;\n  outputData?: string[];\n  lastRun: string;\n}\n\ninterface TransitionNodeProps extends NodeProps {\n  data: TransitionNodeData;\n}\n\nconst TransitionNode: React.FC<TransitionNodeProps> = React.memo(\n  ({ data, id }) => {\n    const { onUpdateNodeData } = useContext(NodeContext);\n    const [nodeId, setNodeId] = useState<string>(`${data.name}-${Date.now()}`);\n    const [isPlaying, setIsPlaying] = useIsPlaying();\n    const updateNodeInternals = useUpdateNodeInternals();\n\n    const outputHandleId = useMemo(() => generateIdForHandle(0, true), []);\n    const inputHandleId = useMemo(() => generateIdForHandle(0), []);\n\n    const { allHandlePositions } = useHandlePositions(data, 1, [\n      outputHandleId,\n    ]);\n\n    useEffect(() => {\n      setNodeId(`${data.name}-${Date.now()}`);\n      setIsPlaying(false);\n      updateNodeInternals(id);\n    }, [data.lastRun]);\n\n    const handlePlayClick = () => {\n      setIsPlaying(true);\n    };\n\n    const handleChangeHandlePosition = (\n      newPosition: Position,\n      handleId: string,\n    ) => {\n      onUpdateNodeData(id, {\n        ...data,\n        handles: {\n          ...data.handles,\n          [handleId]: newPosition,\n        },\n      });\n      updateNodeInternals(id);\n    };\n\n    return (\n      <div\n        key={id}\n        style={{\n          borderColor: data?.appearance?.color\n            ? data?.appearance?.color\n            : \"rgb(34 197 94)\",\n        }}\n        className=\"flex flex-col items-center justify-between \n                    rounded-lg border bg-gray-800 p-4 \n                    text-white shadow-lg transition \n                    duration-300 ease-in-out hover:bg-gray-700\"\n      >\n        <HandleWrapper\n          id={inputHandleId}\n          position={\n            !!data?.handles && data.handles[inputHandleId]\n              ? data.handles[inputHandleId]\n              : Position.Left\n          }\n          linkedHandlePositions={allHandlePositions}\n          onChangeHandlePosition={handleChangeHandlePosition}\n        />\n\n        <NodePlayButton\n          isPlaying={isPlaying}\n          hasRun={!!data.lastRun}\n          onClick={handlePlayClick}\n          nodeName={data.name}\n          size=\"medium\"\n        />\n\n        <HandleWrapper\n          id={outputHandleId}\n          position={\n            !!data?.handles && data.handles[outputHandleId]\n              ? data.handles[outputHandleId]\n              : Position.Right\n          }\n          linkedHandlePositions={allHandlePositions}\n          onChangeHandlePosition={handleChangeHandlePosition}\n          isOutput\n        />\n      </div>\n    );\n  },\n);\nexport default TransitionNode;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-button/InputNameBar.tsx",
    "content": "import { memo } from \"react\";\nimport { Tooltip, ActionIcon } from \"@mantine/core\";\nimport { FaMinus, FaPlus } from \"react-icons/fa\";\nimport { useTranslation } from \"react-i18next\";\n\ninterface InputNameBarProps {\n  inputNames: string[];\n  textareaRef: any;\n  fieldToUpdate?: string;\n  onNameClick?: (value: string) => void;\n  addNewInput?: () => void;\n  removeInput?: () => void;\n}\n\nfunction InputNameBar({\n  inputNames,\n  textareaRef,\n  fieldToUpdate,\n  onNameClick,\n  addNewInput,\n  removeInput,\n}: InputNameBarProps) {\n  const { t } = useTranslation(\"flow\");\n\n  const insertAtCursor = (\n    textarea: HTMLTextAreaElement | null,\n    myValue: string,\n  ) => {\n    if (textarea) {\n      if (textarea.selectionStart || textarea.selectionStart === 0) {\n        let startPos = textarea.selectionStart;\n        let endPos = textarea.selectionEnd;\n        textarea.value =\n          textarea.value.substring(0, startPos) +\n          myValue +\n          textarea.value.substring(endPos, textarea.value.length);\n        textarea.selectionStart = startPos + myValue.length;\n        textarea.selectionEnd = startPos + myValue.length;\n      } else {\n        textarea.value += myValue;\n      }\n    }\n  };\n\n  const handleNameClick = (name: string) => {\n    if (!fieldToUpdate) {\n      insertAtCursor(textareaRef?.current, `\\${${name}} `);\n    }\n    onNameClick?.(`\\${${name}} `);\n  };\n\n  const handleAddInput = () => {\n    if (addNewInput) addNewInput();\n  };\n\n  return (\n    <div className=\"flex w-full flex-row items-center justify-center space-x-2 rounded-lg py-1 shadow\">\n      {inputNames.map((name) => (\n        <div\n          key={name}\n          className=\"flex cursor-pointer items-center space-x-1 rounded bg-slate-600/40 px-3 py-1 shadow-sm transition-colors duration-200 ease-in-out hover:bg-slate-400\"\n          onClick={() => handleNameClick(name)}\n        >\n          <span>{name}</span>\n        </div>\n      ))}\n      {!!handleAddInput && !!removeInput && (\n        <>\n          <Tooltip label={t(\"AddInput\")} position=\"top\" withArrow>\n            <ActionIcon color=\"gray\" variant=\"filled\" onClick={handleAddInput}>\n              <FaPlus size={16} />\n            </ActionIcon>\n          </Tooltip>\n          {inputNames.length > 2 && (\n            <Tooltip label={t(\"RemoveInput\")} position=\"top\" withArrow>\n              <ActionIcon color=\"gray\" variant=\"filled\" onClick={removeInput}>\n                <FaMinus size={16} />\n              </ActionIcon>\n            </Tooltip>\n          )}\n        </>\n      )}\n    </div>\n  );\n}\n\nexport default memo(InputNameBar);\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-button/NodePlayButton.tsx",
    "content": "import React, { useContext, useState } from \"react\";\nimport styled, { css, keyframes } from \"styled-components\";\nimport { FaCheck, FaPlay, FaStop } from \"react-icons/fa\";\nimport { NodeContext } from \"../../../providers/NodeProvider\";\nimport TapScale from \"../../shared/motions/TapScale\";\nimport * as NodeStyles from \"../Node.styles\";\n\ninterface NodePlayButtonProps {\n  isPlaying?: boolean;\n  hasRun?: boolean;\n  onClick?: () => void;\n  nodeName: string;\n  size?: \"small\" | \"medium\" | \"large\";\n}\n\nconst NodePlayButton: React.FC<NodePlayButtonProps> = ({\n  isPlaying,\n  hasRun,\n  onClick,\n  nodeName,\n  size,\n}) => {\n  const { runNode, isRunning, currentNodesRunning } = useContext(NodeContext);\n  const [isHovered, setHovered] = useState(false);\n\n  const handleClick = () => {\n    if (!isPlaying) {\n      if (runNode(nodeName) && onClick) {\n        onClick();\n      }\n    }\n  };\n\n  const handleMouseEnter = () => setHovered(true);\n  const handleMouseLeave = () => setHovered(false);\n\n  const isCurrentNodeRunning = currentNodesRunning.includes(nodeName);\n  const isDisabled = isCurrentNodeRunning && !isHovered;\n\n  const IconComponent = getIconComponent(\n    isPlaying,\n    isCurrentNodeRunning,\n    hasRun,\n    isHovered,\n  );\n\n  const tailwindClassSize = {\n    small: \"text-sm\",\n    medium: \"text-md\",\n    large: \"text-3xl\",\n  }[size || \"large\"];\n\n  return (\n    <NodePlayButtonContainer\n      className={`node-play-button ${tailwindClassSize}`}\n      onClick={handleClick}\n      disabled={isDisabled}\n      onMouseEnter={handleMouseEnter}\n      onMouseLeave={handleMouseLeave}\n    >\n      <TapScale scale={0.5}>\n        <IconComponent />\n      </TapScale>\n    </NodePlayButtonContainer>\n  );\n};\n\nfunction getIconComponent(\n  isPlaying: boolean | undefined,\n  isCurrentNodeRunning: boolean,\n  hasRun: boolean | undefined,\n  isHovered: boolean,\n) {\n  if (isPlaying || isCurrentNodeRunning) return NodeStyles.LoadingIcon;\n\n  if (hasRun && !isHovered) return CheckIcon;\n\n  return isCurrentNodeRunning ? NodeStopButtonIcon : NodePlayButtonIcon;\n}\n\nconst NodePlayButtonContainer = styled.button<{ disabled?: boolean }>`\n  cursor: pointer;\n  color: ${(props) => (props.disabled ? \"#888\" : \"#7bb380\")};\n\n  &:hover {\n    color: ${(props) => (props.disabled ? \"#888\" : \"#57ff2d\")};\n  }\n`;\n\nconst NodePlayButtonIcon = styled(FaPlay)`\n  transition: transform 0.3s ease-in-out;\n`;\n\nconst NodeStopButtonIcon = styled(FaStop)`\n  transition: transform 0.3s ease-in-out;\n`;\n\nconst CheckIcon = styled(FaCheck)`\n  transition: transform 0.3s ease-in-out;\n`;\n\nexport default NodePlayButton;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-input/FileUploadField.tsx",
    "content": "import React, { useContext, useEffect, useRef, useState } from \"react\";\nimport { useTranslation } from \"react-i18next\";\nimport {\n  getUploadAndDownloadUrl,\n  uploadWithS3Link,\n} from \"../../../api/uploadFile\";\nimport { useLoading } from \"../../../hooks/useLoading\";\nimport { toastErrorMessage } from \"../../../utils/toastUtils\";\nimport { getOutputExtension } from \"../../nodes/node-output/outputUtils\";\nimport { LoadingSpinner } from \"../../nodes/Node.styles\";\nimport { Input } from \"@mantine/core\";\nimport OutputRenderer from \"./OutputRenderer\";\nimport { MdFileUpload } from \"react-icons/md\";\nimport { ThemeContext } from \"../../../providers/ThemeProvider\";\n\nexport interface UploadInfo {\n  url: string;\n  extension: string;\n}\n\ninterface FileUploadFieldProps {\n  onFileUpload: (uploadResult: UploadInfo) => void;\n  onUrlSubmit: (url: string) => void;\n  value?: string;\n  isRenderForNode?: boolean;\n}\n\nconst FileUploadField: React.FC<FileUploadFieldProps> = ({\n  onFileUpload,\n  onUrlSubmit,\n  value = \"\",\n  isRenderForNode = false,\n}) => {\n  const { t } = useTranslation(\"flow\");\n  const [url, setUrl] = useState<string>(value);\n  const [showPreview, setShowPreview] = useState<boolean>(!!value);\n  const [isLoading, startLoadingWith] = useLoading();\n  const fileInputRef = useRef<HTMLInputElement | null>(null);\n\n  const { getStyle } = useContext(ThemeContext);\n\n  useEffect(() => {\n    if (value) {\n      setUrl(value);\n      setShowPreview(true);\n    }\n  }, [value]);\n\n  const handleFileButtonClick = () => {\n    fileInputRef.current?.click();\n  };\n\n  const handleFileChange = async (\n    event: React.ChangeEvent<HTMLInputElement>,\n  ) => {\n    const file = event.target.files?.[0];\n    if (file) {\n      try {\n        const result = await startLoadingWith(uploadFile, file);\n        const outputType = getOutputExtension(file.name);\n\n        const info: UploadInfo = {\n          url: result.download_link,\n          extension: outputType,\n        };\n\n        onFileUpload(info);\n        setUrl(info.url);\n        setShowPreview(true);\n      } catch (error) {\n        toastErrorMessage(t(\"error.upload_failed\"));\n      }\n    }\n  };\n  const uploadFile = async (file: File) => {\n    const filename = file.name;\n    const urls = await getUploadAndDownloadUrl(filename);\n    await uploadWithS3Link(urls.upload_data, file);\n    return urls;\n  };\n\n  const handleBlur = () => {\n    if (url) {\n      onUrlSubmit(url);\n      setShowPreview(true);\n    }\n  };\n\n  return (\n    <div data-testid=\"file-upload-field\">\n      {isLoading && (\n        <div className=\"text-md my-2 flex w-full items-center justify-center p-2 text-center text-teal-400\">\n          <LoadingSpinner />\n        </div>\n      )}\n      {!isLoading && (\n        <div className=\"flex w-full items-center gap-2\">\n          <Input\n            style={{ width: \"100%\" }}\n            placeholder=\"Enter URL to desired file\"\n            value={url}\n            onChange={(e) => setUrl(e.target.value)}\n            onBlur={handleBlur}\n            size=\"md\"\n            classNames={{\n              input: \"text-md\",\n            }}\n            styles={\n              isRenderForNode\n                ? 
{\n                    input: {\n                      backgroundColor: getStyle()?.nodeInputBg,\n                      color: getStyle()?.text,\n                    },\n                  }\n                : undefined\n            }\n          />\n\n          <div\n            onClick={handleFileButtonClick}\n            className=\"cursor-pointer p-1 text-lg hover:text-blue-400\"\n            title=\"Upload file\"\n          >\n            <MdFileUpload />\n          </div>\n        </div>\n      )}\n\n      <input\n        type=\"file\"\n        ref={fileInputRef}\n        style={{ display: \"none\" }}\n        onChange={handleFileChange}\n      />\n\n      {showPreview && (\n        <div className=\"mt-2\">\n          <OutputRenderer data={{ outputData: url } as any} thumbnail />\n        </div>\n      )}\n    </div>\n  );\n};\n\nexport default FileUploadField;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-input/ImageMaskCreator.tsx",
    "content": "import { Button, Slider } from \"@mantine/core\";\nimport React, { useState, useRef, useEffect } from \"react\";\nimport { useTranslation } from \"react-i18next\";\nimport { ReactSketchCanvas, ReactSketchCanvasRef } from \"react-sketch-canvas\";\nimport { MdRemove, MdUndo } from \"react-icons/md\";\nimport { BsFillEraserFill } from \"react-icons/bs\";\n\ninterface ImageMaskCreatorProps {\n  onSave: (maskDataUrl: string) => void;\n  imageUrls?: string[];\n}\n\nexport const ImageMaskCreator: React.FC<ImageMaskCreatorProps> = ({\n  onSave,\n  imageUrls = [],\n}) => {\n  const { t } = useTranslation(\"flow\");\n\n  const [activeTab, setActiveTab] = useState<\"url\" | \"select\" | \"custom\">(\n    imageUrls.length > 0 ? \"select\" : \"url\",\n  );\n  const [imageUrl, setImageUrl] = useState(\"\");\n  const [penSize, setPenSize] = useState(10);\n\n  const [originalWidth, setOriginalWidth] = useState(800);\n  const [originalHeight, setOriginalHeight] = useState(600);\n  const [displayWidth, setDisplayWidth] = useState(800);\n  const [displayHeight, setDisplayHeight] = useState(600);\n  const [scaleFactor, setScaleFactor] = useState(1);\n\n  const [customWidth, setCustomWidth] = useState(800);\n  const [customHeight, setCustomHeight] = useState(600);\n\n  const canvasRef = useRef<ReactSketchCanvasRef>(null);\n\n  // Listen for image load changes to adjust canvas dimensions\n  useEffect(() => {\n    if ((activeTab === \"url\" || activeTab === \"select\") && imageUrl) {\n      const img = new Image();\n      img.crossOrigin = \"anonymous\";\n      img.src = imageUrl;\n      img.onload = () => {\n        calculateDimensions(img.width, img.height);\n      };\n    } else if (activeTab === \"custom\") {\n      calculateDimensions(customWidth, customHeight);\n    }\n  }, [imageUrl, customWidth, customHeight, activeTab]);\n\n  // Undo feature: Listen for Ctrl+Z / Cmd+Z to remove the last stroke\n  useEffect(() => {\n    const handleKeyDown = (event: KeyboardEvent) => {\n      if ((event.ctrlKey || event.metaKey) && event.key.toLowerCase() === \"z\") {\n        event.preventDefault(); // Prevent default undo behavior\n        if (canvasRef.current) {\n          canvasRef.current.undo();\n        }\n      }\n    };\n\n    window.addEventListener(\"keydown\", handleKeyDown);\n    return () => window.removeEventListener(\"keydown\", handleKeyDown);\n  }, []);\n\n  const calculateDimensions = (width: number, height: number) => {\n    const maxWidth = 800; // Maximum display width\n    const maxHeight = 600; // Maximum display height\n    let scale = 1;\n\n    if (width > maxWidth || height > maxHeight) {\n      // Calculate scale factor to fit the image within the modal\n      const widthScale = maxWidth / width;\n      const heightScale = maxHeight / height;\n      scale = Math.min(widthScale, heightScale);\n    }\n\n    setOriginalWidth(width);\n    setOriginalHeight(height);\n    setDisplayWidth(width * scale);\n    setDisplayHeight(height * scale);\n    setScaleFactor(scale);\n  };\n\n  const handleSave = async () => {\n    if (canvasRef.current) {\n      try {\n        const paths = await canvasRef.current.exportPaths();\n        const tempCanvas = document.createElement(\"canvas\");\n        tempCanvas.width = originalWidth;\n        tempCanvas.height = originalHeight;\n        const context = tempCanvas.getContext(\"2d\");\n\n        if (context) {\n          // Fill background with black\n          context.fillStyle = \"black\";\n          context.fillRect(0, 0, originalWidth, 
originalHeight);\n\n          // Set stroke style to white\n          context.strokeStyle = \"white\";\n          context.lineCap = \"round\";\n\n          // Redraw the paths onto the temp canvas\n          paths.forEach((stroke) => {\n            context.lineWidth = stroke.strokeWidth / scaleFactor;\n            context.beginPath();\n            stroke.paths.forEach((point: any, idx: number) => {\n              const x = point.x / scaleFactor;\n              const y = point.y / scaleFactor;\n              if (idx === 0) {\n                context.moveTo(x, y);\n              } else {\n                context.lineTo(x, y);\n              }\n            });\n            context.stroke();\n          });\n\n          // Export the mask image\n          const dataUrl = tempCanvas.toDataURL();\n          onSave(dataUrl);\n        }\n      } catch (e) {\n        console.log(e);\n      }\n    }\n  };\n\n  const handleClearCanvas = () => {\n    if (canvasRef.current) {\n      canvasRef.current.clearCanvas();\n    }\n  };\n\n  const handleUndo = () => {\n    if (canvasRef.current) {\n      canvasRef.current.undo();\n    }\n  };\n\n  const handleImageUrlChange = (e: React.ChangeEvent<HTMLInputElement>) => {\n    setImageUrl(e.target.value);\n  };\n\n  const handleSelectImageUrl = (url: string) => {\n    setImageUrl(url);\n  };\n\n  return (\n    <div className=\"text-af-text-title\">\n      <div className=\"flex flex-col space-y-6 text-sm md:text-base\">\n        {/* Tabs for selecting input method */}\n        <div className=\"flex space-x-4 border-b border-gray-700\">\n          <button\n            className={`px-4 py-2 focus:outline-none ${\n              activeTab === \"url\"\n                ? \"border-b-2 border-teal-400 text-teal-400\"\n                : \"text-af-text-secondary hover:text-teal-400\"\n            }`}\n            onClick={() => setActiveTab(\"url\")}\n          >\n            {t(\"EnterImageURLTab\")}\n          </button>\n          {imageUrls.length > 0 && (\n            <button\n              className={`px-4 py-2 focus:outline-none ${\n                activeTab === \"select\"\n                  ? \"border-b-2 border-teal-400 text-teal-400\"\n                  : \"text-af-text-secondary hover:text-teal-400\"\n              }`}\n              onClick={() => {\n                setActiveTab(\"select\");\n                setImageUrl(\"\");\n              }}\n            >\n              {t(\"ChooseExistingImage\")}\n            </button>\n          )}\n          <button\n            className={`px-4 py-2 focus:outline-none ${\n              activeTab === \"custom\"\n                ? \"border-b-2 border-teal-400 text-teal-400\"\n                : \"text-af-text-secondary hover:text-teal-400\"\n            }`}\n            onClick={() => {\n              setActiveTab(\"custom\");\n              setImageUrl(\"\");\n            }}\n          >\n            {t(\"CustomDimensions\")}\n          </button>\n        </div>\n\n        {/* Content based on active tab */}\n        {activeTab === \"url\" && (\n          <div className=\"mt-4\">\n            <label className=\"text-af-text-title mb-1 block text-sm font-medium\">\n              {t(\"ImageURL\")}\n            </label>\n            <input\n              type=\"text\"\n              placeholder={t(\"EnterImageURLPlaceholder\") ?? 
\"Enter Image URL\"}\n              value={imageUrl}\n              onChange={handleImageUrlChange}\n              className=\"bg-af-bg-1 text-af-text-title w-full rounded-md border border-gray-700 px-3 py-2 focus:border-teal-400 focus:ring-teal-400\"\n            />\n          </div>\n        )}\n\n        {activeTab === \"select\" && imageUrls.length > 0 && (\n          <div className=\"mt-4\">\n            <h4 className=\"text-md mb-2 font-medium\">{t(\"SelectAnImage\")}</h4>\n            <div className=\"flex flex-wrap gap-4\">\n              {imageUrls.map((url, index) => (\n                <img\n                  key={index}\n                  src={url}\n                  alt={t(\"PreviewImage\", { index }) ?? \"Enter Image URL\"}\n                  className={`h-28 w-28 cursor-pointer rounded-md object-cover shadow ${\n                    imageUrl === url\n                      ? \"ring-2 ring-teal-400\"\n                      : \"ring-1 ring-gray-700\"\n                  }`}\n                  onClick={() => handleSelectImageUrl(url)}\n                />\n              ))}\n            </div>\n          </div>\n        )}\n\n        {activeTab === \"custom\" && (\n          <div className=\"mt-4 grid grid-cols-2 gap-4\">\n            <div>\n              <label className=\"mb-1 block text-sm font-medium \">\n                {t(\"WidthPx\")}\n              </label>\n              <input\n                type=\"number\"\n                value={customWidth}\n                onChange={(e) => setCustomWidth(Number(e.target.value))}\n                className=\"bg-af-bg-1 mt-1 block w-full rounded-md border border-gray-700  px-3 py-2  focus:border-teal-400 focus:ring-teal-400\"\n                min=\"1\"\n              />\n            </div>\n            <div>\n              <label className=\"mb-1 block text-sm font-medium \">\n                {t(\"HeightPx\")}\n              </label>\n              <input\n                type=\"number\"\n                value={customHeight}\n                onChange={(e) => setCustomHeight(Number(e.target.value))}\n                className=\"bg-af-bg-1 mt-1 block w-full rounded-md border border-gray-700  px-3 py-2  focus:border-teal-400 focus:ring-teal-400\"\n                min=\"1\"\n              />\n            </div>\n          </div>\n        )}\n      </div>\n\n      {(imageUrl || activeTab === \"custom\") && (\n        <div className=\"mt-4 md:mt-8\">\n          {/* Pen Size, Clear and Undo Buttons */}\n          <div className=\"flex items-center space-x-4\">\n            <label className=\"text-xs font-medium md:text-sm\">\n              {t(\"PenSize\")}\n            </label>\n            <div className=\"flex items-center space-x-2\">\n              <Slider\n                value={penSize}\n                onChange={setPenSize}\n                min={5}\n                max={256}\n                step={1}\n                className=\"w-32\"\n              />\n              <span className=\"text-xs md:text-sm\">{penSize}px</span>\n            </div>\n            <button\n              onClick={handleUndo}\n              className=\"text-af-text-element inline-flex items-center rounded-md border border-transparent bg-blue-600/20 px-3 py-2 text-xs font-medium hover:bg-blue-700/50 md:text-sm\"\n            >\n              <MdUndo size={16} className=\"mr-1\" />\n              {t(\"UndoStroke\")}\n            </button>\n            <button\n              onClick={handleClearCanvas}\n              className=\"text-af-text-element ml-auto 
inline-flex items-center rounded-md border border-transparent bg-red-600/20 px-3 py-2 text-xs font-medium hover:bg-red-700/50 md:text-sm\"\n            >\n              <BsFillEraserFill size={16} className=\"mr-1\" />\n              {t(\"ClearStrokes\")}\n            </button>\n            <div className=\"hidden flex-grow justify-end md:flex\">\n              <Button onClick={handleSave} color=\"cyan\">\n                {t(\"SaveMask\")}\n              </Button>\n            </div>\n          </div>\n\n          {/* Canvas */}\n          <div\n            className=\"mx-auto mt-6 overflow-hidden\"\n            style={{ width: displayWidth, height: displayHeight }}\n          >\n            <ReactSketchCanvas\n              ref={canvasRef}\n              width={`${displayWidth}px`}\n              height={`${displayHeight}px`}\n              strokeWidth={penSize}\n              strokeColor=\"rgba(255, 255, 255, 0.9)\"\n              backgroundImage={imageUrl || undefined}\n              canvasColor=\"black\"\n            />\n          </div>\n          <div className=\"mt-5 flex justify-center md:hidden\">\n            <Button onClick={handleSave} color=\"cyan\">\n              {t(\"SaveMask\")}\n            </Button>\n          </div>\n        </div>\n      )}\n    </div>\n  );\n};\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-input/ImageMaskCreatorField.tsx",
    "content": "import { Button, Modal } from \"@mantine/core\";\nimport React, { useContext, useEffect, useState } from \"react\";\nimport { ImageMaskCreator } from \"./ImageMaskCreator\";\nimport { useLoading } from \"../../../hooks/useLoading\";\nimport {\n  getUploadAndDownloadUrl,\n  uploadWithS3Link,\n} from \"../../../api/uploadFile\";\nimport { LoadingSpinner } from \"../Node.styles\";\nimport { useTranslation } from \"react-i18next\";\nimport DefaultPopupWrapper from \"../../popups/DefaultPopup\";\nimport { ThemeContext } from \"../../../providers/ThemeProvider\";\n\ninterface ImageMaskCreatorFieldProps {\n  onChange: (value: string) => void;\n  imageUrls?: string[];\n  loadImageUrls?: () => Promise<string[]>;\n}\n\nexport default function ImageMaskCreatorField({\n  onChange,\n  imageUrls,\n  loadImageUrls,\n}: ImageMaskCreatorFieldProps) {\n  const { t } = useTranslation(\"flow\");\n  const { dark } = useContext(ThemeContext);\n\n  const [isModalOpen, setIsModalOpen] = useState(false);\n  const [maskPreview, setMaskPreview] = useState<string | null>(null);\n  const [isLoading, startLoadingWith] = useLoading();\n  const [imageUrlsState, setImageUrls] = useState<string[]>(imageUrls ?? []);\n\n  useEffect(() => {\n    if (loadImageUrls && isModalOpen) {\n      startLoadingWith(async () => {\n        const urls = await loadImageUrls();\n        setImageUrls(urls);\n      });\n    }\n  }, [loadImageUrls, isModalOpen]);\n\n  function dataURLtoFile(dataUrl: string, filename: string): File {\n    const arr = dataUrl.split(\",\");\n    const mimeMatch = arr[0].match(/:(.*?);/);\n    const mime = mimeMatch ? mimeMatch[1] : \"\";\n    const bstr = atob(arr[1]);\n    let n = bstr.length;\n    const u8arr = new Uint8Array(n);\n\n    while (n--) {\n      u8arr[n] = bstr.charCodeAt(n);\n    }\n\n    return new File([u8arr], filename, { type: mime });\n  }\n\n  async function uploadFile(files: File[]) {\n    const filename = files[0].name;\n    const urls = await getUploadAndDownloadUrl(filename);\n    const uploadData = urls.upload_data;\n    await uploadWithS3Link(uploadData, files[0]);\n    return urls;\n  }\n\n  const handleSaveMask = async (maskDataUrl: string) => {\n    setIsModalOpen(false);\n    const maskFile = dataURLtoFile(maskDataUrl, \"mask.png\");\n\n    const files: File[] = [maskFile];\n\n    try {\n      const urls = await startLoadingWith(uploadFile, files);\n\n      if (urls.download_link) {\n        setMaskPreview(urls.download_link);\n        onChange(urls.download_link);\n      } else {\n        alert(t(\"FailedToUploadImage\"));\n      }\n    } catch (error) {\n      alert(t(\"FailedToUploadImage\"));\n    }\n  };\n\n  return (\n    <div>\n      <Button\n        color=\"cyan\"\n        variant=\"outline\"\n        onClick={() => {\n          setIsModalOpen(true);\n        }}\n        size=\"lg\"\n      >\n        {t(\"CreateMaskFromImage\")}\n      </Button>\n      {isModalOpen && (\n        <DefaultPopupWrapper\n          show={isModalOpen}\n          onClose={() => setIsModalOpen(false)}\n          popupClassNames=\"overflow-auto w-[85%] md:w-[75%] max-h-[95%] flex shadow-lg md:p-4 p-2 rounded-md mt-[2%]\"\n          style={{\n            background: dark\n              ? 
\"linear-gradient(135deg, #101113, #1a1b1e)\"\n              : \"#FFFFFF\",\n          }}\n        >\n          <div className=\"pb-6 md:px-6\">\n            <ImageMaskCreator\n              onSave={handleSaveMask}\n              imageUrls={imageUrlsState}\n            />\n          </div>\n        </DefaultPopupWrapper>\n      )}\n      {isLoading && (\n        <div className=\"my-2 flex w-full items-center justify-center p-2 text-center text-2xl text-teal-400 \">\n          <LoadingSpinner />\n        </div>\n      )}\n      {maskPreview && (\n        <div className=\"mt-5 flex w-full flex-col space-y-4\">\n          <p>{t(\"Preview\")}</p>\n          <div>\n            <img\n              src={maskPreview}\n              alt={t(\"MaskPreview\") ?? \"Mask Preview\"}\n              className=\"h-28\"\n            />\n          </div>\n        </div>\n      )}\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-input/ImageMaskCreatorFieldFlowAware.tsx",
    "content": "import { getOutputExtension } from \"../node-output/outputUtils\";\nimport ImageMaskCreatorField from \"./ImageMaskCreatorField\";\n\ninterface ImageMaskCreatorFieldProps {\n  onChange: (value: string) => void;\n}\n\nconst extractImageUrls = (nodes: any[]) => {\n  return nodes\n    .flatMap((node) => {\n      const outputData = node.data.outputData;\n      if (typeof outputData === \"string\") return [outputData];\n      if (Array.isArray(outputData)) return outputData;\n      return [];\n    })\n    .filter((url) => getOutputExtension(url) === \"imageUrl\");\n};\n\nexport default function ImageMaskCreatorFieldFlowAware({\n  onChange,\n}: ImageMaskCreatorFieldProps) {\n  return <ImageMaskCreatorField imageUrls={[]} onChange={onChange} />;\n}\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-input/KeyValueInputList.tsx",
    "content": "// KeyValueInputList.tsx\nimport React from \"react\";\nimport { Button, Group, TextInput } from \"@mantine/core\";\nimport { MdClear } from \"react-icons/md\";\nimport { FaPlus } from \"react-icons/fa\";\n\ninterface KeyValuePair {\n  key: string;\n  value: string;\n}\n\ninterface KeyValueInputListProps {\n  pairs: KeyValuePair[];\n  onChange: (pairs: KeyValuePair[]) => void;\n}\n\nexport const KeyValueInputList: React.FC<KeyValueInputListProps> = ({\n  pairs,\n  onChange,\n}) => {\n  const handleKeyChange = (index: number, newKey: string) => {\n    const newPairs = [...pairs];\n    newPairs[index].key = newKey;\n    onChange(newPairs);\n  };\n\n  const handleValueChange = (index: number, newValue: string) => {\n    const newPairs = [...pairs];\n    newPairs[index].value = newValue;\n    onChange(newPairs);\n  };\n\n  const handleAddPair = () => {\n    onChange([...pairs, { key: \"\", value: \"\" }]);\n  };\n\n  const handleRemovePair = (index: number) => {\n    const newPairs = pairs.filter((_, i) => i !== index);\n    onChange(newPairs);\n  };\n\n  return (\n    <div className=\"flex w-full flex-col space-y-2\">\n      {!!pairs &&\n        pairs.map((pair, index) => (\n          <Group key={index} align=\"center\">\n            <TextInput\n              placeholder=\"Key\"\n              value={pair.key}\n              onChange={(event) =>\n                handleKeyChange(index, event.currentTarget.value)\n              }\n              style={{ flex: 1 }}\n            />\n            <TextInput\n              placeholder=\"Value\"\n              value={pair.value}\n              onChange={(event) =>\n                handleValueChange(index, event.currentTarget.value)\n              }\n              style={{ flex: 1 }}\n            />\n            <MdClear\n              className=\"text-af-text-element-2 h-full w-5 cursor-pointer transition-colors duration-150 ease-in-out hover:text-red-500\"\n              onClick={() => handleRemovePair(index)}\n            />\n          </Group>\n        ))}\n      <div>\n        <Button onClick={handleAddPair} color=\"gray\">\n          <span className=\"flex flex-row space-x-2\">\n            <FaPlus /> <p> Add </p>\n          </span>\n        </Button>\n      </div>\n    </div>\n  );\n};\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-input/NodeField.tsx",
    "content": "import { InputHandle, NodeLabel } from \"../Node.styles\";\nimport { Position } from \"reactflow\";\nimport { Field } from \"../../../nodes-configuration/types\";\nimport { DisplayParams } from \"../../../hooks/useFormFields\";\nimport { FiFile, FiInfo, FiPlus, FiTrash } from \"react-icons/fi\";\nimport { useTranslation } from \"react-i18next\";\nimport { Tooltip } from \"@mantine/core\";\n\ninterface NodeFieldProps<T> {\n  field: T;\n  renderField: (field: T, isLoopField?: boolean) => JSX.Element;\n  label: string;\n  handleId?: string;\n  displayParams?: DisplayParams;\n  handlePosition?: Position;\n  onAddNewField?: () => void;\n  onDeleteField?: () => void;\n}\n\nfunction NodeField<\n  T extends Pick<\n    Field,\n    | \"required\"\n    | \"label\"\n    | \"hasHandle\"\n    | \"isLinked\"\n    | \"description\"\n    | \"hidden\"\n    | \"type\"\n  >,\n>({\n  field,\n  displayParams,\n  renderField,\n  label,\n  handlePosition = Position.Left,\n  handleId,\n  onAddNewField,\n  onDeleteField,\n}: NodeFieldProps<T>) {\n  const { t } = useTranslation(\"flow\");\n  return (\n    <>\n      {field.label && displayParams?.showLabels && (\n        <div className=\"flex flex-row items-center justify-between \">\n          <div className=\"flex flex-row items-center space-x-5\">\n            {field.hasHandle && displayParams?.showHandles && (\n              <InputHandle\n                className=\"handle custom-handle\"\n                required={field.required}\n                type=\"target\"\n                position={handlePosition}\n                id={handleId}\n              />\n            )}\n            <div className=\"flex flex-row items-center justify-center space-x-1\">\n              <NodeLabel\n                className={`font-mono text-lg\n                        ${field.isLinked ? \"linkedToNode text-sky-400\" : \"\"}  \n                        ${field.required ? \"font-bold\" : \"\"}`}\n              >\n                {label}\n              </NodeLabel>\n\n              {field.type === \"fileUpload\" && <FiFile />}\n              {field.required ? 
<span className=\"text-lg\">*</span> : null}\n            </div>\n          </div>\n          {!!field.description && (\n            <Tooltip\n              label={t(field.description)}\n              openDelay={300}\n              position=\"top-start\"\n              color=\"dark\"\n              transitionProps={{ transition: \"slide-up\", duration: 300 }}\n              multiline\n            >\n              <span>\n                <FiInfo className=\"cursor-pointer text-xl hover:text-teal-300\" />\n              </span>\n            </Tooltip>\n          )}\n        </div>\n      )}\n      {!field.isLinked && (\n        <div className=\"flex h-full pb-3\">{renderField(field)}</div>\n      )}\n\n      {onAddNewField && (\n        <div className=\"mt-3 flex w-full justify-end space-x-3\">\n          <button\n            className=\"flex items-center justify-center rounded-md bg-sky-700 px-5 py-2 text-sm font-medium text-white transition-all duration-200 hover:scale-105 hover:bg-sky-700 focus:outline-none focus:ring-2 focus:ring-sky-400/50\"\n            onClick={onAddNewField}\n          >\n            <FiPlus className=\"mr-2 h-4 w-4\" />\n            Add {field.label} field\n          </button>\n          {onDeleteField && (\n            <button\n              className=\"flex items-center justify-center rounded-md bg-red-700 px-5 py-2 text-sm font-medium text-white transition-all duration-200 hover:scale-105 hover:bg-red-700 focus:outline-none focus:ring-2 focus:ring-red-400/50\"\n              onClick={onDeleteField}\n            >\n              <FiTrash className=\"mr-2 h-4 w-4\" />\n              Remove {field.label} field\n            </button>\n          )}\n        </div>\n      )}\n    </>\n  );\n}\n\nexport default NodeField;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-input/NodeTextField.tsx",
    "content": "import styled from \"styled-components\";\nimport { ChangeEvent, RefObject, useState } from \"react\";\nimport TextAreaPopupWrapper from \"./TextAreaPopupWrapper\";\n\ninterface NodeTextFieldProps {\n  value: string;\n  onChange: (event: ChangeEvent<HTMLInputElement>) => void;\n  onChangeValue?: (value: string) => void;\n  placeholder?: string;\n  error?: boolean;\n  isTouchDevice?: boolean;\n  fieldName?: string;\n  withEditPopup?: boolean;\n  ref?: RefObject<HTMLInputElement>;\n}\n\nexport default function NodeTextField({\n  value,\n  onChange,\n  onChangeValue,\n  placeholder,\n  error,\n  isTouchDevice,\n  fieldName,\n  withEditPopup,\n  ref,\n}: NodeTextFieldProps) {\n  const [isTextareaSelected, setIsTextareaSelected] = useState(false);\n\n  function handleChangeValue(value: string) {\n    if (onChangeValue) {\n      onChangeValue(value);\n    }\n  }\n\n  const handleFocus = () => {\n    setIsTextareaSelected(true);\n  };\n\n  const handleBlur = () => {\n    setIsTextareaSelected(false);\n  };\n\n  const isControlled = !isTextareaSelected;\n\n  if (!withEditPopup) {\n    return (\n      <NodeInput\n        value={isControlled ? value : undefined}\n        defaultValue={!isControlled ? value : undefined}\n        className={`nowheel ${!isTouchDevice ? \"nodrag\" : \"\"}`}\n        onChange={(event) => onChange(event)}\n        placeholder={placeholder}\n        onFocus={handleFocus}\n        onBlur={handleBlur}\n      />\n    );\n  }\n\n  return (\n    <TextAreaPopupWrapper\n      onChange={handleChangeValue}\n      initValue={value}\n      fieldName={fieldName}\n    >\n      <NodeInput\n        value={isControlled ? value : undefined}\n        defaultValue={!isControlled ? value : undefined}\n        className={`nowheel ${!isTouchDevice ? \"nodrag\" : \"\"}`}\n        onChange={(event) => onChange(event)}\n        placeholder={placeholder}\n        onFocus={handleFocus}\n        onBlur={handleBlur}\n        ref={ref}\n      />\n    </TextAreaPopupWrapper>\n  );\n}\n\nconst NodeInput = styled.input`\n  width: 100%;\n  border: none;\n  outline: none;\n  font-size: 1.1em;\n  color: ${({ theme }) => theme.text};\n  background-color: ${({ theme }) => theme.nodeInputBg};\n  padding: 12px 18px;\n  border-radius: 8px;\n  box-shadow: 0 2px 3px rgba(0, 0, 0, 0.1);\n  transition: all ease-in-out;\n\n  &:focus {\n    border: solid;\n    border-color: rgba(223, 223, 223, 0.175);\n  }\n`;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-input/NodeTextarea.tsx",
    "content": "import { useTranslation } from \"react-i18next\";\nimport { DisplayParams } from \"../../../hooks/useFormFields\";\nimport { Field } from \"../../../nodes-configuration/types\";\nimport { StyledNodeTextarea } from \"../Node.styles\";\nimport { GenericNodeData } from \"../types/node\";\nimport { useEffect, useState } from \"react\";\nimport TextAreaPopupWrapper from \"./TextAreaPopupWrapper\";\nimport { useReactFlow } from \"reactflow\";\n\ninterface NodeTextareaProps {\n  data: GenericNodeData;\n  field: Field;\n  displayParams?: DisplayParams;\n  id: string;\n  textareaRef: React.RefObject<HTMLTextAreaElement>;\n  isTouchDevice: boolean;\n  withMinHeight: boolean;\n  onEventNodeDataChange: (event: any) => void;\n  onNodeDataChange: (fieldName: string, value: any, target?: any) => void;\n}\nexport default function NodeTextarea({\n  data,\n  field,\n  id,\n  textareaRef,\n  isTouchDevice,\n  withMinHeight,\n  onEventNodeDataChange,\n  onNodeDataChange,\n}: NodeTextareaProps) {\n  const { t } = useTranslation(\"flow\");\n  let reactFlowInstance: any;\n  try {\n    reactFlowInstance = useReactFlow(); //TMP\n  } catch (error) {\n    //Do nothing\n  }\n\n  const [isTextareaSelected, setIsTextareaSelected] = useState(false);\n\n  useEffect(() => {\n    const maxHeight = 1000;\n    const textarea = textareaRef.current;\n    if (textarea) {\n      textarea.style.height = \"auto\";\n\n      const rect = textarea.getBoundingClientRect();\n      const currentZoom = !!reactFlowInstance ? reactFlowInstance.getZoom() : 1;\n      const adjustedTop = rect.top / currentZoom;\n      const availableSpaceBelow =\n        window.innerHeight / currentZoom - adjustedTop;\n\n      const allowedHeight = Math.min(\n        textarea.scrollHeight,\n        maxHeight,\n        availableSpaceBelow,\n      );\n\n      textarea.style.height = `${allowedHeight}px`;\n    }\n  }, [data[field.name]]);\n\n  const handleFocus = () => {\n    setIsTextareaSelected(true);\n  };\n\n  const handleBlur = () => {\n    setIsTextareaSelected(false);\n  };\n\n  const isControlled = !isTextareaSelected;\n\n  return (\n    <TextAreaPopupWrapper\n      onChange={(value) => onNodeDataChange(field.name, value)}\n      initValue={data[field.name]}\n      fieldName={field.name}\n    >\n      <StyledNodeTextarea\n        ref={textareaRef}\n        name={field.name}\n        className={`nowheel ${!isTouchDevice ? \"nodrag\" : \"\"}`}\n        value={isControlled ? data[field.name] : undefined}\n        defaultValue={!isControlled ? data[field.name] : undefined}\n        placeholder={field.placeholder ? String(t(field.placeholder)) : \"\"}\n        withMinHeight={withMinHeight}\n        onChange={onEventNodeDataChange}\n        onFocus={handleFocus}\n        onBlur={handleBlur}\n      />\n    </TextAreaPopupWrapper>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-input/OutputRenderer.tsx",
    "content": "import { useState } from \"react\";\nimport { FiFile } from \"react-icons/fi\";\nimport AudioUrlOutput from \"../../nodes/node-output/AudioUrlOutput\";\nimport ImageUrlOutput from \"../../nodes/node-output/ImageUrlOutput\";\nimport MarkdownOutput from \"../../nodes/node-output/MarkdownOutput\";\nimport PdfUrlOutput from \"../../nodes/node-output/PdfUrlOutput\";\nimport ThreeDimensionalUrlOutput from \"../../nodes/node-output/ThreeDimensionalUrlOutput\";\nimport VideoUrlOutput from \"../../nodes/node-output/VideoUrlOutput\";\nimport { NodeData } from \"../../nodes/types/node\";\nimport { OutputType } from \"../../../nodes-configuration/types\";\nimport OutputDisplay from \"../../nodes/node-output/OutputDisplay\";\nimport { useTranslation } from \"react-i18next\";\n\ninterface OutputRendererProps {\n  data: NodeData;\n  thumbnail?: boolean;\n  showOutputOptions?: boolean;\n  fontSize?: number;\n}\n\nexport default function OutputRenderer({\n  data,\n  thumbnail,\n  showOutputOptions = true,\n  fontSize = 1,\n}: OutputRendererProps) {\n  const { t } = useTranslation(\"flow\");\n  const [indexDisplayed, setIndexDisplayed] = useState(0);\n\n  const getOutputComponent = (data: NodeData, outputType: OutputType) => {\n    if (!data.outputData) return <></>;\n\n    let output = data.outputData;\n\n    if (typeof output !== \"string\") {\n      output = output[indexDisplayed];\n    }\n\n    switch (outputType) {\n      case \"imageUrl\":\n        if (thumbnail) {\n          return <ImageUrlOutput url={output} name={data.name} />;\n        }\n\n        return (\n          <div className=\"flex items-center justify-center\">\n            <div className=\" md:max-w-md lg:max-w-lg xl:max-w-xl 2xl:max-w-2xl\">\n              <ImageUrlOutput url={output} name={data.name} />\n            </div>\n          </div>\n        );\n      case \"videoUrl\":\n        return <VideoUrlOutput url={output} name={data.name} />;\n      case \"audioUrl\":\n        return <AudioUrlOutput url={output} name={data.name} />;\n      case \"3dUrl\":\n        return <ThreeDimensionalUrlOutput url={output} name={data.name} />;\n      case \"pdfUrl\":\n        return <PdfUrlOutput url={output} name={data.name} />;\n      case \"fileUrl\":\n        return (\n          <a href={output} target=\"_blank\" rel=\"noreferrer\">\n            <div className=\"flex flex-row items-center justify-center space-x-2 py-2 hover:text-sky-400\">\n              <FiFile className=\"text-3xl\" />\n              <p>{t(\"FileUploaded\")}</p>\n            </div>\n          </a>\n        );\n      default:\n        return (\n          <span\n            className={`bg-af-bg-5 rounded-2xl ${thumbnail ? \"p-4 text-xs\" : \"p-5 text-sm md:p-8 md:text-base\"}`}\n          >\n            <MarkdownOutput\n              data={\n                thumbnail && !!output && output.length > 200\n                  ? output.substring(0, 200) + \"...\"\n                  : output\n              }\n              name={data.name}\n              appearance={{\n                fontSize: fontSize,\n              }}\n            />\n          </span>\n        );\n    }\n  };\n\n  return (\n    <OutputDisplay\n      data={data}\n      getOutputComponentOverride={getOutputComponent}\n    />\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-input/TextAreaPopupWrapper.tsx",
    "content": "import { Tooltip } from \"@mantine/core\";\nimport { FiExternalLink } from \"react-icons/fi\";\nimport { TextareaModal } from \"../utils/TextareaModal\";\nimport { useState } from \"react\";\n\ninterface TextAreaPopupWrapperProps {\n  children: React.ReactNode;\n  onChange: (value: string) => void;\n  initValue: string;\n  fieldName?: string;\n}\n\nfunction TextAreaPopupWrapper({\n  children,\n  onChange,\n  initValue,\n  fieldName,\n}: TextAreaPopupWrapperProps) {\n  const [modalOpen, setModalOpen] = useState<boolean>(false);\n\n  function openModal() {\n    setModalOpen(true);\n  }\n\n  function closeModal() {\n    setModalOpen(false);\n  }\n\n  return (\n    <>\n      <div className=\"relative w-full\">\n        <span className=\"absolute right-2 top-2 cursor-pointer text-slate-400 transition-colors duration-100 ease-in-out hover:text-stone-100\">\n          <Tooltip label={\"Open in popup\"}>\n            <FiExternalLink onClick={openModal} />\n          </Tooltip>\n        </span>\n        {children}\n      </div>\n      {modalOpen && (\n        <TextareaModal\n          initValue={initValue}\n          fieldName={fieldName}\n          onChange={onChange}\n          onClose={closeModal}\n        />\n      )}\n    </>\n  );\n}\n\nexport default TextAreaPopupWrapper;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-output/AudioUrlOutput.tsx",
    "content": "import React from \"react\";\nimport { FaDownload } from \"react-icons/fa\";\nimport styled from \"styled-components\";\nimport { getFileTypeFromUrl, getGeneratedFileName } from \"./outputUtils\";\n\nimport VideoJS from \"../../players/VideoJS\";\n\ninterface AudioUrlOutputProps {\n  url: string;\n  name: string;\n}\n\nconst AudioUrlOutput: React.FC<AudioUrlOutputProps> = ({ url, name }) => {\n  const playerRef = React.useRef(null);\n\n  const videoJsOptions = {\n    controls: true,\n    autoplay: false,\n    loop: false,\n    muted: false,\n    fluid: true,\n    bigPlayButton: false,\n    plugins: {\n      wavesurfer: {\n        backend: \"MediaElement\",\n        displayMilliseconds: false,\n        debug: false,\n        waveColor: \"rgb(72, 159, 159)\",\n        progressColor: \"rgba(32, 32, 32, 0.719)\",\n        cursorColor: \"rgba(226, 226, 226, 0.616)\",\n        hideScrollbar: true,\n        autoplay: false,\n        height: \"auto\",\n      },\n    },\n  };\n\n  const handlePlayerReady = (player: any) => {\n    playerRef.current = player;\n\n    const mimeType = `audio/${getFileTypeFromUrl(url)}`;\n    player.src({ src: url, type: mimeType });\n  };\n\n  const handleDownloadClick = (event: React.MouseEvent) => {\n    event.stopPropagation();\n    const link = document.createElement(\"a\");\n    link.href = url;\n    link.download = getGeneratedFileName(url, name);\n    link.target = \"_blank\";\n    link.click();\n  };\n\n  return (\n    <OutputAudioContainer className=\"audio-player h-full w-full\">\n      <VideoJS options={videoJsOptions} onReady={handlePlayerReady} key={url} />\n      <div\n        className=\"absolute right-3 top-2 rounded-md bg-slate-600/75 px-1 py-1 text-2xl text-slate-100 hover:bg-sky-600/90\"\n        onClick={handleDownloadClick}\n      >\n        <FaDownload />\n      </div>\n    </OutputAudioContainer>\n  );\n};\n\nconst OutputAudioContainer = styled.div`\n  display: flex;\n  justify-content: center;\n  align-items: center;\n  position: relative;\n  margin-top: 10px;\n`;\n\nexport default AudioUrlOutput;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-output/ImageBase64Output.tsx",
    "content": "import React, { memo } from \"react\";\nimport { FaDownload } from \"react-icons/fa\";\nimport styled from \"styled-components\";\n\ninterface ImageBase64OutputProps {\n  data: string;\n  name: string;\n  lastRun?: string;\n}\n\nconst ImageBase64Output: React.FC<ImageBase64OutputProps> = ({\n  data,\n  name,\n  lastRun,\n}) => {\n  const blob = new Blob([\n    new Uint8Array(\n      atob(data)\n        .split(\"\")\n        .map(function (c) {\n          return c.charCodeAt(0);\n        }),\n    ),\n  ]);\n\n  const url = URL.createObjectURL(blob);\n\n  const handleDownloadClick = () => {\n    const link = document.createElement(\"a\");\n    link.href = url;\n    link.download = name + \"-output-generated.jpg\";\n    link.target = \"_blank\";\n    link.click();\n  };\n\n  return (\n    <OutputImageContainer>\n      <OutputImage src={url} alt=\"Output Image\" />\n      <DownloadButton onClick={handleDownloadClick}>\n        <FaDownload />\n      </DownloadButton>\n    </OutputImageContainer>\n  );\n};\n\nconst OutputImageContainer = styled.div`\n  position: relative;\n  margin-top: 10px;\n`;\n\nconst OutputImage = styled.img`\n  display: block;\n  width: 100%;\n  height: auto;\n  border-radius: 8px;\n`;\n\nconst DownloadButton = styled.a`\n  position: absolute;\n  top: 8px;\n  right: 8px;\n  display: flex;\n  align-items: center;\n  justify-content: center;\n  width: 32px;\n  height: 32px;\n  background-color: #4285f4;\n  color: #fff;\n  border-radius: 50%;\n  cursor: pointer;\n  transition: background-color 0.3s ease;\n\n  &:hover {\n    background-color: #0d47a1;\n  }\n`;\nfunction arePropsEqual(\n  prevProps: ImageBase64OutputProps,\n  nextProps: ImageBase64OutputProps,\n) {\n  return prevProps.lastRun === nextProps.lastRun;\n}\n\nexport default memo(ImageBase64Output, arePropsEqual);\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-output/ImageUrlOutput.tsx",
    "content": "import React, { useEffect, useState } from \"react\";\nimport { FaDownload } from \"react-icons/fa\";\nimport styled from \"styled-components\";\nimport { getGeneratedFileName } from \"./outputUtils\";\nimport { toastErrorMessage } from \"../../../utils/toastUtils\";\nimport { useTranslation } from \"react-i18next\";\n\ninterface ImageUrlOutputProps {\n  url: string;\n  name: string;\n}\n\nconst ImageUrlOutput: React.FC<ImageUrlOutputProps> = ({ url, name }) => {\n  const { t } = useTranslation(\"flow\");\n  const [hasError, setHasError] = useState(false);\n\n  useEffect(() => {\n    setHasError(false);\n  }, [url]);\n\n  const handleDownloadClick = (event: React.MouseEvent) => {\n    event.stopPropagation();\n    if (hasError) {\n      toastErrorMessage(\"URL Expired\");\n      return;\n    }\n    const link = document.createElement(\"a\");\n    link.href = url;\n    link.download = getGeneratedFileName(url, name);\n    link.target = \"_blank\";\n    link.click();\n  };\n\n  const handleError = () => {\n    setHasError(true);\n  };\n\n  const handleLoad = () => {\n    setHasError(false);\n  };\n\n  return (\n    <OutputImageContainer>\n      {hasError ? (\n        <p className=\"text-center\"> {t(\"ExpiredURL\")}</p>\n      ) : (\n        <>\n          <OutputImage\n            src={url}\n            alt=\"Output Image\"\n            onError={handleError}\n            onLoad={handleLoad}\n          />\n          <div\n            className=\"absolute right-3 top-2 rounded-md bg-slate-600/75 px-1 py-1 text-2xl text-slate-100 hover:bg-sky-600/90\"\n            onClick={handleDownloadClick}\n          >\n            <FaDownload />\n          </div>\n        </>\n      )}\n    </OutputImageContainer>\n  );\n};\n\nconst OutputImageContainer = styled.div`\n  position: relative;\n  margin-top: 10px;\n`;\n\nconst OutputImage = styled.img`\n  display: block;\n  width: 100%;\n  height: auto;\n  border-radius: 8px;\n`;\n\nexport default ImageUrlOutput;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-output/MarkdownOutput.tsx",
    "content": "import React, { memo, useContext, useMemo } from \"react\";\nimport remarkGfm from \"remark-gfm\";\nimport ReactMarkdown from \"react-markdown\";\nimport styled from \"styled-components\";\n\nimport \"github-markdown-css\";\nimport { FiCopy, FiMinus, FiPlus } from \"react-icons/fi\";\nimport { copyToClipboard } from \"../../../utils/navigatorUtils\";\nimport { toastFastInfoMessage } from \"../../../utils/toastUtils\";\nimport { useTranslation } from \"react-i18next\";\nimport { NodeContext } from \"../../../providers/NodeProvider\";\nimport { NodeAppearance } from \"../types/node\";\nimport { Prism as SyntaxHighlighter } from \"react-syntax-highlighter\";\nimport { tomorrow as theme } from \"react-syntax-highlighter/dist/esm/styles/prism\";\n\ninterface MarkdownOutputProps {\n  data: string;\n  name: string;\n  appearance?: NodeAppearance;\n}\n\nconst MarkdownOutput: React.FC<MarkdownOutputProps> = ({\n  data,\n  name,\n  appearance,\n}) => {\n  const { t } = useTranslation(\"flow\");\n  const { updateNodeAppearance } = useContext(NodeContext);\n\n  const fontSize = appearance?.fontSize ?? 1.2;\n\n  const stringifiedData = useMemo(() => {\n    if (!data) return \"\";\n    return typeof data === \"string\" ? data : JSON.stringify(data);\n  }, [data]);\n\n  if (!data) return <p> </p>;\n\n  const increaseFontSize = () => {\n    updateNodeAppearance(name, {\n      ...appearance,\n      fontSize: fontSize + 0.1,\n    });\n  };\n\n  const decreaseFontSize = () =>\n    updateNodeAppearance(name, {\n      ...appearance,\n      fontSize: fontSize - 0.1,\n    });\n\n  const handleCopyToClipboard = (event: any) => {\n    event.stopPropagation();\n    if (data) {\n      copyToClipboard(data);\n      toastFastInfoMessage(t(\"CopiedToClipboard\"));\n    }\n  };\n\n  const handleElementCopyToClipboard = (element: any) => {\n    if (element) {\n      copyToClipboard(element);\n      toastFastInfoMessage(t(\"CopiedToClipboard\"));\n    }\n  };\n\n  return (\n    <div className=\"relative\">\n      <MemoizedStyledReactMarkdown\n        remarkPlugins={[remarkGfm]}\n        children={stringifiedData}\n        fontSize={fontSize}\n        className={`markdown-body px-8 pt-8 text-lg`}\n        components={{\n          code(props: any) {\n            const { children, className, node, ...rest } = props;\n            const match = /language-(\\w+)/.exec(className || \"\");\n            return match ? 
(\n              <div className=\"flex flex-col\">\n                <div className=\" flex justify-between rounded-t-xl bg-zinc-800 px-1 py-2 text-zinc-300\">\n                  <div> {match[1]} </div>\n                  <div className=\"mr-2\">\n                    <IconButton\n                      onClick={(e) => {\n                        e.stopPropagation();\n                        handleElementCopyToClipboard(children);\n                      }}\n                      onTouchStart={(e) => {\n                        e.stopPropagation();\n                        handleElementCopyToClipboard(children);\n                      }}\n                      className=\"copy-icon\"\n                      aria-label=\"Copy text\"\n                      title=\"Copy text\"\n                    >\n                      <div className=\" flex items-center text-sm\">\n                        <FiCopy />\n\n                        <div> Copy </div>\n                      </div>\n                    </IconButton>\n                  </div>\n                </div>\n                <div className=\"rounded-b-xl bg-zinc-800 px-1 pb-1\">\n                  <SyntaxHighlighter\n                    {...rest}\n                    PreTag=\"div\"\n                    children={String(children).replace(/\\n$/, \"\")}\n                    language={match[1]}\n                    style={theme}\n                    customStyle={{\n                      margin: \"0px\",\n                    }}\n                  />\n                </div>\n              </div>\n            ) : (\n              <code {...rest} className={className}>\n                {children}\n              </code>\n            );\n          },\n        }}\n      />\n      <IconContainer\n        className=\"z-50\"\n        onDoubleClick={(e) => {\n          e.stopPropagation();\n        }}\n      >\n        <IconButton\n          onClick={(e) => {\n            e.stopPropagation();\n            increaseFontSize();\n          }}\n          onTouchStart={(e) => {\n            e.stopPropagation();\n            increaseFontSize();\n          }}\n          aria-label=\"Increase text size\"\n          title=\"Increase text size\"\n        >\n          <FiPlus />\n        </IconButton>\n        <IconButton\n          onClick={(e) => {\n            e.stopPropagation();\n            decreaseFontSize();\n          }}\n          onTouchStart={(e) => {\n            e.stopPropagation();\n            decreaseFontSize();\n          }}\n          aria-label=\"Decrease text size\"\n          title=\"Decrease text size\"\n        >\n          <FiMinus />\n        </IconButton>\n        <IconButton\n          onClick={(e) => {\n            e.stopPropagation();\n            handleCopyToClipboard(e);\n          }}\n          onTouchStart={(e) => {\n            e.stopPropagation();\n            handleCopyToClipboard(e);\n          }}\n          className=\"copy-icon\"\n          aria-label=\"Copy text\"\n          title=\"Copy text\"\n        >\n          <FiCopy />\n        </IconButton>\n      </IconContainer>\n    </div>\n  );\n};\n\nconst IconButton = styled.div`\n  cursor: pointer;\n  transition: color 0.2s;\n  color: #ffffff80;\n\n  &:hover {\n    color: #ffffff;\n  }\n`;\n\nconst IconContainer = styled.div`\n  position: absolute;\n  display: flex;\n  flex-direction: row;\n  align-items: center;\n  justify-content: center;\n  gap: 8px;\n  top: 0.5em;\n  right: 0.1em;\n`;\n\nconst StyledReactMarkdown = styled(ReactMarkdown)<{ fontSize: number }>`\n  background-color: 
transparent !important;\n  color: #f5f5f5;\n  font-size: ${(props) => props.fontSize}em;\n  user-select: text;\n`;\n\nexport const MemoizedStyledReactMarkdown = memo(\n  StyledReactMarkdown,\n  (prevProps, nextProps) => {\n    return (\n      prevProps.children === nextProps.children &&\n      prevProps.fontSize === nextProps.fontSize\n    );\n  },\n);\n\nexport default memo(MarkdownOutput);\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-output/NodeOutput.tsx",
    "content": "import { copyToClipboard } from \"../../../utils/navigatorUtils\";\nimport { NodeLogs, NodeLogsText } from \"../Node.styles\";\nimport { NodeData } from \"../types/node\";\nimport { toastFastInfoMessage } from \"../../../utils/toastUtils\";\nimport { useTranslation } from \"react-i18next\";\nimport styled from \"styled-components\";\nimport { FiCopy } from \"react-icons/fi\";\nimport { getOutputExtension } from \"./outputUtils\";\nimport { OutputType } from \"../../../nodes-configuration/types\";\nimport OutputDisplay from \"./OutputDisplay\";\n\ninterface NodeOutputProps {\n  data: NodeData;\n  showLogs: boolean;\n  onClickOutput: () => void;\n}\n\nexport default function NodeOutput({\n  data,\n  showLogs,\n  onClickOutput,\n}: NodeOutputProps) {\n  const { t } = useTranslation(\"flow\");\n\n  function getOutputType(): OutputType {\n    if (data.config?.outputType) {\n      return data.config.outputType;\n    }\n\n    if (!data.outputData) {\n      return \"markdown\";\n    }\n\n    let outputData = data.outputData;\n    let output = \"\";\n\n    if (typeof outputData !== \"string\") {\n      output = outputData[0];\n    } else {\n      output = outputData;\n    }\n\n    const outputType = getOutputExtension(output);\n\n    return outputType;\n  }\n\n  const outputType = getOutputType();\n\n  const outputIsMedia =\n    (outputType === \"imageUrl\" ||\n      outputType === \"imageBase64\" ||\n      outputType === \"videoUrl\" ||\n      outputType === \"audioUrl\" ||\n      outputType === \"pdfUrl\" ||\n      outputType === \"3dUrl\") &&\n    !!data.outputData;\n\n  return (\n    <NodeLogs\n      showLogs={showLogs}\n      noPadding={outputIsMedia && showLogs}\n      onDoubleClick={onClickOutput}\n      onClick={!showLogs ? onClickOutput : undefined}\n      className={`relative flex h-auto w-full flex-grow justify-center p-4 ${showLogs ? \"nodrag nowheel\" : \"\"}`}\n    >\n      {!showLogs && data.outputData ? (\n        <NodeLogsText className=\"flex h-auto w-full justify-center text-center\">\n          {t(\"ClickToShowOutput\")}\n        </NodeLogsText>\n      ) : (\n        <OutputDisplay data={data} />\n      )}\n    </NodeLogs>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-output/OutputDisplay.tsx",
    "content": "import MarkdownOutput from \"./MarkdownOutput\";\nimport { NodeData } from \"../types/node\";\nimport { useTranslation } from \"react-i18next\";\nimport { FiFile } from \"react-icons/fi\";\nimport ImageUrlOutput from \"./ImageUrlOutput\";\nimport ImageBase64Output from \"./ImageBase64Output\";\nimport VideoUrlOutput from \"./VideoUrlOutput\";\nimport AudioUrlOutput from \"./AudioUrlOutput\";\nimport { getOutputExtension } from \"./outputUtils\";\nimport PdfUrlOutput from \"./PdfUrlOutput\";\nimport { OutputType } from \"../../../nodes-configuration/types\";\nimport { useState } from \"react\";\nimport ThreeDimensionalUrlOutput from \"./ThreeDimensionalUrlOutput\";\n\ninterface OutputDisplayProps {\n  data: NodeData;\n  getOutputComponentOverride?: (\n    data: NodeData,\n    outputType: OutputType,\n  ) => JSX.Element | null;\n}\n\nexport default function OutputDisplay({\n  data,\n  getOutputComponentOverride,\n}: OutputDisplayProps) {\n  const { t } = useTranslation(\"flow\");\n\n  const [indexDisplayed, setIndexDisplayed] = useState(0);\n\n  const nbOutput =\n    data.outputData != null && typeof data.outputData !== \"string\"\n      ? data.outputData.length\n      : 1;\n\n  const getOutputComponent = () => {\n    if (getOutputComponentOverride) {\n      const override = getOutputComponentOverride(data, getOutputType());\n      if (override) {\n        return override;\n      }\n    }\n\n    if (!data.outputData) return <></>;\n\n    let output = data.outputData;\n\n    if (typeof output !== \"string\") {\n      output = output[indexDisplayed];\n    }\n\n    switch (getOutputType()) {\n      case \"imageUrl\":\n        return <ImageUrlOutput url={output} name={data.name} />;\n      case \"imageBase64\":\n        return (\n          <ImageBase64Output\n            data={output}\n            name={data.name}\n            lastRun={data.lastRun}\n          />\n        );\n      case \"videoUrl\":\n        return <VideoUrlOutput url={output} name={data.name} />;\n      case \"audioUrl\":\n        return <AudioUrlOutput url={output} name={data.name} />;\n      case \"3dUrl\":\n        return <ThreeDimensionalUrlOutput url={output} name={data.name} />;\n      case \"pdfUrl\":\n        return <PdfUrlOutput url={output} name={data.name} />;\n      case \"fileUrl\":\n        return (\n          <a href={output} target=\"_blank\" rel=\"noreferrer\">\n            <div className=\"flex flex-row items-center justify-center space-x-2 py-2 hover:text-sky-400\">\n              <FiFile className=\"text-4xl\" />\n              <p>{t(\"FileUploaded\")}</p>\n            </div>\n          </a>\n        );\n      default:\n        return (\n          <MarkdownOutput\n            data={output}\n            name={data.name}\n            appearance={data.appearance}\n          />\n        );\n    }\n  };\n\n  function getOutputType(): OutputType {\n    if (data.config?.outputType) {\n      return data.config.outputType;\n    }\n\n    if (!data.outputData) {\n      return \"markdown\";\n    }\n\n    let outputData = data.outputData;\n    let output = \"\";\n\n    if (typeof outputData !== \"string\") {\n      output = outputData[indexDisplayed];\n    } else {\n      output = outputData;\n    }\n\n    const outputType = getOutputExtension(output);\n\n    return outputType;\n  }\n\n  return (\n    <div className=\"flex h-full w-full flex-col\">\n      {nbOutput > 1 && typeof data.outputData !== \"string\" && (\n        <div className=\"mt-2 flex flex-row items-center justify-center gap-1 
overflow-x-auto p-1\">\n          {data?.outputData?.map((output, index) => (\n            <button\n              key={index}\n              className={`rounded-full ${index === indexDisplayed ? \"bg-orange-400\" : \"bg-gray-500 hover:bg-orange-200\"} whitespace-nowrap p-1.5 focus:outline-none focus:ring-2 focus:ring-orange-400`}\n              onClick={() => setIndexDisplayed(index)}\n              aria-label={`View output ${index + 1}`}\n              title={`Output ${index + 1}`}\n            />\n          ))}\n        </div>\n      )}\n      {getOutputComponent()}\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-output/PdfUrlOutput.tsx",
    "content": "import React from \"react\";\nimport { FaDownload } from \"react-icons/fa\";\nimport styled from \"styled-components\";\nimport { getGeneratedFileName } from \"./outputUtils\";\n\ninterface PdfUrlOutputProps {\n  url: string;\n  name: string;\n}\n\nconst PdfUrlOutput: React.FC<PdfUrlOutputProps> = ({ url, name }) => {\n  const handleDownloadClick = (event: React.MouseEvent) => {\n    event.stopPropagation();\n    const link = document.createElement(\"a\");\n    link.href = url;\n    link.download = getGeneratedFileName(url, name); // Ensure getGeneratedFileName handles PDF filenames correctly\n    link.target = \"_blank\";\n    link.click();\n  };\n\n  return (\n    <OutputPdfContainer>\n      <OutputPdf data={url} type=\"application/pdf\">\n        <p>\n          Your browser does not support PDFs. Please download the PDF to view\n          it: <a href={url}>Download PDF</a>.\n        </p>\n      </OutputPdf>\n      <div\n        className=\"absolute right-3 top-2 rounded-md bg-slate-600/75 px-1 py-1 text-2xl text-slate-100 hover:bg-sky-600/90\"\n        onClick={handleDownloadClick}\n      >\n        <FaDownload />\n      </div>\n    </OutputPdfContainer>\n  );\n};\n\nconst OutputPdfContainer = styled.div`\n  position: relative;\n  margin-top: 10px;\n  padding-top: 56.25%; // Maintain aspect ratio for PDF viewer\n  height: 0; // Use padding to define height based on the container's width\n  overflow: hidden;\n`;\n\nconst OutputPdf = styled.object`\n  position: absolute;\n  top: 0;\n  left: 0;\n  width: 100%;\n  height: 100%;\n`;\n\nexport default PdfUrlOutput;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-output/ThreeDimensionalUrlOutput.tsx",
    "content": "import React, { useEffect, useRef, useState } from \"react\";\nimport { FaDownload } from \"react-icons/fa\";\nimport styled from \"styled-components\";\nimport { getFileTypeFromUrl, getGeneratedFileName } from \"./outputUtils\";\nimport { useTranslation } from \"react-i18next\";\nimport {\n  Scene,\n  PerspectiveCamera,\n  WebGLRenderer,\n  AmbientLight,\n  DirectionalLight,\n} from \"three\";\nimport { OBJLoader } from \"three/examples/jsm/loaders/OBJLoader\";\nimport { GLTFLoader } from \"three/examples/jsm/loaders/GLTFLoader\";\nimport { OrbitControls } from \"three/examples/jsm/controls/OrbitControls\";\nimport { LoadingSpinner } from \"../Node.styles\";\n\ninterface ThreeDimensionalUrlOutputProps {\n  url: string;\n  name: string;\n}\n\nconst ThreeDimensionalUrlOutput: React.FC<ThreeDimensionalUrlOutputProps> = ({\n  url,\n  name,\n}) => {\n  const { t } = useTranslation(\"flow\");\n  const [hasError, setHasError] = useState(false);\n  const [isLoading, setIsLoading] = useState(true);\n  const [height, setHeight] = useState(\"500px\");\n\n  const containerRef = useRef<HTMLDivElement>(null);\n  const threeContainerRef = useRef<HTMLDivElement>(null);\n\n  const getParentContainer = () => {\n    return containerRef?.current?.parentElement;\n  };\n\n  const updateParentHeight = () => {\n    if (containerRef.current && containerRef.current.parentElement) {\n      const parentHeight = containerRef.current.parentElement.clientHeight;\n      const newHeight = Math.max(parentHeight, 500);\n      setHeight(`${newHeight}px`);\n    }\n  };\n\n  const loadObj = (url: string, scene: Scene) => {\n    new OBJLoader().load(\n      url,\n      (obj: any) => {\n        setIsLoading(false);\n        setHasError(false);\n        scene.add(obj);\n      },\n      undefined,\n      () => {\n        setHasError(true);\n      },\n    );\n  };\n\n  const loadGlb = (url: string, scene: Scene) => {\n    new GLTFLoader().load(\n      url,\n      (gltf: any) => {\n        setIsLoading(false);\n        setHasError(false);\n        scene.add(gltf.scene);\n      },\n      undefined,\n      () => {\n        setHasError(true);\n      },\n    );\n  };\n\n  useEffect(() => {\n    updateParentHeight();\n    const container = getParentContainer();\n    if (!container) return;\n\n    const type = getFileTypeFromUrl(url);\n\n    const scene = new Scene();\n    const camera = new PerspectiveCamera(\n      75,\n      container.clientWidth / container.clientHeight,\n      0.1,\n      1000,\n    );\n    const renderer = new WebGLRenderer();\n    renderer.setSize(container.clientWidth, container.clientHeight);\n    renderer.domElement.style.width = \"100%\";\n    renderer.domElement.style.height = \"100%\";\n    threeContainerRef.current?.appendChild(renderer.domElement);\n\n    const ambientLight = new AmbientLight(0xffffff, 1); // soft white light\n    scene.add(ambientLight);\n\n    const directionalLight = new DirectionalLight(0xffffff, 2);\n    directionalLight.position.set(1, 1, 1).normalize();\n    scene.add(directionalLight);\n\n    if (type === \"obj\") loadObj(url, scene);\n    else loadGlb(url, scene);\n\n    const controls = new OrbitControls(camera, renderer.domElement);\n\n    const animate = () => {\n      requestAnimationFrame(animate);\n      controls.update();\n      renderer.clear();\n      renderer.render(scene, camera);\n    };\n\n    camera.position.z = 5;\n    animate();\n\n    return () => {\n      controls.dispose();\n      renderer.dispose();\n      scene.children.forEach((child) => {\n   
     scene.remove(child);\n      });\n      // Detach any children skipped above because the array was mutated while iterating\n      scene.remove(...scene.children);\n    };\n  }, [url]);\n\n  const handleDownloadClick = (event: React.MouseEvent) => {\n    event.stopPropagation();\n    const link = document.createElement(\"a\");\n    link.href = url;\n    link.download = getGeneratedFileName(url, name);\n    link.target = \"_blank\";\n    link.click();\n  };\n\n  if (hasError) {\n    return <p className=\"p-3 text-center\"> {t(\"ExpiredURL\")}</p>;\n  }\n\n  return (\n    <OutputContainer ref={containerRef} style={{ height: height }}>\n      {isLoading ? (\n        <LoadingSpinner className=\"absolute w-full text-4xl\" />\n      ) : null}\n      <div\n        className={`three-container flex h-full w-full`}\n        ref={threeContainerRef}\n        key={url}\n        onClick={(e) => e.stopPropagation()}\n      />\n      <div\n        className=\"absolute right-3 top-2 rounded-md bg-slate-600/75 px-1 py-1 text-2xl text-slate-100 hover:bg-sky-600/90\"\n        onClick={handleDownloadClick}\n      >\n        <FaDownload />\n      </div>\n    </OutputContainer>\n  );\n};\n\nconst OutputContainer = styled.div`\n  position: relative;\n  align-items: center;\n  justify-items: center;\n  text-align: center;\n  display: flex;\n  margin-top: 10px;\n  width: auto;\n`;\n\nexport default ThreeDimensionalUrlOutput;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-output/VideoUrlOutput.tsx",
    "content": "import React, { useEffect, useState } from \"react\";\nimport { FaDownload } from \"react-icons/fa\";\nimport styled from \"styled-components\";\nimport { getGeneratedFileName } from \"./outputUtils\";\nimport { useTranslation } from \"react-i18next\";\n\ninterface VideoUrlOutputProps {\n  url: string;\n  name: string;\n}\n\nconst VideoUrlOutput: React.FC<VideoUrlOutputProps> = ({ url, name }) => {\n  const { t } = useTranslation(\"flow\");\n  const [hasError, setHasError] = useState(false);\n\n  useEffect(() => {\n    setHasError(false);\n  }, [url]);\n\n  const handleDownloadClick = (event: React.MouseEvent) => {\n    event.stopPropagation();\n    const link = document.createElement(\"a\");\n    link.href = url;\n    link.download = getGeneratedFileName(url, name);\n    link.target = \"_blank\";\n    link.click();\n  };\n\n  const handleError = () => {\n    setHasError(true);\n  };\n\n  const handleLoad = () => {\n    setHasError(false);\n  };\n\n  return (\n    <OutputVideoContainer>\n      {hasError ? (\n        <p className=\"text-center\"> {t(\"ExpiredURL\")}</p>\n      ) : (\n        <>\n          <OutputVideo\n            controls\n            src={url}\n            onError={handleError}\n            onLoad={handleLoad}\n          />{\" \"}\n          {}\n          <div\n            className=\"absolute right-3 top-2 rounded-md bg-slate-600/75 px-1 py-1 text-2xl text-slate-100 hover:bg-sky-600/90\"\n            onClick={handleDownloadClick}\n          >\n            <FaDownload />\n          </div>\n        </>\n      )}\n    </OutputVideoContainer>\n  );\n};\n\nconst OutputVideoContainer = styled.div`\n  position: relative;\n  margin-top: 10px;\n`;\n\nconst OutputVideo = styled.video`\n  display: block;\n  width: 100%;\n  height: auto;\n  border-radius: 8px;\n`;\n\nexport default VideoUrlOutput;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/node-output/outputUtils.ts",
    "content": "import { OutputType } from \"../../../nodes-configuration/types\";\n\nexport const getFileExtension = (url: string) => {\n  const extensionMatch = url.match(/\\.([0-9a-z]+)(?:[\\?#]|$)/i);\n  return extensionMatch ? extensionMatch[1] : \"\";\n};\n\nexport const getGeneratedFileName = (url: string, nodeName: string) => {\n  const extension = getFileExtension(url);\n  return `${nodeName}-output.${extension}`;\n};\n\nconst extensionToTypeMap: { [key: string]: OutputType } = {\n  // Image extensions\n  \".png\": \"imageUrl\",\n  \".jpg\": \"imageUrl\",\n  \".gif\": \"imageUrl\",\n  \".jpeg\": \"imageUrl\",\n  \".webp\": \"imageUrl\",\n  // Video extensions\n  \".mp4\": \"videoUrl\",\n  \".mov\": \"videoUrl\",\n  // Audio extensions\n  \".mp3\": \"audioUrl\",\n  \".wav\": \"audioUrl\",\n  // 3D extensions\n  \".obj\": \"3dUrl\",\n  \".glb\": \"3dUrl\",\n  // Other extensions\n  \".pdf\": \"fileUrl\",\n  \".txt\": \"fileUrl\",\n};\n\nexport function getOutputExtension(output: string): OutputType {\n  if (!output) return \"markdown\";\n  if (typeof output !== \"string\") return \"markdown\";\n\n  let extension = Object.keys(extensionToTypeMap).find((ext) =>\n    output.endsWith(ext),\n  );\n\n  if (!extension) {\n    extension = \".\" + getFileTypeFromUrl(output);\n  }\n\n  return extension ? extensionToTypeMap[extension] : \"markdown\";\n}\n\nexport function getFileTypeFromUrl(url: string) {\n  const lastDotIndex = url.lastIndexOf(\".\");\n  const urlWithoutParams = url.includes(\"?\")\n    ? url.substring(0, url.indexOf(\"?\"))\n    : url;\n  const fileType = urlWithoutParams.substring(lastDotIndex + 1);\n  return fileType;\n}\n"
  },
  {
    "path": "packages/ui/src/components/nodes/types/node.ts",
    "content": "import { NodeConfig, NodeSubConfig } from \"../../../nodes-configuration/types\";\n\nexport interface NodeInput {\n  inputName: string;\n  inputNode: string;\n  inputNodeOutputKey: number;\n}\n\nexport interface NodeAppearance {\n  color?: string;\n  customName?: string;\n  fontSize?: number;\n}\n\nexport interface NodeData {\n  id: string;\n  name: string;\n  handles: any;\n  processorType: string;\n  nbOutput: number;\n  inputs: NodeInput[];\n  outputData?: string[] | string;\n  lastRun?: string;\n  missingFields?: string[];\n  config: NodeConfig;\n  appearance?: NodeAppearance;\n  variantConfig?: NodeSubConfig;\n  [key: string]: any;\n}\n\nexport interface GenericNodeData extends NodeData {\n  width?: number;\n  height?: number;\n  [key: string]: any;\n}\n"
  },
  {
    "path": "packages/ui/src/components/nodes/utils/HintComponent.tsx",
    "content": "import React, { useState, useEffect } from \"react\";\nimport { useTranslation } from \"react-i18next\";\n\ninterface HintComponentProps {\n  hintId: string;\n  textVar: string;\n}\n\nconst HintComponent: React.FC<HintComponentProps> = ({ hintId, textVar }) => {\n  const { t } = useTranslation(\"flow\");\n  const [showHint, setShowHint] = useState<boolean>(false);\n\n  useEffect(() => {\n    const storageKey = `hasHintBeenHidden-${hintId}`;\n    const hasHintBeenHidden = localStorage.getItem(storageKey);\n    if (hasHintBeenHidden) {\n      setShowHint(false);\n    } else {\n      setShowHint(true);\n    }\n  }, [hintId]);\n\n  const handleHideClick = () => {\n    const storageKey = `hasHintBeenHidden-${hintId}`;\n    localStorage.setItem(storageKey, \"true\");\n    setShowHint(false);\n  };\n\n  return (\n    <>\n      {showHint && (\n        <div className=\"flex flex-col items-center justify-center bg-sky-500 p-3 text-center\">\n          <div>{t(textVar)}</div>\n          <button\n            className=\"mt-2 rounded bg-white px-4 py-2 text-sky-500 shadow hover:bg-slate-200\"\n            onClick={handleHideClick}\n          >\n            {t(\"HideHint\")}\n          </button>\n        </div>\n      )}\n    </>\n  );\n};\n\nexport default HintComponent;\n"
  },
  {
    "path": "packages/ui/src/components/nodes/utils/ImageModal.tsx",
    "content": "import ReactDOM from \"react-dom\";\nimport { FaTimes } from \"react-icons/fa\";\n\ninterface ImageModalProps {\n  src: string;\n  alt: string;\n  onClose: () => void;\n}\nexport function ImageModal({ src, alt, onClose }: ImageModalProps) {\n  return ReactDOM.createPortal(\n    <>\n      <div\n        className=\"fixed inset-0 flex items-center justify-center bg-black bg-opacity-50\"\n        style={{ zIndex: 9999 }}\n        onClick={onClose}\n        onTouchStart={onClose}\n      >\n        <div className=\"relative p-2\">\n          <img src={src} alt={alt} className=\"max-h-full max-w-full\" />\n          <button\n            onClick={onClose}\n            onTouchStart={onClose}\n            className=\"absolute right-1 top-1 p-2 text-white\"\n          >\n            <FaTimes />\n          </button>\n        </div>\n      </div>\n    </>,\n    document.body,\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/nodes/utils/ImageZoomable.tsx",
    "content": "import { useState } from \"react\";\nimport { FaSearchPlus, FaTimes } from \"react-icons/fa\";\nimport { ImageModal } from \"./ImageModal\";\n\ninterface ImageZoomableProps {\n  src: string;\n  alt: string;\n}\n\nexport function ImageZoomable({ src, alt }: ImageZoomableProps) {\n  const [isImageZoomed, setImageZoomed] = useState(false);\n\n  const handleImageZoom = () => setImageZoomed(true);\n  const handleCloseZoom = () => setImageZoomed(false);\n  return (\n    <>\n      <div className=\"relative h-fit\">\n        <img src={src} alt={alt} className=\"h-auto w-fit object-cover\" />\n        <button\n          onClick={handleImageZoom}\n          onTouchStart={handleImageZoom}\n          className=\"absolute bottom-0 right-0 p-2 text-white\"\n        >\n          <FaSearchPlus />\n        </button>\n      </div>\n      {isImageZoomed && (\n        <ImageModal src={src} alt={alt} onClose={handleCloseZoom} />\n      )}\n    </>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/nodes/utils/NodeHelp.tsx",
    "content": "import { url } from \"inspector\";\nimport { useState } from \"react\";\nimport { useTranslation } from \"react-i18next\";\nimport { FaSearchPlus, FaTimes } from \"react-icons/fa\";\nimport { ImageZoomable } from \"./ImageZoomable\";\n\nexport type UrlWithLabel = {\n  url: string;\n  label: string;\n};\nexport type NodeHelpData = {\n  description: string;\n  imageUrl: string;\n  docUrls: UrlWithLabel[];\n};\n\ninterface NodeHelpProps {\n  data: NodeHelpData;\n  onClose: () => void;\n}\n\nexport function NodeHelp({ data, onClose }: NodeHelpProps) {\n  const { t } = useTranslation(\"flow\");\n\n  if (!data || !data.description) {\n    return (\n      <div className=\"relative overflow-hidden\">\n        <div className=\"p-2 text-center\">{t(\"noDataAvailableForThisNode\")}</div>\n        <button\n          onClick={onClose}\n          onTouchStart={onClose}\n          className=\"absolute right-0 top-0 p-2 text-white\"\n        >\n          <FaTimes />\n        </button>\n      </div>\n    );\n  }\n\n  return (\n    <div className=\"relative overflow-hidden\">\n      {data.imageUrl && (\n        <ImageZoomable src={data.imageUrl} alt={data.description} />\n      )}\n      <button\n        onClick={onClose}\n        onTouchStart={onClose}\n        className=\"absolute right-0 top-0 p-2 text-white\"\n      >\n        <FaTimes />\n      </button>\n      <div className=\"p-4 text-white\">\n        <p className=\"mb-4 text-sm\">{data.description}</p>\n        {!!data.docUrls && data.docUrls.length > 0 && (\n          <>\n            <p className=\"text-xs\">{t(\"learnMore\")}</p>\n            <div className=\"flex flex-col flex-wrap\">\n              {data.docUrls.map((urlData, index) => (\n                <a\n                  key={index}\n                  href={urlData.url}\n                  target=\"_blank\"\n                  rel=\"noopener noreferrer\"\n                  className=\"text-xs text-blue-400 transition duration-150 ease-in-out hover:text-blue-300\"\n                >\n                  {urlData.label ?? \"More Info\"}\n                </a>\n              ))}\n            </div>\n          </>\n        )}\n      </div>\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/nodes/utils/NodeIcons.tsx",
    "content": "import { FC } from \"react\";\nimport {\n  AiOutlineEdit,\n  AiOutlineMergeCells,\n  AiOutlineSearch,\n} from \"react-icons/ai\";\nimport { BiMask } from \"react-icons/bi\";\nimport { BsFiletypeJson, BsListTask, BsRegex } from \"react-icons/bs\";\nimport { GiPerspectiveDiceSix } from \"react-icons/gi\";\nimport {\n  FaUserCircle,\n  FaRobot,\n  FaPlay,\n  FaLink,\n  FaFilm,\n  FaImage,\n  FaEye,\n  FaAws,\n  FaProjectDiagram,\n  FaGoogle,\n  FaRandom,\n} from \"react-icons/fa\";\nimport { SiZapier } from \"react-icons/si\";\nimport { FiFilter, FiRepeat } from \"react-icons/fi\";\nimport {\n  MdHttp,\n  MdLoop,\n  MdOutlineBolt,\n  MdOutlineCrop,\n  MdSwapHoriz,\n} from \"react-icons/md\";\nimport { TbHttpGet } from \"react-icons/tb\";\n\nconst ICON_MAP: { [key: string]: FC } = {\n  FaUserCircle: FaUserCircle,\n  FaRobot: FaRobot,\n  FaPlay: FaPlay,\n  FaLink: FaLink,\n  FaFilm: FaFilm,\n  FaImage: FaImage,\n  FaEye: FaEye,\n  FiFilter: FiFilter,\n  AiOutlineSearch: AiOutlineSearch,\n  BsRegex: BsRegex,\n  MdSwapHoriz: MdSwapHoriz,\n  AiOutlineEdit: AiOutlineEdit,\n  AiOutlineMergeCells: AiOutlineMergeCells,\n  BsJson: BsFiletypeJson,\n  FaAws: FaAws,\n  TbHttpGet: TbHttpGet,\n  MdHttp: MdHttp,\n  MdOutlineCrop: MdOutlineCrop,\n  BiMask: BiMask,\n  FaProjectDiagram: FaProjectDiagram,\n  FiRepeat: FiRepeat,\n  BsListTask: BsListTask,\n  SubflowLoop: () => (\n    <div>\n      <FaProjectDiagram className=\"\" />\n      <MdLoop className=\"absolute left-11 top-9\" />\n    </div>\n  ),\n  AIFlowLogo: () => <img src=\"./logo.svg\" alt=\"hi\" className=\"w-full\" />,\n  OpenAILogo: () => (\n    <img\n      src=\"./img/openai-white-logomark.svg\"\n      alt=\"openai\"\n      className=\"rounded-lg bg-teal-600 p-1\"\n    />\n  ),\n  ReplicateLogo: () => (\n    <img\n      src=\"./img/replicate-logo.png\"\n      alt=\"replicate\"\n      className=\"rounded-lg\"\n    />\n  ),\n  YoutubeLogo: () => (\n    <img\n      src=\"./img/youtube-logo.svg\"\n      alt=\"youtube\"\n      className=\"w-full rounded-lg bg-white px-1 py-2\"\n    />\n  ),\n  AnthropicLogo: () => (\n    <img\n      src=\"./img/anthropic-logo.svg\"\n      alt=\"anthropic\"\n      className=\"w-full rounded-lg bg-white p-1\"\n    />\n  ),\n  StabilityAILogo: () => (\n    <img\n      src=\"./img/stabilityai-logo.jpg\"\n      alt=\"stabilityai\"\n      className=\"w-full rounded-lg\"\n    />\n  ),\n  AirTableLogo: () => (\n    <img src=\"./img/airtable-logo.svg\" alt=\"airtable\" className=\"w-full\" />\n  ),\n  OpenRouterLogo: () => (\n    <img\n      src=\"./img/openrouter-logo.jpg\"\n      alt=\"openrouter\"\n      className=\"w-full rounded-lg \"\n    />\n  ),\n  FaGoogle,\n  ZapierIcon: () => <SiZapier />,\n  MakeIcon: () => (\n    <img src=\"./img/make-logo.svg\" alt=\"make\" className=\"w-full\" />\n  ),\n  DeepSeekLogo: () => (\n    <img\n      src=\"./img/deepseek-logo.png\"\n      alt=\"deepseek\"\n      className=\"w-full rounded-lg bg-white p-1\"\n    />\n  ),\n  GeminiIcon: () => (\n    <img\n      src=\"./img/gemini-logo.png\"\n      alt=\"gemini\"\n      className=\"w-full rounded-lg\"\n    />\n  ),\n  FaRandom,\n  GiPerspectiveDiceSix,\n};\n\nexport const getIconComponent = (type: string) => ICON_MAP[type];\n"
  },
  {
    "path": "packages/ui/src/components/nodes/utils/TextareaModal.tsx",
    "content": "import { Button, Textarea } from \"@mantine/core\";\nimport { ChangeEvent, useState } from \"react\";\nimport ReactDOM from \"react-dom\";\nimport { useTranslation } from \"react-i18next\";\nimport { FaTimes } from \"react-icons/fa\";\n\ninterface TextareaModalProps {\n  initValue: string;\n  onChange: (value: string) => void;\n  onClose: () => void;\n  fieldName?: string;\n}\nexport function TextareaModal({\n  initValue,\n  fieldName,\n  onChange,\n  onClose,\n}: TextareaModalProps) {\n  const { t } = useTranslation(\"flow\");\n  const [value, setValue] = useState<string>(initValue);\n\n  function handleChange(event: ChangeEvent<any>) {\n    setValue(event.target.value);\n  }\n\n  function handleValidate() {\n    onChange(value);\n    onClose();\n  }\n\n  return ReactDOM.createPortal(\n    <>\n      <div\n        className=\"fixed inset-0 flex items-center justify-center bg-black bg-opacity-50\"\n        style={{ zIndex: 9999 }}\n        onClick={onClose}\n        onTouchStart={onClose}\n      >\n        <div\n          className=\"relative flex max-h-[80vh] w-3/4 flex-col overflow-y-auto rounded-lg bg-zinc-900 p-4\"\n          onClick={(e) => e.stopPropagation()}\n        >\n          <div className=\"mb-2 flex flex-col space-y-2\">\n            <div className=\"font-semibold\">{t(\"EditTextContent\")}</div>\n            {fieldName && <p className=\"font-mono\">{fieldName}</p>}\n          </div>\n          <Textarea\n            defaultValue={value}\n            onChange={handleChange}\n            size=\"md\"\n            autosize\n            minRows={20}\n            maxRows={25}\n          />\n          <div className=\"flex justify-end pt-3\">\n            <Button color=\"teal\" onClick={handleValidate}>\n              {t(\"Validate\")}\n            </Button>\n          </div>\n          <button\n            onClick={onClose}\n            onTouchStart={onClose}\n            className=\"absolute right-1 top-1 p-2 text-white\"\n          >\n            <FaTimes />\n          </button>\n        </div>\n      </div>\n    </>,\n    document.body,\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/players/VideoJS.tsx",
    "content": "import React, { useEffect } from \"react\";\n\nimport \"video.js/dist/video-js.css\";\nimport \"videojs-wavesurfer/dist/css/videojs.wavesurfer.css\";\n\nimport videojs from \"video.js\";\nimport \"videojs-wavesurfer\";\n\ninterface VideoJSProps {\n  options: any;\n  onReady?: (player: any) => void;\n}\n\nexport const VideoJS = (props: VideoJSProps) => {\n  const videoRef = React.useRef<HTMLDivElement>(null);\n  const playerRef = React.useRef<any>(null);\n  const { options, onReady } = props;\n\n  useEffect(() => {\n    if (!playerRef.current) {\n      const videoElement = document.createElement(\"video-js\");\n\n      videoElement.classList.add(\"vjs-big-play-centered\");\n\n      if (!videoRef.current) return;\n\n      videoRef.current.appendChild(videoElement);\n\n      const player = (playerRef.current = videojs(videoElement, options, () => {\n        onReady && onReady(player);\n      }));\n    } else {\n      const player = playerRef.current;\n\n      player.autoplay(options.autoplay);\n      player.src(options.sources);\n    }\n  }, [options]);\n\n  useEffect(() => {\n    const player = playerRef.current;\n\n    return () => {\n      if (player && !player.isDisposed()) {\n        player.dispose();\n        playerRef.current = null;\n      }\n    };\n  }, [playerRef]);\n\n  return (\n    <div data-vjs-player className=\"h-full w-full\">\n      <div ref={videoRef} />\n    </div>\n  );\n};\n\nexport default VideoJS;\n"
  },
  {
    "path": "packages/ui/src/components/popups/ConfirmPopup.tsx",
    "content": "import React from \"react\";\nimport { Modal, Text, Button } from \"@mantine/core\";\n\ninterface ConfirmPopupProps {\n  isOpen: boolean;\n  onClose: () => void;\n  onConfirm: () => void;\n  message: string;\n  confirmButtonLabel?: string;\n  title?: string;\n}\n\nconst ConfirmPopup: React.FC<ConfirmPopupProps> = ({\n  isOpen,\n  onConfirm,\n  onClose,\n  message,\n  confirmButtonLabel,\n  title,\n}) => {\n  return (\n    <Modal\n      opened={isOpen}\n      onClose={onClose}\n      title={title}\n      withCloseButton={false}\n      size=\"md\"\n      centered\n      styles={{\n        content: {\n          borderRadius: \"0.75em\",\n          boxShadow: \"0 4px 8px rgba(0, 0, 0, 0.1)\",\n          background: \"linear-gradient(135deg, #101113, #1a1b1e)\",\n          padding: \"2em\",\n        },\n        title: {\n          fontSize: \"1.25rem\",\n          color: \"#d8dee9\",\n          fontWeight: \"bold\",\n          marginBottom: \"0.5em\",\n        },\n        header: {\n          background: \"transparent\",\n        },\n      }}\n    >\n      <Text mt=\"md\" color=\"#d8dee9\">\n        {message}\n      </Text>\n\n      <div className=\"mt-5 flex justify-center space-x-3\">\n        <Button\n          onClick={onConfirm}\n          color=\"#4cb897\"\n          styles={{\n            root: {\n              color: \"white\",\n              fontSize: \"1rem\",\n              fontWeight: \"bold\",\n              borderRadius: \"0.5em\",\n              boxShadow: \"0 2px 4px rgba(0, 0, 0, 0.1)\",\n            },\n            label: {\n              padding: \"0.75em 1em\",\n            },\n          }}\n        >\n          {confirmButtonLabel ?? \"Confirm\"}\n        </Button>\n        <Button\n          onClick={onClose}\n          color=\"#8d8d8d\"\n          styles={{\n            root: {\n              color: \"white\",\n              fontSize: \"1rem\",\n              fontWeight: \"bold\",\n              borderRadius: \"0.5em\",\n              boxShadow: \"0 2px 4px rgba(0, 0, 0, 0.1)\",\n            },\n            label: {\n              padding: \"0.75em 1em\",\n            },\n          }}\n        >\n          {\"Ignore\"}\n        </Button>\n      </div>\n    </Modal>\n  );\n};\nexport default ConfirmPopup;\n"
  },
  {
    "path": "packages/ui/src/components/popups/DefaultPopup.tsx",
    "content": "import React, { CSSProperties } from \"react\";\nimport ReactDOM from \"react-dom\";\nimport EaseOut from \"../shared/motions/EaseOut\";\n\ninterface DefaultPopupWrapperProps {\n  show: boolean;\n  onClose: () => void;\n  centered?: boolean;\n  popupClassNames?: string;\n  style?: CSSProperties;\n  children: React.ReactNode;\n}\n\nexport default function DefaultPopupWrapper({\n  show,\n  onClose,\n  centered,\n  popupClassNames,\n  style,\n  children,\n}: DefaultPopupWrapperProps) {\n  if (!show) return null;\n\n  return ReactDOM.createPortal(\n    <div\n      className=\"fixed left-0 top-0 z-50 flex h-full w-full flex-col items-center justify-center bg-black/50\"\n      onClick={onClose}\n      onTouchEnd={onClose}\n    >\n      <div\n        className={`${!!popupClassNames ? popupClassNames : \"h-5/6 w-5/6\"} flex flex-col items-center ${centered ? \"\" : \"mb-auto\"}`}\n        onClick={(e) => {\n          e.stopPropagation();\n        }}\n        onTouchEnd={(e) => e.stopPropagation()}\n        style={{ ...style }}\n      >\n        <EaseOut>{children}</EaseOut>\n      </div>\n    </div>,\n    document.body,\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/popups/HelpPopup.tsx",
    "content": "import React, { useState } from \"react\";\nimport styled from \"styled-components\";\nimport { useTranslation } from \"react-i18next\";\nimport { Badge, Button, Card, Group, Modal, Image, Text } from \"@mantine/core\";\nimport { FaTimes } from \"react-icons/fa\";\n\ninterface HelpPopupProps {\n  isOpen: boolean;\n  onClose: () => void;\n}\n\ninterface HelpArticle {\n  title: string;\n  description: string;\n  url: string;\n  imgUrl: string;\n  newFeature: boolean;\n}\n\nconst HelpPopup: React.FC<HelpPopupProps> = ({ isOpen, onClose }) => {\n  const { t } = useTranslation(\"tips\");\n\n  const [articleSelected, setArticleSelected] = useState<\n    HelpArticle | undefined\n  >();\n\n  const articles: HelpArticle[] = t(\"tips\", { returnObjects: true });\n\n  function selectArticle(item: HelpArticle) {\n    setArticleSelected(item);\n  }\n\n  function resetSelectedArticle() {\n    setArticleSelected(undefined);\n  }\n\n  return (\n    <Modal\n      opened={isOpen}\n      onClose={onClose}\n      title=\"Help\"\n      size=\"auto\"\n      centered\n      styles={{\n        title: {\n          fontSize: \"1.3rem\",\n          fontWeight: \"bold\",\n          color: \"white\",\n          paddingLeft: \"0.5rem\",\n        },\n        header: {\n          backgroundColor: \"#6b8177\",\n          fontSize: \"1.5rem\",\n        },\n        body: {\n          backgroundColor: \"rgb(24 24 27)\",\n          padding: 0,\n        },\n      }}\n    >\n      <PopupContent>\n        <div className=\"flex h-full w-full flex-row overflow-auto\">\n          {articleSelected && (\n            <div className=\"relative flex w-3/4\">\n              <iframe\n                src={articleSelected.url}\n                width=\"100%\"\n                height=\"100%\"\n                sandbox=\"allow-scripts allow-same-origin\"\n              ></iframe>\n              <div className=\"absolute top-2 flex w-full justify-center\">\n                <button\n                  onClick={resetSelectedArticle}\n                  onTouchStart={resetSelectedArticle}\n                  className=\"flex rounded-full p-1 text-slate-300/50 ring-1 ring-slate-300/50 hover:text-white hover:ring-white\"\n                >\n                  <FaTimes />\n                </button>\n              </div>\n            </div>\n          )}\n\n          <div\n            className={`mx-2 my-10 flex flex-wrap items-baseline justify-center gap-3 ${articleSelected ? \"w-1/4\" : \"w-full\"} `}\n          >\n            {articles.map((article, index) => (\n              <div\n                className={`h-auto  ${articleSelected ? 
\"w-fit\" : \"w-fit md:w-1/6\"}`}\n              >\n                <Card shadow=\"sm\" padding=\"lg\" radius=\"md\" withBorder>\n                  <Card.Section>\n                    <div className=\"relative\">\n                      <img\n                        src={article.imgUrl}\n                        alt={article.title}\n                        className=\"aspect-video\"\n                      />\n                      {article.newFeature && (\n                        <div className=\"absolute top-2 z-50 flex text-sm\">\n                          <div className=\"rounded-r-lg bg-teal-600 px-2 text-white\">\n                            New\n                          </div>\n                        </div>\n                      )}\n                    </div>\n                  </Card.Section>\n\n                  <Group justify=\"space-between\" mt=\"md\" mb=\"xs\">\n                    <Text fw={700} lineClamp={1}>\n                      {article.title}\n                    </Text>\n                  </Group>\n\n                  <Text size=\"sm\" c=\"dimmed\" lineClamp={2}>\n                    {article.description}\n                  </Text>\n\n                  <div\n                    className=\"mt-4 cursor-pointer pl-1 font-bold text-teal-400\"\n                    onClick={() => selectArticle(article)}\n                    onTouchStart={() => selectArticle(article)}\n                  >\n                    Read more {\">\"}\n                  </div>\n                </Card>\n              </div>\n            ))}\n          </div>\n        </div>\n      </PopupContent>\n    </Modal>\n  );\n};\n\nconst PopupContent = styled.div.attrs({\n  className: \"overflow-hidden hover:overflow-auto w-full\",\n})``;\n\nexport default HelpPopup;\n"
  },
  {
    "path": "packages/ui/src/components/popups/UserMessagePopup.tsx",
    "content": "import { Modal } from \"@mantine/core\";\nimport { ReactNode } from \"react\";\n\nexport enum MessageType {\n  Error,\n  Info,\n  Warning,\n}\n\nexport interface UserMessage {\n  type?: MessageType;\n  nodeId?: string;\n  content: string;\n}\n\ninterface PopupProps {\n  isOpen: boolean;\n  message: UserMessage;\n  children?: ReactNode;\n  onClose: () => void;\n}\n\nfunction UserMessagePopup(props: PopupProps) {\n  return props.isOpen ? (\n    <Modal\n      opened={props.isOpen}\n      onClose={props.onClose}\n      title={props.message?.type === MessageType.Error ? \"Error\" : \"Info\"}\n      size=\"auto\"\n      centered\n      styles={{\n        header: {\n          backgroundColor: \"rgb(24 24 27)\",\n        },\n        body: {\n          backgroundColor: \"rgb(24 24 27)\",\n        },\n      }}\n    >\n      <div className=\"flex h-full flex-col rounded-lg bg-zinc-900 px-4 text-slate-200\">\n        {!!props.message.nodeId && (\n          <div className=\"my-2 flex w-full justify-center\">\n            <div className=\"w-fit rounded-lg bg-sky-400/30 px-2 py-1\">\n              {props.message.nodeId}\n            </div>\n          </div>\n        )}\n        <div className=\"mt-5 text-slate-300\">{props.message?.content}</div>\n        {props.children}\n      </div>\n    </Modal>\n  ) : (\n    <></>\n  );\n}\n\nexport default UserMessagePopup;\n"
  },
  {
    "path": "packages/ui/src/components/popups/config-popup/AppParameters.tsx",
    "content": "import React, { useContext, useEffect, useState } from \"react\";\nimport { useTranslation } from \"react-i18next\";\nimport { Field, Input, Label, Section } from \"./ParametersFields\";\nimport { AppConfig, configMetadata } from \"./configMetadata\";\nimport { SocketContext } from \"../../../providers/SocketProvider\";\nimport { Button } from \"@mantine/core\";\n\nexport default function AppParameters() {\n  const { t } = useTranslation(\"flow\");\n  const { socket, connect } = useContext(SocketContext);\n\n  // Load configuration from local storage immediately\n  const initialConfig: Partial<AppConfig> = JSON.parse(\n    localStorage.getItem(\"appConfig\") || \"{}\",\n  );\n  const [config, setConfig] = useState<Partial<AppConfig>>(initialConfig);\n  const [error, setError] = useState<string | null>(null);\n  const [success, setSuccess] = useState<string | null>(null);\n\n  useEffect(() => {\n    connect();\n  }, [connect]);\n\n  const handleChange = (e: React.ChangeEvent<HTMLInputElement>) => {\n    const { id, value } = e.target;\n    setConfig((prev: Partial<AppConfig>) => {\n      const newConfig = { ...prev, [id]: value };\n      localStorage.setItem(\"appConfig\", JSON.stringify(newConfig));\n      return newConfig;\n    });\n  };\n\n  const handleSubmit = async () => {\n    setError(null);\n    setSuccess(null);\n    try {\n      if (!socket) {\n        throw new Error(\"Socket is not connected\");\n      }\n      socket.emit(\"update_app_config\", config);\n      setSuccess(\"Configuration updated successfully.\");\n    } catch (err) {\n      console.error(\"Error updating config:\", err);\n      setError(\"Failed to update configuration.\");\n    }\n  };\n\n  return (\n    <div className=\"app-parameters\">\n      {error && <p className=\"text-red-500\">{error}</p>}\n      {success && <p className=\"text-green-500\">{success}</p>}\n\n      {(Object.keys(configMetadata) as (keyof AppConfig)[]).map((key) => (\n        <Section key={key}>\n          <Field key={key}>\n            <Label htmlFor={key}>{configMetadata[key].label}</Label>\n            {configMetadata[key].description && (\n              <p className=\"text-sm text-slate-400\">\n                {configMetadata[key].description}\n              </p>\n            )}\n            <Input\n              type={configMetadata[key].type}\n              id={key}\n              value={config[key] || \"\"}\n              onChange={handleChange}\n            />\n          </Field>\n        </Section>\n      ))}\n\n      <Button onClick={handleSubmit} color=\"teal\">\n        Save Configuration\n      </Button>\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/popups/config-popup/ConfigPopup.tsx",
    "content": "import { FaGithub, FaXTwitter } from \"react-icons/fa6\";\nimport styled from \"styled-components\";\nimport { useTranslation } from \"react-i18next\";\nimport { Modal } from \"@mantine/core\";\nimport { UserParameters } from \"./UserParameters\";\nimport DisplayParameters from \"./DisplayParameters\";\nimport { useVisibility } from \"../../../providers/VisibilityProvider\";\nimport AppParameters from \"./AppParameters\";\n\ninterface ConfigPopupProps {\n  isOpen: boolean;\n  onClose: () => void;\n  onValidate?: () => void;\n}\n\nconst ConfigPopup = ({ isOpen }: ConfigPopupProps) => {\n  const { t } = useTranslation(\"config\");\n\n  const { getElement, configActiveTab, setConfigActiveTab } = useVisibility();\n  const configPopup = getElement(\"configPopup\");\n\n  const handleClose = () => {\n    configPopup.hide();\n  };\n\n  return (\n    <Modal\n      opened={isOpen}\n      onClose={handleClose}\n      withCloseButton={false}\n      size=\"50%\"\n      centered\n      styles={{\n        content: {\n          borderRadius: \"0.75em\",\n          boxShadow: \"0 4px 8px rgba(0, 0, 0, 0.1)\",\n          background:\n            \"linear-gradient(135deg, #101113, #1a1b1e), url('/backgrounds/g-simple.png')\",\n          backgroundBlendMode: \"overlay\",\n          backgroundSize: \"cover\",\n          backgroundRepeat: \"no-repeat\",\n          backgroundPosition: \"center\",\n          padding: \"2em\",\n          color: \"#d8dee9\",\n          minHeight: \"100%\",\n        },\n        title: {\n          fontSize: \"1.25rem\",\n          color: \"#d8dee9\",\n          fontWeight: \"bold\",\n          marginBottom: \"0.5em\",\n        },\n        header: {\n          background: \"transparent\",\n        },\n      }}\n    >\n      <Content>\n        <Tabs className=\"sm:text-md text-base\">\n          <Tab\n            isActive={configActiveTab === \"user\"}\n            onClick={() => setConfigActiveTab(\"user\")}\n          >\n            {t(\"userTabLabel\")}\n          </Tab>\n          <Tab\n            isActive={configActiveTab === \"app\"}\n            onClick={() => setConfigActiveTab(\"app\")}\n          >\n            {t(\"appParametersLabel\")}\n          </Tab>\n          <Tab\n            isActive={configActiveTab === \"display\"}\n            onClick={() => setConfigActiveTab(\"display\")}\n          >\n            {t(\"displayTabLabel\")}\n          </Tab>\n        </Tabs>\n        {configActiveTab === \"user\" && <UserParameters />}\n        {configActiveTab === \"app\" && <AppParameters />}\n        {configActiveTab === \"display\" && <DisplayParameters />}\n        <Footer>\n          <Message>{t(\"supportProjectPrompt\")}</Message>\n          <Icons>\n            <Icon\n              href=\"https://github.com/DahnM20/ai-flow\"\n              target=\"_blank\"\n              rel=\"noopener noreferrer\"\n            >\n              <FaGithub />\n            </Icon>\n            <Icon\n              href=\"https://twitter.com/DahnM20\"\n              target=\"_blank\"\n              rel=\"noopener noreferrer\"\n            >\n              <FaXTwitter />\n            </Icon>\n          </Icons>\n        </Footer>\n      </Content>\n    </Modal>\n  );\n};\n\nconst Content = styled.div`\n  display: flex;\n  flex-direction: column;\n  justify-content: space-between;\n  overflow: auto;\n`;\n\nconst Tabs = styled.div`\n  display: flex;\n  justify-content: center;\n  margin-bottom: 20px;\n`;\n\nconst Tab = styled.button<{ isActive: boolean }>`\n  padding: 10px 
20px;\n  font-weight: bold;\n  color: ${(props) => (props.isActive ? \"#fff\" : \"#b4b4b4\")};\n  background-color: ${(props) => (props.isActive ? \"#1a1b1e\" : \"transparent\")};\n  border: none;\n  border-bottom: ${(props) => (props.isActive ? \"2px solid #00bcd4\" : \"none\")};\n  cursor: pointer;\n  transition:\n    color 0.3s,\n    background-color 0.3s;\n\n  &:hover {\n    color: #fff;\n  }\n`;\n\nconst Footer = styled.div`\n  margin-top: 20px;\n  display: flex;\n  flex-direction: column;\n  align-items: center;\n  font-size: 14px;\n`;\n\nconst Message = styled.p`\n  margin-bottom: 10px;\n`;\n\nconst Icons = styled.div`\n  display: flex;\n  gap: 10px;\n`;\n\nconst Icon = styled.a`\n  font-size: 1.75em;\n  cursor: pointer;\n  transition: color 0.3s ease-in-out;\n\n  &:hover {\n    color: #b3edff;\n  }\n`;\n\nexport default ConfigPopup;\n"
  },
  {
    "path": "packages/ui/src/components/popups/config-popup/DisplayParameters.tsx",
    "content": "import { useTranslation } from \"react-i18next\";\nimport { nodeConfigs } from \"../../../nodes-configuration/nodeConfig\";\nimport { ActionButton, Actions, ParametersContainer } from \"./UserParameters\";\nimport { Checkbox } from \"@mantine/core\";\nimport { useState } from \"react\";\nimport { getNodesHiddenList, saveNodesHiddenList } from \"./parameters\";\nimport { toastFastSuccessMessage } from \"../../../utils/toastUtils\";\nimport {\n  getNonGenericNodeConfig,\n  transformNodeConfigsToDndNode,\n} from \"../../../nodes-configuration/sectionConfig\";\nimport { useVisibility } from \"../../../providers/VisibilityProvider\";\n\nexport default function DisplayParameters() {\n  const { t } = useTranslation(\"flow\");\n  const { t: tc } = useTranslation(\"config\");\n  const { getElement } = useVisibility();\n\n  const minimap = getElement(\"minimap\");\n\n  const [nodesHidden, setNodesHidden] =\n    useState<string[]>(getNodesHiddenList());\n\n  function handleCheckField(key: string): void {\n    if (nodesHidden.includes(key)) {\n      setNodesHidden(nodesHidden.filter((node) => node !== key));\n    } else {\n      setNodesHidden([...nodesHidden, key]);\n    }\n  }\n\n  function handleSave(): void {\n    saveNodesHiddenList(nodesHidden);\n    toastFastSuccessMessage(tc(\"configUpdated\"));\n  }\n\n  let allNodes = transformNodeConfigsToDndNode(nodeConfigs).concat(\n    getNonGenericNodeConfig(),\n  );\n\n  const nodesBySection = allNodes.reduce((acc: Record<string, any[]>, node) => {\n    const section = node.section || \"Default\";\n    if (!acc[section]) {\n      acc[section] = [];\n    }\n    acc[section].push(node);\n    return acc;\n  }, {});\n\n  return (\n    <div className=\"flex w-full justify-center\">\n      <div className=\"flex flex-col\">\n        <ParametersContainer className=\"flex flex-col\">\n          <h3 className=\"mb-2 font-semibold\">{tc(\"UI\")}</h3>\n          <Checkbox\n            label={tc(\"ShowMinimap\")}\n            size=\"sm\"\n            darkHidden={false}\n            color=\"cyan\"\n            checked={minimap.isVisible}\n            onChange={() => minimap.toggle()}\n          />\n\n          <h3 className=\"mb-2 mt-10 font-semibold\">{tc(\"Core Nodes\")}</h3>\n\n          <div className=\"flex flex-col gap-10 md:flex-row\">\n            {Object.keys(nodesBySection).map((section) => (\n              <div key={section} className=\"mb-4\">\n                <h4 className=\"mb-2 text-sm italic\">{tc(section)}</h4>\n                <div\n                  className=\"mb-5 flex w-full items-center justify-center\"\n                  style={{\n                    display: \"grid\",\n                    gridTemplateColumns: \"repeat(auto-fit, minmax(150px, 1fr))\",\n                    gap: \"10px\",\n                  }}\n                >\n                  {nodesBySection[section].map((node) => (\n                    <Checkbox\n                      key={node.type}\n                      label={t(node.label ?? 
node.type)}\n                      size=\"sm\"\n                      darkHidden={false}\n                      color=\"cyan\"\n                      checked={!nodesHidden?.includes(node.type)}\n                      onChange={() => handleCheckField(node.type)}\n                    />\n                  ))}\n                </div>\n              </div>\n            ))}\n          </div>\n        </ParametersContainer>\n        <Actions>\n          <ActionButton\n            onClick={handleSave}\n            className=\"bg-teal-500 hover:bg-teal-400\"\n          >\n            {tc(\"validateButtonLabel\")}\n          </ActionButton>\n        </Actions>\n      </div>\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/popups/config-popup/ParametersFields.tsx",
    "content": "import { useTranslation } from \"react-i18next\";\nimport styled from \"styled-components\";\nimport { Parameters } from \"./parameters\";\n\ninterface ParameterFieldsProps {\n  parameters: Parameters;\n  onParameterChange: (\n    section: string,\n    key: string,\n    value: string | number | boolean,\n  ) => void;\n}\n\nconst ParameterFields = ({\n  parameters,\n  onParameterChange,\n}: ParameterFieldsProps) => {\n  const { t } = useTranslation(\"config\");\n\n  return (\n    <div className=\"flex w-full flex-col\">\n      {Object.entries(parameters).map(([section, keys]) => (\n        <Section key={section}>\n          <SectionTitle>{t(`sections.${section}`)}</SectionTitle>\n          {Object.entries(keys).map(([key, { value, type }]) => (\n            <Field key={key}>\n              <Label htmlFor={`parameter-${section}-${key}`}>\n                {t(`parameters.${section}.${key}`)}\n              </Label>\n              <Input\n                type={\n                  type === \"boolean\"\n                    ? \"checkbox\"\n                    : value !== undefined\n                      ? \"password\"\n                      : \"text\"\n                }\n                id={`parameter-${section}-${key}`}\n                value={type === \"boolean\" ? undefined : value?.toString()}\n                checked={type === \"boolean\" ? (value as boolean) : undefined}\n                onChange={(e) =>\n                  onParameterChange(\n                    section,\n                    key,\n                    type === \"boolean\" ? e.target.checked : e.target.value,\n                  )\n                }\n              />\n            </Field>\n          ))}\n        </Section>\n      ))}\n    </div>\n  );\n};\n\nexport default ParameterFields;\n\nexport const Section = styled.div`\n  margin-bottom: 30px;\n  width: 100%;\n`;\n\nexport const SectionTitle = styled.h3`\n  font-size: 18px;\n  font-weight: bold;\n  margin-bottom: 10px;\n  color: #d8d8d8;\n`;\n\nexport const Field = styled.div`\n  display: flex;\n  flex-direction: column;\n  margin-bottom: 20px;\n`;\n\nexport const Label = styled.label`\n  font-size: 16px;\n  font-weight: bold;\n  margin-bottom: 5px;\n  color: #b4b4b4;\n`;\n\nexport const Input = styled.input`\n  padding: 10px;\n  border-radius: 5px;\n  border: none;\n  font-size: 16px;\n  width: 100%;\n  background-color: #80808012;\n  color: #cecece;\n`;\n"
  },
  {
    "path": "packages/ui/src/components/popups/config-popup/UserParameters.tsx",
    "content": "import { useContext, useState } from \"react\";\nimport { useTranslation } from \"react-i18next\";\nimport {\n  Parameters,\n  getConfigParameters,\n  updateParameters,\n} from \"./parameters\";\nimport {\n  SocketContext,\n  WSConfiguration,\n} from \"../../../providers/SocketProvider\";\nimport styled from \"styled-components\";\nimport ParameterFields from \"./ParametersFields\";\nimport { toastFastSuccessMessage } from \"../../../utils/toastUtils\";\n\nexport function UserParameters() {\n  const { t } = useTranslation(\"config\");\n  const { updateSocket } = useContext(SocketContext);\n\n  const [parameters, setParameters] = useState<Parameters>(\n    getConfigParameters(),\n  );\n\n  const onParameterChange = (section: string, name: string, value: any) => {\n    setParameters((prevParameters) => ({\n      ...prevParameters,\n      [section]: {\n        ...prevParameters[section],\n        [name]: {\n          ...prevParameters[section][name],\n          value,\n        },\n      },\n    }));\n  };\n\n  const handleValidate = () => {\n    updateParameters(parameters);\n    const config: WSConfiguration = {};\n    updateSocket(config);\n    toastFastSuccessMessage(t(\"configUpdated\"));\n  };\n\n  return (\n    <>\n      <ParametersContainer>\n        <ParameterFields\n          parameters={parameters}\n          onParameterChange={onParameterChange}\n        />\n      </ParametersContainer>\n      <Actions>\n        <ActionButton\n          onClick={handleValidate}\n          className=\"bg-teal-500 hover:bg-teal-400\"\n        >\n          {t(\"validateButtonLabel\")}\n        </ActionButton>\n      </Actions>\n    </>\n  );\n}\n\nexport const ParametersContainer = styled.div`\n  display: flex;\n  justify-content: center;\n  overflow: auto;\n  width: 100%;\n`;\n\nexport const Actions = styled.div`\n  display: flex;\n  justify-content: center;\n  align-items: center;\n  width: 100%;\n`;\n\nexport const ActionButton = styled.button`\n  display: flex;\n  align-items: center;\n  padding: 10px 20px;\n  color: #fff;\n  font-size: 16px;\n  font-weight: bold;\n  border: none;\n  border-radius: 5px;\n  cursor: pointer;\n  transition: background-color 0.3s ease-in-out;\n`;\n"
  },
  {
    "path": "packages/ui/src/components/popups/config-popup/configMetadata.ts",
    "content": "// src/configMetadata.ts\nexport interface FieldMetadata {\n  label: string;\n  description: string;\n  type: string; // e.g., \"text\", \"password\"\n  required?: boolean;\n}\n\nexport interface ConfigMetadata {\n  [key: string]: FieldMetadata;\n}\n\nexport interface AppConfig {\n  S3_BUCKET_NAME: string;\n  S3_AWS_ACCESS_KEY_ID: string;\n  S3_AWS_SECRET_ACCESS_KEY: string;\n  S3_AWS_REGION_NAME: string;\n  S3_ENDPOINT_URL: string;\n  REPLICATE_API_KEY: string;\n}\n\nexport const configMetadata: ConfigMetadata = {\n  S3_BUCKET_NAME: {\n    label: \"S3 Bucket Name\",\n    description: \"The name of your S3-compatible storage bucket.\",\n    type: \"text\",\n  },\n  S3_AWS_ACCESS_KEY_ID: {\n    label: \"S3 Access Key\",\n    description: \"Your S3 access key.\",\n    type: \"password\",\n  },\n  S3_AWS_SECRET_ACCESS_KEY: {\n    label: \"S3 Secret Access Key\",\n    description: \"Your S3 secret access key.\",\n    type: \"password\",\n  },\n  S3_AWS_REGION_NAME: {\n    label: \"S3 AWS Region Name\",\n    description: \"The AWS region where your S3 bucket is located.\",\n    type: \"text\",\n  },\n  S3_ENDPOINT_URL: {\n    label: \"Optional - S3 Endpoint URL\",\n    description: \"The URL of your S3-compatible storage endpoint.\",\n    type: \"text\",\n  },\n  REPLICATE_API_KEY: {\n    label: \"Replicate API Key\",\n    description: \"Used to fetch Replicate models.\",\n    type: \"password\",\n  },\n};\n"
  },
  {
    "path": "packages/ui/src/components/popups/config-popup/parameters.ts",
    "content": "import withCache from \"../../../api/cache/withCache\";\nimport { getParameters } from \"../../../api/parameters\";\nimport { getDefaultNodesHiddenList } from \"../../../config/config\";\n\nexport interface ParameterDetail {\n  value?: string | number | boolean;\n  type?: string;\n  tag?: string;\n  description?: string;\n}\n\nexport type Parameters = {\n  [section: string]: {\n    [key: string]: ParameterDetail;\n  };\n};\n\nconst defaultParameters: Parameters = {\n  core: {\n    openai_api_key: {\n      value: undefined,\n      tag: \"core\",\n    },\n    stabilityai_api_key: {\n      value: undefined,\n      tag: \"core\",\n    },\n    replicate_api_key: {\n      value: undefined,\n      tag: \"core\",\n    },\n  },\n};\n\nlet parameters: Parameters = {};\n\nconst PARAMETERS_KEY_LOCAL_STORAGE = \"parameters\";\nexport const PARAMETER_NODES_HIDDEN_LIST_KEY_LOCAL_STORAGE = \"nodes_hidden\";\n\nexport async function updateParameters(parameters: Parameters) {\n  window.localStorage.setItem(\n    PARAMETERS_KEY_LOCAL_STORAGE,\n    JSON.stringify(parameters),\n  );\n  loadFromLocalStorage();\n}\n\nexport function loadFromLocalStorage() {\n  const storedParameters = window.localStorage.getItem(\n    PARAMETERS_KEY_LOCAL_STORAGE,\n  );\n\n  Object.keys(parameters).forEach((section) => {\n    Object.keys(parameters[section]).forEach((key) => {\n      if (storedParameters) {\n        const storedParameter = JSON.parse(storedParameters);\n        if (!!storedParameter[section] && !!storedParameter[section][key]) {\n          parameters[section][key] = storedParameter[section][key];\n        }\n      }\n    });\n  });\n}\n\nexport async function loadParameters() {\n  migrateOldParameters();\n\n  const fetchedParameters = await withCache(getParameters);\n  parameters = !!fetchedParameters\n    ? 
{ ...fetchedParameters }\n    : defaultParameters;\n\n  loadFromLocalStorage();\n}\n\nexport function getConfigParameters(): Parameters {\n  return structuredClone(parameters);\n}\n\nexport function getConfigParametersFlat() {\n  const parametersFlat = Object.values(getConfigParameters() || {}).reduce(\n    (flat, keys) => {\n      return { ...flat, ...keys };\n    },\n    {},\n  );\n\n  let paramKeyValue: any = {};\n\n  Object.keys(parametersFlat).forEach((key) => {\n    if (!!parametersFlat[key].value) {\n      paramKeyValue[key] = parametersFlat[key].value;\n    }\n  });\n\n  return paramKeyValue;\n}\n\nexport function migrateOldParameters() {\n  if (!window.localStorage.getItem(\"apiKeys\")) return;\n\n  console.log(\"Migrating old parameters to new format\");\n\n  const oldParameters = JSON.parse(\n    window.localStorage.getItem(\"apiKeys\") || \"{}\",\n  );\n\n  const newParams = structuredClone(defaultParameters);\n\n  if (oldParameters.openai_api_key) {\n    newParams.core.openai_api_key.value = oldParameters.openai_api_key;\n  }\n  if (oldParameters.replicate_api_key) {\n    newParams.core.replicate_api_key.value = oldParameters.replicate_api_key;\n  }\n  if (oldParameters.stabilityai_api_key) {\n    newParams.core.stabilityai_api_key.value =\n      oldParameters.stabilityai_api_key;\n  }\n\n  window.localStorage.setItem(\n    PARAMETERS_KEY_LOCAL_STORAGE,\n    JSON.stringify(newParams),\n  );\n\n  window.localStorage.removeItem(\"apiKeys\");\n\n  return newParams;\n}\n\nlet loadedNodesHiddenList = loadNodesHiddenList();\n\nexport function loadNodesHiddenList(): string[] {\n  if (\n    !window.localStorage.getItem(PARAMETER_NODES_HIDDEN_LIST_KEY_LOCAL_STORAGE)\n  )\n    return getDefaultNodesHiddenList();\n  return JSON.parse(\n    window.localStorage.getItem(\n      PARAMETER_NODES_HIDDEN_LIST_KEY_LOCAL_STORAGE,\n    ) || \"[]\",\n  );\n}\nexport function getNodesHiddenList(): string[] {\n  return loadedNodesHiddenList;\n}\n\nexport function saveNodesHiddenList(nodesHiddenList: string[]) {\n  window.localStorage.setItem(\n    PARAMETER_NODES_HIDDEN_LIST_KEY_LOCAL_STORAGE,\n    JSON.stringify(nodesHiddenList),\n  );\n  loadedNodesHiddenList = loadNodesHiddenList();\n  window.dispatchEvent(new CustomEvent(\"nodesHiddenListChanged\", {}));\n}\n"
  },
  {
    "path": "packages/ui/src/components/popups/select-model-popup/Model.tsx",
    "content": "import { ModelData } from \"./SelectModelPopup\";\n\ninterface ModelProps {\n  model: ModelData;\n  onValidate: (modelName: string) => void;\n}\n\nexport function Model({ model, onValidate }: ModelProps) {\n  const realModelName = model.modelName.includes(\"/\")\n    ? model.modelName.split(\"/\")[1]\n    : model.modelName;\n\n  const authorName = model.modelName.includes(\"/\")\n    ? model.modelName.split(\"/\")[0]\n    : \"\";\n\n  return (\n    <div\n      key={model.modelName}\n      className=\"group relative flex h-96 w-full transform cursor-pointer flex-col overflow-hidden rounded-lg shadow-lg transition-all duration-300 ease-in-out hover:scale-105\"\n      onClick={() => onValidate(model.modelName)}\n    >\n      <img\n        src={model.coverImage || \"default-image-url.jpg\"}\n        alt={model.modelName}\n        className=\"h-2/3 w-full object-cover\"\n      />\n\n      <p className=\"absolute right-0 top-0 m-2 rounded-md bg-slate-800/90 p-1 text-xs text-slate-400\">\n        {model.runCount} 🚀\n      </p>\n      <div className=\"relative flex h-1/3 w-full flex-col border-t border-teal-600/20\">\n        <div\n          className=\"absolute inset-0 bg-cover bg-center blur-md filter\"\n          style={{\n            backgroundImage: `url(${model.coverImage || \"default-image-url.jpg\"})`,\n          }}\n        />\n        <div className=\"relative h-full w-full overflow-auto bg-zinc-800/80 p-3\">\n          <div className=\"flex flex-row items-center space-x-2 overflow-hidden\">\n            <p className=\"text-md truncate font-semibold\" title={realModelName}>\n              {realModelName}\n            </p>\n          </div>\n          <p\n            className=\"mt-1 flex-grow overflow-auto text-xs text-gray-300\"\n            title={model.description}\n          >\n            {model.description}\n          </p>\n        </div>\n      </div>\n      <div className=\"pointer-events-none absolute inset-0 rounded-lg border border-teal-600/30\"></div>\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/popups/select-model-popup/SelectModelPopup.tsx",
    "content": "import { useState, useEffect } from \"react\";\nimport { LoadingIcon } from \"../../nodes/Node.styles\";\nimport DefaultPopupWrapper from \"../DefaultPopup\";\nimport FilterGrid from \"../shared/FilterGrid\";\nimport LoadMoreButton from \"../shared/LoadMoreButton\";\nimport { useTranslation } from \"react-i18next\";\nimport Grid from \"../shared/Grid\";\nimport { Model } from \"./Model\";\nimport { useLoading } from \"../../../hooks/useLoading\";\nimport withCache from \"../../../api/cache/withCache\";\nimport {\n  getCollectionModels,\n  getCollections,\n  getHighlightedModels,\n  getPublicModels,\n} from \"../../../api/replicateModels\";\nimport { Modal } from \"@mantine/core\";\nimport { toastErrorMessage } from \"../../../utils/toastUtils\";\n\ninterface SelectModelPopupProps {\n  show: boolean;\n  onClose: () => void;\n  onValidate: (data: any) => void;\n}\n\nexport default function SelectModelPopup({\n  show,\n  onClose,\n  onValidate,\n}: SelectModelPopupProps) {\n  const { t } = useTranslation(\"flow\");\n\n  const [models, setModels] = useState<any>();\n  const [highlitedModels, setHighlightedModels] = useState<any>();\n  const [collections, setCollections] = useState<any>();\n  const [selectedCollection, setSelectedCollection] = useState<any>();\n  const [opening, startOpeningWith] = useLoading();\n  const [loading, startLoadingWith] = useLoading();\n  const [cursor, setCursor] = useState(\"\");\n\n  useEffect(() => {\n    async function loadAllData() {\n      try {\n        const collections = await withCache(getCollections);\n        const { models, cursor: newCursor } = await withCache(getPublicModels);\n        const highlightedModels = await withCache(getHighlightedModels);\n        const extractedData = extractModelsData(models);\n        const extractedHighlightedModels = extractModelsData(highlightedModels);\n        setCursor(newCursor);\n        setModels(extractedData);\n        setHighlightedModels(extractedHighlightedModels);\n        setCollections(collections);\n      } catch (e) {\n        toastErrorMessage(\n          \"Error while fetching Replicate models. 
Please check that REPLICATE_API_KEY is set in the app environnement or in the App parameters tab.\",\n        );\n        throw e;\n      }\n    }\n\n    async function configurePopup() {\n      await startOpeningWith(loadAllData);\n    }\n\n    if (!!models) return;\n\n    configurePopup();\n  }, []);\n\n  useEffect(() => {\n    if (!selectedCollection) return;\n\n    const loadCollectionModels = async () => {\n      const { models, cursor: newCursor } = await withCache(\n        getCollectionModels,\n        selectedCollection,\n      );\n      const extractedData = extractModelsData(models);\n      setModels(extractedData);\n      setCursor(newCursor);\n    };\n\n    startLoadingWith(loadCollectionModels);\n  }, [selectedCollection]);\n\n  function extractModelsData(models: any) {\n    return models\n      ?.map(\n        (result: {\n          default_example: { model: string };\n          cover_image_url: string;\n          description: string;\n          name: string;\n          owner: string;\n          run_count: number;\n        }) => {\n          return {\n            modelName: result.owner + \"/\" + result.name,\n            coverImage: result.cover_image_url,\n            description: result.description,\n            runCount: result.run_count,\n          };\n        },\n      )\n      .filter((model: any) => {\n        return !!model && model.coverImage != null;\n      });\n  }\n\n  async function handleSelectCollection(collectionName: string) {\n    setModels([]);\n    setCursor(\"\");\n    setSelectedCollection(collectionName);\n  }\n\n  async function loadCollectionsModels(cursor?: string) {\n    const collections = await withCache(\n      getCollectionModels,\n      selectedCollection,\n      cursor,\n    );\n    setCollections(collections);\n    return collections;\n  }\n\n  async function handleLoadMore() {\n    let newModels: any[] = [];\n    if (selectedCollection) {\n      const { models, cursor: newCursor } = await startLoadingWith(\n        loadCollectionsModels,\n        cursor,\n      );\n      setCursor(newCursor);\n      newModels = extractModelsData(models);\n    } else {\n      const { models, cursor: newCursor } = await startLoadingWith(\n        getPublicModels,\n        cursor,\n      );\n      setCursor(newCursor);\n      newModels = extractModelsData(models);\n    }\n    setModels([...models, ...newModels]);\n  }\n\n  const renderModelSections = () => {\n    if (!selectedCollection) {\n      return (\n        <>\n          <ModelsSection\n            title={t(\"SpotlightModels\")}\n            models={highlitedModels}\n            onValidate={onValidate}\n          />\n          <ModelsSection\n            title={t(\"AllModels\")}\n            models={models}\n            onValidate={onValidate}\n          />\n        </>\n      );\n    }\n    return <ModelsSection models={models} onValidate={onValidate} />;\n  };\n\n  if (opening) return <LoadingIcon className=\"ml-5\" />;\n\n  if (!models && !collections) return null;\n\n  return (\n    <DefaultPopupWrapper\n      show={show}\n      onClose={onClose}\n      centered\n      popupClassNames=\"overflow-auto w-5/6 h-5/6 flex rounded-xl p-4 shadow-lg\"\n      style={{\n        background: \"linear-gradient(135deg, #101113, #1a1b1e)\",\n      }}\n    >\n      <div className=\"flex h-full w-full\" data-test-id=\"select-model-popup\">\n        <div className=\"flex w-full flex-col rounded-xl text-slate-200 lg:flex-row\">\n          {collections && collections.length > 0 && (\n            <div className=\"flex 
h-fit w-full lg:w-auto lg:flex-shrink-0\">\n              <FilterGrid\n                filters={collections}\n                selectedFilter={selectedCollection}\n                onSelectFilter={handleSelectCollection}\n              />\n            </div>\n          )}\n\n          <div className=\"flex flex-col gap-2 lg:w-full\">\n            {renderModelSections()}\n\n            <LoadMoreButton\n              loading={loading}\n              cursor={cursor}\n              onLoadMore={handleLoadMore}\n            />\n          </div>\n        </div>\n      </div>\n    </DefaultPopupWrapper>\n  );\n}\n\ninterface ModelSectionProps {\n  title?: string | null;\n  models: any;\n  onValidate: (data: any) => void;\n}\n\nexport interface ModelData {\n  modelName: string;\n  coverImage: string;\n  description: string;\n  runCount: number;\n}\n\nconst ModelsSection = ({ title, models, onValidate }: ModelSectionProps) => {\n  if (!models || models.length == 0) return null;\n\n  const renderModelItem = (\n    model: ModelData,\n    onValidate: (modelName: string) => void,\n  ) => <Model key={model.modelName} model={model} onValidate={onValidate} />;\n\n  return (\n    <>\n      {title && <h2 className=\"p-1 text-xl\">{title}</h2>}\n      <Grid\n        items={models}\n        onValidate={onValidate}\n        renderItem={renderModelItem}\n        numberColMax={5}\n      />\n    </>\n  );\n};\n"
  },
  {
    "path": "packages/ui/src/components/popups/shared/FilterGrid.tsx",
    "content": "export type FilterItem = {\n  name: string;\n  slug: string;\n};\n\ntype FilterGridProps = {\n  filters: FilterItem[];\n  selectedFilter: string;\n  onSelectFilter: (slug: string) => void;\n};\n\nfunction FilterGrid({\n  filters,\n  selectedFilter,\n  onSelectFilter,\n}: FilterGridProps) {\n  function getUpperCaseFirstCharString(value: string) {\n    return value.charAt(0).toUpperCase() + value.slice(1);\n  }\n  return (\n    <div className=\"grid w-full grid-cols-1 gap-4 px-4\">\n      {filters &&\n        filters.map((filter) => (\n          <div\n            key={filter.slug}\n            className={`flex w-full flex-row items-center rounded-lg shadow-lg\n                         ${filter.slug === selectedFilter ? \"bg-zinc-700/70 hover:bg-zinc-500\" : \"bg-zinc-950/70 hover:bg-zinc-700\"}\n                         cursor-pointer py-1 transition-colors duration-300 ease-in-out`}\n          >\n            <p\n              className=\"w-full overflow-hidden whitespace-nowrap px-4 text-center text-sm\"\n              onClick={() => onSelectFilter(filter.slug)}\n            >\n              {getUpperCaseFirstCharString(filter.name)}\n            </p>\n          </div>\n        ))}\n    </div>\n  );\n}\n\nexport default FilterGrid;\n"
  },
  {
    "path": "packages/ui/src/components/popups/shared/Grid.tsx",
    "content": "import styled from \"styled-components\";\n\ninterface GridProps<T> {\n  items: T[];\n  renderItem: (item: T, onValidate: (id: string) => void) => JSX.Element;\n  onValidate: (id: string) => void;\n  numberColMax?: number;\n}\n\nconst getGridTemplateColumns = (maxCols: number) => {\n  return `\n    grid-template-columns: repeat(1, minmax(0, 1fr));\n    @media (min-width: 640px) {\n      grid-template-columns: repeat(${Math.min(2, maxCols)}, minmax(0, 1fr));\n    }\n    @media (min-width: 768px) {\n      grid-template-columns: repeat(${Math.min(3, maxCols)}, minmax(0, 1fr));\n    }\n    @media (min-width: 1024px) {\n      grid-template-columns: repeat(${Math.min(4, maxCols)}, minmax(0, 1fr));\n    }\n    @media (min-width: 1280px) {\n      grid-template-columns: repeat(${maxCols}, minmax(0, 1fr));\n    }\n  `;\n};\n\nexport default function Grid<T>({\n  items,\n  onValidate,\n  renderItem,\n  numberColMax = 2,\n}: GridProps<T>) {\n  return (\n    <StyledGrid maxCols={numberColMax}>\n      {items && items.map((item) => renderItem(item, onValidate))}\n    </StyledGrid>\n  );\n}\n\nconst StyledGrid = styled.div<{ maxCols: number }>`\n  display: grid;\n  width: 100%;\n  gap: 1rem;\n  ${({ maxCols }) => getGridTemplateColumns(maxCols)}\n`;\n"
  },
  {
    "path": "packages/ui/src/components/popups/shared/LoadMoreButton.tsx",
    "content": "import { useTranslation } from \"react-i18next\";\nimport { LoadingIcon } from \"../../nodes/Node.styles\";\n\ninterface LoadMoreButtonProps {\n  loading: boolean;\n  cursor: string | null;\n  onLoadMore: () => void;\n}\n\nexport default function LoadMoreButton({\n  loading,\n  cursor,\n  onLoadMore,\n}: LoadMoreButtonProps) {\n  const { t } = useTranslation(\"flow\");\n  return (\n    <div className=\"flex w-full justify-center\">\n      {loading ? (\n        <LoadingIcon className=\"ml-5 flex w-full items-center justify-center\" />\n      ) : (\n        cursor != null &&\n        cursor != \"\" && (\n          <div\n            className=\"text-md w-1/4 transform cursor-pointer rounded-lg bg-teal-800 py-1 text-center text-slate-200 shadow-lg transition-transform hover:scale-105 hover:bg-teal-700\"\n            onClick={onLoadMore}\n          >\n            {t(\"LoadMore\")}\n          </div>\n        )\n      )}\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/selectors/ActionGroup.tsx",
    "content": "import { ReactNode } from \"react\";\n\ninterface ActionGroupProps<T> {\n  actions: Action<T>[];\n  showIcon: boolean;\n}\n\nexport interface Action<T> {\n  name: string;\n  icon: ReactNode;\n  value: T;\n  onClick: () => void;\n  hoverColor?: string;\n  tooltipPosition?: \"top\" | \"bottom\" | \"left\" | \"right\";\n}\n\nexport default function ActionGroup<T>({\n  actions: options,\n  showIcon,\n}: ActionGroupProps<T>) {\n  return (\n    <div\n      className={`flex flex-row gap-x-2 \n        `}\n    >\n      {options.map((option) => {\n        return (\n          <button\n            key={option.name}\n            className={`cursor-pointer \n                        rounded-full bg-slate-200/10\n                        p-2\n                        text-stone-100\n                        hover:bg-slate-200/20\n                        ${option.hoverColor ? \"hover:\" + option.hoverColor : \"hover:text-blue-400\"}\n                        `}\n            onClick={option.onClick}\n            onTouchStart={option.onClick}\n            style={{ display: showIcon ? \"block\" : \"none\" }}\n            data-tooltip-id={\"app-tooltip\"}\n            data-tooltip-content={option.name}\n            data-tooltip-place={option.tooltipPosition ?? \"top\"}\n          >\n            {option.icon}\n          </button>\n        );\n      })}\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/selectors/ColorSelector.tsx",
    "content": "const colorList = [\n  \"transparent\",\n  \"chocolate\",\n  \"firebrick\",\n  \"cyan\",\n  \"greenyellow\",\n  \"gold\",\n  \"blueviolet\",\n  \"magenta\",\n];\n\ninterface ColorSelectorProps {\n  onChangeColor: (color: string) => void;\n}\n\nexport default function ColorSelector({ onChangeColor }: ColorSelectorProps) {\n  return (\n    <>\n      {colorList.map((color, index) => (\n        <div\n          key={index}\n          className=\"h-4 w-4 cursor-pointer rounded-full ring-slate-200 transition-all duration-150 ease-in-out hover:ring-2\"\n          style={{\n            backgroundColor: color,\n          }}\n          onClick={() => onChangeColor(color)}\n          onTouchStart={() => onChangeColor(color)}\n        />\n      ))}\n    </>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/selectors/ExpandableBloc.tsx",
    "content": "import { Disclosure, Transition } from \"@headlessui/react\";\nimport { FiChevronRight } from \"react-icons/fi\";\n\ninterface ExpandableBlocProps {\n  title: string;\n  children: React.ReactNode;\n  defaultOpen?: boolean;\n}\n\nexport default function ExpandableBloc({\n  title,\n  defaultOpen,\n  children,\n}: ExpandableBlocProps) {\n  return (\n    <Disclosure defaultOpen={defaultOpen}>\n      {({ open }) => (\n        <>\n          <Disclosure.Button className=\"flex flex-row items-center space-x-3 rounded-lg  bg-zinc-900/50 p-2 text-left text-xl\">\n            <FiChevronRight className={open ? \"rotate-90 transform\" : \"\"} />\n            <div>{title}</div>\n          </Disclosure.Button>\n          <Transition\n            enter=\"transition duration-100 ease-out\"\n            enterFrom=\"transform scale-95 opacity-0\"\n            enterTo=\"transform scale-100 opacity-100\"\n            leave=\"transition duration-75 ease-out\"\n            leaveFrom=\"transform scale-100 opacity-100\"\n            leaveTo=\"transform scale-95 opacity-0\"\n          >\n            <Disclosure.Panel className={\"p-1\"}>{children}</Disclosure.Panel>\n          </Transition>\n        </>\n      )}\n    </Disclosure>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/selectors/FileDropZone.tsx",
    "content": "import { Accept, useDropzone } from \"react-dropzone\";\nimport { FaCheckCircle, FaFileAlt } from \"react-icons/fa\";\n\ninterface FileDropZoneProps {\n  accept: Accept;\n  onAcceptFile: (files: File[]) => void;\n  oneFile: boolean;\n  dragActiveText?: string;\n  dropZoneText?: string;\n  selectedFiles?: File[] | null;\n  maxSize?: number;\n}\n\nconst DEFAULT_MAX_SIZE = 314572800; // 300 MB\n\nexport default function FileDropZone({\n  accept,\n  onAcceptFile,\n  oneFile,\n  dragActiveText = \"Drop the file here\",\n  dropZoneText = \"Drag and drop a file here or click to select\",\n  selectedFiles,\n  maxSize,\n}: FileDropZoneProps) {\n  const {\n    getRootProps,\n    getInputProps,\n    isDragActive = false,\n  } = useDropzone({\n    accept,\n    multiple: !oneFile,\n    maxSize: maxSize ?? DEFAULT_MAX_SIZE,\n    onDrop: (acceptedFiles, fileRejections) => {\n      if (fileRejections.length > 0) {\n        alert(\n          \"Some files were rejected due to exceeding the maximum size limit of 300MB.\",\n        );\n      }\n\n      if (oneFile) {\n        onAcceptFile([acceptedFiles[0]]);\n        return;\n      }\n      onAcceptFile(acceptedFiles);\n    },\n  });\n\n  return (\n    <div\n      className={`${isDragActive ? \" border-sky-300 \" : \"border-slate-500 \"} \n        flex  flex-col items-center space-y-3 rounded-lg  border-2 border-dashed p-20 text-slate-200 transition-all hover:text-sky-300`}\n      {...getRootProps()}\n    >\n      <input {...getInputProps()} />\n\n      {!!selectedFiles ? (\n        <>\n          <FaCheckCircle className=\"text-4xl text-green-400\" />\n          <p>{selectedFiles.length} file(s) selected</p>\n          {selectedFiles.map((file) => (\n            <p key={file.name}>{file.name}</p>\n          ))}\n        </>\n      ) : (\n        <>\n          <FaFileAlt className=\"text-4xl\" />\n          <p className=\"text-center text-lg\">\n            {isDragActive ? dragActiveText : dropZoneText}\n          </p>\n        </>\n      )}\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/selectors/OptionSelector.tsx",
    "content": "import React, { ReactNode } from \"react\";\n\ninterface OptionSelectorProps<T> {\n  options: Option<T>[];\n  selectedOption?: T;\n  onSelectOption: (option: Option<T>) => void;\n  showLabels?: boolean;\n}\n\nexport interface Option<T> {\n  name: string;\n  icon: ReactNode;\n  value: T;\n}\n\nexport default function OptionSelector<T>({\n  options,\n  selectedOption,\n  onSelectOption,\n  showLabels,\n}: OptionSelectorProps<T>) {\n  return (\n    <div className=\"flex flex-row items-center justify-center space-x-3 py-2\">\n      {options.map((option) => {\n        const isSelected = selectedOption === option.value;\n        return (\n          <div\n            key={option.name}\n            className={`\n                        flex cursor-pointer\n                        flex-row items-center\n                        justify-center rounded-lg\n                        p-2\n                        hover:bg-blue-400/50\n                        ${isSelected ? \"bg-blue-400 text-white\" : \" bg-slate-200/10 text-stone-100\"}\n                        `}\n            onClick={() => onSelectOption(option)}\n          >\n            {option.icon}\n            {showLabels && <span className=\"ml-2\">{option.name}</span>}\n          </div>\n        );\n      })}\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/components/selectors/SelectAutocomplete.tsx",
    "content": "import { Combobox, Transition } from \"@headlessui/react\";\nimport { Fragment, useState } from \"react\";\nimport { FaCheck } from \"react-icons/fa\";\nimport { FiChevronRight } from \"react-icons/fi\";\n\nexport interface SelectItem<T> {\n  name: string;\n  value: T;\n}\n\ninterface SelectAutocompleteProps<T> {\n  values: SelectItem<T>[];\n  selectedValue?: T;\n  onChange: (value: T) => void;\n}\n\nfunction SelectAutocomplete<T>({\n  values,\n  selectedValue,\n  onChange,\n}: SelectAutocompleteProps<T>) {\n  const [query, setQuery] = useState(\"\");\n\n  const filteredItems =\n    query === \"\"\n      ? values\n      : values.filter((item) => {\n          const nameStr = item.name ?? \"\" + item.value;\n          return nameStr.toLowerCase().includes(query.toLowerCase());\n        });\n\n  return (\n    <Combobox value={selectedValue} onChange={onChange}>\n      <div className=\"relative mt-1 w-full\">\n        <div className=\"relative w-full cursor-default overflow-hidden rounded-lg  border-none text-left shadow-md outline-none sm:text-sm\">\n          <Combobox.Input\n            className=\"w-full border-none bg-zinc-700/40 p-3 text-lg leading-5 text-slate-50 outline-none\"\n            onChange={(event) => setQuery(event.target.value)}\n            displayValue={(value: any) =>\n              filteredItems.find((item) => item.value === value)?.name ?? \"\"\n            }\n          />\n          <Combobox.Button className=\"absolute inset-y-0 right-0 flex items-center pr-2\">\n            <FiChevronRight\n              className=\"h-5 w-5 text-gray-400\"\n              aria-hidden=\"true\"\n            />\n          </Combobox.Button>\n        </div>\n        <Transition\n          as={Fragment}\n          leave=\"transition ease-in duration-100\"\n          leaveFrom=\"opacity-100\"\n          leaveTo=\"opacity-0\"\n          afterLeave={() => setQuery(\"\")}\n        >\n          <Combobox.Options className=\"absolute z-50 mt-1 max-h-60 w-full overflow-auto rounded-md bg-zinc-900 py-1 text-lg shadow-lg  outline-none sm:text-sm\">\n            {filteredItems.length === 0 && query !== \"\" ? (\n              <div className=\"relative cursor-default select-none px-4 py-2 text-slate-100\">\n                Nothing found.\n              </div>\n            ) : (\n              filteredItems.map((item, index) => (\n                <Combobox.Option\n                  key={item.value + \"-\" + index}\n                  className={({ active }) =>\n                    `relative cursor-default select-none py-2 pl-10 pr-4 ${\n                      active ? \"bg-teal-600 text-white\" : \"text-slate-100\"\n                    }`\n                  }\n                  value={item.value}\n                >\n                  {({ selected, active }) => (\n                    <>\n                      <span\n                        className={`block truncate ${\n                          selected ? \"font-medium\" : \"font-normal\"\n                        }`}\n                      >\n                        {item.name}\n                      </span>\n                      {selected ? (\n                        <span\n                          className={`absolute inset-y-0 left-0 flex items-center pl-3 ${\n                            active ? 
\"text-white\" : \"text-teal-600\"\n                          }`}\n                        >\n                          <FaCheck />\n                        </span>\n                      ) : null}\n                    </>\n                  )}\n                </Combobox.Option>\n              ))\n            )}\n          </Combobox.Options>\n        </Transition>\n      </div>\n    </Combobox>\n  );\n}\n\nexport default SelectAutocomplete;\n"
  },
  {
    "path": "packages/ui/src/components/shared/motions/EaseOut.tsx",
    "content": "import { motion } from \"framer-motion\";\nimport { AnimationProps } from \"./types\";\n\nfunction EaseOut({ children }: AnimationProps) {\n  return (\n    <motion.div\n      initial={{ opacity: 0, scale: 1 }}\n      animate={{ scale: [0.95, 1], opacity: [0, 1] }}\n      transition={{\n        duration: 0.3,\n        delay: 0,\n        ease: [0, 0.71, 0.2, 1.01],\n      }}\n      className=\"flex h-full w-full flex-col items-center justify-center\"\n    >\n      {children}\n    </motion.div>\n  );\n}\n\nexport default EaseOut;\n"
  },
  {
    "path": "packages/ui/src/components/shared/motions/TapScale.tsx",
    "content": "import { motion } from \"framer-motion\"\nimport { AnimationProps } from \"./types\";\nimport { memo } from \"react\";\n\ninterface TapScaleProps extends AnimationProps {\n    scale?: number;\n}\n\nfunction TapScale({ children, scale }: TapScaleProps) {\n    const actualScale = scale ?? 0.9\n    return (\n        <motion.div\n            whileTap={{ scale: actualScale }}\n        >\n            {children}\n        </motion.div >\n    )\n}\n\nexport default memo(TapScale);"
  },
  {
    "path": "packages/ui/src/components/shared/motions/types.ts",
    "content": "import { ReactNode } from \"react\";\n\n\nexport interface AnimationProps {\n    children: ReactNode;\n}"
  },
  {
    "path": "packages/ui/src/components/shared/theme.tsx",
    "content": "// theme.tsx\nexport const theme = {\n  light: {\n    bg: \"#f7f7f7\",\n    text: \"#232323\",\n    accent: \"linear-gradient(120deg, #84fab0 0%, #8fd3f4 100%)\",\n    accentText: \"rgb(33 34 43)\",\n    logsText: \"#EFF0F7\",\n    boxShadow: \"0px 2px 4px rgba(0, 0, 0, 0.2)\",\n    minimapBg: \"#f7f7f7\",\n    minimapMaskBg: \"#f2f2f5\",\n    primary: \"#ff0072\",\n    sidebarBg: \"rgb(242 242 245 / 67%)\",\n    tabBarBg: \"rgb(233 233 237)\",\n    popupBg: \"#f2f2f5\",\n    nodeBg: \"#f2f2f5\",\n    nodeGradientBg:\n      \"linear-gradient(rgb(249 242 249 / 41%), rgba(24, 20, 57, 0.01)) rgb(247, 247, 247)\",\n    outputBg: \"#f2f2f5\",\n    nodeInputBg: \"#f2f2f5\",\n    nodeColor: \"#222\",\n    nodeBorder: \"#222\",\n    controlsBg: \"#fefefe\",\n    controlsBgHover: \"#eee\",\n    controlsColor: \"#222\",\n    controlsBorder: \"#ddd\",\n    helpButtonBg: \"#555\",\n    optionButtonBg: \"#555\",\n    optionButtonBgSelected:\n      \"linear-gradient(29deg, rgba(2,0,36,1) 0%, rgba(89,198,104,0.8239670868347339) 0%, rgba(94,209,232,0.8603816526610644) 100%)\",\n    optionButtonBgHover: \"rgb(233 233 237)\",\n    optionButtonColor: \"#dddddd\",\n    optionButtonColorSelected: \"#f7f7f7\",\n  },\n  dark: {\n    bg: \"#232323\",\n    // text: \"#f7f7f7\",\n    text: \"#fbfbfb\",\n    accent: \"linear-gradient(120deg, #84fab0 0%, #8fd3f4 100%)\",\n    accentSelected:\n      \"linear-gradient(120deg, rgb(66 211 255) 0%, rgb(149 200 225) 100%)\",\n    accentText: \"#f5f5f5\",\n    logsText: \"#fcfcfc\",\n    boxShadow: \"rgb(1 1 1 / 82%) 0px 2px 4px\",\n    minimapBg: \"#2323237a\",\n    minimapMaskBg: \"#3434356e\",\n    primary: \"#ff0072\",\n    sidebarBg: \"rgba(31, 23, 34, 0.84)\",\n    tabBarBg: \"rgb(31 23 34)\",\n    popupBg: \"#343435\",\n    nodeBg: \"rgba(28, 28, 30, 0.685)\",\n    nodeGradientBg:\n      \"linear-gradient(rgb(32 32 32), rgba(24, 20, 57, 0.08)) rgb(41 44 58)\",\n    //outputBg: \"rgba(18, 15, 20, 0.84)\",\n    outputBg: \"rgb(22 20 24)\",\n    nodeInputBg: \"rgb(54 54 54 / 43%)\",\n    nodeColor: \"#f9f9f9\",\n    nodeBorder: \"#888\",\n    controlsBg: \"#555\",\n    controlsBgHover: \"#676768\",\n    controlsColor: \"#dddddd\",\n    controlsBorder: \"#676768\",\n    helpButtonBg: \"#555\",\n    optionButtonBg: \"#555\",\n    optionButtonBgSelected: \"rgb(72 159 159)\",\n    optionButtonBgHover: \"#676768\",\n    optionButtonColor: \"#dddddd\",\n    optionButtonColorSelected: \"#f7f7f7\",\n  },\n};\n"
  },
  {
    "path": "packages/ui/src/components/side-views/CurrentNodeView.tsx",
    "content": "import React, { useContext, memo } from \"react\";\nimport styled from \"styled-components\";\nimport { useTranslation } from \"react-i18next\";\nimport { NodeContext } from \"../../providers/NodeProvider\";\nimport { useFormFields } from \"../../hooks/useFormFields\";\nimport { Field } from \"../../nodes-configuration/types\";\nimport OutputDisplay from \"../nodes/node-output/OutputDisplay\";\nimport ExpandableBloc from \"../selectors/ExpandableBloc\";\nimport NodePlayButton from \"../nodes/node-button/NodePlayButton\";\n\ninterface CurrentNodeViewProps {}\n\nconst CurrentNodeView: React.FC<CurrentNodeViewProps> = () => {\n  const { t } = useTranslation(\"flow\");\n  const { onUpdateNodeData, currentNodeIdSelected, findNode, hasParent } =\n    useContext(NodeContext);\n\n  const node = findNode(currentNodeIdSelected);\n\n  const handleNodeDataChange = (fieldName: string, value: any) => {\n    if (!node) return;\n    onUpdateNodeData(node.id, {\n      ...node.data,\n      [fieldName]: value,\n    });\n  };\n\n  function setDefaultOptions() {\n    if (!node || !node.data.config?.fields) return;\n    const defaultOptions: any = {};\n    node.data.config.fields\n      .filter(\n        (field: Field) =>\n          field.options?.find((option) => option.default) &&\n          !node.data[field.name],\n      )\n      .forEach((field: Field) => {\n        defaultOptions[field.name] = field.options?.find(\n          (option) => option.default,\n        )?.value;\n      });\n\n    onUpdateNodeData(node.id, {\n      ...node.data,\n      ...defaultOptions,\n    });\n  }\n\n  const formFields = useFormFields(\n    node?.data,\n    node?.id ?? \"\",\n    handleNodeDataChange,\n    setDefaultOptions,\n    hasParent,\n    {\n      showHandles: false,\n      showLabels: true,\n    },\n  );\n\n  if (!currentNodeIdSelected || !node)\n    return (\n      <ViewContainer className=\"my-5 flex flex-col space-y-1 text-center text-lg\">\n        <p>{t(\"NoNodeSelected\")}</p>\n        <p>{t(\"ClickOnNodeToSelectIt\")}</p>\n      </ViewContainer>\n    );\n\n  return (\n    <>\n      <ViewContainer className=\"space-y-2\">\n        <div className=\"mb-4 flex flex-col items-center justify-center\">\n          <div className=\"flex flex-row space-x-3\">\n            <p className=\"text-lg font-bold text-sky-100\">\n              {t(node?.data.config?.nodeName)}\n            </p>\n            <NodePlayButton nodeName={currentNodeIdSelected} size=\"medium\" />\n          </div>\n          <p className=\"mt-1 rounded-md bg-zinc-600/30 px-2 py-1 font-mono text-sm\">\n            {currentNodeIdSelected}\n          </p>\n        </div>\n        {!!formFields && (\n          <div className=\"flex flex-col space-y-3\">\n            <ExpandableBloc title={t(\"Parameters\")} defaultOpen>\n              <div className=\"space-y-1 px-1\">{formFields}</div>\n            </ExpandableBloc>\n          </div>\n        )}\n        {!!node?.data.outputData && (\n          <div className=\"flex flex-col space-y-3\">\n            <ExpandableBloc title={t(\"Output\")} defaultOpen>\n              <OutputDisplay data={node?.data} />\n            </ExpandableBloc>\n          </div>\n        )}\n      </ViewContainer>\n    </>\n  );\n};\n\nconst ViewContainer = styled.div`\n  padding: 20px;\n`;\n\nexport default memo(CurrentNodeView);\n"
  },
  {
    "path": "packages/ui/src/components/side-views/JSONView.tsx",
    "content": "import React, { useContext, memo, useState, useEffect } from \"react\";\nimport {\n  FiCloud,\n  FiCrosshair,\n  FiDownload,\n  FiSave,\n  FiUpload,\n} from \"react-icons/fi\";\nimport { Edge, Node } from \"reactflow\";\nimport {\n  clearSelectedNodes,\n  convertFlowToJson,\n  convertJsonToFlow,\n  nodesTopologicalSort,\n} from \"../../utils/flowUtils\";\nimport { useTranslation } from \"react-i18next\";\nimport { NodeContext } from \"../../providers/NodeProvider\";\nimport { FaExclamationTriangle } from \"react-icons/fa\";\nimport { isDev } from \"../../config/config\";\nimport { Button, Group, Switch } from \"@mantine/core\";\nimport { MdCopyAll } from \"react-icons/md\";\nimport \"react18-json-view/src/style.css\";\nimport \"react18-json-view/src/dark.css\";\nimport JsonViewLib from \"react18-json-view\";\n\ninterface JSONViewProps {\n  nodes: Node[];\n  edges: Edge[];\n  onChangeFlow: (nodes: Node[], edges: Edge[]) => void;\n}\n\nconst JSONView: React.FC<JSONViewProps> = ({ nodes, edges, onChangeFlow }) => {\n  const { t } = useTranslation(\"flow\");\n  const { removeAll, clearAllOutput } = useContext(NodeContext);\n  const [showFieldsConfig, setShowFieldsConfig] = useState(isDev());\n  const [showCoordinates, setShowCoordinates] = useState(isDev());\n\n  // Sort and convert nodes and edges to JSON\n  const sortedNodes = nodesTopologicalSort(nodes, edges);\n  const data = convertFlowToJson(\n    sortedNodes,\n    edges,\n    showCoordinates,\n    showFieldsConfig,\n  );\n\n  const handleUploadClick = (event: React.MouseEvent<HTMLButtonElement>) => {\n    const input = document.createElement(\"input\");\n    input.type = \"file\";\n    input.accept = \"application/json\";\n\n    input.onchange = () => {\n      const file = input.files?.[0];\n\n      if (file) {\n        const reader = new FileReader();\n        reader.readAsText(file, \"UTF-8\");\n        reader.onload = (evt) => {\n          if (evt.target) {\n            const flow = evt.target.result as string;\n            const { nodes, edges } = convertJsonToFlow(JSON.parse(flow));\n            onChangeFlow(nodes, edges);\n          }\n        };\n      }\n    };\n\n    input.click();\n  };\n\n  // Handle file download\n  const handleDownloadClick = () => {\n    const fullData = convertFlowToJson(sortedNodes, edges, true, true);\n    const blob = new Blob([JSON.stringify(fullData, null, 2)], {\n      type: \"application/json\",\n    });\n    const url = URL.createObjectURL(blob);\n    const link = document.createElement(\"a\");\n    link.href = url;\n    link.download = \"flow.json\";\n    link.click();\n    URL.revokeObjectURL(url);\n    link.remove();\n  };\n\n  return (\n    <>\n      <div className=\"space-y-4 p-3\">\n        {/* Action Buttons */}\n        <div className=\"flex flex-wrap items-center justify-center gap-2\">\n          <Button\n            variant=\"light\"\n            color=\"blue\"\n            leftSection={<FiUpload />}\n            onClick={handleUploadClick}\n          >\n            {t(\"Upload\")}\n          </Button>\n          <Button\n            variant=\"light\"\n            color=\"green\"\n            leftSection={<FiDownload />}\n            onClick={handleDownloadClick}\n          >\n            {t(\"Download\")}\n          </Button>\n          <Button\n            variant=\"light\"\n            color=\"orange\"\n            leftSection={<FiCrosshair />}\n            onClick={clearAllOutput}\n          >\n            {t(\"DeleteOutputs\")}\n          </Button>\n          
<Button\n            variant=\"light\"\n            color=\"red\"\n            leftSection={<FaExclamationTriangle />}\n            onClick={removeAll}\n          >\n            {t(\"DeleteAll\")}\n          </Button>\n        </div>\n\n        {/* Switches */}\n        <div className=\"flex flex-col space-y-2\">\n          <Group p=\"left\">\n            <Switch\n              checked={showFieldsConfig}\n              onChange={(event) =>\n                setShowFieldsConfig(event.currentTarget.checked)\n              }\n              label={t(\"ShowNodesConfig\")}\n            />\n          </Group>\n          <Group p=\"left\">\n            <Switch\n              checked={showCoordinates}\n              onChange={(event) =>\n                setShowCoordinates(event.currentTarget.checked)\n              }\n              label={t(\"ShowCoordinates\")}\n            />\n          </Group>\n        </div>\n\n        {/* JSON Viewer */}\n        <div className=\"relative w-full\">\n          <div className=\"absolute right-3 top-3 text-lg text-slate-500 transition-all duration-150 ease-in-out hover:text-slate-100\">\n            <MdCopyAll\n              onClick={() =>\n                navigator.clipboard.writeText(JSON.stringify(data, null, 2))\n              }\n            />\n          </div>\n        </div>\n\n        {/* JSON Viewer */}\n        <div className=\"relative w-full\">\n          {isDev() && (\n            <div className=\"text-af-text-element-3 absolute right-3 top-3 text-lg transition-all duration-150 ease-in-out hover:text-slate-100\">\n              <MdCopyAll\n                onClick={() =>\n                  navigator.clipboard.writeText(JSON.stringify(data, null, 2))\n                }\n              />\n            </div>\n          )}\n          <JsonViewLib\n            src={data}\n            className=\"mt-5 rounded-xl bg-[#1E1E1E] p-2\"\n            theme=\"vscode\"\n            collapsed={isDev() ? 2 : 1}\n            dark={true}\n            enableClipboard={false}\n          />\n        </div>\n      </div>\n    </>\n  );\n};\n\nexport default memo(JSONView);\n"
  },
  {
    "path": "packages/ui/src/components/tools/Fallback.tsx",
    "content": "import React from 'react';\nimport styled, { keyframes } from 'styled-components';\n\nconst rotate = keyframes`\n  0% {\n    transform: rotate(0deg);\n  }\n  100% {\n    transform: rotate(360deg);\n  }\n`;\n\nconst LoadingSpinner = styled.div`\n  display: inline-block;\n  width: 50px;\n  height: 50px;\n  border: 3px solid ${({ theme }) => theme.accent};\n  border-radius: 50%;\n  border-top-color: transparent;\n  animation: ${rotate} 1s linear infinite;\n`;\n\nconst LoadingScreen = styled.div`\n  display: flex;\n  justify-content: center;\n  align-items: center;\n  height: 100vh;\n  width: 100vw;\n  background-color: ${({ theme }) => theme.bg};\n`;\n\nexport const Fallback = () => (\n  <LoadingScreen>\n    <LoadingSpinner />\n  </LoadingScreen>\n);"
  },
  {
    "path": "packages/ui/src/components/tour/AppTour.tsx",
    "content": "import { useCallback, useEffect, useState } from \"react\";\nimport Joyride, { ACTIONS, EVENTS, STATUS, Step } from \"react-joyride\";\nimport { theme } from \"../shared/theme\";\nimport { useTranslation } from \"react-i18next\";\nimport { useVisibility } from \"../../providers/VisibilityProvider\";\n\ninterface AppTourProps {\n  run: boolean;\n  setRun: (run: boolean) => void;\n}\n\nconst classToObserve = new Set([\n  \"react-flow__viewport\",\n  \"react-flow__node\",\n  \"react-flow__pane\",\n]);\n\nconst imageUrls: string[] = [\n  \"./tour-assets/tour-step-drag-and-drop.gif\",\n  \"./tour-assets/tour-step-run-node.gif\",\n  \"./tour-assets/tour-step-connect-nodes.gif\",\n  \"./tour-assets/tour-step-replicate-node.gif\",\n];\n\nfunction preloadImages(urls: string[]) {\n  urls.forEach((url) => {\n    const img = new Image();\n    img.src = url;\n  });\n}\n\nexport function AppTour({ run, setRun }: AppTourProps) {\n  const [joyrideKey, setJoyrideKey] = useState(0);\n  const [stepIndex, setStepIndex] = useState(0);\n\n  const { getElement } = useVisibility();\n\n  const { t } = useTranslation(\"tour\");\n\n  useEffect(() => {\n    preloadImages(imageUrls);\n  }, []);\n\n  const [tourSteps, setTourSteps] = useState<Array<Step>>([\n    {\n      target: \".sidebar-dnd-node\",\n      content: (\n        <div className=\"space-y-10\">\n          <div className=\"space-y-5 text-lg\">\n            <p>{t(\"firstTimeHere\")}</p>\n            <p>{t(\"discoverApp\")}</p>\n          </div>\n          <div className=\"flex flex-col justify-center space-y-4 text-sm md:text-lg\">\n            <button\n              type=\"button\"\n              className=\"w-full max-w-xs rounded-lg bg-teal-500 px-4 py-3 text-base text-white transition duration-150 ease-in-out hover:bg-teal-700 focus:outline-none focus:ring-2 focus:ring-teal-300 sm:text-xl\"\n              aria-label=\"start-tour\"\n              onClick={() => setStepIndex(1)}\n            >\n              {t(\"letsStart\")}\n            </button>\n            <button\n              type=\"button\"\n              className=\"w-full max-w-xs rounded-lg  bg-slate-700 px-4 py-3 text-base text-white shadow-sm transition duration-150 ease-in-out hover:bg-slate-800 focus:outline-none focus:ring-2 focus:ring-slate-500 sm:text-xl\"\n              aria-label=\"skip-tour\"\n              onClick={() => setRun(false)}\n            >\n              {t(\"iKnowTheApp\")}\n            </button>\n          </div>\n        </div>\n      ),\n      placement: \"center\",\n      isFixed: true,\n      hideCloseButton: true,\n      hideFooter: true,\n      title: t(\"welcomeToAIFLOW\"),\n    },\n    {\n      target: \".sidebar-dnd-node\",\n      hideCloseButton: true,\n      content: (\n        <div className=\"space-y-2\">\n          <p className=\"text-base\">{t(\"addNodesWithDragAndDrop\")}</p>\n          <img\n            src={imageUrls[0]}\n            className=\"rounded-lg shadow-lg\"\n            alt={t(\"dragAndDrop\") ?? 
\"drag and drop\"}\n          ></img>\n        </div>\n      ),\n      floaterProps: {\n        disableAnimation: true,\n      },\n      placement: \"right\",\n      isFixed: true,\n      spotlightPadding: 10,\n      title: t(\"addingNodes\"),\n    },\n    {\n      target: \".node-play-button\",\n      hideCloseButton: true,\n      placement: \"right\",\n      content: (\n        <div className=\"space-y-2\">\n          <p className=\"text-base\">{t(\"executeSingleNode\")}</p>\n          <img\n            src={imageUrls[1]}\n            className=\"rounded-lg shadow-lg\"\n            alt={\"run nodes\"}\n          ></img>\n        </div>\n      ),\n      title: t(\"runningANode\"),\n    },\n    {\n      target: \".handle\",\n      isFixed: true,\n      hideCloseButton: true,\n      placement: \"right\",\n      content: (\n        <div className=\"space-y-2\">\n          <p className=\"text-base\">{t(\"handlesExplanation\")}</p>\n          <img\n            src={imageUrls[2]}\n            className=\"rounded-lg shadow-lg\"\n            alt={\"connect-nodes\"}\n          ></img>\n        </div>\n      ),\n      title: t(\"connectingNodes\"),\n    },\n    {\n      target: \"#run-all-button\",\n      isFixed: true,\n      hideCloseButton: true,\n      placement: \"bottom\",\n      content: (\n        <div className=\"text-base\">{t(\"executeAllNodesDescription\")}</div>\n      ),\n      title: t(\"runEverything\"),\n    },\n    {\n      target: \"#replicate\",\n      isFixed: true,\n      hideCloseButton: true,\n      content: (\n        <div className=\"space-y-2\">\n          <p className=\"text-base\">{t(\"replicateNodeDescription\")}</p>\n          <img\n            src={imageUrls[3]}\n            className=\"rounded-lg shadow-lg\"\n            alt={\"replicate-node\"}\n          ></img>\n        </div>\n      ),\n      placement: \"top\",\n      title: t(\"exploringMoreModels\"),\n    },\n    {\n      target: \".config-button\",\n      isFixed: true,\n      hideCloseButton: true,\n      content: (\n        <div className=\"space-y-2\">\n          <p className=\"text-base\">{t(\"configDescription\")}</p>\n        </div>\n      ),\n      placement: \"top\",\n      title: t(\"config\"),\n    },\n    {\n      target: \".sidebar-dnd-node\",\n      hideCloseButton: true,\n      content: (\n        <div className=\"text-md space-y-2\">\n          <p>{t(\"checkHelpForAdvanced\")}</p>\n        </div>\n      ),\n      placement: \"center\",\n      isFixed: true,\n      title: t(\"youveGotTheBasics\"),\n    },\n  ]);\n\n  const handleJoyrideCallback = (data: any) => {\n    const { action, index, status, type } = data;\n    const sidebar = getElement(\"dragAndDropSidebar\");\n\n    if ([EVENTS.STEP_AFTER, EVENTS.TARGET_NOT_FOUND].includes(type)) {\n      if (index === 1) {\n        sidebar.hide();\n      }\n\n      if (index === 3) {\n        sidebar.show();\n      }\n      setStepIndex(index + (action === ACTIONS.PREV ? 
-1 : 1));\n    } else if ([STATUS.FINISHED, STATUS.SKIPPED].includes(status)) {\n      // Need to set our running state to false, so we can restart if we click start again.\n      setRun(false);\n    }\n  };\n\n  const refreshJoyride = () => {\n    setJoyrideKey((prevKey) => prevKey + 1); // Increment key to force refresh\n  };\n\n  function debounce(func: MutationCallback, wait: number): MutationCallback {\n    let timeout: ReturnType<typeof setTimeout> | null = null;\n\n    return function (...args: [MutationRecord[], MutationObserver]): void {\n      const later = () => {\n        timeout = null;\n        func(...args);\n      };\n\n      if (timeout !== null) {\n        clearTimeout(timeout);\n      }\n      timeout = setTimeout(later, wait);\n    };\n  }\n\n  const onMutation = useCallback(\n    debounce((mutationsList: any, observer: any) => {\n      for (const mutation of mutationsList) {\n        if (\n          mutation.type === \"attributes\" &&\n          Array.from(mutation.target.classList as string[]).some((className) =>\n            classToObserve.has(className),\n          )\n        ) {\n          refreshJoyride();\n          break;\n        }\n      }\n    }, 100),\n    [],\n  );\n\n  const observer = new MutationObserver(onMutation);\n  const targetElement = document.querySelector(\"body\");\n\n  if (targetElement) {\n    const config = {\n      attributes: true,\n      subtree: true,\n    };\n    observer.observe(targetElement, config);\n  } else {\n    console.log(\"Element not found\");\n  }\n\n  const buttonBase = {\n    backgroundColor: \"transparent\",\n    border: 0,\n    color: \"#555\",\n    cursor: \"pointer\",\n    lineHeight: 1,\n    padding: 8,\n    WebkitAppearance: \"none\" as const,\n    fontSize: \"1rem\",\n    fontWeight: \"bold\",\n    borderRadius: \"0.5em\",\n    boxShadow: \"0 2px 4px rgba(0, 0, 0, 0.1)\",\n  };\n\n  return (\n    <Joyride\n      key={joyrideKey}\n      steps={tourSteps}\n      run={run}\n      continuous={true}\n      stepIndex={stepIndex}\n      callback={handleJoyrideCallback}\n      styles={{\n        beaconInner: {\n          animation: \"joyride-beacon-inner 1.2s infinite ease-in-out\",\n          backgroundColor: \"#ff8c20\",\n          borderRadius: \"50%\",\n          display: \"block\",\n          height: \"50%\",\n          left: \"50%\",\n          opacity: 0.7,\n          position: \"absolute\",\n          top: \"50%\",\n          transform: \"translate(-50%, -50%)\",\n          width: \"50%\",\n        },\n        beaconOuter: {\n          animation: \"joyride-beacon-outer 1.2s infinite ease-in-out\",\n          backgroundColor: `rgba(#ff8c20, 0.2)`,\n          border: `2px solid #ff8c20`,\n          borderRadius: \"50%\",\n          boxSizing: \"border-box\",\n          display: \"block\",\n          height: \"100%\",\n          left: 0,\n          opacity: 0.9,\n          position: \"absolute\",\n          top: 0,\n          transformOrigin: \"center\",\n          width: \"100%\",\n        },\n        tooltip: {\n          // backgroundColor: theme.dark.bg,\n          background: \"linear-gradient(135deg, #101113, #1a1b1e)\",\n          borderRadius: \"0.75em\",\n          boxShadow: \"0 4px 8px rgba(0, 0, 0, 0.1)\",\n          boxSizing: \"border-box\" as const,\n          color: theme.dark.text,\n          fontSize: \"1.25rem\",\n          maxWidth: \"100%\",\n          padding: \"2em\",\n        },\n        buttonBack: {\n          ...buttonBase,\n          color: theme.dark.text,\n          marginLeft: 
\"auto\",\n          marginRight: 5,\n          fontSize: \"1rem\",\n          fontWeight: \"bold\",\n          borderRadius: \"0.5em\",\n          boxShadow: \"0 2px 4px rgba(0, 0, 0, 0.1)\",\n        },\n        buttonClose: {\n          ...buttonBase,\n          color: theme.dark.text,\n          height: 14,\n          padding: 15,\n          position: \"absolute\" as const,\n          right: 0,\n          top: 0,\n          width: 14,\n        },\n        options: {\n          arrowColor: theme.dark.bg,\n          primaryColor: theme.dark.optionButtonBg,\n        },\n      }}\n    />\n  );\n}\n"
  },
  {
    "path": "packages/ui/src/config/config.ts",
    "content": "const HOST = import.meta.env.VITE_APP_WS_HOST || \"localhost\";\nconst WS_PORT = import.meta.env.VITE_APP_WS_PORT || 5000;\nconst REST_API_PORT = import.meta.env.VITE_APP_API_REST_PORT || 5000;\nconst USE_HTTPS = import.meta.env.VITE_APP_USE_HTTPS || \"false\";\nconst USE_CACHE = import.meta.env.VITE_APP_USE_CACHE?.toLowerCase() || \"true\";\nconst CURRENT_APP_VERSION = import.meta.env.VITE_APP_VERSION;\nconst DEFAULT_NODES_HIDDEN_LIST =\n  import.meta.env.VITE_APP_DEFAULT_NODES_HIDDEN_LIST || \"\";\n\nconst LOW_PRIORITY_NODE_PREFIXES_RAW =\n  import.meta.env.VITE_APP_LOW_PRIORITY_PREFIXES || \"aws-textract;dalle\";\nconst HIGH_PRIORITY_NODE_PREFIXES_RAW =\n  import.meta.env.VITE_APP_HIGH_PRIORITY_PREFIXES || \"llm;gpt;openai;replicate\";\n\nconst IS_DEV = import.meta.env.VITE_APP_IS_DEV?.toLowerCase() === \"true\";\nconst protocol = USE_HTTPS.toLowerCase() === \"true\" ? \"https\" : \"http\";\n\nexport const getWsUrl = () => `${protocol}://${HOST}:${WS_PORT}`;\nexport const getRestApiUrl = () => `${protocol}://${HOST}:${REST_API_PORT}`;\nexport const isCacheEnabled = () => USE_CACHE === \"true\";\nexport const getCurrentAppVersion = () => CURRENT_APP_VERSION;\nexport const getDefaultNodesHiddenList = () =>\n  DEFAULT_NODES_HIDDEN_LIST.split(\",\") as string[];\n\nexport const isDev = () => IS_DEV;\n\nexport const getLowPriorityNodePrefixes = () =>\n  LOW_PRIORITY_NODE_PREFIXES_RAW.split(\";\") as string[];\nexport const getHighPriorityNodePrefixes = () =>\n  HIGH_PRIORITY_NODE_PREFIXES_RAW.split(\";\") as string[];\n"
  },
  {
    "path": "packages/ui/src/hooks/useFlowSocketListeners.tsx",
    "content": "import { useContext, useEffect } from \"react\";\nimport { SocketContext } from \"../providers/SocketProvider\";\nimport { useTranslation } from \"react-i18next\";\nimport { toastInfoMessage } from \"../utils/toastUtils\";\n\nexport const useSocketListeners = <\n  ProgressData,\n  ErrorData,\n  CurrentNodeRunningData,\n>(\n  onProgress: (data: ProgressData) => void,\n  onError: (data: ErrorData) => void,\n  onRunEnd: () => void,\n  onCurrentNodeRunning: (data: CurrentNodeRunningData) => void,\n  onDisconnect?: (reason: string) => void,\n) => {\n  const { t } = useTranslation(\"flow\");\n  const { socket } = useContext(SocketContext);\n\n  useEffect(() => {\n    if (socket) {\n      socket.on(\"progress\", onProgress);\n      socket.on(\"error\", onError);\n      socket.on(\"run_end\", onRunEnd);\n      socket.on(\"current_node_running\", onCurrentNodeRunning);\n      socket.on(\n        \"disconnect\",\n        onDisconnect ? onDisconnect : defaultOnDisconnect,\n      );\n    }\n\n    return () => {\n      if (socket) {\n        socket.off(\"progress\", onProgress);\n        socket.off(\"error\", onError);\n        socket.off(\"run_end\", onRunEnd);\n        socket.off(\"current_node_running\", onCurrentNodeRunning);\n        socket.off(\n          \"disconnect\",\n          onDisconnect ? onDisconnect : defaultOnDisconnect,\n        );\n      }\n    };\n  }, [socket]);\n\n  function defaultOnDisconnect(reason: string) {\n    if (reason === \"transport close\") {\n      toastInfoMessage(t(\"socketConnectionLost\"), \"socket-connection-lost\");\n    }\n  }\n};\n"
  },
  {
    "path": "packages/ui/src/hooks/useFormFields.tsx",
    "content": "import { useTranslation } from \"react-i18next\";\nimport { OptionSelector, OptionButton } from \"../components/nodes/Node.styles\";\nimport InputNameBar from \"../components/nodes/node-button/InputNameBar\";\nimport { Field } from \"../nodes-configuration/types\";\nimport React, { useCallback, useEffect, useRef } from \"react\";\nimport { generateIdForHandle } from \"../utils/flowUtils\";\nimport { Autocomplete, Pill, PillsInput, Select, Slider } from \"@mantine/core\";\nimport { Switch } from \"@mantine/core\";\nimport NodeField from \"../components/nodes/node-input/NodeField\";\nimport SelectAutocomplete from \"../components/selectors/SelectAutocomplete\";\nimport NodeTextField from \"../components/nodes/node-input/NodeTextField\";\nimport useIsTouchDevice from \"./useIsTouchDevice\";\nimport _ from \"lodash\";\nimport NodeTextarea from \"../components/nodes/node-input/NodeTextarea\";\nimport { KeyValueInputList } from \"../components/nodes/node-input/KeyValueInputList\";\nimport ImageMaskCreatorFieldFlowAware from \"../components/nodes/node-input/ImageMaskCreatorFieldFlowAware\";\nimport { evaluateCondition } from \"../utils/evaluateConditions\";\nimport FileUploadField from \"../components/nodes/node-input/FileUploadField\";\nexport interface DisplayParams {\n  showHandles?: boolean;\n  showLabels?: boolean;\n  showOnlyConnectedFields?: boolean;\n  specificFields?: string[];\n}\n\nexport function useFormFields(\n  data: any,\n  id: string,\n  handleNodeFieldChange: (fieldName: string, value: any, target?: any) => void,\n  setDefaultOptions?: Function,\n  hasParent?: Function,\n  displayParams?: DisplayParams,\n  handleNodeDataChange?: (data: any) => void,\n) {\n  const { t } = useTranslation(\"flow\");\n  const isTouchDevice = useIsTouchDevice();\n\n  const textareaRef = useRef<HTMLTextAreaElement>(null);\n\n  const getFields = () => {\n    let fields;\n    if (!!data?.config?.fields) {\n      fields = data.config.fields;\n    } else if (!!data?.dynamicValues?.fields) {\n      fields = data.dynamicValues.fields;\n    }\n    return fields;\n  };\n\n  useEffect(() => {\n    if (!setDefaultOptions) return;\n    setDefaultOptions();\n  }, []);\n\n  const handleEventNodeDataChange = (\n    event: React.ChangeEvent<HTMLInputElement | HTMLTextAreaElement>,\n  ) => {\n    handleNodeFieldChange(event.target.name, event.target.value, event.target);\n  };\n\n  function calculateStep(min?: number, max?: number, allowDecimal?: boolean) {\n    if (min == null || max == null) return 1;\n\n    const range = max - min;\n    let step;\n\n    if (range <= 1 && allowDecimal) {\n      step = 0.01;\n    } else if (range <= 10 && allowDecimal) {\n      step = 0.1;\n    } else if (range <= 100) {\n      step = 1;\n    } else if (range <= 1000) {\n      step = 10;\n    } else {\n      step = 100;\n    }\n\n    return step;\n  }\n\n  function renderList(data: any, field: Field) {\n    const values = data[field.name] ?? 
[];\n\n    return (\n      <div className=\"w-full items-center\">\n        <PillsInput size=\"lg\">\n          <Pill.Group>\n            {values.map((value: string, index: number) => (\n              <Pill\n                key={`${id}-${field.name}-${index}`}\n                withRemoveButton\n                onRemove={() => {\n                  values.splice(index, 1);\n                  handleNodeFieldChange(field.name, values);\n                }}\n              >\n                {value}\n              </Pill>\n            ))}\n            <PillsInput.Field\n              placeholder={\n                t(field.placeholder ?? \"\") ?? t(\"DefaultListPlaceholder\") ?? \"\"\n              }\n              onKeyDown={(e) => {\n                if (e.key === \"Enter\") {\n                  values.push(e.currentTarget.value);\n                  handleNodeFieldChange(field.name, values);\n                  e.currentTarget.value = \"\";\n                }\n              }}\n              onBlur={(e) => {\n                if (!e.currentTarget.value) return;\n                values.push(e.currentTarget.value);\n                handleNodeFieldChange(field.name, values);\n                e.currentTarget.value = \"\";\n              }}\n            />\n          </Pill.Group>\n        </PillsInput>\n      </div>\n    );\n  }\n\n  const renderField = (field: Field, isLoopField?: boolean) => {\n    if (isLoopField) {\n      return renderList(data, field);\n    }\n    switch (field.type) {\n      case \"textToDisplay\":\n        return <p>{field.defaultValue}</p>;\n      case \"input\":\n      case \"textfield\":\n        return (\n          <NodeTextField\n            value={data[field.name]}\n            placeholder={field.placeholder ? String(t(field.placeholder)) : \"\"}\n            isTouchDevice={isTouchDevice}\n            onChange={(event) => {\n              handleNodeFieldChange(\n                field.name,\n                event.target.value,\n                event.target,\n              );\n            }}\n            onChangeValue={(value) => {\n              handleNodeFieldChange(field.name, value);\n            }}\n            fieldName={field?.name}\n            withEditPopup={field?.withModalEdit ?? true}\n          />\n        );\n      case \"inputInt\":\n      case \"numericfield\":\n        return (\n          <NodeTextField\n            value={data[field.name]}\n            placeholder={field.placeholder ? String(t(field.placeholder)) : \"\"}\n            isTouchDevice={isTouchDevice}\n            onChange={(event) => {\n              const value = event.target.value;\n\n              if (value === \"\") {\n                handleNodeFieldChange(field.name, undefined);\n                return;\n              }\n\n              const defaultValue =\n                field.defaultValue != null ? +field.defaultValue : 0;\n\n              let numericValue = isNaN(+value) ? 
defaultValue : +value;\n\n              if (field.min != null && numericValue < +field.min) {\n                numericValue = +field.min;\n              }\n              if (field.max != null && numericValue > +field.max) {\n                numericValue = +field.max;\n              }\n              handleNodeFieldChange(field.name, numericValue);\n            }}\n            error={isNaN(data[field.name])}\n          />\n        );\n      case \"textarea\":\n        return (\n          <NodeTextarea\n            key={`${id}-${field.name}`}\n            textareaRef={textareaRef}\n            field={field}\n            data={data}\n            withMinHeight\n            isTouchDevice={isTouchDevice}\n            onEventNodeDataChange={handleEventNodeDataChange}\n            onNodeDataChange={handleNodeFieldChange}\n            id={id}\n          />\n        );\n      case \"select\":\n        return (\n          <SelectAutocomplete\n            key={`${id}-${field.name}`}\n            onChange={(value) => handleNodeFieldChange(field.name, value)}\n            selectedValue={data[field.name] ?? \"\"}\n            values={\n              !!field.options\n                ? field.options?.map((option) => {\n                    return {\n                      name: option.label,\n                      value: option.value,\n                    };\n                  })\n                : []\n            }\n          />\n        );\n      case \"option\":\n        return (\n          <div className=\"my-1 flex w-full items-center justify-center\">\n            <OptionSelector key={`${id}-${field.name}`}>\n              {field.options?.map((option) => (\n                <OptionButton\n                  key={`${id}-${option.value}`}\n                  selected={data[field.name] === option.value}\n                  onClick={() =>\n                    handleNodeFieldChange(field.name, option.value)\n                  }\n                  onTouchEnd={() =>\n                    handleNodeFieldChange(field.name, option.value)\n                  }\n                >\n                  {t(option.label)}\n                </OptionButton>\n              ))}\n            </OptionSelector>\n          </div>\n        );\n      case \"inputNameBar\":\n        return (\n          data.config.inputNames && (\n            <InputNameBar\n              key={`${id}-${field.name}`}\n              inputNames={data.config.inputNames}\n              textareaRef={textareaRef}\n              fieldToUpdate={field.associatedField}\n              onNameClick={(value: string) => {\n                if (!field.associatedField) return;\n                const currentValue = data[field.associatedField] ?? \"\";\n                handleNodeFieldChange(\n                  field.associatedField,\n                  currentValue + value,\n                );\n              }}\n              addNewInput={() => {\n                const currentInputs = data.config.inputNames ?? [];\n                const newInput = \"input-\" + (currentInputs.length + 1);\n                const newInputs = [...currentInputs, newInput];\n                const newConfig = {\n                  ...data.config,\n                  inputNames: newInputs,\n                };\n                handleNodeFieldChange(\"config\", newConfig);\n              }}\n              removeInput={() => {\n                const currentInputs = data.config.inputNames ?? 
[];\n                if (currentInputs.length <= 2) return;\n                const newInputs = currentInputs.slice(0, -1);\n                const newConfig = {\n                  ...data.config,\n                  inputNames: newInputs,\n                };\n                handleNodeFieldChange(\"config\", newConfig);\n              }}\n            />\n          )\n        );\n      case \"slider\":\n        return (\n          <div className=\"flex w-full flex-row items-center justify-center\">\n            <p className=\"w-1/12 text-left text-sm text-blue-700 dark:text-blue-200\">\n              {data[field.name]}\n            </p>\n            <Slider\n              className=\"nodrag track w-11/12\"\n              value={data[field.name]}\n              onChange={(value) => handleNodeFieldChange(field.name, value)}\n              onChangeEnd={(value) => handleNodeFieldChange(field.name, value)}\n              styles={{\n                track: {\n                  backgroundColor: \"rgba(54, 54, 54, 0.8)\",\n                  borderColor: \"rgba(54, 54, 54, 0.8)\",\n                  height: \"0.35em\",\n                },\n                bar: {\n                  backgroundColor: \"rgba(29, 193, 226, 0.85)\",\n                },\n                thumb: {\n                  backgroundColor: \"rgba(94, 209, 232, 1)\",\n                  borderColor: \"rgba(94, 209, 232, 1)\",\n                },\n              }}\n              min={field.min}\n              max={field.max}\n              step={\n                !!field.step\n                  ? field.step\n                  : calculateStep(\n                      field.min,\n                      field.max,\n                      field.allowDecimal ?? true,\n                    )\n              }\n            />\n          </div>\n        );\n      case \"switch\":\n      case \"boolean\":\n        return (\n          <div className=\"flex w-full flex-row items-center\">\n            <Switch\n              onChange={(e) =>\n                handleNodeFieldChange(field.name, e.currentTarget.checked)\n              }\n              checked={data[field.name]}\n              className={`nowheel ${!isTouchDevice ? \"nodrag\" : \"\"}`}\n              size=\"lg\"\n              color=\"rgba(29, 193, 226, 0.95)\"\n              onLabel=\"ON\"\n              offLabel=\"OFF\"\n            />\n          </div>\n        );\n\n      case \"list\":\n        return renderList(data, field);\n\n      case \"dictionnary\":\n        return (\n          <KeyValueInputList\n            pairs={data[field.name] ?? 
[]}\n            onChange={(pairs: any) => handleNodeFieldChange(field.name, pairs)}\n          />\n        );\n\n      case \"imageMaskCreator\":\n        return (\n          <ImageMaskCreatorFieldFlowAware\n            key={`${id}-${field.name}`}\n            onChange={(value) => handleNodeFieldChange(field.name, value)}\n          />\n        );\n\n      case \"fileUpload\":\n        return (\n          <div className=\"text-md w-full\">\n            <FileUploadField\n              value={data[field.name]}\n              onFileUpload={(info) => {\n                handleNodeFieldChange(field.name, info.url);\n              }}\n              onUrlSubmit={(url) => {\n                handleNodeFieldChange(field.name, url);\n              }}\n              isRenderForNode\n            />\n          </div>\n        );\n\n      default:\n        return (\n          <p>\n            {t(\"FieldNotSupportedInCurrentVersion\")} {field.type}\n          </p>\n        );\n    }\n  };\n\n  const fields = getFields();\n\n  if (!data || !data.config || !fields) {\n    return null;\n  }\n\n  return fields\n    .filter((field: Field) =>\n      !!hasParent && hasParent(id) && field.hideIfParent != null\n        ? !field.hideIfParent\n        : true,\n    )\n    .filter((field: Field) =>\n      displayParams?.specificFields\n        ? displayParams.specificFields.includes(field.name)\n        : true,\n    )\n    .filter((field: Field) => {\n      if (!field.condition) return true;\n\n      return evaluateCondition(field.condition, data);\n    })\n    .map((field: Field, index: number) => {\n      if (displayParams?.showOnlyConnectedFields && !field.isLinked) {\n        return null;\n      }\n\n      if (field.hidden) {\n        return null;\n      }\n\n      return (\n        <NodeField\n          key={`${id}-${field.name}`}\n          field={field}\n          label={t(field.name)}\n          renderField={renderField}\n          handleId={generateIdForHandle(index)}\n          displayParams={displayParams}\n          onAddNewField={\n            field.canAddChildrenFields\n              ? () => {\n                  // Get the current input names list (or empty if not set)\n                  const currentInputs = data.config.inputNames ?? [];\n                  // Find the index of the current field in the list\n                  const currentIndex = currentInputs.findIndex(\n                    (name: string) => name === field.name,\n                  );\n                  // Generate new field name: if parent's name is \"file_url\", new name becomes \"file_url_2\"\n                  let newFieldName;\n                  const match = field.name.match(/^(.*?)(?:_(\\d+))?$/);\n                  if (match) {\n                    const baseName = match[1];\n                    const suffix = match[2] ? 
parseInt(match[2], 10) : 1;\n                    newFieldName = `${baseName}_${suffix + 1}`;\n                  } else {\n                    newFieldName = field.name + \"_2\";\n                  }\n                  // Insert the new field name right after the current field\n                  const newInputs = [\n                    ...currentInputs.slice(0, currentIndex + 1),\n                    newFieldName,\n                    ...currentInputs.slice(currentIndex + 1),\n                  ];\n\n                  // Update fields: disable the parent's add button and add the new child field\n                  let updatedFields = [...data.config.fields];\n                  const parentFieldIndex = updatedFields.findIndex(\n                    (f) => f.name === field.name,\n                  );\n                  if (parentFieldIndex !== -1) {\n                    // Disable parent's add button\n                    updatedFields[parentFieldIndex] = {\n                      ...updatedFields[parentFieldIndex],\n                      canAddChildrenFields: false,\n                    };\n                  }\n                  // Create new child field with its add button enabled\n                  const newChildField: Field = {\n                    ...field,\n                    name: newFieldName,\n                    isChild: true,\n                    isLinked: false,\n                    required: false,\n                    canAddChildrenFields: true,\n                  };\n                  // Insert new child field right after the parent\n                  updatedFields.splice(parentFieldIndex + 1, 0, newChildField);\n\n                  // Update config with the new input names and fields order\n                  const newConfig = {\n                    ...data.config,\n                    inputNames: newInputs,\n                    fields: updatedFields,\n                  };\n\n                  const newNodeData = {\n                    ...data,\n                    config: newConfig,\n                  };\n\n                  if (!!handleNodeDataChange) {\n                    handleNodeDataChange(newNodeData);\n                  }\n                }\n              : undefined\n          }\n          onDeleteField={\n            field.isChild && field.canAddChildrenFields\n              ? () => {\n                  // 1. Remove the deleted child's name from the inputNames list.\n                  const currentInputs = data.config.inputNames ?? [];\n                  const removeIndex = currentInputs.findIndex(\n                    (name: string) => name === field.name,\n                  );\n                  if (removeIndex === -1) return;\n\n                  let newInputs = [\n                    ...currentInputs.slice(0, removeIndex),\n                    ...currentInputs.slice(removeIndex + 1),\n                  ];\n\n                  // 2. Remove the corresponding field from the fields array.\n                  let updatedFields = [...data.config.fields];\n                  const fieldIndex = updatedFields.findIndex(\n                    (f) => f.name === field.name,\n                  );\n                  if (fieldIndex !== -1) {\n                    updatedFields.splice(fieldIndex, 1);\n                  }\n\n                  // 3. Determine the base name (e.g. 
\"file_url\") from the deleted field.\n                  const match = field.name.match(/^(.*?)(?:_(\\d+))?$/);\n                  if (!match) return;\n                  const baseName = match[1];\n\n                  // 4. Identify the contiguous group in newInputs that belongs to this base name.\n                  // The parent's field should be the first one (with name exactly equal to baseName).\n                  const parentIndex = newInputs.findIndex(\n                    (name) => name === baseName,\n                  );\n                  if (parentIndex === -1) return; // nothing to do if parent is missing\n\n                  // Collect indices for fields in the group (parent and its children)\n                  let groupIndices: number[] = [];\n                  for (let i = parentIndex; i < newInputs.length; i++) {\n                    const regex = new RegExp(`^${baseName}(?:_\\\\d+)?$`);\n                    if (regex.test(newInputs[i])) {\n                      groupIndices.push(i);\n                    } else {\n                      break;\n                    }\n                  }\n\n                  // 5. Reassign new names sequentially in the group:\n                  // Parent remains as baseName; children become baseName_2, baseName_3, etc.\n                  newInputs = newInputs.map((name, idx) => {\n                    if (groupIndices.includes(idx)) {\n                      const pos = groupIndices.indexOf(idx);\n                      return pos === 0 ? baseName : `${baseName}_${pos + 1}`;\n                    }\n                    return name;\n                  });\n\n                  // 6. Update names in the fields array for those belonging to the group.\n                  let groupCounter = 1;\n                  updatedFields = updatedFields.map((f) => {\n                    const regexField = new RegExp(`^${baseName}(?:_\\\\d+)?$`);\n                    if (regexField.test(f.name)) {\n                      const newName =\n                        groupCounter === 1\n                          ? baseName\n                          : `${baseName}_${groupCounter}`;\n                      groupCounter++;\n                      return {\n                        ...f,\n                        name: newName,\n                      };\n                    }\n                    return f;\n                  });\n\n                  // 7. Update add button state: disable all in the group then enable it only on the last one.\n                  updatedFields = updatedFields.map((f) => {\n                    const regexField = new RegExp(`^${baseName}(?:_\\\\d+)?$`);\n                    if (regexField.test(f.name)) {\n                      return { ...f, canAddChildrenFields: false };\n                    }\n                    return f;\n                  });\n                  for (let i = updatedFields.length - 1; i >= 0; i--) {\n                    const regexField = new RegExp(`^${baseName}(?:_\\\\d+)?$`);\n                    if (regexField.test(updatedFields[i].name)) {\n                      updatedFields[i] = {\n                        ...updatedFields[i],\n                        canAddChildrenFields: true,\n                      };\n                      break;\n                    }\n                  }\n\n                  // 8. 
Update the config with the new inputNames and fields order.\n                  const newConfig = {\n                    ...data.config,\n                    inputNames: newInputs,\n                    fields: updatedFields,\n                  };\n\n                  const newNodeData = {\n                    ...data,\n                    config: newConfig,\n                  };\n\n                  if (!!handleNodeDataChange) {\n                    handleNodeDataChange(newNodeData);\n                  }\n                }\n              : undefined\n          }\n        />\n      );\n    });\n}\n"
  },
  {
    "path": "packages/ui/src/hooks/useHandlePositions.tsx",
    "content": "import { useMemo } from \"react\";\nimport { generateIdForHandle } from \"../utils/flowUtils\";\nimport { LinkedHandlePositions } from \"../components/handles/HandleWrapper\";\nimport { Position } from \"reactflow\";\n\nconst useHandlePositions = (\n  data: any,\n  nbInput: number,\n  outputHandleIds: string[],\n) => {\n  const allInputHandleIds = Array.from({ length: nbInput }, (_, i) => i).map(\n    (index) => generateIdForHandle(index),\n  );\n\n  const allHandleIds = useMemo(() => {\n    const inputHandleIds = Array.from({ length: nbInput }, (_, i) => i).map(\n      (index) => generateIdForHandle(index),\n    );\n    return [...inputHandleIds, ...outputHandleIds];\n  }, [nbInput, outputHandleIds]);\n\n  const allHandlePositions = useMemo(() => {\n    const positions = {} as LinkedHandlePositions;\n    allHandleIds.forEach((id) => {\n      let currentPosition: Position =\n        data?.handles?.[id] ??\n        (id.includes(\"out\") ? Position.Right : Position.Left);\n      positions[currentPosition] = [...(positions[currentPosition] || []), id];\n    });\n    return positions;\n  }, [allHandleIds, data]);\n\n  return {\n    nbInput,\n    allInputHandleIds,\n    allHandleIds,\n    allHandlePositions,\n  };\n};\n\nexport default useHandlePositions;\n"
  },
  {
    "path": "packages/ui/src/hooks/useHandleShowOutput.tsx",
    "content": "import { Dispatch, SetStateAction, useEffect } from \"react\";\n\ninterface UseHandleShowOutputProps {\n  showOnlyOutput?: boolean;\n  setCollapsed: Dispatch<SetStateAction<boolean>>;\n  setShowLogs?: Dispatch<SetStateAction<boolean>>;\n}\n\nconst useHandleShowOutput = ({\n  showOnlyOutput,\n  setCollapsed,\n  setShowLogs,\n}: UseHandleShowOutputProps) => {\n  useEffect(() => {\n    if (showOnlyOutput !== undefined) {\n      setCollapsed(showOnlyOutput);\n      if (setShowLogs !== undefined && showOnlyOutput) {\n        setShowLogs(showOnlyOutput);\n      }\n    }\n  }, [showOnlyOutput, setCollapsed, setShowLogs]);\n};\n\nexport default useHandleShowOutput;\n"
  },
  {
    "path": "packages/ui/src/hooks/useIsPlaying.tsx",
    "content": "import { useContext, useEffect, useState } from \"react\";\nimport { NodeContext } from \"../providers/NodeProvider\";\n\n/**\n * This hook stop playing animation whenever an error is raised globaly.\n */\nexport const useIsPlaying = (): [\n  boolean,\n  React.Dispatch<React.SetStateAction<boolean>>,\n] => {\n  const { errorCount } = useContext(NodeContext);\n  const [isPlaying, setIsPlaying] = useState<boolean>(false);\n\n  useEffect(() => {\n    setIsPlaying(false);\n  }, [errorCount]);\n\n  return [isPlaying, setIsPlaying];\n};\n"
  },
  {
    "path": "packages/ui/src/hooks/useIsTouchDevice.tsx",
    "content": "import React, { useEffect, useState } from \"react\";\n\nconst useIsTouchDevice = (): boolean => {\n  const [isTouchDevice, setIsTouchDevice] = useState<boolean>(false);\n\n  useEffect(() => {\n    const checkTouchDevice = () => {\n      setIsTouchDevice(window.matchMedia(\"(pointer: coarse)\").matches);\n    };\n\n    checkTouchDevice();\n    window.addEventListener(\"resize\", checkTouchDevice);\n\n    return () => {\n      window.removeEventListener(\"resize\", checkTouchDevice);\n    };\n  }, []);\n\n  return isTouchDevice;\n};\n\nexport default useIsTouchDevice;\n"
  },
  {
    "path": "packages/ui/src/hooks/useLoading.tsx",
    "content": "import { useState } from \"react\";\n\ntype AsyncFunction<T extends any[], N> = (...args: T) => Promise<N>;\n\ntype Params<T> = T extends (...args: infer U) => any ? U : never;\n\ntype StartLoadingWith = <T extends any[], N>(\n  func: AsyncFunction<T, N>,\n  ...args: Params<AsyncFunction<T, N>>\n) => Promise<N>;\n\nexport const useLoading = (): [\n  isLoading: boolean,\n  startLoadingWith: StartLoadingWith,\n] => {\n  const [isLoading, setIsLoading] = useState(false);\n\n  const startLoadingWith: StartLoadingWith = async (func, ...args) => {\n    setIsLoading(true);\n    try {\n      const result = await func(...args);\n      setIsLoading(false);\n      return result;\n    } catch (error) {\n      setIsLoading(false);\n      throw error;\n    }\n  };\n\n  return [isLoading, startLoadingWith];\n};\n"
  },
  {
    "path": "packages/ui/src/hooks/useLocalStorage.tsx",
    "content": "// useLocalStorage.ts\nimport { useState, useEffect } from \"react\";\n\n/**\n * A custom hook that synchronizes state with localStorage.\n *\n * @param key - The key under which the value is stored in localStorage.\n * @param initialValue - The initial value to use if the key does not exist in localStorage.\n * @returns A stateful value and a function to update it.\n */\nfunction useLocalStorage<T>(\n  key: string,\n  initialValue: T,\n): [T, React.Dispatch<React.SetStateAction<T>>] {\n  // Initialize state with a function to avoid reading localStorage on every render\n  const [storedValue, setStoredValue] = useState<T>(() => {\n    if (typeof window === \"undefined\") {\n      // If window is undefined, likely during SSR, return initialValue\n      return initialValue;\n    }\n    try {\n      const item = window.localStorage.getItem(key);\n      return item ? (JSON.parse(item) as T) : initialValue;\n    } catch (error) {\n      console.warn(`Error reading localStorage key \"${key}\":`, error);\n      return initialValue;\n    }\n  });\n\n  // useEffect to update localStorage whenever storedValue changes\n  useEffect(() => {\n    if (typeof window === \"undefined\") {\n      // If window is undefined, do nothing\n      return;\n    }\n    try {\n      const valueToStore =\n        storedValue instanceof Function\n          ? storedValue(storedValue)\n          : storedValue;\n      window.localStorage.setItem(key, JSON.stringify(valueToStore));\n    } catch (error) {\n      console.warn(`Error setting localStorage key \"${key}\":`, error);\n    }\n  }, [key, storedValue]);\n\n  return [storedValue, setStoredValue];\n}\n\nexport default useLocalStorage;\n"
  },
  {
    "path": "packages/ui/src/hooks/useRefreshOnAppearanceChange.tsx",
    "content": "import { useEffect } from \"react\";\n\nexport const useRefreshOnAppearanceChange = (\n  updateNodeInternals: (id: string) => void,\n  id: string,\n  deps: any[],\n) => {\n  useEffect(() => {\n    updateNodeInternals(id);\n  }, deps);\n};\n"
  },
  {
    "path": "packages/ui/src/i18n.js",
    "content": "import i18n from \"i18next\";\nimport { initReactI18next } from \"react-i18next\";\nimport Backend from \"i18next-http-backend\";\nimport LanguageDetector from \"i18next-browser-languagedetector\";\n\ni18n\n  .use(Backend) // load translation using http -> see /public/locales\n  .use(LanguageDetector)\n  .use(initReactI18next)\n  .init({\n    load: \"languageOnly\",\n    fallbackLng: \"en\",\n    debug: false,\n    interpolation: {\n      escapeValue: false,\n    },\n    backend: {\n      loadPath: \"/locales/{{lng}}/{{ns}}.json\",\n    },\n  });\n\nexport default i18n;\n"
  },
  {
    "path": "packages/ui/src/index.css",
    "content": "@tailwind base;\n@tailwind components;\n@tailwind utilities;\n\n\nhtml,\nbody,\n#root {\n  width: 100%;\n  margin: 0;\n  padding: 0;\n  box-sizing: border-box;\n  font-family: sans-serif;\n  line-height: 1.5em;\n  word-spacing: 0.16em;\n  /*letter-spacing: 0.12em;*/\n}\n\n#webpack-dev-server-client-overlay{\n  display: none;\n}\n\nhtml, body{\n  background: linear-gradient(0deg, hsl(180deg 9.33% 12.39%) 0%, hsl(0deg 0% 10.39%) 100%);\n}\n\n:root {\n  --scrollbar-track-color: #F1F1F1;\n  --scrollbar-thumb-color: #AAA;\n  --scrollbar-border-color: #F1F1F1;\n  --scrollbar-thumb-hover-color: #888;\n}\n\nbody.dark-theme {\n  --scrollbar-track-color: #2A2A2A;\n  --scrollbar-thumb-color: rgba(85, 85, 85, 0.431);\n  --scrollbar-border-color: #2A2A2A;\n  --scrollbar-thumb-hover-color: #666;\n}\n\n\n/* Custom Slider Styles */\n::-webkit-scrollbar {\n  width: 5px;\n  height: 5px;\n}\n\n\n::-webkit-scrollbar-track {\n  background-color: var(--scrollbar-track-color);\n}\n\n::-webkit-scrollbar-thumb {\n  background-color: var(--scrollbar-thumb-color);\n  border-radius: 5px;\n  border: 2px solid var(--scrollbar-border-color);\n}\n\n::-webkit-scrollbar-thumb:hover {\n  background-color: var(--scrollbar-thumb-hover-color);\n}\n\n\n/* Custom React Flow Handle Style */\n.custom-handle.react-flow__handle {\n  top: initial;\n  left: initial;\n  transform: initial;\n}\n\n.markdown-body {\n  color: #f5f5f5 !important; \n}\n\n.markdown-body pre {\n  background-color: transparent !important;\n}\n\n\n@import 'tailwindcss/base';\n@import 'tailwindcss/components';\n@import 'tailwindcss/utilities';"
  },
  {
    "path": "packages/ui/src/index.tsx",
    "content": "import \"./init\";\n\nimport React, { Suspense } from \"react\";\nimport ReactDOM from \"react-dom/client\";\nimport \"./index.css\";\nimport \"@mantine/core/styles.css\";\nimport { createTheme, MantineProvider } from \"@mantine/core\";\nimport \"react-toastify/dist/ReactToastify.css\";\nimport reportWebVitals from \"./reportWebVitals\";\nimport { ThemeProvider } from \"./providers/ThemeProvider\";\nimport { GlobalStyle } from \"./components/nodes/Node.styles\";\nimport { Fallback } from \"./components/tools/Fallback\";\nimport \"./i18n\";\nimport { ToastContainer } from \"react-toastify\";\nimport Main from \"./Main\";\n\nconst root = ReactDOM.createRoot(\n  document.getElementById(\"root\") as HTMLElement,\n);\n\nconst theme = createTheme({});\n\nroot.render(\n  <>\n    <GlobalStyle />\n    <MantineProvider theme={theme} forceColorScheme=\"dark\">\n      <ThemeProvider>\n        <Suspense fallback={<Fallback />}>\n          <ToastContainer />\n          <Main />\n        </Suspense>\n      </ThemeProvider>\n    </MantineProvider>\n  </>,\n);\n\n// If you want to start measuring performance in your app, pass a function\n// to log results (for example: reportWebVitals(console.log))\n// or send to an analytics endpoint. Learn more: https://bit.ly/CRA-vitals\nreportWebVitals();\n"
  },
  {
    "path": "packages/ui/src/init.js",
    "content": "// init.js\nwindow.global ||= window;\n"
  },
  {
    "path": "packages/ui/src/layout/main-layout/AppLayout.tsx",
    "content": "import { useCallback, useContext, useEffect, useRef, useState } from \"react\";\nimport Flow from \"../../components/Flow\";\nimport { Node, Edge } from \"reactflow\";\nimport { useTranslation } from \"react-i18next\";\nimport {\n  convertFlowToJson,\n  formatFlow,\n  nodesTopologicalSort,\n} from \"../../utils/flowUtils\";\nimport {\n  toastErrorMessage,\n  toastFastInfoMessage,\n  toastInfoMessage,\n} from \"../../utils/toastUtils\";\nimport ButtonRunAll from \"../../components/buttons/ButtonRunAll\";\nimport { FlowEvent, SocketContext } from \"../../providers/SocketProvider\";\nimport FlowWrapper from \"./wrapper/FlowWrapper\";\nimport TabHeader from \"./header/TabHeader\";\nimport {\n  createErrorMessageForMissingFields,\n  getNodeInError,\n} from \"../../utils/flowChecker\";\nimport { useVisibility } from \"../../providers/VisibilityProvider\";\nimport { FlowDataProvider } from \"../../providers/FlowDataProvider\";\nimport {\n  getCurrentTabIndex,\n  saveCurrentTabIndex,\n  saveTabsLocally,\n} from \"../../services/tabStorage\";\nimport { useLoading } from \"../../hooks/useLoading\";\n\nexport interface FlowTab {\n  nodes: Node[];\n  edges: Edge[];\n  metadata?: FlowMetadata;\n}\n\nexport interface FlowMetadata {\n  id?: string;\n  name?: string;\n  saveFlow?: boolean;\n  version?: string;\n  hostUrl?: string;\n  lastSave?: number;\n  isPublic?: boolean;\n}\n\nexport interface FlowManagerState {\n  tabs: FlowTab[];\n}\n\nexport interface FlowTabsProps {\n  tabs: FlowTab[];\n}\n\nexport type ApplicationMode = \"flow\";\nexport type ApplicationMenu = \"template\" | \"config\" | \"help\";\n\nconst FlowTabs = ({ tabs }: FlowTabsProps) => {\n  const { t } = useTranslation(\"flow\");\n\n  const [flowTabs, setFlowTabs] = useState<FlowManagerState>({\n    tabs: tabs,\n  });\n  const [currentTab, setCurrentTab] = useState(0);\n  const [refresh, setRefresh] = useState(false);\n  const [showOnlyOutput, setShowOnlyOutput] = useState(false);\n  const { emitEvent, connect } = useContext(SocketContext);\n  const [isRunning, setIsRunning] = useState(false);\n  const [mode, setMode] = useState<ApplicationMode>(\"flow\");\n  const [selectedEdgeType, setSelectedEdgeType] = useState(\"default\");\n  const useAuth = import.meta.env.VITE_APP_USE_AUTH === \"true\";\n  const { getElement } = useVisibility();\n  const [loading, startLoadingWith] = useLoading();\n  const configPopup = getElement(\"configPopup\");\n\n  const currentTabRef = useRef(currentTab);\n  const flowTabsRef = useRef(flowTabs);\n\n  useEffect(() => {\n    connect();\n  });\n\n  useEffect(() => {\n    currentTabRef.current = currentTab;\n  }, [currentTab]);\n\n  useEffect(() => {\n    flowTabsRef.current = flowTabs;\n  }, [flowTabs]);\n\n  useEffect(() => {\n    const init = async () => {\n      const savedCurrentTab = getCurrentTabIndex();\n      await handleChangeTab(parseInt(savedCurrentTab || \"0\"));\n      setRefresh((prev) => !prev);\n    };\n    init();\n  }, []);\n\n  useEffect(() => {\n    saveTabsLocally(flowTabs.tabs);\n  }, [flowTabs]);\n\n  useEffect(() => {\n    saveCurrentTabIndex(currentTab);\n  }, [currentTab]);\n\n  const addNewFlowTab = () => {\n    setFlowTabs((prevFlowTabs) => {\n      const newFlowTab = { ...prevFlowTabs };\n      newFlowTab.tabs.push({\n        nodes: [],\n        edges: [],\n        metadata: { version: \"1.0.0\" },\n      });\n      return newFlowTab;\n    });\n  };\n\n  const handleFlowChange = (\n    nodes: Node[],\n    edges: Edge[],\n    metadata?: FlowMetadata,\n  ) => {\n    
setFlowTabs((prevFlowTabs) => {\n      const updatedTabs = prevFlowTabs.tabs.map((tab, index) => {\n        if (index === currentTab) {\n          return {\n            ...tab,\n            nodes,\n            edges,\n            metadata: { ...tab.metadata, ...metadata },\n          };\n        }\n        return tab;\n      });\n      return { ...prevFlowTabs, tabs: updatedTabs };\n    });\n  };\n\n  const handleMetadataChange = (metadata: FlowMetadata) => {\n    setFlowTabs((prevFlowTabs) => {\n      const updatedTabs = prevFlowTabs.tabs.map((tab, index) => {\n        if (index === currentTab) {\n          return {\n            ...tab,\n            metadata: { ...tab.metadata, ...metadata },\n          };\n        }\n        return tab;\n      });\n      return { ...prevFlowTabs, tabs: updatedTabs };\n    });\n  };\n\n  const handleRunAllCurrentFlow = () => {\n    const nodes = flowTabs.tabs[currentTab].nodes;\n    const edges = flowTabs.tabs[currentTab].edges;\n\n    if (nodes.length === 0) {\n      toastFastInfoMessage(t(\"NoNodesToRun\"));\n      return;\n    }\n\n    const nodesSorted = nodesTopologicalSort(nodes, edges);\n    const flowFile = convertFlowToJson(nodesSorted, edges, true, true);\n\n    const nodesInError = getNodeInError(flowFile, nodesSorted);\n\n    if (nodesInError.length > 0) {\n      let errorMessage = createErrorMessageForMissingFields(nodesInError, t);\n      toastErrorMessage(errorMessage);\n      setFlowTabs({ ...flowTabs });\n      return;\n    }\n\n    const event: FlowEvent = {\n      name: \"process_file\",\n      data: {\n        jsonFile: JSON.stringify(flowFile),\n        metadata: flowTabs.tabs[currentTab].metadata,\n      },\n    };\n    const success = emitEvent(event);\n\n    setIsRunning(success);\n  };\n\n  const handleChangeRun = (runStatus: boolean) => {\n    setIsRunning(runStatus);\n  };\n\n  const handleChangeTab = useCallback(\n    async (index: number) => {\n      if (!isRunning) {\n        setCurrentTab(index);\n      } else {\n        toastFastInfoMessage(t(\"CannotChangeTabWhileRunning\"));\n      }\n    },\n    [isRunning],\n  );\n\n  const handleChangeMode = (mode: ApplicationMode) => {\n    setMode(mode);\n  };\n\n  const handleDeleteFlow = async (index: number) => {\n    if (flowTabsRef.current.tabs.length === 1) {\n      toastInfoMessage(t(\"CannotDeleteLastFlow\"));\n      return;\n    }\n\n    setFlowTabs((prev) => {\n      let updatedTabs = structuredClone(prev.tabs);\n      updatedTabs = updatedTabs.filter((_: FlowTab, i: number) => i !== index);\n      const updatedFlowTabs = { ...prev, tabs: updatedTabs };\n      return updatedFlowTabs;\n    });\n\n    setCurrentTab(index - 1 > 0 ? index - 1 : 0);\n    setRefresh((prev) => !prev);\n  };\n\n  const handleAddNewFlow = (flowData: any) => {\n    setFlowTabs((prevFlowTabs) => {\n      const newFlowTab = { ...prevFlowTabs };\n      newFlowTab.tabs.push(flowData);\n      return newFlowTab;\n    });\n    setCurrentTab(flowTabs.tabs.length - 1);\n  };\n\n  const handleChangeTabName = (index: number, name: string) => {\n    setFlowTabs((prevFlowTabs) => {\n      const updatedTabs = prevFlowTabs.tabs.map((tab, i) =>\n        i === index\n          ? 
{\n              ...tab,\n              metadata: {\n                ...tab.metadata,\n                name,\n              },\n            }\n          : tab,\n      );\n      return { ...prevFlowTabs, tabs: updatedTabs };\n    });\n  };\n\n  return (\n    <div className=\"relative flex h-screen flex-col\">\n      <TabHeader\n        currentTab={currentTab}\n        tabs={flowTabs.tabs}\n        onDeleteTab={handleDeleteFlow}\n        onAddFlowTab={addNewFlowTab}\n        onChangeTab={handleChangeTab}\n        onChangeTabName={handleChangeTabName}\n        tabPrefix={t(\"Flow\")}\n      >\n        <div className=\"ml-auto flex flex-row items-center space-x-2  \">\n          <div className=\"mr-5\">\n            <ButtonRunAll\n              onClick={handleRunAllCurrentFlow}\n              isRunning={isRunning}\n            />\n          </div>\n        </div>\n      </TabHeader>\n\n      <FlowDataProvider\n        flowTab={flowTabs.tabs[currentTab]}\n        onFlowChange={handleFlowChange}\n      >\n        <FlowWrapper\n          key={`flow-${currentTab}`}\n          mode={mode}\n          onChangeMode={handleChangeMode}\n          onAddNewFlow={handleAddNewFlow}\n        >\n          {mode === \"flow\" && (\n            <Flow\n              key={`flow-${currentTab}-${refresh}`}\n              nodes={flowTabs.tabs[currentTab]?.nodes ?? []}\n              edges={flowTabs.tabs[currentTab]?.edges ?? []}\n              metadata={flowTabs.tabs[currentTab]?.metadata ?? {}}\n              onFlowChange={handleFlowChange}\n              onUpdateMetadata={handleMetadataChange}\n              showOnlyOutput={showOnlyOutput}\n              isRunning={isRunning}\n              onRunChange={handleChangeRun}\n              onLoaded={() => {}}\n            />\n          )}\n        </FlowWrapper>\n      </FlowDataProvider>\n    </div>\n  );\n};\n\nexport default FlowTabs;\n"
  },
  {
    "path": "packages/ui/src/layout/main-layout/header/Tab.tsx",
    "content": "import React, { CSSProperties, ReactNode, useRef, useState } from \"react\";\nimport ReactDOM from \"react-dom\";\nimport { MdDelete, MdEdit } from \"react-icons/md\";\nimport styled from \"styled-components\";\nimport ActionGroup, { Action } from \"../../../components/selectors/ActionGroup\";\nimport { useTranslation } from \"react-i18next\";\nimport { FaCheck } from \"react-icons/fa\";\n\ninterface TabProps {\n  index: number;\n  active: boolean;\n  onChangeTab: (index: number) => void;\n  onDeleteTab: (index: number) => void;\n  onChangeTabName?: (index: number, name: string) => void;\n  name: string;\n}\n\ntype TabActions = \"remove\" | \"name\";\n\nconst Tab = ({\n  index,\n  active,\n  onChangeTab,\n  onDeleteTab,\n  onChangeTabName,\n  name,\n}: TabProps) => {\n  const { t } = useTranslation(\"flow\");\n  const [showActions, setShowActions] = useState(false);\n  const [editName, setEditName] = useState(false);\n  const [editableName, setTabName] = useState(name);\n\n  const buttonRef = useRef(null);\n\n  let hideActionsTimeout: ReturnType<typeof setTimeout>;\n\n  const hideActionsWithDelay = () => {\n    hideActionsTimeout = setTimeout(() => {\n      setShowActions(false);\n      setEditName(false);\n      if (name !== editableName) {\n        setTabName(name);\n      }\n    }, 1500);\n  };\n\n  const clearHideActionsTimeout = () => {\n    if (hideActionsTimeout) {\n      clearTimeout(hideActionsTimeout);\n    }\n  };\n\n  const handleChangeTabName = (name: string) => {\n    if (onChangeTabName) {\n      setTabName(name);\n    }\n  };\n\n  const handleSaveTabName = () => {\n    if (onChangeTabName) {\n      onChangeTabName(index, editableName);\n      setEditName(false);\n    }\n  };\n\n  const actions: Action<TabActions>[] = [\n    {\n      icon: editName ? <FaCheck /> : <MdEdit />,\n      name: t(\"ChangeName\"),\n      value: \"name\",\n      onClick: editName ? handleSaveTabName : () => setEditName(true),\n      tooltipPosition: \"bottom\",\n    },\n    {\n      icon: <MdDelete />,\n      name: t(\"RemoveFlow\"),\n      value: \"remove\",\n      onClick: () => {\n        onDeleteTab(index);\n      },\n      hoverColor: \"text-red-400\",\n      tooltipPosition: \"bottom\",\n    },\n  ];\n  return (\n    <>\n      <TabButton\n        active={active}\n        ref={buttonRef}\n        className={`text-md group\n                        relative\n                        mr-5 inline-block cursor-pointer px-2 py-2 \n                        font-medium\n                        ${active ? \"text-slate-50\" : \"text-slate-500\"} \n                        hover:text-slate-50`}\n        onClick={() => {\n          if (active) {\n            setShowActions(true);\n          } else {\n            onChangeTab(index);\n          }\n        }}\n        onMouseEnter={() => {\n          clearHideActionsTimeout();\n        }}\n        onMouseLeave={() => hideActionsWithDelay()}\n      >\n        {editName ? (\n          <input\n            type=\"text\"\n            value={editableName}\n            onChange={(e) => {\n              handleChangeTabName(e.target.value);\n            }}\n            onKeyDown={(e) => {\n              if (e.key === \"Enter\") {\n                handleSaveTabName();\n              }\n            }}\n          />\n        ) : (\n          <span className=\"text-md\">{name}</span>\n        )}\n        {active && (\n          <Portal>\n            <div\n              className={`absolute ${showActions ? 
\"opacity-100\" : \"pointer-events-none opacity-0\"} flex translate-y-2 justify-center transition-all duration-300 ease-in-out`}\n              onMouseEnter={() => {\n                clearHideActionsTimeout();\n              }}\n              style={\n                !!buttonRef.current ? calculatePosition(buttonRef.current) : {}\n              }\n            >\n              <ActionGroup actions={actions} showIcon />\n            </div>\n          </Portal>\n        )}\n      </TabButton>\n    </>\n  );\n};\n\nconst Portal = ({ children }: { children: ReactNode }) => {\n  const mountNode = document.getElementById(\"root\");\n\n  if (!mountNode) {\n    throw new Error(\"The element #portal-root was not found in the DOM\");\n  }\n\n  return ReactDOM.createPortal(children, mountNode);\n};\n\nfunction calculatePosition(element: HTMLDivElement) {\n  const rect = element.getBoundingClientRect();\n  return {\n    position: \"absolute\",\n    width: rect.width,\n    top: `${rect.bottom + window.scrollY}px`, // Position below the button\n    left: `${rect.left + window.scrollX}px`, // Align with the button's left edge\n  } as CSSProperties;\n}\n\nexport default Tab;\n\nexport const TabButton = styled.button<{ active: boolean }>`\n  transition:\n    background-color 0.3s ease-in-out,\n    color 0.3s ease-in-out;\n  transform: ${(props) => (props.active ? \"scale(1.15)\" : \"scale(1)\")};\n\n  &::after {\n    content: \"\";\n    position: absolute;\n    display: flex;\n    align-items: center;\n    justify-content: center;\n    bottom: 0;\n    left: 15%;\n    right: 15%;\n    height: 3px;\n    background: ${(props) => props.theme.accent};\n    transform: ${(props) => (props.active ? \"scaleX(1)\" : \"scaleX(0)\")};\n    transition: transform 0.3s ease-in-out;\n    z-index: 11;\n  }\n`;\n"
  },
  {
    "path": "packages/ui/src/layout/main-layout/header/TabHeader.tsx",
    "content": "import React from \"react\";\nimport { FaPlus } from \"react-icons/fa\";\nimport styled from \"styled-components\";\nimport Tab from \"./Tab\";\nimport { FlowTab } from \"../AppLayout\";\n\ninterface TabHeaderProps {\n  tabs: any[];\n  currentTab: number;\n  onChangeTab: (index: number) => void;\n  onDeleteTab: (index: number) => void;\n  onAddFlowTab: () => void;\n  onChangeTabName?: (index: number, name: string) => void;\n  tabPrefix: string;\n  children: React.ReactNode;\n}\n\nconst TabHeader = ({\n  tabs,\n  currentTab,\n  onChangeTab,\n  onDeleteTab,\n  onAddFlowTab,\n  onChangeTabName,\n  tabPrefix,\n  children,\n}: TabHeaderProps) => {\n  return (\n    <TabsContainer className=\"z-30 flex h-16 flex-row items-center justify-center border-b-sky-950/50 py-2\">\n      <div className=\"mx-auto ml-4 flex w-10 flex-row justify-center pr-2 text-center align-middle\">\n        <img src=\"logo.svg\" alt=\"Logo\"></img>\n      </div>\n      <Tabs className=\"overflow-hidden hover:overflow-x-auto\">\n        {tabs.map((tab: any, index: number) => (\n          <Tab\n            key={index}\n            index={index}\n            active={index === currentTab}\n            onChangeTab={onChangeTab}\n            onDeleteTab={onDeleteTab}\n            onChangeTabName={onChangeTabName}\n            name={\n              !!tab.metadata?.name\n                ? tab.metadata.name\n                : !!tab.name\n                  ? tab.name\n                  : tabPrefix + \" \" + (index + 1)\n            }\n          />\n        ))}\n      </Tabs>\n      <AddTabButton\n        onClick={onAddFlowTab}\n        className=\"rounded-lg px-1 py-1 text-lg text-slate-200 ring-slate-200 transition-all duration-300 ease-in-out hover:text-slate-50 hover:ring-2\"\n      >\n        <FaPlus />\n      </AddTabButton>\n      {children}\n    </TabsContainer>\n  );\n};\n\nconst TabsContainer = styled.div`\n  font-family: Roboto;\n`;\n\nconst Tabs = styled.div`\n  white-space: nowrap;\n  overflow-y: hidden;\n  padding-bottom: 3px;\n  max-width: 60%;\n`;\n\nconst AddTabButton = styled.div``;\n\nexport default TabHeader;\n"
  },
  {
    "path": "packages/ui/src/layout/main-layout/wrapper/FlowErrorBoundary.tsx",
    "content": "import { useEffect, useState, ErrorInfo } from \"react\";\n\ninterface ErrorBoundaryProps {\n  children: React.ReactNode;\n}\n\ninterface ErrorBoundaryState {\n  hasError: boolean;\n}\n\nexport default function ErrorBoundary({ children }: ErrorBoundaryProps) {\n  const [state, setState] = useState<ErrorBoundaryState>({ hasError: false });\n\n  useEffect(() => {\n    const handleError = (error: Error, errorInfo: ErrorInfo) => {\n      if (!error) return;\n      if (error.stack?.includes(\"videojs-wavesurfer\")) return; //TMP - But no idea how to catch them otherwise...\n      console.error(\"Error Boundary caught an error\", error, errorInfo);\n      setState({ hasError: true });\n    };\n\n    const globalErrorHandler = (event: ErrorEvent) => {\n      handleError(event.error, { componentStack: \"\" });\n    };\n\n    const unhandledRejectionHandler = (event: PromiseRejectionEvent) => {\n      handleError(event.reason, { componentStack: \"\" });\n    };\n\n    window.addEventListener(\"error\", globalErrorHandler);\n    window.addEventListener(\"unhandledrejection\", unhandledRejectionHandler);\n\n    return () => {\n      window.removeEventListener(\"error\", globalErrorHandler);\n      window.removeEventListener(\n        \"unhandledrejection\",\n        unhandledRejectionHandler,\n      );\n    };\n  }, []);\n\n  if (state.hasError) {\n    return (\n      <div className=\"items-center pt-6 text-center\">\n        <h1>Something went wrong with this flow.</h1>\n        <button onClick={() => window.location.reload()}>Reload</button>\n      </div>\n    );\n  }\n  return <>{children}</>;\n}\n"
  },
  {
    "path": "packages/ui/src/layout/main-layout/wrapper/FlowWrapper.tsx",
    "content": "import { ReactNode, memo, useCallback, useState } from \"react\";\nimport { FiHelpCircle } from \"react-icons/fi\";\nimport ConfigPopup from \"../../../components/popups/config-popup/ConfigPopup\";\nimport DnDSidebar from \"../../../components/bars/dnd-sidebar/DnDSidebar\";\nimport RightIconButton from \"../../../components/buttons/ConfigurationButton\";\nimport { ApplicationMenu, ApplicationMode } from \"../AppLayout\";\nimport HelpPopup from \"../../../components/popups/HelpPopup\";\nimport FlowErrorBoundary from \"./FlowErrorBoundary\";\nimport { useVisibility } from \"../../../providers/VisibilityProvider\";\n\ninterface FlowWrapperProps {\n  children?: ReactNode;\n  mode: ApplicationMode;\n  onChangeMode: (newMode: ApplicationMode) => void;\n  onAddNewFlow: (flowData: any) => void;\n}\n\ntype MenuStateType = {\n  [key in ApplicationMenu]: boolean;\n};\n\nfunction FlowWrapper({\n  mode,\n  onChangeMode,\n  onAddNewFlow,\n  children,\n}: FlowWrapperProps) {\n  const [menuState, setMenuState] = useState<MenuStateType>(\n    {} as MenuStateType,\n  );\n\n  const { getElement } = useVisibility();\n  const configPopup = getElement(\"configPopup\");\n\n  const handleMenuChange = useCallback((menu: ApplicationMenu) => {\n    menuState[menu] = !menuState[menu];\n    setMenuState({ ...menuState });\n  }, []);\n\n  return (\n    <>\n      <FlowErrorBoundary>\n        <div\n          className=\"fixed left-0 \n                          z-10\n                          flex\n                          h-full\n                          flex-row\n                          pt-16\"\n        >\n          {mode === \"flow\" && <DnDSidebar />}\n        </div>\n        <RightIconButton onClick={() => configPopup.show()} />\n        <RightIconButton\n          onClick={() => handleMenuChange(\"help\")}\n          color=\"#7fcce38f\"\n          bottom=\"80px\"\n          icon={<FiHelpCircle />}\n        />\n\n        <ConfigPopup\n          isOpen={configPopup.isVisible}\n          onClose={() => configPopup.hide()}\n        />\n        <HelpPopup\n          isOpen={menuState[\"help\"]}\n          onClose={() => handleMenuChange(\"help\")}\n        />\n        {children}\n      </FlowErrorBoundary>\n    </>\n  );\n}\n\nexport default memo(FlowWrapper);\n"
  },
  {
    "path": "packages/ui/src/nodes-configuration/dallENode.ts",
    "content": "import { NodeConfig } from \"./types\";\n\nconst dallENodeConfig: NodeConfig = {\n  nodeName: \"DALL-E 3\",\n  processorType: \"dalle-prompt\",\n  icon: \"OpenAILogo\",\n  hasInputHandle: true,\n  fields: [\n    {\n      type: \"textarea\",\n      name: \"prompt\",\n      placeholder: \"DallEPromptPlaceholder\",\n      hideIfParent: true,\n    },\n    {\n      type: \"select\",\n      name: \"size\",\n      options: [\n        {\n          label: \"1024x1024\",\n          value: \"1024x1024\",\n          default: true,\n        },\n        {\n          label: \"1024x1792\",\n          value: \"1024x1792\",\n        },\n        {\n          label: \"1792x1024\",\n          value: \"1792x1024\",\n        },\n      ],\n    },\n    {\n      type: \"select\",\n      name: \"quality\",\n      options: [\n        {\n          label: \"standard\",\n          value: \"standard\",\n          default: true,\n        },\n        {\n          label: \"hd\",\n          value: \"hd\",\n        },\n      ],\n    },\n  ],\n  outputType: \"imageUrl\",\n  section: \"models\",\n  helpMessage: \"dallePromptHelp\",\n};\n\nexport default dallENodeConfig;\n"
  },
  {
    "path": "packages/ui/src/nodes-configuration/gptVisionNode.ts",
    "content": "import { NodeConfig } from \"./types\";\n\nexport const gptVisionNodeConfig: NodeConfig = {\n  nodeName: \"GPT Vision\",\n  processorType: \"gpt-vision\",\n  icon: \"OpenAILogo\",\n  inputNames: [\"image_url\", \"prompt\"],\n  fields: [\n    {\n      name: \"prompt\",\n      label: \"Prompt\",\n      type: \"textarea\",\n      required: true,\n      hasHandle: true,\n      placeholder: \"VisionPromptPlaceholder\",\n    },\n    {\n      name: \"image_url\",\n      label: \"Image URL\",\n      type: \"fileUpload\",\n      hasHandle: true,\n      required: true,\n      placeholder: \"VisionImageURLPlaceholder\",\n      canAddChildrenFields: true,\n    },\n  ],\n  outputType: \"markdown\",\n  hasInputHandle: true,\n  section: \"models\",\n  helpMessage: \"gptVisionPromptHelp\",\n  showHandlesNames: true,\n};\n"
  },
  {
    "path": "packages/ui/src/nodes-configuration/inputTextNode.ts",
    "content": "import { NodeConfig } from \"./types\";\n\nconst inputTextNodeConfig: NodeConfig = {\n  nodeName: \"Text\",\n  processorType: \"input-text\",\n  icon: \"AiOutlineEdit\",\n  fields: [\n    {\n      type: \"textarea\",\n      name: \"inputText\",\n      required: true,\n      placeholder: \"InputPlaceholder\",\n    },\n  ],\n  outputType: \"text\",\n  defaultHideOutput: true,\n  section: \"input\",\n  helpMessage: \"inputHelp\",\n};\n\nexport default inputTextNodeConfig;\n"
  },
  {
    "path": "packages/ui/src/nodes-configuration/llmPrompt.ts",
    "content": "import { NodeConfig } from \"./types\";\n\nexport const llmPromptNodeConfig: NodeConfig = {\n  nodeName: \"LLMPrompt\",\n  processorType: \"llm-prompt\",\n  icon: \"OpenAILogo\",\n  inputNames: [\"prompt\", \"context\"],\n  fields: [\n    {\n      name: \"model\",\n      label: \"\",\n      type: \"select\",\n      options: [\n        {\n          label: \"GPT-4o-mini\",\n          value: \"gpt-4o-mini\",\n        },\n        {\n          label: \"GPT-4o\",\n          value: \"gpt-4o\",\n        },\n        {\n          label: \"GPT-4.1-nano\",\n          value: \"gpt-4.1-nano\",\n        },\n        {\n          label: \"GPT-4.1-mini\",\n          value: \"gpt-4.1-mini\",\n          default: true,\n        },\n        {\n          label: \"GPT-4.1\",\n          value: \"gpt-4.1\",\n        },\n      ],\n    },\n    {\n      name: \"context\",\n      label: \"context\",\n      type: \"textfield\",\n      hasHandle: true,\n      placeholder: \"ContextPlaceholder\",\n    },\n    {\n      name: \"prompt\",\n      label: \"prompt\",\n      type: \"textarea\",\n      required: true,\n      hasHandle: true,\n      placeholder: \"PromptPlaceholder\",\n    },\n    {\n      name: \"web_search\",\n      label: \"web_search\",\n      type: \"boolean\",\n      defaultValue: false,\n      condition: {\n        field: \"model\",\n        operator: \"in\",\n        value: [\"gpt-4o\", \"gpt-4o-mini\", \"gpt-4.1\", \"gpt-4.1-mini\"],\n      },\n    },\n    {\n      name: \"search_context_size\",\n      label: \"search_context_size\",\n      type: \"select\",\n      placeholder: \"SearchContextSizePlaceholder\",\n      options: [\n        {\n          label: \"Low\",\n          value: \"low\",\n        },\n        {\n          label: \"Medium\",\n          value: \"medium\",\n          default: true,\n        },\n        {\n          label: \"High\",\n          value: \"high\",\n        },\n      ],\n      condition: {\n        logic: \"AND\",\n        conditions: [\n          {\n            field: \"model\",\n            operator: \"in\",\n            value: [\"gpt-4o\", \"gpt-4o-mini\", \"gpt-4.1\", \"gpt-4.1-mini\"],\n          },\n          {\n            field: \"web_search\",\n            operator: \"equals\",\n            value: true,\n          },\n        ],\n      },\n    },\n    {\n      name: \"af_node_version\",\n      type: \"nonRendered\",\n      hidden: true,\n      defaultValue: 2,\n    },\n  ],\n  outputType: \"markdown\",\n  showHandlesNames: true,\n  // hasInputHandle: true,\n  section: \"models\",\n  helpMessage: \"llmPromptHelp\",\n};\n"
  },
  {
    "path": "packages/ui/src/nodes-configuration/mergerPromptNode.ts",
    "content": "import { NodeConfig } from \"./types\";\n\nexport const mergerPromptNode: NodeConfig = {\n  nodeName: \"MergerNode\",\n  processorType: \"merger-prompt\",\n  icon: \"AiOutlineMergeCells\",\n  inputNames: [\"input-1\", \"input-2\"],\n  fields: [\n    {\n      name: \"mergeMode\",\n      label: \"\",\n      type: \"option\",\n      options: [\n        {\n          label: \"Merge\",\n          value: \"1\",\n        },\n        {\n          label: \"Merge + GPT\",\n          value: \"2\",\n          default: true,\n        },\n      ],\n      hidden: true,\n    },\n    {\n      name: \"inputNameBar\",\n      type: \"inputNameBar\",\n      associatedField: \"prompt\",\n    },\n    {\n      name: \"prompt\",\n      type: \"textarea\",\n      required: true,\n      placeholder: \"MergePromptPlaceholder\",\n    },\n  ],\n  outputType: \"markdown\",\n  hasInputHandle: true,\n  section: \"tools\",\n  helpMessage: \"mergerPromptHelp\",\n};\n"
  },
  {
    "path": "packages/ui/src/nodes-configuration/nodeConfig.ts",
    "content": "import dallENodeConfig from \"./dallENode\";\nimport inputTextNodeConfig from \"./inputTextNode\";\nimport { llmPromptNodeConfig } from \"./llmPrompt\";\nimport stableDiffusionStabilityAiNodeConfig from \"./stableDiffusionStabilityAiNode\";\nimport { urlNodeConfig } from \"./urlNode\";\nimport { youtubeTranscriptNodeConfig } from \"./youtubeTranscriptNode\";\nimport { mergerPromptNode } from \"./mergerPromptNode\";\nimport { gptVisionNodeConfig } from \"./gptVisionNode\";\nimport { FieldType, NodeConfig } from \"./types\";\nimport { getNodeExtensions } from \"../api/nodes\";\nimport withCache from \"../api/cache/withCache\";\n\nexport const nodeConfigs: { [key: string]: NodeConfig | undefined } = {\n  \"input-text\": inputTextNodeConfig,\n  url_input: urlNodeConfig,\n  \"llm-prompt\": llmPromptNodeConfig,\n  \"gpt-vision\": gptVisionNodeConfig,\n  youtube_transcript_input: youtubeTranscriptNodeConfig,\n  \"dalle-prompt\": dallENodeConfig,\n  \"stable-diffusion-stabilityai-prompt\": stableDiffusionStabilityAiNodeConfig,\n  \"merger-prompt\": mergerPromptNode,\n  // add other configs here...\n};\n\nconst fieldTypeWithoutHandle: FieldType[] = [\n  \"select\",\n  \"option\",\n  \"boolean\",\n  \"slider\",\n];\n\nexport const getConfigViaType = (type: string): NodeConfig | undefined => {\n  return structuredClone(nodeConfigs[type]);\n};\n\nexport const fieldHasHandle = (fieldType: FieldType): boolean => {\n  return !fieldTypeWithoutHandle.includes(fieldType);\n};\n\nexport const loadExtensions = async () => {\n  const extensions = await withCache(getNodeExtensions);\n  extensions.forEach((extension: NodeConfig) => {\n    const key = extension.processorType;\n    if (!key) return;\n    if (key in nodeConfigs) return;\n\n    nodeConfigs[key] = extension;\n  });\n};\n"
  },
  {
    "path": "packages/ui/src/nodes-configuration/sectionConfig.ts",
    "content": "import { AiOutlineRobot } from \"react-icons/ai\";\nimport { BsInputCursorText } from \"react-icons/bs\";\nimport { FaToolbox } from \"react-icons/fa\";\nimport {\n  NodeConfig,\n  SectionType,\n  SubnodeData,\n  SubnodeShortcutStyle,\n} from \"./types\";\nimport { nodeConfigs } from \"./nodeConfig\";\nimport { getNodesHiddenList } from \"../components/popups/config-popup/parameters\";\nimport {\n  getHighPriorityNodePrefixes,\n  getLowPriorityNodePrefixes,\n} from \"../config/config\";\nimport { DraggableNodeAdditionnalData } from \"../components/bars/dnd-sidebar/types\";\n\nexport type NodeSection = {\n  label: string;\n  type: SectionType;\n  icon?: any;\n  nodes?: DnDNode[];\n};\n\nexport type DnDNode = {\n  label: string;\n  type: string;\n  keywords?: string[];\n  helpMessage?: string;\n  section: SectionType;\n  isBeta?: boolean;\n  isNew?: boolean;\n  color?: string;\n  subnodesShortcutConfig?: SubnodeData[];\n  subnodesShortcutStyle?: SubnodeShortcutStyle;\n  additionnalData?: DraggableNodeAdditionnalData;\n};\nexport function transformNodeConfigsToDndNode(configs: {\n  [key: string]: NodeConfig | undefined;\n}): DnDNode[] {\n  return Object.entries(configs).map(([type, config]) => {\n    return {\n      label: config?.nodeName,\n      type: type,\n      helpMessage: config?.helpMessage || undefined,\n      section: config?.section,\n      isBeta: config?.isBeta,\n    } as DnDNode;\n  });\n}\n\nexport function getNonGenericNodeConfig() {\n  const nonGenericNodeConfig: DnDNode[] = [\n    {\n      label: \"File\",\n      type: \"file\",\n      helpMessage: \"fileUploadHelp\",\n      section: \"input\",\n    },\n    {\n      label: \"AiDataSplitter\",\n      type: \"ai-data-splitter\",\n      helpMessage: \"dataSplitterHelp\",\n      section: \"tools\",\n      additionnalData: {\n        additionnalData: {\n          mode: \"manual\",\n          separator: \";\",\n        },\n      },\n    },\n    {\n      label: \"ReplicateModel\",\n      type: \"replicate\",\n      helpMessage: \"replicateHelp\",\n      section: \"models\",\n      subnodesShortcutConfig: [\n        {\n          label: \"Imagen 4\",\n          configData: {\n            nodeName: \"google/imagen-4\",\n          },\n          description: \"imagenDescription\",\n          keywords: [\n            \"Image Generation\",\n            \"Generate Image\",\n            \"Image\",\n            \"Google Imagen\",\n          ],\n        },\n\n        {\n          label: \"FLUX Kontext Pro\",\n          configData: {\n            nodeName: \"black-forest-labs/flux-kontext-pro\",\n          },\n          description: \"fluxKontextDescription\",\n          keywords: [\n            \"Image Generation\",\n            \"Generate Image\",\n            \"Image\",\n            \"Edit Image\",\n          ],\n        },\n        {\n          label: \"FLUX 1.1 Pro\",\n          configData: {\n            nodeName: \"black-forest-labs/flux-1.1-pro\",\n          },\n          description: \"fluxDescription\",\n          keywords: [\"Image Generation\", \"Generate Image\", \"Image\"],\n        },\n        {\n          label: \"Recraft V3\",\n          configData: {\n            nodeName: \"recraft-ai/recraft-v3\",\n          },\n          description: \"recraftDescription\",\n          keywords: [\"Image Generation\", \"Generate Image\", \"Image\", \"SVG\"],\n        },\n        {\n          label: \"FLUX Kontext Max\",\n          configData: {\n            nodeName: \"black-forest-labs/flux-kontext-max\",\n          },\n    
      description: \"fluxKontextDescription\",\n          keywords: [\n            \"Image Generation\",\n            \"Generate Image\",\n            \"Image\",\n            \"Edit Image\",\n          ],\n        },\n\n        {\n          label: \"Imagen 3\",\n          configData: {\n            nodeName: \"google/imagen-3\",\n          },\n          description: \"imagenDescription\",\n          keywords: [\n            \"Image Generation\",\n            \"Generate Image\",\n            \"Image\",\n            \"Google Imagen\",\n          ],\n        },\n\n        {\n          label: \"Remove BG\",\n          configData: {\n            nodeName: \"lucataco/remove-bg\",\n          },\n          description: \"removeBgDescription\",\n          keywords: [\"Image Edit\", \"Background\", \"Background removal\"],\n        },\n        {\n          label: \"Recraft V3 SVG\",\n          configData: {\n            nodeName: \"recraft-ai/recraft-v3-svg\",\n          },\n          description: \"recraftSVGDescription\",\n          keywords: [\"Image Generation\", \"Generate Image\", \"Image\", \"SVG\"],\n        },\n        {\n          label: \"Google Upscaler\",\n          configData: {\n            nodeName: \"google/upscaler\",\n          },\n          description: \"GoogleUpscaleDescription\",\n          keywords: [\"Image Edit\", \"Edit Image\"],\n        },\n        {\n          label: \"Upscale Image\",\n          configData: {\n            nodeName: \"nightmareai/real-esrgan\",\n          },\n          description: \"upscaleDescription\",\n          keywords: [\"Image Edit\", \"Edit Image\"],\n        },\n\n        {\n          label: \"Face Swap\",\n          configData: {\n            nodeName: \"lucataco/faceswap\",\n          },\n          description: \"faceswapDescription\",\n          keywords: [\"Image Edit\", \"Edit Image\"],\n        },\n        {\n          label: \"FLUX 1.1 Pro Ultra\",\n          configData: {\n            nodeName: \"black-forest-labs/flux-1.1-pro-ultra\",\n          },\n          description: \"fluxDescription\",\n          keywords: [\"Image Generation\", \"Generate Image\", \"Image\"],\n        },\n\n        {\n          label: \"Imagen 3 Fast\",\n          configData: {\n            nodeName: \"google/imagen-3-fast\",\n          },\n          description: \"imagenDescription\",\n          keywords: [\n            \"Image Generation\",\n            \"Generate Image\",\n            \"Image\",\n            \"Google Imagen\",\n          ],\n        },\n        {\n          label: \"FLUX Dev\",\n          configData: {\n            nodeName: \"black-forest-labs/flux-dev\",\n          },\n          description: \"fluxDescription\",\n          keywords: [\"Image Generation\", \"Generate Image\", \"Image\"],\n        },\n        {\n          label: \"FLUX Schnell\",\n          configData: {\n            nodeName: \"black-forest-labs/flux-schnell\",\n          },\n          description: \"fluxDescription\",\n          keywords: [\"Image Generation\", \"Generate Image\", \"Image\"],\n        },\n        {\n          label: \"Google Veo 3\",\n          configData: {\n            nodeName: \"google/veo-3\",\n          },\n          description: \"veo3Description\",\n          keywords: [\"Video\", \"Generate Video\"],\n        },\n        {\n          label: \"Video-01\",\n          configData: {\n            nodeName: \"minimax/video-01\",\n          },\n          description: \"video01Description\",\n          keywords: [\"Video\", \"Hailuo\", \"Minimax\", 
\"Generate Video\", \"Animate\"],\n        },\n        {\n          label: \"Video-01-Live\",\n          configData: {\n            nodeName: \"minimax/video-01-live\",\n          },\n          description: \"video01LiveDescription\",\n          keywords: [\n            \"Video\",\n            \"Hailuo\",\n            \"Minimax\",\n            \"Generate Video\",\n            \"Animate\",\n            \"Animation\",\n          ],\n        },\n        {\n          label: \"Kling v1.6 Pro\",\n          configData: {\n            nodeName: \"kwaivgi/kling-v1.6-pro\",\n          },\n          description: \"klingDescription\",\n          keywords: [\"Video\", \"Generate Video\", \"Animate\"],\n        },\n        {\n          label: \"Kling v1.6\",\n          configData: {\n            nodeName: \"kwaivgi/kling-v1.6-standard\",\n          },\n          description: \"klingDescription\",\n          keywords: [\"Video\", \"Generate Video\", \"Animate\"],\n        },\n        {\n          label: \"Play-Dialog\",\n          configData: {\n            nodeName: \"playht/play-dialog\",\n          },\n          description: \"playDialogDescription\",\n          keywords: [\"Audio\", \"Generate Audio\", \"Voice\", \"TTS\", \"Conversation\"],\n        },\n        {\n          label: \"Music-01\",\n          configData: {\n            nodeName: \"minimax/music-01\",\n          },\n          description: \"music01Description\",\n          keywords: [\"Audio\", \"Generate Audio\", \"Music\", \"Song\", \"Sound\"],\n        },\n        {\n          label: \"Speech-02 HD\",\n          configData: {\n            nodeName: \"minimax/speech-02-hd\",\n          },\n          description: \"music01Description\",\n          keywords: [\"Audio\", \"Generate Speech\", \"Voice\", \"Sound\"],\n        },\n        {\n          label: \"Speech-02 Turbo\",\n          configData: {\n            nodeName: \"minimax/speech-02-turbo\",\n          },\n          description: \"music01Description\",\n          keywords: [\"Audio\", \"Generate Speech\", \"Voice\", \"Sound\"],\n        },\n        {\n          label: \"Face Expression Edit\",\n          configData: {\n            nodeName: \"fofr/expression-editor\",\n          },\n          description: \"expressionEditorDescription\",\n          keywords: [\"Image Edit\", \"Edit Image\", \"Face\"],\n        },\n      ],\n    },\n    {\n      label: \"Transition\",\n      type: \"transition\",\n      helpMessage: \"transitionHelp\",\n      section: \"tools\",\n    },\n    {\n      label: \"Display\",\n      type: \"display\",\n      helpMessage: \"displayHelp\",\n      section: \"tools\",\n    },\n  ];\n  return nonGenericNodeConfig;\n}\n\nfunction getAllDndNode(): DnDNode[] {\n  const nodesDisabled = getNodesHiddenList();\n  const nonGenericNodeConfig = getNonGenericNodeConfig();\n  return transformNodeConfigsToDndNode(nodeConfigs)\n    .concat(nonGenericNodeConfig)\n    .filter((node) => !nodesDisabled.includes(node.type));\n}\n\nexport const populateNodeSections = () => {\n  const emptyNodeSections: NodeSection[] = [\n    {\n      label: \"Input\",\n      type: \"input\",\n      icon: BsInputCursorText,\n    },\n    {\n      label: \"Models\",\n      type: \"models\",\n      icon: AiOutlineRobot,\n    },\n    {\n      label: \"Tools\",\n      type: \"tools\",\n      icon: FaToolbox,\n    },\n  ];\n  const nodes = getAllDndNode();\n\n  nodes.forEach((node) => {\n    const section = emptyNodeSections.find((sec) => sec.type === node.section);\n\n    if (section) {\n      if 
(!section.nodes) {\n        section.nodes = [];\n      }\n      section.nodes.push(node);\n    }\n  });\n\n  const sectionFiltered = emptyNodeSections.filter(\n    (sec) => sec.nodes && sec.nodes.length > 0,\n  );\n\n  for (const sec of sectionFiltered) {\n    if (sec.type === \"models\") sortSection(sec);\n  }\n\n  return sectionFiltered;\n};\n\nexport function sortSection(\n  section: NodeSection,\n  lowPriorityPrefixes?: string[],\n  highPriorityPrefixes?: string[],\n) {\n  const lowPriority = lowPriorityPrefixes ?? getLowPriorityNodePrefixes();\n  const highPriority = highPriorityPrefixes ?? getHighPriorityNodePrefixes();\n\n  if (section.nodes) {\n    section.nodes.sort((a, b) => {\n      const getHighPriorityRank = (type: string): number => {\n        for (let i = 0; i < highPriority.length; i++) {\n          if (type.startsWith(highPriority[i])) {\n            return i;\n          }\n        }\n        return -1;\n      };\n\n      const isLowPriority = (label: string): boolean =>\n        lowPriority.some((prefix: string) =>\n          label.toLowerCase().startsWith(prefix.toLowerCase()),\n        );\n\n      const aHighRank = getHighPriorityRank(a.type);\n      const bHighRank = getHighPriorityRank(b.type);\n      const aLow = isLowPriority(a.type);\n      const bLow = isLowPriority(b.type);\n\n      // Low priority always goes last\n      if (aLow && !bLow) return 1;\n      if (!aLow && bLow) return -1;\n\n      // High priority comes first, sorted by priority rank\n      if (aHighRank !== -1 && bHighRank !== -1) {\n        return aHighRank - bHighRank;\n      }\n      if (aHighRank !== -1) return -1;\n      if (bHighRank !== -1) return 1;\n\n      // All remaining items sorted alphabetically by label\n      return a.label.localeCompare(b.label);\n    });\n  }\n}\n\nexport const getSections = () => populateNodeSections();\n"
  },
  {
    "path": "packages/ui/src/nodes-configuration/stableDiffusionStabilityAiNode.ts",
    "content": "import { NodeConfig } from \"./types\";\n\nconst stableDiffusionStabilityAiNodeConfig: NodeConfig = {\n  nodeName: \"Stable Diffusion\",\n  processorType: \"stable-diffusion-stabilityai-prompt\",\n  icon: \"StabilityAILogo\",\n  hasInputHandle: true,\n  inputNames: [\"prompt\"],\n  fields: [\n    {\n      type: \"textarea\",\n      name: \"prompt\",\n      placeholder: \"DallEPromptPlaceholder\",\n      hideIfParent: true,\n    },\n    {\n      type: \"select\",\n      name: \"size\",\n      placeholder: \"StableDiffusionSizePlaceholder\",\n      options: [\n        {\n          label: \"1024x1024\",\n          value: \"1024x1024\",\n          default: true,\n        },\n        {\n          label: \"1152x896\",\n          value: \"1152x896\",\n        },\n        {\n          label: \"1216x832\",\n          value: \"1216x832\",\n        },\n        {\n          label: \"1344x768\",\n          value: \"1344x768\",\n        },\n        {\n          label: \"1536x640\",\n          value: \"1536x640\",\n        },\n        {\n          label: \"640x1536\",\n          value: \"640x1536\",\n        },\n        {\n          label: \"768x1344\",\n          value: \"768x1344\",\n        },\n        {\n          label: \"832x1216\",\n          value: \"832x1216\",\n        },\n        {\n          label: \"896x1152\",\n          value: \"896x1152\",\n        },\n      ],\n    },\n  ],\n  outputType: \"imageUrl\",\n  section: \"models\",\n  helpMessage: \"stableDiffusionPromptHelp\",\n};\n\nexport default stableDiffusionStabilityAiNodeConfig;\n"
  },
  {
    "path": "packages/ui/src/nodes-configuration/types.ts",
    "content": "export type SectionType = \"models\" | \"image-generation\" | \"tools\" | \"input\";\nexport type FieldType =\n  | \"input\"\n  | \"inputInt\"\n  | \"textarea\"\n  | \"select\"\n  | \"option\"\n  | \"inputNameBar\"\n  | \"boolean\"\n  | \"slider\"\n  | \"textfield\"\n  | \"numericfield\"\n  | \"switch\"\n  | \"textToDisplay\"\n  | \"list\"\n  | \"json\"\n  | \"nonRendered\"\n  | \"dictionnary\"\n  | \"fileUpload\"\n  | \"imageMaskCreator\";\n\nexport type OutputType =\n  | \"imageUrl\"\n  | \"videoUrl\"\n  | \"audioUrl\"\n  | \"pdfUrl\"\n  | \"imageBase64\"\n  | \"markdown\"\n  | \"text\"\n  | \"fileUrl\"\n  | \"3dUrl\";\n\nexport interface Option {\n  label: string;\n  value: string;\n  default?: boolean;\n}\n\nexport type Operator =\n  | \"equals\"\n  | \"not equals\"\n  | \"in\"\n  | \"not in\"\n  | \"greater than\"\n  | \"less than\"\n  | \"exists\"\n  | \"not exists\";\n\nexport interface Condition {\n  field: string;\n  operator: Operator;\n  value: any;\n}\n\nexport interface ConditionGroup {\n  logic: \"AND\" | \"OR\";\n  conditions: Condition[];\n}\n\nexport type FieldCondition = Condition | ConditionGroup;\n\nexport interface Field {\n  name: string;\n  type: FieldType;\n  label?: string;\n  placeholder?: string;\n  defaultValue?: any;\n  max?: number;\n  min?: number;\n  step?: number;\n  allowDecimal?: boolean;\n  options?: Option[];\n  hideIfParent?: boolean;\n  required?: boolean;\n  hidden?: boolean;\n  hasHandle?: boolean;\n  isLinked?: boolean;\n  associatedField?: string;\n  description?: string;\n  isBinary?: boolean;\n  withModalEdit?: boolean;\n  condition?: FieldCondition;\n  canAddChildrenFields?: boolean;\n  isChild?: boolean;\n}\n\nexport interface SubnodeData {\n  label: string;\n  keywords?: string[];\n  data?: any;\n  configData?: any;\n  isBeta?: boolean;\n  isNew?: boolean;\n  description?: string;\n}\nexport interface SubnodeShortcutStyle {\n  title: string;\n}\n\nexport interface NodeConfig {\n  processorType?: string;\n  nodeName: string;\n  icon: string;\n  inputNames?: string[];\n  fields: Field[];\n  hideFieldsIfParent?: boolean;\n  outputType: OutputType;\n  defaultHideOutput?: boolean;\n  hasInputHandle?: boolean;\n  section: SectionType;\n  helpMessage?: string;\n  showHandlesNames?: boolean;\n  isDynamicallyGenerated?: boolean;\n  isBeta?: boolean;\n}\n\nexport interface DiscriminatedNodeConfig {\n  discriminators: { [key: string]: string };\n  config: NodeConfig;\n}\n\nexport interface NodeSubConfig {\n  discriminatorFields: string[];\n  subConfigurations: DiscriminatedNodeConfig[];\n}\n\nexport type NodeConfigVariant = NodeSubConfig &\n  Omit<NodeConfig, \"fields\" | \"outputType\">;\n"
  },
  {
    "path": "packages/ui/src/nodes-configuration/urlNode.ts",
    "content": "import { NodeConfig } from \"./types\";\n\nexport const urlNodeConfig: NodeConfig = {\n  nodeName: \"EnterURL\",\n  processorType: \"url_input\",\n  icon: \"FaLink\",\n  showHandlesNames: true,\n  fields: [\n    {\n      name: \"url\",\n      type: \"input\",\n      required: true,\n      label: \"url\",\n      hasHandle: true,\n      placeholder: \"URLPlaceholder\",\n    },\n    {\n      name: \"with_html_tags\",\n      type: \"boolean\",\n      label: \"with_html_tags\",\n    },\n    {\n      name: \"with_html_attributes\",\n      type: \"boolean\",\n      label: \"with_html_attributes\",\n    },\n    {\n      name: \"selectors\",\n      type: \"list\",\n      label: \"selectors\",\n      placeholder: \"div .article #id\",\n    },\n    {\n      name: \"selectors_to_remove\",\n      type: \"list\",\n      label: \"selectors_to_remove\",\n      placeholder: \"div .article #id\",\n      defaultValue: [\"meta\", \"link\", \"script\"],\n    },\n  ],\n  outputType: \"text\",\n  defaultHideOutput: true,\n  section: \"input\",\n  helpMessage: \"urlInputHelp\",\n};\n"
  },
  {
    "path": "packages/ui/src/nodes-configuration/youtubeTranscriptNode.ts",
    "content": "import { NodeConfig } from \"./types\";\n\nexport const youtubeTranscriptNodeConfig: NodeConfig = {\n  nodeName: \"YoutubeTranscriptNodeName\",\n  processorType: \"youtube_transcript_input\",\n  icon: \"YoutubeLogo\",\n  fields: [\n    {\n      name: \"url\",\n      label: \"url\",\n      type: \"input\",\n      required: true,\n      placeholder: \"URLPlaceholder\",\n      hasHandle: true,\n    },\n    {\n      name: \"language\",\n      label: \"language\",\n      type: \"select\",\n      options: [\n        {\n          label: \"English\",\n          value: \"en\",\n          default: true,\n        },\n        {\n          label: \"French\",\n          value: \"fr\",\n        },\n        {\n          label: \"Spanish\",\n          value: \"es\",\n        },\n        {\n          label: \"German\",\n          value: \"de\",\n        },\n        {\n          label: \"Italian\",\n          value: \"it\",\n        },\n        {\n          label: \"Chinese\",\n          value: \"zh\",\n        },\n        {\n          label: \"Hindi\",\n          value: \"hi\",\n        },\n        {\n          label: \"Arabic\",\n          value: \"ar\",\n        },\n        {\n          label: \"Japanese\",\n          value: \"ja\",\n        },\n        {\n          label: \"Portuguese\",\n          value: \"pt\",\n        },\n\n        {\n          label: \"Russian\",\n          value: \"ru\",\n        },\n\n        {\n          label: \"Korean\",\n          value: \"ko\",\n        },\n      ],\n    },\n  ],\n  outputType: \"text\",\n  defaultHideOutput: true,\n  section: \"input\",\n  helpMessage: \"youtubeTranscriptHelp\",\n  showHandlesNames: true,\n};\n"
  },
  {
    "path": "packages/ui/src/providers/FlowDataProvider.tsx",
    "content": "import React, { createContext, useContext, useState, ReactNode } from \"react\";\nimport { FlowMetadata, FlowTab } from \"../layout/main-layout/AppLayout\";\nimport { Edge, Node } from \"reactflow\";\n\ninterface FlowDataContextType {\n  getCurrentTab: () => FlowTab;\n  updateCurrentTabMetadata: (metadata: FlowMetadata) => void;\n}\n\nexport const FlowDataContext = createContext<FlowDataContextType | undefined>(\n  undefined,\n);\n\ninterface FlowDataProviderProps {\n  children: ReactNode;\n  flowTab: FlowTab;\n  onFlowChange: (nodes: Node[], edges: Edge[], metadata?: FlowMetadata) => void;\n}\n\nexport const FlowDataProvider: React.FC<FlowDataProviderProps> = ({\n  children,\n  flowTab,\n  onFlowChange,\n}) => {\n  function getCurrentTab() {\n    return flowTab;\n  }\n\n  function updateCurrentTabMetadata(metadata: FlowMetadata) {\n    onFlowChange(flowTab.nodes, flowTab.edges, metadata);\n  }\n\n  const value = {\n    getCurrentTab: getCurrentTab,\n    updateCurrentTabMetadata: updateCurrentTabMetadata,\n  };\n\n  return (\n    <FlowDataContext.Provider value={value}>\n      {children}\n    </FlowDataContext.Provider>\n  );\n};\n\nexport const useFlowData = (): FlowDataContextType => {\n  const context = useContext(FlowDataContext);\n  if (context === undefined) {\n    throw new Error(\"useVisibility must be used within a VisibilityProvider\");\n  }\n  return context;\n};\n"
  },
  {
    "path": "packages/ui/src/providers/NodeProvider.tsx",
    "content": "import { ReactNode, createContext, useContext, useState } from \"react\";\nimport { Node, Edge } from \"reactflow\";\nimport { nodesTopologicalSort, convertFlowToJson } from \"../utils/flowUtils\";\nimport { FlowEvent, SocketContext } from \"./SocketProvider\";\nimport { useTranslation } from \"react-i18next\";\nimport { toastErrorMessage, toastFastInfoMessage } from \"../utils/toastUtils\";\nimport {\n  createErrorMessageForMissingFields,\n  getNodeInError,\n} from \"../utils/flowChecker\";\nimport { createUniqNodeId } from \"../utils/nodeUtils\";\nimport { NodeAppearance, NodeData } from \"../components/nodes/types/node\";\nimport { NodeConfig } from \"../nodes-configuration/types\";\nimport { getDefaultOptions } from \"../utils/nodeConfigurationUtils\";\nimport { FlowMetadata } from \"../layout/main-layout/AppLayout\";\n\nexport type NodeDimensions = {\n  width?: number | null;\n  height?: number | null;\n};\n\ninterface NodeContextType {\n  runNode: (nodeName: string) => boolean;\n  runAllNodes: () => void;\n  hasParent: (id: string) => boolean;\n  getIncomingEdges: (id: string) => Edge[] | undefined;\n  getOutgoingEdges: (id: string) => Edge[] | undefined;\n  removeNodeIncomingEdges: (id: string) => void;\n  removeEdgesByIds: (id: string[]) => void;\n  getEdgeIndex: (id: string) => Edge | undefined;\n  showOnlyOutput?: boolean;\n  isRunning: boolean;\n  currentNodesRunning: string[];\n  errorCount: number;\n  onUpdateNodeData: (nodeId: string, data: any) => void;\n  onUpdateNodes: (nodesUpdated: Node[], edgesUpdated: Edge[]) => void;\n  getNodeDimensions: (nodeId: string) => NodeDimensions | undefined;\n  duplicateNode: (nodeId: string) => void;\n  createNodeRef: (nodeId: string) => void;\n  clearNodeOutput: (nodeId: string) => void;\n  clearAllOutput: () => void;\n  updateNodeAppearance: (nodeId: string, appearance: NodeAppearance) => void;\n  overrideConfigForNode: (\n    nodeId: string,\n    newConfig: NodeConfig,\n    newData: NodeData,\n  ) => void;\n  removeNode: (nodeId: string) => void;\n  removeAll: () => void;\n  findNode: (nodeId: string) => Node | undefined;\n  nodes: Node[];\n  edges: Edge[];\n  currentNodeIdSelected: string;\n  setCurrentNodeIdSelected: (id: string) => void;\n}\n\nconst DUPLICATED_NODE_OFFSET = 100;\n\nexport const NodeContext = createContext<NodeContextType>({\n  runNode: () => false,\n  runAllNodes: () => undefined,\n  hasParent: () => false,\n  getIncomingEdges: () => undefined,\n  getOutgoingEdges: () => undefined,\n  removeNodeIncomingEdges: () => undefined,\n  removeEdgesByIds: () => undefined,\n  getEdgeIndex: () => undefined,\n  showOnlyOutput: false,\n  isRunning: false,\n  currentNodesRunning: [],\n  errorCount: 0,\n  onUpdateNodeData: () => undefined,\n  onUpdateNodes: () => undefined,\n  getNodeDimensions: () => undefined,\n  duplicateNode: () => undefined,\n  createNodeRef: () => undefined,\n  clearNodeOutput: () => undefined,\n  clearAllOutput: () => undefined,\n  updateNodeAppearance: () => undefined,\n  overrideConfigForNode: () => undefined,\n  removeNode: () => undefined,\n  removeAll: () => undefined,\n  findNode: () => undefined,\n  nodes: [],\n  edges: [],\n  currentNodeIdSelected: \"\",\n  setCurrentNodeIdSelected: () => undefined,\n});\n\nexport const NodeProvider = ({\n  nodes,\n  edges,\n  metadata,\n  showOnlyOutput,\n  isRunning,\n  currentNodesRunning,\n  errorCount,\n  onUpdateNodeData,\n  onUpdateNodes,\n  children,\n}: {\n  nodes: Node[];\n  edges: Edge[];\n  metadata?: FlowMetadata;\n  showOnlyOutput?: 
boolean;\n  isRunning: boolean;\n  currentNodesRunning: string[];\n  errorCount: number;\n  onUpdateNodeData: (nodeId: string, data: any) => void;\n  onUpdateNodes: (nodesUpdated: Node[], edgesUpdated: Edge[]) => void;\n  children: ReactNode;\n}) => {\n  const { t } = useTranslation(\"flow\");\n  const { emitEvent } = useContext(SocketContext);\n  const [currentNodeIdSelected, setCurrentNodeIdSelected] =\n    useState<string>(\"\");\n\n  const runNode = (name: string) => {\n    const nodesSorted = nodesTopologicalSort(nodes, edges);\n    const flowFile = convertFlowToJson(nodesSorted, edges, true, true);\n\n    const nodesInError = getNodeInError(flowFile, nodesSorted, name);\n\n    if (nodesInError.length > 0) {\n      let errorMessage = createErrorMessageForMissingFields(nodesInError, t);\n      toastErrorMessage(errorMessage);\n      return false;\n    }\n\n    const event: FlowEvent = {\n      name: \"run_node\",\n      data: {\n        jsonFile: JSON.stringify(flowFile),\n        nodeName: name,\n        metadata: metadata,\n      },\n    };\n    return emitEvent(event);\n  };\n\n  const runAllNodes = () => {\n    if (nodes.length === 0) {\n      toastFastInfoMessage(t(\"NoNodesToRun\"));\n      return;\n    }\n\n    const nodesSorted = nodesTopologicalSort(nodes, edges);\n    const flowFile = convertFlowToJson(nodesSorted, edges, true, true);\n\n    const nodesInError = getNodeInError(flowFile, nodesSorted);\n\n    if (nodesInError.length > 0) {\n      let errorMessage = createErrorMessageForMissingFields(nodesInError, t);\n      toastErrorMessage(errorMessage);\n      return;\n    }\n\n    const event: FlowEvent = {\n      name: \"process_file\",\n      data: {\n        jsonFile: JSON.stringify(flowFile),\n        metadata: metadata,\n      },\n    };\n    emitEvent(event);\n  };\n\n  const hasParent = (id: string) => {\n    return !!edges.find((edge) => edge.target === id);\n  };\n\n  const getIncomingEdges = (id: string) => {\n    return edges.filter((edge) => edge.target === id);\n  };\n\n  const getOutgoingEdges = (id: string) => {\n    return edges.filter((edge) => edge.source === id);\n  };\n\n  const removeNodeIncomingEdges = (id: string) => {\n    const edgesUpdated = edges.filter((edge) => edge.target !== id);\n    onUpdateNodes(nodes, edgesUpdated);\n  };\n\n  const removeEdgesByIds = (ids: string[]) => {\n    const edgesUpdated = edges.filter((edge) => !ids.includes(edge.id));\n    onUpdateNodes(nodes, edgesUpdated);\n  };\n\n  const overrideConfigForNode = (\n    id: string,\n    newConfig: NodeConfig,\n    newData: NodeData,\n  ) => {\n    const nodesUpdated = nodes.map((node) => {\n      if (node.id === id) {\n        const defaultOptions: any = getDefaultOptions(\n          newConfig.fields,\n          newData,\n        );\n        console.log(newData);\n        node.data = {\n          ...newData,\n          ...defaultOptions,\n          config: {\n            ...newConfig,\n            isDynamicallyGenerated: false,\n          },\n        };\n      }\n      return node;\n    });\n\n    const edgesUpdated = edges.filter((edge) => edge.target !== id);\n    onUpdateNodes(nodesUpdated, edgesUpdated);\n  };\n\n  const getEdgeIndex = (id: string) => {\n    return edges.find((edge) => edge.target === id);\n  };\n\n  const getNodeDimensions = (id: string) => {\n    const node = nodes.find((node) => node.id === id);\n    let dimensions: NodeDimensions = { width: undefined, height: undefined };\n    if (!!node) {\n      dimensions = { width: node.width, height: node.height 
};\n    }\n\n    return dimensions;\n  };\n\n  const createNodeRef = (nodeId: string) => {\n    const nodeToDuplicate = nodes.find((node) => node.id === nodeId);\n\n    if (nodeToDuplicate) {\n      const newNodeId = createUniqNodeId(nodeToDuplicate.data.processorType);\n      if (nodeToDuplicate.data.nodeRef) {\n        nodeId = nodeToDuplicate.data.nodeRef;\n      }\n\n      nodeToDuplicate.data.metadata = {\n        refList: nodeToDuplicate.data.metadata?.refList\n          ? [...nodeToDuplicate.data.metadata.refList, newNodeId]\n          : [newNodeId],\n      };\n\n      const newNode = {\n        ...nodeToDuplicate,\n        id: newNodeId,\n        selected: false,\n        data: {\n          ...nodeToDuplicate.data,\n          name: newNodeId,\n          isDone: false,\n          lastRun: undefined,\n          nodeRef: nodeId,\n        },\n        position: {\n          x: nodeToDuplicate.position.x + DUPLICATED_NODE_OFFSET,\n          y: nodeToDuplicate.position.y + DUPLICATED_NODE_OFFSET,\n        },\n      };\n      const nodesUpdated = [...nodes, newNode];\n      const edgesUpdated = [...edges];\n      onUpdateNodes(nodesUpdated, edgesUpdated);\n    }\n  };\n\n  const duplicateNode = (nodeId: string) => {\n    const nodeToDuplicate = nodes.find((node) => node.id === nodeId);\n    if (nodeToDuplicate) {\n      const newNodeId = createUniqNodeId(nodeToDuplicate.data.processorType);\n\n      const deepClone = structuredClone(nodeToDuplicate);\n      deepClone.id = newNodeId;\n      deepClone.selected = false;\n      deepClone.data.name = newNodeId;\n      deepClone.data.isDone = false;\n      deepClone.data.lastRun = undefined;\n      deepClone.position.x += DUPLICATED_NODE_OFFSET;\n      deepClone.position.y += DUPLICATED_NODE_OFFSET;\n\n      const nodesUpdated = [...nodes, deepClone];\n      const edgesUpdated = [...edges];\n      onUpdateNodes(nodesUpdated, edgesUpdated);\n    }\n  };\n\n  const clearNodeOutput = (nodeId: string) => {\n    const nodeToUpdate = nodes.find((node) => node.id === nodeId);\n    if (nodeToUpdate) {\n      const nodesUpdated = nodes.map((node) => {\n        if (node.id === nodeId) {\n          return {\n            ...node,\n            data: {\n              ...node.data,\n              outputData: undefined,\n              lastRun: undefined,\n              isDone: false,\n            },\n          };\n        }\n        return node;\n      });\n      onUpdateNodes(nodesUpdated, edges);\n    }\n  };\n\n  function clearAllOutput() {\n    const nodesCleared = nodes.map((node) => {\n      node.data.outputData = undefined;\n      node.data.lastRun = undefined;\n      return node;\n    });\n    onUpdateNodes(nodesCleared, edges);\n  }\n\n  const removeNode = (nodeId: string) => {\n    const nodesUpdated = nodes.filter((node) => node.id !== nodeId);\n    const edgesUpdated = edges.filter(\n      (edge) => edge.source !== nodeId && edge.target !== nodeId,\n    );\n    onUpdateNodes(nodesUpdated, edgesUpdated);\n  };\n\n  const removeAll = () => {\n    onUpdateNodes([], []);\n  };\n\n  const findNode = (nodeId: string) => {\n    return nodes.find((node) => node.id === nodeId);\n  };\n\n  const updateNodeAppearance = (nodeId: string, appearance: NodeAppearance) => {\n    const nodeToUpdate = nodes.find((node) => node.id === nodeId);\n    if (nodeToUpdate) {\n      const nodesUpdated = nodes.map((node) => {\n        if (node.id === nodeId) {\n          return {\n            ...node,\n            data: {\n              ...node.data,\n              appearance: 
{\n                ...node.data.appearance,\n                ...appearance,\n              },\n            },\n          };\n        }\n        return node;\n      });\n      onUpdateNodes(nodesUpdated, edges);\n    }\n  };\n\n  return (\n    <NodeContext.Provider\n      value={{\n        runNode,\n        runAllNodes,\n        hasParent,\n        getIncomingEdges,\n        getOutgoingEdges,\n        removeNodeIncomingEdges,\n        removeEdgesByIds,\n        getEdgeIndex,\n        showOnlyOutput,\n        isRunning,\n        currentNodesRunning,\n        errorCount,\n        onUpdateNodeData,\n        onUpdateNodes,\n        getNodeDimensions,\n        duplicateNode,\n        createNodeRef,\n        clearNodeOutput,\n        clearAllOutput,\n        updateNodeAppearance,\n        overrideConfigForNode,\n        removeNode,\n        removeAll,\n        findNode,\n        nodes,\n        edges,\n        currentNodeIdSelected: currentNodeIdSelected,\n        setCurrentNodeIdSelected: setCurrentNodeIdSelected,\n      }}\n    >\n      {children}\n    </NodeContext.Provider>\n  );\n};\n"
  },
  {
    "path": "packages/ui/src/providers/SocketProvider.tsx",
    "content": "import {\n  createContext,\n  useState,\n  ReactNode,\n  useEffect,\n  useContext,\n} from \"react\";\nimport { io } from \"socket.io-client\";\nimport { getWsUrl } from \"../config/config\";\nimport {\n  Parameters,\n  getConfigParametersFlat,\n} from \"../components/popups/config-popup/parameters\";\nimport { toastInfoMessage } from \"../utils/toastUtils\";\nimport { useTranslation } from \"react-i18next\";\nimport { FlowEventOut, FlowSocket } from \"../sockets/flowSocket\";\n\nimport { FlowMetadata } from \"../layout/main-layout/AppLayout\";\nimport { AppConfig } from \"../components/popups/config-popup/configMetadata\";\n\nexport interface FlowEventData {\n  jsonFile: string;\n  nodeName?: string;\n  metadata?: FlowMetadata;\n}\n\nexport interface FlowEvent {\n  name: FlowEventOut;\n  data: FlowEventData;\n}\n\nexport type WSConfiguration = {\n  parameters?: Parameters;\n};\n\ninterface ISocketContext {\n  socket: FlowSocket | null;\n  config: WSConfiguration | null;\n  updateSocket: (config?: WSConfiguration) => void;\n  emitEvent: (event: FlowEvent) => boolean;\n  connect: () => void;\n  disconnect: () => void;\n}\n\ninterface SocketProviderProps {\n  children: ReactNode;\n}\n\nexport const SocketContext = createContext<ISocketContext>({\n  socket: null,\n  config: null,\n  updateSocket: (config?: WSConfiguration) => {},\n  emitEvent: (event: FlowEvent) => false,\n  connect: () => {},\n  disconnect: () => {},\n});\n\nexport const SocketProvider = ({ children }: SocketProviderProps) => {\n  const [socket, setSocket] = useState<FlowSocket | null>(null);\n  const [config, setConfig] = useState<WSConfiguration | null>(null);\n\n  const { t } = useTranslation(\"flow\");\n\n  useEffect(() => {\n    if (socket) {\n      socket.disconnect();\n    }\n\n    const config = {};\n\n    setConfig(config);\n\n    if (!!socket) {\n      return () => {\n        socket.close();\n      };\n    }\n  }, []);\n\n  function updateSocket(config?: WSConfiguration): void {\n    if (!!socket) {\n      socket.close();\n      createNewSocket(config);\n    } else {\n      if (config) setConfig(config);\n    }\n  }\n\n  function getActiveSocket(): FlowSocket | null {\n    if (!socket && !!config) {\n      return createNewSocket(config);\n    }\n    return socket;\n  }\n\n  function createNewSocket(configuration?: WSConfiguration) {\n    if (configuration) setConfig(configuration);\n\n    const newSocket = new FlowSocket(io(getWsUrl()));\n\n    const sendAppConfig = () => {\n      const appConfig: Partial<AppConfig> = JSON.parse(\n        localStorage.getItem(\"appConfig\") || \"{}\",\n      );\n\n      newSocket.emit(\"update_app_config\", appConfig);\n    };\n\n    //Fire on the very first connection and on every reconnection\n    newSocket.on(\"connect\", sendAppConfig);\n\n    setSocket(newSocket);\n    return newSocket;\n  }\n\n  function emitEvent(event: FlowEvent): boolean {\n    const activeSocket = getActiveSocket();\n\n    if (activeSocket) {\n      activeSocket.emit(event.name, {\n        ...event.data,\n        parameters: getConfigParametersFlat(),\n      });\n\n      return true;\n    }\n\n    return false;\n  }\n\n  function connect() {\n    getActiveSocket();\n  }\n\n  function disconnect() {\n    if (socket) {\n      socket.disconnect();\n    }\n\n    return false;\n  }\n\n  return (\n    <SocketContext.Provider\n      value={{\n        socket,\n        config,\n        updateSocket,\n        emitEvent,\n        connect,\n        disconnect,\n      }}\n    >\n      {children}\n    
</SocketContext.Provider>\n  );\n};\n"
  },
  {
    "path": "packages/ui/src/providers/ThemeProvider.tsx",
    "content": "// ThemeProvider.tsx\nimport React, {\n  createContext,\n  useState,\n  ReactNode,\n  useEffect,\n  CSSProperties,\n} from \"react\";\nimport { ThemeProvider as StyledThemeProvider } from \"styled-components\";\nimport { theme } from \"../components/shared/theme\";\nimport useLocalStorage from \"../hooks/useLocalStorage\";\n\ninterface ThemeContextType {\n  dark: boolean;\n  toggleTheme: () => void;\n  getStyle: () => any;\n}\n\nexport const ThemeContext = createContext<ThemeContextType>({\n  dark: false,\n  toggleTheme: () => {},\n  getStyle: () => ({}), // return an empty style object as a default\n});\ninterface ThemeProviderProps {\n  children: ReactNode;\n}\n\nexport const ThemeProvider = ({ children }: ThemeProviderProps) => {\n  const [dark, setDark] = useLocalStorage(\"darkMode\", true);\n\n  useEffect(() => {\n    // Toggle the \"dark\" class on <html> for Tailwind\n    document.documentElement.classList.toggle(\"dark\", dark);\n\n    // Update Mantine's color schema attribute on <html>\n    document.documentElement.setAttribute(\n      \"data-mantine-color-scheme\",\n      dark ? \"dark\" : \"light\",\n    );\n  }, [dark]);\n\n  const toggleTheme = () => {\n    setDark(!dark);\n  };\n\n  const getStyle = () => {\n    return dark ? theme.dark : theme.light;\n  };\n\n  return (\n    <ThemeContext.Provider value={{ dark, toggleTheme, getStyle }}>\n      <StyledThemeProvider theme={dark ? theme.dark : theme.light}>\n        {children}\n      </StyledThemeProvider>\n    </ThemeContext.Provider>\n  );\n};\n"
  },
  {
    "path": "packages/ui/src/providers/VisibilityProvider.tsx",
    "content": "import React, { createContext, useContext, useState, ReactNode } from \"react\";\n\nexport type VisibilityElement =\n  | \"sidebar\"\n  | \"minimap\"\n  | \"dragAndDropSidebar\"\n  | \"configPopup\";\n\nexport type SidepaneTab = \"json\" | \"topological\" | \"current_node\";\nexport type ConfigTab = \"user\" | \"display\" | \"app\";\n\nexport interface VisibilityContextState {\n  [key: string]: {\n    isVisible: boolean;\n    persistent?: boolean;\n    show: () => void;\n    hide: () => void;\n    toggle: () => void;\n  };\n}\n\ninterface VisibilityContextType {\n  getElement: (key: VisibilityElement) => VisibilityContextState[string];\n  sidepaneActiveTab: SidepaneTab;\n  setSidepaneActiveTab: (tab: SidepaneTab) => void;\n  configActiveTab: ConfigTab;\n  setConfigActiveTab: (tab: ConfigTab) => void;\n}\n\nexport const VisibilityContext = createContext<\n  VisibilityContextType | undefined\n>(undefined);\n\ninterface VisibilityProviderProps {\n  children: ReactNode;\n}\n\nconst VISBILITY_PROVIDER_PREFIX = \"vp-\";\n\nfunction getIsVisibleFromLocalStorage(key: string): boolean | null {\n  const value = localStorage.getItem(VISBILITY_PROVIDER_PREFIX + key);\n  return value ? JSON.parse(value) : null;\n}\n\nexport const VisibilityProvider: React.FC<VisibilityProviderProps> = ({\n  children,\n}) => {\n  const [visibilityState, setVisibilityState] =\n    useState<VisibilityContextState>({\n      sidebar: {\n        isVisible: false,\n        show: () => setVisibility(\"sidebar\", true),\n        hide: () => setVisibility(\"sidebar\", false),\n        toggle: () => toggleVisibility(\"sidebar\"),\n      },\n      minimap: {\n        isVisible: (getIsVisibleFromLocalStorage(\"minimap\") as boolean) ?? true,\n        persistent: true,\n        show: () => setVisibility(\"minimap\", true),\n        hide: () => setVisibility(\"minimap\", false),\n        toggle: () => toggleVisibility(\"minimap\"),\n      },\n      dragAndDropSidebar: {\n        isVisible: true,\n        show: () => setVisibility(\"dragAndDropSidebar\", true),\n        hide: () => setVisibility(\"dragAndDropSidebar\", false),\n        toggle: () => toggleVisibility(\"dragAndDropSidebar\"),\n      },\n      configPopup: {\n        isVisible: false,\n        show: () => setVisibility(\"configPopup\", true),\n        hide: () => setVisibility(\"configPopup\", false),\n        toggle: () => toggleVisibility(\"configPopup\"),\n      },\n    });\n\n  const [sidepaneActiveTab, setSidepaneActiveTab] =\n    useState<SidepaneTab>(\"json\");\n\n  const [configActiveTab, setConfigActiveTab] = useState<ConfigTab>(\"user\");\n\n  const setVisibility = (key: VisibilityElement, isVisible: boolean) => {\n    setVisibilityState((prevState) => {\n      if (visibilityState[key].persistent) {\n        localStorage.setItem(\n          VISBILITY_PROVIDER_PREFIX + key,\n          JSON.stringify(isVisible),\n        );\n      }\n\n      return {\n        ...prevState,\n        [key]: {\n          ...prevState[key],\n          isVisible,\n        },\n      };\n    });\n  };\n\n  const toggleVisibility = (key: VisibilityElement) => {\n    setVisibilityState((prevState) => {\n      if (prevState[key].persistent) {\n        localStorage.setItem(\n          VISBILITY_PROVIDER_PREFIX + key,\n          JSON.stringify(!prevState[key].isVisible),\n        );\n      }\n\n      return {\n        ...prevState,\n        [key]: {\n          ...prevState[key],\n          isVisible: !prevState[key].isVisible,\n        },\n      };\n    });\n  };\n\n  const 
getElement = (key: VisibilityElement) => {\n    return visibilityState[key];\n  };\n\n  const value = {\n    getElement: getElement,\n    sidepaneActiveTab: sidepaneActiveTab,\n    setSidepaneActiveTab: setSidepaneActiveTab,\n    configActiveTab: configActiveTab,\n    setConfigActiveTab: setConfigActiveTab,\n  };\n\n  return (\n    <VisibilityContext.Provider value={value}>\n      {children}\n    </VisibilityContext.Provider>\n  );\n};\n\nexport const useVisibility = (): VisibilityContextType => {\n  const context = useContext(VisibilityContext);\n  if (context === undefined) {\n    throw new Error(\"useVisibility must be used within a VisibilityProvider\");\n  }\n  return context;\n};\n"
  },
  {
    "path": "packages/ui/src/react-app-env.d.ts",
    "content": "/// <reference types=\"react-scripts\" />\n"
  },
  {
    "path": "packages/ui/src/reportWebVitals.ts",
    "content": "import { ReportHandler } from 'web-vitals';\n\nconst reportWebVitals = (onPerfEntry?: ReportHandler) => {\n  if (onPerfEntry && onPerfEntry instanceof Function) {\n    import('web-vitals').then(({ getCLS, getFID, getFCP, getLCP, getTTFB }) => {\n      getCLS(onPerfEntry);\n      getFID(onPerfEntry);\n      getFCP(onPerfEntry);\n      getLCP(onPerfEntry);\n      getTTFB(onPerfEntry);\n    });\n  }\n};\n\nexport default reportWebVitals;\n"
  },
  {
    "path": "packages/ui/src/services/tabStorage.ts",
    "content": "import { FlowTab } from \"../layout/main-layout/AppLayout\";\n\nconst LOCAL_STORAGE_TAB_KEY = \"flowTabs\";\nconst LOCAL_STORAGE_CURRENT_TAB_KEY = \"currentTab\";\n\nexport function getCurrentTabIndex() {\n  const savedTabIndex = localStorage.getItem(LOCAL_STORAGE_CURRENT_TAB_KEY);\n  if (!savedTabIndex) return undefined;\n  return savedTabIndex;\n}\n\nexport function saveCurrentTabIndex(index: number) {\n  localStorage.setItem(LOCAL_STORAGE_CURRENT_TAB_KEY, index.toString());\n}\n\nexport function getLocalTabs() {\n  const savedTabs = localStorage.getItem(LOCAL_STORAGE_TAB_KEY);\n  if (!savedTabs) return undefined;\n  return JSON.parse(savedTabs)?.tabs as FlowTab[];\n}\n\nexport function saveTabsLocally(tabs: FlowTab[]) {\n  if (!tabs) return;\n  if (tabs.length >= 1 && tabs[0].nodes.length !== 0) {\n    const tabsToStore = { tabs: tabs };\n    localStorage.setItem(LOCAL_STORAGE_TAB_KEY, JSON.stringify(tabsToStore));\n  }\n}\n\nexport async function getAllTabs() {\n  const savedFlowTabs = getLocalTabs();\n  return savedFlowTabs ?? [];\n}\n"
  },
  {
    "path": "packages/ui/src/setupTests.ts",
    "content": "// jest-dom adds custom jest matchers for asserting on DOM nodes.\n// allows you to do things like:\n// expect(element).toHaveTextContent(/react/i)\n// learn more: https://github.com/testing-library/jest-dom\nimport \"@testing-library/jest-dom\";\n"
  },
  {
    "path": "packages/ui/src/sockets/flowEventTypes.ts",
    "content": "export interface FlowOnProgressEventData<T = any> {\n  instanceName: string;\n  output: T;\n  isDone: boolean;\n}\n\nexport interface FlowOnErrorEventData {\n  instanceName: string;\n  nodeName: string;\n  error: string;\n}\n\nexport interface FlowOnCurrentNodeRunningEventData {\n  instanceName: string;\n}\n"
  },
  {
    "path": "packages/ui/src/sockets/flowSocket.ts",
    "content": "import { Socket } from \"socket.io-client\";\n\nexport type FlowEventIn =\n  | \"connect\"\n  | \"progress\"\n  | \"error\"\n  | \"run_end\"\n  | \"current_node_running\"\n  | \"reconnect_error\"\n  | \"disconnect\";\n\nexport type FlowEventOut = \"run_node\" | \"process_file\" | \"update_app_config\";\n\nexport class FlowSocket {\n  private socket: Socket;\n\n  constructor(socket: Socket) {\n    this.socket = socket;\n  }\n\n  public on(event: FlowEventIn, handler: (...args: any[]) => void): void {\n    this.socket.on(event, handler);\n  }\n\n  public off(event: FlowEventIn, handler: (...args: any[]) => void): void {\n    this.socket.off(event, handler);\n  }\n\n  public emit(event: FlowEventOut, ...args: any[]): void {\n    this.socket.emit(event, ...args);\n  }\n\n  public connect(): void {\n    if (!this.socket.connected) {\n      this.socket.connect();\n    }\n  }\n\n  public disconnect(): void {\n    if (this.socket.connected) {\n      this.socket.disconnect();\n    }\n  }\n\n  public close(): void {\n    this.socket.close();\n  }\n}\n"
  },
  {
    "path": "packages/ui/src/utils/evaluateConditions.ts",
    "content": "import {\n  Condition,\n  ConditionGroup,\n  FieldCondition,\n  Operator,\n} from \"../nodes-configuration/types\";\n\n/**\n * Evaluates a single condition against the form values.\n * @param condition - A single Condition object.\n * @param formValues - The current form values.\n * @returns Boolean indicating if the condition is met.\n */\nfunction evaluateSingleCondition(\n  condition: Condition,\n  formValues: Record<string, any>,\n): boolean {\n  const fieldValue = formValues[condition.field];\n\n  switch (condition.operator) {\n    case \"equals\":\n      return fieldValue === condition.value;\n    case \"not equals\":\n      return fieldValue !== condition.value;\n    case \"in\":\n      return (\n        Array.isArray(condition.value) && condition.value.includes(fieldValue)\n      );\n    case \"not in\":\n      return (\n        Array.isArray(condition.value) && !condition.value.includes(fieldValue)\n      );\n    case \"greater than\":\n      return typeof fieldValue === \"number\" && fieldValue > condition.value;\n    case \"less than\":\n      return typeof fieldValue === \"number\" && fieldValue < condition.value;\n    case \"exists\":\n      return fieldValue !== undefined && fieldValue !== null;\n    case \"not exists\":\n      return fieldValue === undefined || fieldValue === null;\n    default:\n      console.warn(`Unsupported operator: ${condition.operator}`);\n      return false;\n  }\n}\n\n/**\n * Recursively evaluates FieldCondition against the form values.\n * @param condition - A FieldCondition object (Condition or ConditionGroup).\n * @param formValues - The current form values.\n * @returns Boolean indicating if the condition is met.\n */\nexport function evaluateCondition(\n  condition: FieldCondition | undefined,\n  formValues: Record<string, any>,\n): boolean {\n  if (!condition) return true; // No condition means always show\n\n  if (\"logic\" in condition && \"conditions\" in condition) {\n    const { logic, conditions } = condition as ConditionGroup;\n\n    if (!Array.isArray(conditions) || conditions.length === 0) {\n      console.warn(\"ConditionGroup has no conditions.\");\n      return true;\n    }\n\n    const results = conditions.map((cond) =>\n      evaluateCondition(cond, formValues),\n    );\n\n    if (logic === \"AND\") {\n      return results.every(Boolean);\n    } else if (logic === \"OR\") {\n      return results.some(Boolean);\n    } else {\n      console.warn(`Unsupported logic operator: ${logic}`);\n      return false;\n    }\n  }\n\n  // It's a single Condition\n  return evaluateSingleCondition(condition as Condition, formValues);\n}\n"
  },
  {
    "path": "packages/ui/src/utils/flowChecker.ts",
    "content": "import { NodeData } from \"../components/nodes/types/node\";\nimport { Field } from \"../nodes-configuration/types\";\nimport { Node } from \"reactflow\";\n\nexport const isFieldLinkedToAnotherNode = (\n  field: Field,\n  nodeData: NodeData,\n) => {\n  const nodeFieldsWithSameName = nodeData.inputs?.find(\n    (input) => input.inputName === field.name,\n  );\n  return nodeFieldsWithSameName?.inputNode;\n};\n\nexport const getRequiredNodesForLaunch = (\n  flowFile: NodeData[],\n  nodeName: string,\n  requiredNodes?: string[],\n) => {\n  if (!requiredNodes) requiredNodes = [];\n\n  const currentNode = flowFile.find((node) => node.name === nodeName);\n  const nodesLinked = currentNode?.inputs?.map((input) => input.inputNode);\n\n  nodesLinked?.forEach((node) => {\n    getRequiredNodesForLaunch(flowFile, node, requiredNodes);\n  });\n\n  requiredNodes.push(nodeName);\n\n  return requiredNodes;\n};\n\nexport const getNodeMissingFields = (nodeData: NodeData) => {\n  const missingFields: string[] = [];\n\n  nodeData.config?.fields?.forEach((field) => {\n    if (\n      field.required &&\n      !nodeData[field.name] &&\n      !isFieldLinkedToAnotherNode(field, nodeData)\n    ) {\n      missingFields?.push(field.name);\n    }\n  });\n\n  return missingFields;\n};\n\nexport function createErrorMessageForMissingFields(\n  nodesInError: Node[],\n  t: any,\n) {\n  let errorMessage = t(\"MissingFieldsMessage\") + \"\\n\\n\";\n\n  nodesInError.forEach((node) => {\n    const nodeData = node.data as NodeData;\n    errorMessage += `${t(\"Node\")}: ${nodeData.config.nodeName}\\n`;\n    errorMessage += `${t(\"MissingFields\")}:\\n`;\n\n    nodeData.missingFields?.forEach((field) => {\n      errorMessage += ` - ${field}\\n`;\n    });\n\n    errorMessage += \"\\n\\n\";\n  });\n  return errorMessage;\n}\n\nexport function getNodeInError(\n  flowFile: NodeData[],\n  nodesSorted: Node[],\n  name?: string,\n) {\n  let nodesToCheck = flowFile.map((node) => node.name);\n  if (!!name) {\n    nodesToCheck = getRequiredNodesForLaunch(flowFile, name);\n  }\n\n  const nodesToUpdate = nodesSorted.filter((node) =>\n    nodesToCheck.includes(node.data.name),\n  );\n\n  const nodesInError = nodesToUpdate\n    .map((node) => {\n      const flowFileCurrentNodeData = flowFile.find(\n        (data) => data.name === node.data.name,\n      );\n      if (flowFileCurrentNodeData) {\n        node.data.missingFields = getNodeMissingFields(flowFileCurrentNodeData);\n      }\n      return node;\n    })\n    .filter((node) => node.data.missingFields?.length > 0);\n  return nodesInError;\n}\n"
  },
  {
    "path": "packages/ui/src/utils/flowUtils.ts",
    "content": "import { Node, Edge } from \"reactflow\";\nimport { Field } from \"../nodes-configuration/types\";\nimport { getConfigViaType } from \"../nodes-configuration/nodeConfig\";\nimport { NodeData } from \"../components/nodes/types/node\";\nimport { FlowTab } from \"../layout/main-layout/AppLayout\";\nimport { evaluateCondition } from \"./evaluateConditions\";\n\nexport type BasicNode = Pick<\n  Node,\n  \"id\" | \"data\" | \"position\" | \"type\" | \"width\" | \"height\"\n>;\n\nexport type BasicEdge = Pick<\n  Edge,\n  \"id\" | \"source\" | \"sourceHandle\" | \"target\" | \"targetHandle\" | \"type\"\n>;\n\nconst CONFIG = {\n  FLOW_VERSION: \"1.0.0\",\n};\n\nexport const handleInPrefix = \"handle-in\";\nexport const handleOutPrefix = \"handle-out\";\nexport const handleSeparator = \"-\";\nconst indexKeyHandleOut = 2;\nconst indexKeyHandleIn = 2;\n\nexport function getConfig() {\n  return CONFIG;\n}\n\nexport function isCompatibleConfigVersion(fileVersion: string | undefined) {\n  return fileVersion === CONFIG.FLOW_VERSION;\n}\n\nexport const getKeyFromHandleName = (name: string) =>\n  name.split(handleSeparator)[indexKeyHandleOut];\n\nexport const generateIdForHandle = (key: number, isOutput?: boolean) =>\n  !isOutput\n    ? `${handleInPrefix}${handleSeparator}${key}`\n    : `${handleOutPrefix}${handleSeparator}${key}`;\n\nexport const generateIdForHandles = (nbHandle: number, isOutput?: boolean) => {\n  const handles = [];\n  for (let i = 0; i < nbHandle; ++i) {\n    handles.push(generateIdForHandle(i, isOutput));\n  }\n  return handles;\n};\n\nexport function nodesTopologicalSort(\n  nodes: BasicNode[],\n  edges: BasicEdge[],\n): Node[] {\n  const visited = new Set<string>();\n  const sortedNodes: Node[] = [];\n\n  function visit(node: Node) {\n    if (visited.has(node.id)) return;\n\n    visited.add(node.id);\n\n    const edgesToNode = edges.filter((edge) => edge.target === node.id);\n\n    edgesToNode.forEach((edge) => {\n      visit(nodes.find((node) => node.id === edge.source) as Node);\n    });\n\n    sortedNodes.push(node);\n  }\n\n  nodes.forEach((node) => visit(node));\n\n  return sortedNodes;\n}\n\nexport function findParents(node: BasicNode, edges: BasicEdge[]) {\n  return edges\n    .filter((edge) => edge.target === node.id)\n    .map((edge) => edge.source);\n}\n\nexport function formatFlow(nodes: BasicNode[], edges: BasicEdge[]) {\n  const nodesSorted = nodesTopologicalSort(nodes, edges);\n\n  const levelDict: any = {};\n\n  nodesSorted.forEach((node) => {\n    const parents = findParents(node, edges);\n    if (parents.length === 0) {\n      levelDict[node.id] = 0;\n    } else {\n      let maxParentLevel = Math.max(\n        ...parents.map((parent) => levelDict[parent]),\n      );\n      levelDict[node.id] = maxParentLevel + 1;\n    }\n  });\n\n  nodes.forEach((node) => {\n    node.position.x = 700 * levelDict[node.id];\n    node.position.y =\n      400 *\n      Object.keys(levelDict)\n        .filter((n: string) => levelDict[n] === levelDict[node.id])\n        .indexOf(node.id);\n  });\n\n  return nodes;\n}\n\nexport const getTargetHandleKey: any = (edge: BasicEdge) => {\n  return edge?.targetHandle?.split(handleSeparator)[indexKeyHandleIn];\n};\n\nexport function clearSelectedNodes(nodes: Node[]) {\n  return nodes.map((node) => {\n    node.data.outputData = undefined;\n    node.data.lastRun = undefined;\n    return node;\n  });\n}\n\nfunction getConfigEssentials(config: any) {\n  const {\n    fields,\n    nodeName,\n    inputNames,\n    hasInputHandle,\n    
outputType,\n    showHandlesNames,\n  } = config || {};\n  return {\n    fields,\n    nodeName,\n    inputNames,\n    hasInputHandle,\n    outputType,\n    showHandlesNames,\n  };\n}\n\nexport function convertFlowToJson(\n  nodes: BasicNode[],\n  edges: BasicEdge[],\n  withCoordinates?: boolean,\n  withConfig?: boolean,\n): NodeData[] {\n  return nodes.map((node: BasicNode) => {\n    const { data, id, position } = node;\n    const { config, variantConfig, ...nodeValues } = data;\n\n    const inputEdges = edges.filter((edge: any) => edge.target === id);\n\n    const inputs = inputEdges.map((edge: any) => {\n      return convertEdgeToNodeInput(edge, nodes, node);\n    });\n\n    let nodeJson = { ...nodeValues, inputs };\n\n    if (withConfig) {\n      nodeJson.config = getConfigEssentials(config);\n    }\n\n    if (withCoordinates) {\n      nodeJson = { ...nodeJson, x: position.x, y: position.y };\n    }\n    return nodeJson;\n  });\n}\n\nexport function getInputNamesWithValidCondition(node: BasicNode) {\n  const inputNamesWithValidCondition = !!node?.data?.config?.inputNames\n    ? node.data.config.inputNames.filter((inputName: string, index: number) => {\n        const condition = node.data.config.fields[index]?.condition;\n        if (!!condition) {\n          return evaluateCondition(condition, node.data);\n        }\n\n        return true;\n      })\n    : undefined;\n\n  return inputNamesWithValidCondition;\n}\n\nexport function convertEdgeToNodeInput(\n  edge: any,\n  nodes: BasicNode[],\n  node: BasicNode,\n) {\n  const inputId = edge?.source || \"\";\n\n  const keySplitted =\n    edge?.sourceHandle?.split(handleSeparator)[indexKeyHandleOut];\n  const inputNodeOutputKey =\n    !keySplitted || isNaN(+keySplitted) ? undefined : +keySplitted;\n\n  const inputNode =\n    nodes.find((node: any) => node.id === inputId)?.data.name || \"\";\n\n  const targetHandleKey = getTargetHandleKey(edge);\n\n  const inputNamesWithValidCondition = !!node.data.config?.inputNames\n    ? node.data.config.inputNames.filter((inputName: string, index: number) => {\n        const condition = node.data.config.fields[index]?.condition;\n        if (!!condition) {\n          return evaluateCondition(condition, node.data);\n        }\n\n        return true;\n      })\n    : undefined;\n\n  return {\n    inputName: !!inputNamesWithValidCondition\n      ? inputNamesWithValidCondition[targetHandleKey]\n      : undefined,\n    inputNode,\n    inputNodeOutputKey,\n  };\n}\n\nexport function getFieldsWithValidCondition(\n  node: BasicNode,\n): Field[] | undefined {\n  const fieldsWithValidCondition = !!node?.data?.config?.fields\n    ? node.data.config.fields.filter((field: Field) => {\n        const condition = field.condition;\n        if (!!condition) {\n          return evaluateCondition(condition, node.data);\n        }\n\n        return true;\n      })\n    : undefined;\n\n  return fieldsWithValidCondition;\n}\n\nexport function convertJsonToFlow(json: any): {\n  nodes: BasicNode[];\n  edges: BasicEdge[];\n} {\n  const nodes: BasicNode[] = [];\n  const edges: BasicEdge[] = [];\n\n  // Create nodes\n  json.forEach((node: any) => {\n    const { x, y, ...nodeData } = node;\n\n    nodes.push({\n      id: node.name,\n      type: nodeData.processorType,\n      position: { x, y },\n      data: {\n        ...nodeData,\n        config: !!nodeData.config\n          ? 
nodeData.config\n          : getConfigViaType(nodeData.processorType),\n      },\n    });\n  });\n\n  // Create edges\n  json.forEach((node: any) => {\n    if (node.inputs) {\n      node.inputs.forEach((input: any, index: number) => {\n        let targetHandleIndex = index;\n\n        const fields: Field[] = node.config?.fields;\n\n        const fieldsWithValidCondition = !!fields\n          ? fields.filter((field: Field) => {\n              const condition = field.condition;\n              if (!!condition) {\n                return evaluateCondition(condition, node);\n              }\n\n              return true;\n            })\n          : undefined;\n\n        if (!!fieldsWithValidCondition) {\n          targetHandleIndex = fieldsWithValidCondition.findIndex(\n            (field) => field.name === input.inputName,\n          );\n          if (targetHandleIndex === -1) {\n            targetHandleIndex = index;\n          }\n        }\n        edges.push({\n          id: `${input.inputNode}-to-${node.name}`,\n          sourceHandle: generateIdForHandle(\n            input.inputNodeOutputKey ?? 0,\n            true,\n          ),\n          targetHandle: generateIdForHandle(targetHandleIndex),\n          target: node.name,\n          source: input.inputNode,\n          type: \"buttonedge\",\n        });\n      });\n    }\n\n    //For old files\n    if (node.input && !node.inputs) {\n      edges.push({\n        id: `${node.input}-to-${node.name}`,\n        sourceHandle: !!node.inputKey\n          ? generateIdForHandle(node.inputKey)\n          : undefined,\n        source: node.input,\n        target: node.name,\n        type: \"buttonedge\",\n      });\n    }\n  });\n\n  return { nodes, edges };\n}\n\nexport function migrateConfig(oldConfig: FlowTab) {\n  if (!oldConfig.metadata) {\n    oldConfig.nodes.forEach((node) => {\n      // arrangeOldType(node);\n      // arrangeOldFields(node.data);\n    });\n  }\n}\n"
  },
  {
    "path": "packages/ui/src/utils/mappings.tsx",
    "content": "import { NodeProps } from \"reactflow\";\nimport FileUploadNode from \"../components/nodes/FileUploadNode\";\nimport GenericNode from \"../components/nodes/GenericNode\";\nimport AIDataSplitterNode from \"../components/nodes/AIDataSplitterNode\";\nimport NodeWrapper from \"../components/nodes/NodeWrapper\";\nimport TransitionNode from \"../components/nodes/TransitionNode\";\nimport ReplicateNode from \"../components/nodes/ReplicateNode\";\nimport { nodeConfigs } from \"../nodes-configuration/nodeConfig\";\nimport DisplayNode from \"../components/nodes/DisplayNode\";\n\nlet allNodeTypes: string[] = [];\n\n/**\n * Nodes types that uses specific components, instead of the generic one.\n */\nexport const specificNodeTypes: Partial<Record<string, React.FC<NodeProps>>> = {\n  \"file-drop\": FileUploadNode,\n  \"ai-data-splitter\": AIDataSplitterNode,\n  file: FileUploadNode,\n  replicate: ReplicateNode,\n  transition: TransitionNode,\n  display: DisplayNode,\n};\n\nexport const loadAllNodesTypes = () => {\n  allNodeTypes = !!nodeConfigs\n    ? Object.keys(nodeConfigs)\n        .filter((key: string) => {\n          return !!nodeConfigs[key]?.processorType;\n        })\n        .map((key: string) => {\n          return nodeConfigs[key]?.processorType as string;\n        })\n    : [];\n\n  allNodeTypes = allNodeTypes.concat(Object.keys(specificNodeTypes));\n};\n\n/**\n * Generate the mapping used by ReactFlow.\n *\n * @returns The complete mapping of all node types to their respective components.\n */\nexport const getAllNodeTypesComponentMapping = () => {\n  const completeNodeTypes: Record<string, React.FC<NodeProps>> = {} as Record<\n    string,\n    React.FC<NodeProps>\n  >;\n\n  allNodeTypes.forEach((type) => {\n    completeNodeTypes[type] = specificNodeTypes[type] || GenericNode;\n  });\n\n  return completeNodeTypes;\n};\n\nexport const getAllNodeWithEaseOut = (): Record<\n  string,\n  React.FC<NodeProps>\n> => {\n  const completeNodeTypes: Record<string, React.FC<NodeProps>> = {} as Record<\n    string,\n    React.FC<NodeProps>\n  >;\n\n  allNodeTypes.forEach((type: string) => {\n    const NodeComponent = specificNodeTypes[type] || GenericNode;\n\n    completeNodeTypes[type] = (props: NodeProps) => (\n      // <EaseOut key={props.id}>\n      <NodeWrapper nodeId={props.id}>\n        <NodeComponent {...props} />\n      </NodeWrapper>\n      // </EaseOut>\n    );\n  });\n\n  return completeNodeTypes;\n};\n"
  },
  {
    "path": "packages/ui/src/utils/navigatorUtils.ts",
    "content": "export const copyToClipboard = (text: string) => {\n  if (window.isSecureContext) {\n    navigator.clipboard\n      .writeText(text)\n      .then(() => {\n        console.log(\"Text copied to clipboard successfully!\");\n      })\n      .catch((err) => {\n        console.error(\"Failed to copy text: \", err);\n      });\n  } else {\n    const textarea = document.createElement(\"textarea\");\n    textarea.value = text;\n\n    textarea.style.position = \"absolute\";\n    textarea.style.left = \"-99999999px\";\n\n    document.body.prepend(textarea);\n\n    textarea.select();\n\n    try {\n      document.execCommand(\"copy\");\n    } catch (err) {\n      console.log(err);\n    } finally {\n      textarea.remove();\n    }\n  }\n};\n"
  },
  {
    "path": "packages/ui/src/utils/nodeConfigurationUtils.ts",
    "content": "import { NodeData } from \"../components/nodes/types/node\";\nimport { Field } from \"../nodes-configuration/types\";\n\nexport function getAdequateConfigFromDiscriminators(nodeData: NodeData) {\n  const discriminatorValues = nodeData.variantConfig?.discriminatorFields.map(\n    (discr) => nodeData[discr],\n  );\n\n  const newConfig = nodeData.variantConfig?.subConfigurations.find((config) => {\n    const configDiscriminatorValues = Object.keys(config.discriminators).map(\n      (key) => config.discriminators[key],\n    );\n\n    return (\n      JSON.stringify(configDiscriminatorValues) ===\n      JSON.stringify(discriminatorValues)\n    );\n  });\n  return structuredClone(newConfig);\n}\n\nexport const hasDiscriminatorChanged = (\n  fieldName: string,\n  nodeData: NodeData,\n) => {\n  if (!!nodeData.variantConfig) {\n    return nodeData.variantConfig.discriminatorFields.includes(fieldName);\n  }\n};\n\nexport function getDefaultOptions(fields: Field[], data: NodeData) {\n  const defaultOptions: any = {};\n\n  //Default options\n  fields\n    .filter(\n      (field) =>\n        field.options?.find((option) => option.default) && !data[field.name],\n    )\n    .forEach((field) => {\n      defaultOptions[field.name] = field.options?.find(\n        (option) => option.default,\n      )?.value;\n    });\n\n  //Default values\n  fields\n    .filter((field) => field.defaultValue != null && data[field.name] == null)\n    .forEach((field) => {\n      defaultOptions[field.name] = field.defaultValue;\n    });\n\n  return defaultOptions;\n}\n\nexport function getNbInputs(data: NodeData, fields?: Field[]) {\n  if (!!data.config.inputNames) {\n    return data.config.inputNames.length;\n  }\n  if (!!fields && fields.some((field) => field.hasHandle)) {\n    return fields.length;\n  }\n  return 1;\n}\n\nexport function getNbOutputs(data: NodeData) {\n  return data.outputData != null && typeof data.outputData !== \"string\"\n    ? data.outputData.length\n    : 1;\n}\n"
  },
  {
    "path": "packages/ui/src/utils/nodeUtils.ts",
    "content": "import { Node } from \"reactflow\";\nimport { getConfigViaType } from \"../nodes-configuration/nodeConfig\";\n\nexport const generatedIdIdentifier = \"#\";\n\nexport const createUniqNodeId = (suffix: string) => {\n  return (\n    Math.random().toString(36).substr(2, 9) + generatedIdIdentifier + suffix\n  );\n};\n\nexport const createNewNode = (\n  type: string,\n  { x, y }: { x: number; y: number } = { x: 0, y: 0 },\n  additionnalData: any = {},\n  additionnalConfig: any = {},\n) => {\n  const id = createUniqNodeId(type);\n\n  const typeConfig = getConfigViaType(type);\n\n  const inputNames =\n    !!typeConfig && !!typeConfig.fields && !typeConfig.inputNames\n      ? typeConfig.fields.map((field) => field.name)\n      : typeConfig?.inputNames;\n\n  const newNode: Node = {\n    id,\n    type,\n    data: {\n      name: id,\n      processorType: type,\n      ...additionnalData,\n      config: {\n        ...typeConfig,\n        inputNames: inputNames,\n        ...additionnalConfig,\n      },\n    },\n    position: { x, y },\n  };\n\n  return newNode;\n};\n"
  },
  {
    "path": "packages/ui/src/utils/openAPIUtils.ts",
    "content": "import { Field } from \"../nodes-configuration/types\";\nimport { fieldHasHandle } from \"../nodes-configuration/nodeConfig\";\n\nexport interface OpenApiSchema {\n  components: {\n    schemas: {\n      [key: string]: any;\n    };\n  };\n}\n\nexport interface Config {\n  schema: OpenApiSchema;\n  modelId: string;\n}\n\nexport function getSchemaFromConfig(config: Config, schemaName: string) {\n  return config.schema.components.schemas[schemaName];\n}\n\nconst getNodeFieldTypeFromProp: (prop: any) => Field[\"type\"] = (prop: any) => {\n  if (prop.allOf != null || prop.enum != null) {\n    return \"select\";\n  }\n  if (prop.maximum != null && prop.minimum != null) {\n    return \"slider\";\n  }\n  if (prop.type === \"boolean\") return \"boolean\";\n  if (prop.type === \"integer\" || prop.type == \"number\") return \"numericfield\";\n\n  return \"input\";\n};\n\nfunction resolveReference(ref: string, globalSchema?: OpenApiSchema): any {\n  if (!globalSchema) {\n    throw new Error(\"Global schema is required to resolve references\");\n  }\n\n  const refPath = ref.replaceAll(\"#/\", \"\").split(\"/\");\n\n  let schema = globalSchema as any;\n  for (const path of refPath) {\n    if (path) {\n      schema = schema[path];\n      if (!schema) {\n        throw new Error(`Reference ${ref} not found in schema`);\n      }\n    }\n  }\n  return schema;\n}\n\nexport function convertOpenAPISchemaToNodeConfig(schema: any, config?: Config) {\n  const requiredFields = schema.required || [];\n\n  return Object.entries(schema.properties)\n    .map(([name, prop]: [string, any]) => {\n      let options;\n      if (prop.allOf != null) {\n        options = prop.allOf.flatMap((refObj: any) => {\n          if (refObj.$ref) {\n            const resolvedSchema = resolveReference(\n              refObj.$ref,\n              config?.schema,\n            );\n            return resolvedSchema.enum.map((value: string) => {\n              return {\n                label: \"\" + value,\n                value,\n                default: value === prop.default,\n              };\n            });\n          } else {\n            return [refObj];\n          }\n        });\n      }\n\n      if (prop.enum != null) {\n        options = prop.enum.map((value: string) => ({\n          label: value,\n          value,\n        }));\n      }\n\n      const fieldType = getNodeFieldTypeFromProp(prop);\n\n      const field: Field = {\n        name,\n        description: prop.description,\n        type: fieldType,\n        label: name,\n        // placeholder: prop.description,\n        defaultValue: prop.default,\n        max: prop.maximum,\n        min: prop.minimum,\n        hasHandle: fieldHasHandle(fieldType),\n        isLinked: false,\n        required: requiredFields.includes(name),\n        options: options,\n      };\n\n      return field;\n    })\n    .sort((a, b) => {\n      if (a.required && !b.required) {\n        return -1;\n      }\n      if (!a.required && b.required) {\n        return 1;\n      }\n      return 0;\n    })\n    .filter((field) => !!field.name);\n}\n"
  },
  {
    "path": "packages/ui/src/utils/toastUtils.tsx",
    "content": "import { IconType } from \"react-icons/lib\";\nimport { toast } from \"react-toastify\";\n\nexport function toastInfoMessage(message: string, id?: string) {\n  toast.info(message, {\n    toastId: id,\n    position: \"top-center\",\n    autoClose: 5000,\n    hideProgressBar: false,\n    closeOnClick: true,\n    pauseOnHover: true,\n    draggable: true,\n    progress: undefined,\n    theme: \"dark\",\n  });\n}\n\nexport function toastErrorMessage(message: string) {\n  toast.error(\n    <div className=\"whitespace-pre-line text-center text-sm\"> {message} </div>,\n    {\n      position: \"top-center\",\n      autoClose: 5000,\n      hideProgressBar: false,\n      closeOnClick: true,\n      pauseOnHover: true,\n      draggable: true,\n      progress: undefined,\n      theme: \"dark\",\n    },\n  );\n}\n\nexport function toastFastSuccessMessage(message: string) {\n  toast.success(message, {\n    position: \"top-center\",\n    autoClose: 500,\n    hideProgressBar: true,\n    closeOnClick: true,\n    pauseOnHover: false,\n    draggable: false,\n    progress: undefined,\n    theme: \"dark\",\n  });\n}\n\nexport function toastFastInfoMessage(message: string) {\n  toast.info(message, {\n    position: \"top-center\",\n    autoClose: 500,\n    hideProgressBar: true,\n    closeOnClick: true,\n    pauseOnHover: false,\n    draggable: false,\n    progress: undefined,\n    theme: \"dark\",\n  });\n}\n\nexport function toastCustomIconInfoMessage(message: string, icon: IconType) {\n  toast.info(message, {\n    position: \"top-center\",\n    autoClose: 5000,\n    hideProgressBar: false,\n    closeOnClick: true,\n    pauseOnHover: true,\n    draggable: true,\n    progress: undefined,\n    theme: \"dark\",\n    icon: icon,\n  });\n}\n"
  },
  {
    "path": "packages/ui/src/vite-env.d.ts",
    "content": "/// <reference types=\"vite/client\" />\n"
  },
  {
    "path": "packages/ui/tailwind.config.js",
    "content": "/** @type {import('tailwindcss').Config} */\nmodule.exports = {\n  mode: \"jit\",\n  content: [\"./src/**/*.{js,jsx,ts,tsx}\"],\n  theme: {\n    extend: {},\n  },\n  plugins: [],\n};\n"
  },
  {
    "path": "packages/ui/test/e2e/intro-flow.spec.ts",
    "content": "import { test, expect } from \"@playwright/test\";\nimport { baseURL, waitForAppInitialRender } from \"../utils\";\n\ntest(\"initial gpt node is present after loading\", async ({ page }) => {\n  await page.goto(baseURL);\n  await waitForAppInitialRender(page);\n\n  const mainContent = await page.$(\"#main-content\");\n  expect(mainContent).not.toBeNull();\n\n  const reactFlow = await page.waitForSelector(\".reactflow-wrapper\", {\n    state: \"visible\",\n  });\n  expect(reactFlow).not.toBeNull();\n\n  const gptNode = await page.waitForSelector(\".react-flow__node-llm-prompt\", {\n    state: \"visible\",\n  });\n\n  expect(gptNode).not.toBeNull();\n});\n"
  },
  {
    "path": "packages/ui/test/e2e/loading-screen.spec.ts",
    "content": "import { test, expect } from \"@playwright/test\";\nimport { baseURL } from \"../utils\";\n\ntest(\"renders the loading screen initially\", async ({ page }) => {\n  await page.goto(baseURL);\n\n  await page.waitForSelector(\"#loading-screen\", { state: \"visible\" });\n  const loadingScreen = await page.$(\"#loading-screen\");\n  expect(loadingScreen).not.toBeNull();\n});\n"
  },
  {
    "path": "packages/ui/test/e2e/main-content.spec.ts",
    "content": "import { test, expect } from \"@playwright/test\";\nimport { baseURL, waitForAppInitialRender } from \"../utils\";\n\ntest(\"renders the main content after loading\", async ({ page }) => {\n  await page.goto(baseURL);\n  await waitForAppInitialRender(page);\n\n  const mainContent = await page.$(\"#main-content\");\n  expect(mainContent).not.toBeNull();\n});\n"
  },
  {
    "path": "packages/ui/test/e2e/sidebar-default-nodes.spec.ts",
    "content": "import { test, expect } from \"@playwright/test\";\nimport { baseURL, waitForAppInitialRender } from \"../utils\";\n\ntest(\"default nodes are loaded in sidebar\", async ({ page }) => {\n  await page.goto(baseURL);\n  await waitForAppInitialRender(page);\n\n  const textNodeSidebar = await page.locator(\"text=Text\").first();\n  const webNodeSidebar = await page.locator(\"text=Web Extractor\").first();\n  const fileNodeSidebar = await page.locator(\"text=File\").first();\n\n  const textNodeContent = await textNodeSidebar.textContent();\n  const webNodeContent = await webNodeSidebar.textContent();\n  const fileNodeContent = await fileNodeSidebar.textContent();\n\n  expect(textNodeContent).toContain(\"Text\");\n  expect(webNodeContent).toContain(\"Web Extractor\");\n  expect(fileNodeContent).toContain(\"File\");\n});\n\ntest(\"hidden nodes are not loaded in sidebar\", async ({ page }) => {\n  await page.goto(baseURL);\n  await waitForAppInitialRender(page);\n\n  const aiActionNodeSidebar = await page.locator(\"text=AI Action\");\n\n  const aiActionNodeCount = await aiActionNodeSidebar.count();\n\n  expect(aiActionNodeCount).toBe(0);\n});\n"
  },
  {
    "path": "packages/ui/test/e2e/sidebar-extensions-nodes.spec.ts",
    "content": "import { test, expect } from \"@playwright/test\";\nimport { baseURL, waitForAppInitialRender } from \"../utils\";\n\ntest(\"default extensions are loaded in sidebar\", async ({ page }) => {\n  await page.goto(baseURL);\n  await waitForAppInitialRender(page);\n\n  const stabilityNodeSidebar = await page.locator(\"text=StabilityAI\").first();\n  const documentNodeSidebar = await page\n    .locator(\"text=Document-to-Text\")\n    .first();\n\n  const stabilityaiNodeContent = await stabilityNodeSidebar.textContent();\n  const documentNodeContent = await documentNodeSidebar.textContent();\n\n  expect(stabilityaiNodeContent).toContain(\"StabilityAI\");\n  expect(documentNodeContent).toContain(\"Document-to-Text\");\n});\n"
  },
  {
    "path": "packages/ui/test/e2e/tuto-display.spec.ts",
    "content": "import { test, expect } from \"@playwright/test\";\nimport { baseURL, waitForAppInitialRender } from \"../utils\";\n\ntest(\"tuto is launched after loading\", async ({ page }) => {\n  await page.goto(baseURL);\n  await waitForAppInitialRender(page);\n\n  await page.screenshot({ path: \"screenshots/tuto-before-wait.png\" });\n  await page.waitForSelector(\"#react-joyride-step-0\", { state: \"attached\" });\n\n  const tutoStep = await page.$(\"#react-joyride-step-0\");\n  expect(tutoStep).not.toBeNull();\n\n  const textContent = await tutoStep?.textContent();\n  expect(textContent).toContain(\"Welcome to AI-FLOW\");\n\n  const closeTutoButton = await page.locator(\"text=I know the app\");\n  await closeTutoButton.click();\n\n  const tutoStepAfterClose = await page.$(\"#react-joyride-step-0\");\n  expect(tutoStepAfterClose).toBeNull();\n\n  await page.screenshot({ path: \"screenshots/tuto-after-wait.png\" });\n});\n"
  },
  {
    "path": "packages/ui/test/unit/flowChecker.test.ts",
    "content": "import { NodeData, NodeInput } from \"../../src/components/nodes/types/node\";\nimport * as flowChecker from \"../../src/utils/flowChecker\";\n\ndescribe(\"getNodeMissingFields\", () => {\n  beforeAll(() => {\n    vi.spyOn(flowChecker, \"isFieldLinkedToAnotherNode\").mockImplementation(\n      (field, node) => {\n        if (field.name === \"validMockedLinkedField\") {\n          return \"input\";\n        }\n      },\n    );\n  });\n\n  // it(\"identifies missing required fields correctly\", () => {\n  //   const nodeData: NodeData = {\n  //     name: \"ExampleNode\",\n  //     model: \"example-model\",\n  //     inputs: [],\n  //     config: {\n  //       nodeName: \"Example Node\",\n  //       fields: [\n  //         { name: \"mandatoryField\", type: \"input\", required: true },\n  //         { name: \"optionalField\", type: \"input\", required: false },\n  //         { name: \"validMockedLinkedField\", type: \"input\", required: true },\n  //       ],\n  //       icon: \"\",\n  //       outputType: \"imageUrl\",\n  //       section: \"input\",\n  //     },\n  //     mandatoryField: \"\",\n  //     id: \"\",\n  //     handles: undefined,\n  //     processorType: \"gpt\",\n  //     nbOutput: 0,\n  //   };\n\n  //   const missingFields = flowChecker.getNodeMissingFields(nodeData);\n\n  //   expect(missingFields).toEqual([\"mandatoryField\"]);\n  // });\n\n  it(\"identifies missing fields when no fields are linked\", () => {\n    const nodeData: NodeData = {\n      name: \"IncompleteNode\",\n      model: \"incomplete-model\",\n      inputs: [],\n      config: {\n        fields: [\n          { name: \"missingMandatoryField\", type: \"input\", required: true },\n          { name: \"optionalField\", type: \"input\", required: false },\n          { name: \"missingLinkedField\", type: \"input\", required: true },\n        ],\n        nodeName: \"\",\n        icon: \"\",\n        outputType: \"imageUrl\",\n        section: \"input\",\n      },\n      presentOptionalField: \"Present\",\n      id: \"\",\n      handles: undefined,\n      processorType: \"gpt\",\n      nbOutput: 0,\n    };\n\n    const missingFields = flowChecker.getNodeMissingFields(nodeData);\n\n    expect(missingFields).toEqual([\n      \"missingMandatoryField\",\n      \"missingLinkedField\",\n    ]);\n  });\n\n  it(\"returns empty array when no field is missing\", () => {\n    const nodeData: NodeData = {\n      name: \"IncompleteNode\",\n      model: \"incomplete-model\",\n      inputs: [],\n      config: {\n        fields: [{ name: \"optionalField\", type: \"input\", required: false }],\n        nodeName: \"\",\n        icon: \"\",\n        outputType: \"imageUrl\",\n        section: \"input\",\n      },\n      presentOptionalField: \"Present\",\n      id: \"\",\n      handles: undefined,\n      processorType: \"gpt\",\n      nbOutput: 0,\n    };\n\n    const missingFields = flowChecker.getNodeMissingFields(nodeData);\n\n    expect(missingFields).toEqual([]);\n  });\n\n  afterEach(() => {\n    vi.resetAllMocks();\n  });\n});\n\nfunction createNode({\n  name = \"\",\n  inputs = [] as NodeInput[],\n  id = \"\",\n  handles = [],\n  nbOutput = 0,\n  processorType = \"gpt-prompt\",\n  config = {\n    nodeName: \"\",\n    icon: \"\",\n    fields: [],\n    outputType: \"imageUrl\",\n    section: \"input\",\n  },\n} = {}) {\n  return {\n    name,\n    inputs,\n    id,\n    handles,\n    nbOutput,\n    processorType,\n    config,\n  } as NodeData;\n}\n\ndescribe(\"getRequiredNodesForLaunch\", () => {\n  const flowFile 
= [\n    createNode({ name: \"StartNode\" }),\n    createNode({\n      name: \"MiddleNode\",\n      inputs: [\n        { inputNode: \"StartNode\", inputName: \"\", inputNodeOutputKey: 0 },\n      ],\n    }),\n    createNode({\n      name: \"EndNode\",\n      inputs: [\n        { inputNode: \"MiddleNode\", inputName: \"\", inputNodeOutputKey: 0 },\n      ],\n    }),\n    createNode({ name: \"IndependentNode\" }),\n    createNode({\n      name: \"BranchNode\",\n      inputs: [\n        { inputNode: \"StartNode\", inputName: \"\", inputNodeOutputKey: 0 },\n        { inputNode: \"IndependentNode\", inputName: \"\", inputNodeOutputKey: 0 },\n      ],\n    }),\n  ];\n\n  it(\"returns the correct single node when there are no dependencies\", () => {\n    const requiredNodes = flowChecker.getRequiredNodesForLaunch(\n      flowFile,\n      \"StartNode\",\n    );\n    expect(requiredNodes).toEqual([\"StartNode\"]);\n  });\n\n  it(\"returns a linear dependency chain correctly\", () => {\n    const requiredNodes = flowChecker.getRequiredNodesForLaunch(\n      flowFile,\n      \"EndNode\",\n    );\n    expect(requiredNodes).toEqual([\"StartNode\", \"MiddleNode\", \"EndNode\"]);\n  });\n\n  it(\"handles branches in the dependency graph correctly\", () => {\n    const requiredNodes = flowChecker.getRequiredNodesForLaunch(\n      flowFile,\n      \"BranchNode\",\n    );\n    expect(requiredNodes).toEqual([\n      \"StartNode\",\n      \"IndependentNode\",\n      \"BranchNode\",\n    ]);\n  });\n\n  it(\"returns no dependencies when the node does not exist in the flow\", () => {\n    const requiredNodes = flowChecker.getRequiredNodesForLaunch(\n      flowFile,\n      \"NonexistentNode\",\n    );\n    expect(requiredNodes).toEqual([\"NonexistentNode\"]);\n  });\n\n  afterEach(() => {\n    vi.resetAllMocks();\n  });\n});\n"
  },
  {
    "path": "packages/ui/test/unit/flowUtils.test.ts",
    "content": "import {\n  BasicEdge,\n  BasicNode,\n  convertEdgeToNodeInput,\n  convertFlowToJson,\n  findParents,\n  generateIdForHandle,\n  handleInPrefix,\n  handleOutPrefix,\n  handleSeparator,\n  nodesTopologicalSort,\n} from \"../../src/utils/flowUtils\";\n\nfunction createNode(\n  id: string,\n  name?: string,\n  data?: any,\n  position?: any,\n): BasicNode {\n  return {\n    id,\n    data: name ? { name, ...data } : { ...data },\n    position: position ?? { x: 0, y: 0 },\n  };\n}\n\nfunction createEdge(\n  id: string,\n  source: string,\n  target: string,\n  sourceHandle?: string,\n  targetHandle?: string,\n): BasicEdge {\n  return { id, source, target, sourceHandle, targetHandle };\n}\n\n// Initial setup\nconst nodes: BasicNode[] = [\n  createNode(\"node1\"),\n  createNode(\"node2\"),\n  createNode(\"node3\"),\n  createNode(\"node4\"),\n];\n\nconst edges: BasicEdge[] = [\n  createEdge(\"1\", \"node1\", \"node2\"),\n  createEdge(\"2\", \"node2\", \"node3\"),\n  createEdge(\"3\", \"node3\", \"node4\"),\n];\n\ndescribe(\"nodesTopologicalSort\", () => {\n  it(\"should sort nodes correctly based on dependencies\", () => {\n    const expectedSortedNodes: Partial<BasicNode>[] = [\n      { id: \"node1\" },\n      { id: \"node2\" },\n      { id: \"node3\" },\n      { id: \"node4\" },\n    ];\n\n    const sortedNodesIds = nodesTopologicalSort(nodes, edges).map((node) => {\n      return { id: node.id };\n    });\n\n    expect(sortedNodesIds).toStrictEqual(expectedSortedNodes);\n  });\n\n  it(\"should work with disconnected subgraphs\", () => {\n    const disconnectedNode = createNode(\"node5\");\n    const updatedNodes = [...nodes, disconnectedNode];\n\n    const expectedSortedNodesWithDisconnected: Partial<BasicNode>[] = [\n      { id: \"node1\" },\n      { id: \"node2\" },\n      { id: \"node3\" },\n      { id: \"node4\" },\n      { id: \"node5\" },\n    ];\n\n    const sortedNodes = nodesTopologicalSort(updatedNodes, edges).map(\n      (node) => {\n        return { id: node.id };\n      },\n    );\n\n    expect(sortedNodes).toEqual(\n      expect.arrayContaining(expectedSortedNodesWithDisconnected),\n    );\n  });\n\n  it(\"should return an empty array when no nodes are provided\", () => {\n    expect(nodesTopologicalSort([], [])).toEqual([]);\n  });\n});\n\ndescribe(\"convertEdgeToNodeInput\", () => {\n  it(\"converts an edge to node input correctly\", () => {\n    const nodes: BasicNode[] = [\n      createNode(\"node1\", \"FirstNode\"),\n      createNode(\"node2\", \"SecondNode\"),\n    ];\n\n    const edge: BasicEdge = createEdge(\"1\", \"node1\", \"node2\", \"handle-out-1\");\n\n    const result = convertEdgeToNodeInput(edge, nodes, nodes[1]);\n\n    const expectedResult = {\n      inputName: undefined,\n      inputNode: \"FirstNode\",\n      inputNodeOutputKey: 1,\n    };\n\n    expect(result).toEqual(expectedResult);\n  });\n});\n\ndescribe(\"findParents\", () => {\n  it(\"returns an empty array when the node has no parents\", () => {\n    const nodes: BasicNode[] = [createNode(\"node1\"), createNode(\"node2\")];\n    const edges: BasicEdge[] = [createEdge(\"1\", \"node1\", \"node2\")];\n    const nodeWithoutParents = nodes[0];\n\n    const parents = findParents(nodeWithoutParents, edges);\n\n    expect(parents).toEqual([]);\n  });\n\n  it(\"returns a single parent when the node has one parent\", () => {\n    const nodes: BasicNode[] = [createNode(\"node1\"), createNode(\"node2\")];\n    const edges: BasicEdge[] = [createEdge(\"1\", \"node1\", \"node2\")];\n    const childNode = 
nodes[1];\n\n    const parents = findParents(childNode, edges);\n\n    expect(parents).toEqual([\"node1\"]);\n  });\n\n  it(\"returns multiple parents when the node has multiple parents\", () => {\n    const nodes: BasicNode[] = [\n      createNode(\"node1\"),\n      createNode(\"node2\"),\n      createNode(\"node3\"),\n    ];\n    const edges: BasicEdge[] = [\n      createEdge(\"1\", \"node1\", \"node3\"),\n      createEdge(\"2\", \"node2\", \"node3\"),\n    ];\n    const childNode = nodes[2];\n\n    const parents = findParents(childNode, edges);\n\n    expect(parents).toEqual(expect.arrayContaining([\"node1\", \"node2\"]));\n  });\n});\n\ndescribe(\"generateIdForHandle\", () => {\n  it(\"generates an ID for an input handle correctly\", () => {\n    const key = 1;\n    const isOutput = false;\n    const expectedId = `${handleInPrefix}${handleSeparator}${key}`;\n\n    const result = generateIdForHandle(key, isOutput);\n\n    expect(result).toBe(expectedId);\n  });\n\n  it(\"generates an ID for an output handle correctly\", () => {\n    const key = 2;\n    const isOutput = true;\n    const expectedId = `${handleOutPrefix}${handleSeparator}${key}`;\n\n    const result = generateIdForHandle(key, isOutput);\n\n    expect(result).toBe(expectedId);\n  });\n});\n\ndescribe(\"convertFlowToJson\", () => {\n  it(\"converts a flow to JSON correctly\", () => {\n    const firstNodeName = \"FirstNode\";\n    const secondNodeName = \"SecondNode\";\n\n    const nodes: BasicNode[] = [\n      createNode(\"node1\", firstNodeName, {\n        model: \"gpt-4\",\n        config: {\n          fields: [],\n          nodeName: firstNodeName,\n        },\n      }),\n      createNode(\"node2\", secondNodeName, {\n        model: \"gpt-3.5\",\n        config: {\n          fields: [],\n          nodeName: secondNodeName,\n        },\n      }),\n    ];\n\n    const edges: BasicEdge[] = [\n      createEdge(\"1\", \"node1\", \"node2\", \"handle-out-0\"),\n    ];\n\n    const result = convertFlowToJson(nodes, edges, true, true);\n\n    const expectedResult = [\n      {\n        name: firstNodeName,\n        model: \"gpt-4\",\n        inputs: [],\n        config: {\n          fields: [],\n          hasInputHandle: undefined,\n          inputNames: undefined,\n          nodeName: firstNodeName,\n          outputType: undefined,\n        },\n        x: 0,\n        y: 0,\n      },\n      {\n        name: secondNodeName,\n        model: \"gpt-3.5\",\n        inputs: [\n          {\n            inputName: undefined,\n            inputNode: firstNodeName,\n            inputNodeOutputKey: 0,\n          },\n        ],\n        config: {\n          fields: [],\n          hasInputHandle: undefined,\n          inputNames: undefined,\n          nodeName: secondNodeName,\n          outputType: undefined,\n        },\n        x: 0,\n        y: 0,\n      },\n    ];\n\n    expect(result).toEqual(expectedResult);\n  });\n\n  it(\"converts a complex flow with multiple inputs and configurations to JSON correctly\", () => {\n    const firstNodeName = \"InputNode\";\n    const secondNodeName = \"ProcessingNode1\";\n    const thirdNodeName = \"ProcessingNode2\";\n\n    const nodes: BasicNode[] = [\n      createNode(\n        \"node1\",\n        firstNodeName,\n        {\n          model: \"custom-model\",\n          config: {\n            fields: [],\n            nodeName: firstNodeName,\n          },\n        },\n        {\n          x: 0,\n          y: 0,\n        },\n      ),\n      createNode(\n        \"node2\",\n        secondNodeName,\n    
    {\n          model: \"enhanced-model\",\n          config: {\n            fields: [],\n            nodeName: secondNodeName,\n            inputNames: [\"prompt-input-node-2\"],\n            hasInputHandle: true,\n          },\n        },\n        {\n          x: 100,\n          y: 100,\n        },\n      ),\n      createNode(\n        \"node3\",\n        thirdNodeName,\n        {\n          model: \"advanced-model\",\n          config: {\n            fields: [],\n            nodeName: thirdNodeName,\n            inputNames: [\"prompt-input-node-3\"],\n            hasInputHandle: true,\n          },\n        },\n        {\n          x: 200,\n          y: 200,\n        },\n      ),\n    ];\n\n    const edges: BasicEdge[] = [\n      createEdge(\"1\", \"node1\", \"node2\", \"handle-out-0\", \"handle-in-0\"),\n      createEdge(\"2\", \"node1\", \"node3\", \"handle-out-0\", \"handle-in-0\"),\n    ];\n\n    const result = convertFlowToJson(nodes, edges, true, true);\n\n    const expectedResult = [\n      {\n        name: firstNodeName,\n        model: \"custom-model\",\n        inputs: [],\n        config: {\n          fields: [],\n          inputNames: undefined,\n          nodeName: firstNodeName,\n          hasInputHandle: undefined,\n        },\n        x: 0,\n        y: 0,\n      },\n      {\n        name: secondNodeName,\n        model: \"enhanced-model\",\n        inputs: [\n          {\n            inputName: \"prompt-input-node-2\",\n            inputNode: firstNodeName,\n            inputNodeOutputKey: 0,\n          },\n        ],\n        config: {\n          fields: [],\n          nodeName: secondNodeName,\n          inputNames: [\"prompt-input-node-2\"],\n          hasInputHandle: true,\n        },\n        x: 100,\n        y: 100,\n      },\n      {\n        name: thirdNodeName,\n        model: \"advanced-model\",\n        inputs: [\n          {\n            inputName: \"prompt-input-node-3\",\n            inputNode: firstNodeName,\n            inputNodeOutputKey: 0,\n          },\n        ],\n        config: {\n          fields: [],\n          nodeName: thirdNodeName,\n          inputNames: [\"prompt-input-node-3\"],\n          hasInputHandle: true,\n        },\n        x: 200,\n        y: 200,\n      },\n    ];\n\n    expect(result).toEqual(expectedResult);\n  });\n});\n"
  },
  {
    "path": "packages/ui/test/utils.ts",
    "content": "import { Page } from \"@playwright/test\";\nconst dotenv = require(\"dotenv\");\n\ndotenv.config({ path: \".env.test\" });\n\nexport const baseURL = process.env.E2E_TEST_BASE_URL || \"http://localhost:80\";\n\nexport async function waitForAppInitialRender(page: Page) {\n  await page.waitForSelector(\"#main-content\", { state: \"visible\" });\n  await page.waitForSelector(\"#loading-screen\", { state: \"detached\" });\n}\n"
  },
  {
    "path": "packages/ui/tsconfig.json",
    "content": "{\n  \"compilerOptions\": {\n    \"target\": \"esnext\",\n    \"lib\": [\n      \"dom\",\n      \"dom.iterable\",\n      \"esnext\"\n    ],\n    \"allowJs\": true,\n    \"skipLibCheck\": true,\n    \"esModuleInterop\": true,\n    \"allowSyntheticDefaultImports\": true,\n    \"strict\": true,\n    \"forceConsistentCasingInFileNames\": true,\n    \"noFallthroughCasesInSwitch\": true,\n    \"module\": \"esnext\",\n    \"moduleResolution\": \"node\",\n    \"resolveJsonModule\": true,\n    \"isolatedModules\": true,\n    \"noEmit\": true,\n    \"jsx\": \"react-jsx\",\n    \"types\": [\"vite/client\", \"vite-plugin-svgr/client\", \"vitest/globals\"]\n  },\n  \"include\": [\n    \"src\",\n    \"test\"\n  ]\n}\n"
  },
  {
    "path": "packages/ui/vite.config.ts",
    "content": "import { defineConfig } from \"vite\";\nimport react from \"@vitejs/plugin-react-swc\";\n\n// https://vitejs.dev/config/\nexport default defineConfig({\n  base: \"/\",\n  plugins: [react()],\n  preview: {\n    port: 3000,\n  },\n  server: {\n    port: 3000,\n  },\n  build: {\n    outDir: \"./build\",\n  },\n});\n"
  },
  {
    "path": "packages/ui/vitest.config.ts",
    "content": "// vitest.config.ts\nimport { defineConfig } from \"vitest/config\";\n\nexport default defineConfig({\n  test: {\n    globals: true,\n    setupFiles: \"src/setupTests.ts\",\n    include: [\"test/unit/**/*.test.{js,ts,jsx,tsx}\"],\n  },\n});\n"
  }
]