Repository: robertvoy/ComfyUI-Distributed
Branch: main
Commit: a91f9fb081eb
Files: 130
Total size: 895.4 KB

Directory structure:
gitextract_vxcmpaik/

├── .github/
│   ├── FUNDING.yml
│   └── workflows/
│       └── publish_action.yml
├── .gitignore
├── .nvmrc
├── LICENSE
├── README.md
├── __init__.py
├── api/
│   ├── __init__.py
│   ├── config_routes.py
│   ├── job_routes.py
│   ├── orchestration/
│   │   ├── __init__.py
│   │   ├── dispatch.py
│   │   ├── media_sync.py
│   │   └── prompt_transform.py
│   ├── queue_orchestration.py
│   ├── queue_request.py
│   ├── schemas.py
│   ├── tunnel_routes.py
│   ├── usdu_routes.py
│   └── worker_routes.py
├── conftest.py
├── distributed.py
├── docs/
│   ├── comfyui-distributed-api.md
│   ├── model-download-script.md
│   ├── video-upscaler-runpod-preset.md
│   └── worker-setup-guides.md
├── nodes/
│   ├── __init__.py
│   ├── collector.py
│   ├── distributed_upscale.py
│   └── utilities.py
├── package.json
├── pyproject.toml
├── scripts/
│   └── test-web.sh
├── tests/
│   ├── api/
│   │   ├── test_config_routes.py
│   │   ├── test_distributed_queue.py
│   │   ├── test_media_sync.py
│   │   ├── test_usdu_routes.py
│   │   └── test_worker_routes.py
│   ├── conftest.py
│   ├── test_async_helpers.py
│   ├── test_batch_dividers.py
│   ├── test_config.py
│   ├── test_detection.py
│   ├── test_dispatch_selection.py
│   ├── test_distributed_value.py
│   ├── test_job_timeout.py
│   ├── test_network_helpers.py
│   ├── test_payload_parsers.py
│   ├── test_prompt_transform.py
│   ├── test_queue_request.py
│   ├── test_static_mode.py
│   └── test_worker_process_runtime.py
├── upscale/
│   ├── __init__.py
│   ├── conditioning.py
│   ├── job_models.py
│   ├── job_state.py
│   ├── job_store.py
│   ├── job_timeout.py
│   ├── modes/
│   │   ├── __init__.py
│   │   ├── dynamic.py
│   │   ├── single_gpu.py
│   │   └── static.py
│   ├── payload_parsers.py
│   ├── result_collector.py
│   ├── tile_ops.py
│   └── worker_comms.py
├── utils/
│   ├── __init__.py
│   ├── async_helpers.py
│   ├── audio_payload.py
│   ├── cloudflare/
│   │   ├── __init__.py
│   │   ├── binary.py
│   │   ├── process_reader.py
│   │   ├── state.py
│   │   └── tunnel.py
│   ├── config.py
│   ├── constants.py
│   ├── crop_model_patch.py
│   ├── exceptions.py
│   ├── image.py
│   ├── logging.py
│   ├── network.py
│   ├── process.py
│   ├── trace_logger.py
│   ├── usdu_managment.py
│   └── usdu_utils.py
├── vitest.config.js
├── web/
│   ├── apiClient.js
│   ├── constants.js
│   ├── distributed.css
│   ├── distributedValue.js
│   ├── executionUtils.js
│   ├── image_batch_divider.js
│   ├── main.js
│   ├── masterDetection.js
│   ├── sidebar/
│   │   ├── actionsSection.js
│   │   ├── settingsSection.js
│   │   └── workersSection.js
│   ├── sidebarRenderer.js
│   ├── stateManager.js
│   ├── tests/
│   │   ├── apiClient.test.js
│   │   ├── executionUtils.test.js
│   │   ├── urlUtils.test.js
│   │   ├── workerLifecycle.test.js
│   │   └── workerSettings.test.js
│   ├── tunnelManager.js
│   ├── ui/
│   │   ├── buttonHelpers.js
│   │   ├── cloudflareWarning.js
│   │   ├── entityCard.js
│   │   ├── logModal.js
│   │   └── settingsForm.js
│   ├── ui.js
│   ├── urlUtils.js
│   ├── workerLifecycle.js
│   ├── workerSettings.js
│   └── workerUtils.js
├── workers/
│   ├── __init__.py
│   ├── detection.py
│   ├── process/
│   │   ├── __init__.py
│   │   ├── launch_builder.py
│   │   ├── lifecycle.py
│   │   ├── persistence.py
│   │   └── root_discovery.py
│   ├── process_manager.py
│   ├── startup.py
│   └── worker_monitor.py
└── workflows/
    ├── distributed-txt2img.json
    ├── distributed-upscale-video.json
    ├── distributed-upscale.json
    ├── distributed-wan-2.2_14b_t2v.json
    └── distributed-wan.json

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/FUNDING.yml
================================================
# These are supported funding model platforms

github: robertvoy


================================================
FILE: .github/workflows/publish_action.yml
================================================
name: Publish to Comfy registry
on:
  workflow_dispatch:
  push:
    branches:
      - main
    paths:
      - "pyproject.toml"

permissions:
  issues: write

jobs:
  publish-node:
    name: Publish Custom Node to registry
    runs-on: ubuntu-latest
    if: ${{ github.repository_owner == 'robertvoy' }}
    steps:
      - name: Check out code
        uses: actions/checkout@v4
      - name: Publish Custom Node
        uses: Comfy-Org/publish-node-action@main
        with:
          personal_access_token: ${{ secrets.REGISTRY_ACCESS_TOKEN }}


================================================
FILE: .gitignore
================================================
bin/
logs/
gpu_config.json
__pycache__/
**/__pycache__/
*.py[cod]
node_modules/
npm-debug.log*
AGENTS.md


================================================
FILE: .nvmrc
================================================
20


================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: README.md
================================================
<div align="center">
<img width="250" src="https://github.com/user-attachments/assets/533bb98d-0c4a-499f-9bca-5c937e361087" />
<br><br>
<a href="https://www.youtube.com/watch?v=p6eE3IlAbOs"><img src="https://img.shields.io/badge/Video_Tutorial-grey?style=flat&logo=youtube&logoColor=white" alt="Video Tutorial"></a>
<a href="/docs/worker-setup-guides.md"><img src="https://img.shields.io/badge/Setup_Guides-grey?style=flat&logo=gitbook&logoColor=white" alt="Setup Guides"></a>
<a href="/workflows"><img src="https://img.shields.io/badge/Workflows-grey?style=flat&logo=json&logoColor=white" alt="Workflows"></a>
<a href="https://buymeacoffee.com/robertvoy"><img src="https://img.shields.io/badge/Donation-grey?style=flat&logo=buymeacoffee&logoColor=white" alt="Donation"></a>
<a href="https://x.com/rbw_ai"><img src="https://img.shields.io/twitter/follow/rbw_ai" alt="Twitter"></a>
<br><br>
</div>

> **A powerful extension for ComfyUI that enables distributed and parallel processing across multiple GPUs and machines. Generate more images and videos, and accelerate your upscaling workflows, by leveraging all available GPU resources across your network and the cloud.**

![Clipboard Image (7)](https://github.com/user-attachments/assets/66aaadef-f195-48a1-a368-17dd0dae477d)

---

## Key Features

#### Parallel Workflow Processing
- Run your workflow on multiple GPUs simultaneously with varied seeds, collect results on the master
- Scale output with more workers
- Supports images and videos

#### Distributed Upscaling
- Accelerate Ultimate SD Upscale by distributing tiles across GPUs
- Intelligent distribution
- Handles single images and videos

#### Ease of Use
- Auto-setup local workers; easily add remote/cloud ones
- Convert any workflow to distributed with 2 nodes
- JSON configuration with UI controls

---

## Worker Types

<img width="200" align="right" alt="ComfyUI_temp_khvcc_00034_@0 25x" src="https://github.com/user-attachments/assets/651e4912-7c23-4e32-bd88-250f5175e129" />

ComfyUI Distributed supports three types of workers:

- **Local Workers** - Additional GPUs on the same machine (auto-configured on first launch)
- **Remote Workers** - GPUs on other computers in your network
- **Cloud Workers** - GPUs hosted on a cloud service like Runpod, accessible via secure tunnels

> For detailed setup instructions, see the [setup guide](/docs/worker-setup-guides.md)

---

## Requirements

- ComfyUI
- Multiple NVIDIA GPUs
> No additional GPUs? Use [Cloud Workers](https://github.com/robertvoy/ComfyUI-Distributed/blob/main/docs/worker-setup-guides.md#cloud-workers)
- That's it

---

## Installation

1. **Clone this repository** into your ComfyUI custom nodes directory:
   ```bash
   git clone https://github.com/robertvoy/ComfyUI-Distributed.git
   ```
   
2. **Restart ComfyUI**
   - If you'll be using remote/cloud workers, add `--enable-cors-header` to your launch arguments on the master

3. Read the [setup guide](/docs/worker-setup-guides.md) for adding workers

---

## Official Sponsor

[<img width="1500" height="339" src="https://github.com/user-attachments/assets/c5f75e1f-3e19-4c57-b05d-151311cd1cf0" />](https://get.runpod.io/0bw29uf3ug0p)

Join Runpod with [this link](https://get.runpod.io/0bw29uf3ug0p) and unlock a special bonus.

---

## Workflow Examples

### Basic Parallel Generation
Generate multiple images in the time it takes to generate one. Each worker uses a different seed.

![Clipboard Image (6)](https://github.com/user-attachments/assets/9598c94c-d9b4-4ccf-ab16-a21398220aeb)

> [Download workflow](/workflows/distributed-txt2img.json)

1. Open your ComfyUI workflow
2. Add **Distributed Seed** → connect to sampler's seed
3. Add **Distributed Collector** → after VAE Decode
4. Optional: enable `load_balance` on Distributed Collector to route the run to the least-busy participant
5. Enable workers in the UI
6. Run the workflow!

### Parallel WAN Generation
Generate multiple videos in the time it takes to generate one. Each worker uses a different seed.

![Clipboard Image (5)](https://github.com/user-attachments/assets/5382b845-833b-43b7-b238-a91c5579581a)

> [Download workflow](/workflows/distributed-wan.json)

1. Open your WAN ComfyUI workflow
2. Add **Distributed Seed** → connect to sampler's seed
3. Add **Distributed Collector** → after VAE Decode
4. Add **Image Batch Divider** → after Distributed Collector
5. Set `divide_by` to the number of GPUs you have available
> For example: if you have a master and 2x workers, set it to 3
6. Enable workers in the UI
7. Run the workflow!

### Distributed Image Upscaling
Accelerate Ultimate SD Upscale by distributing tiles across multiple workers, with speed scaling as you add more GPUs.

![Clipboard Image (3)](https://github.com/user-attachments/assets/ffb57a0d-7b75-4497-96d2-875d60865a1a)

> [Download workflow](/workflows/distributed-upscale.json)

1. Load your image
2. Upscale with ESRGAN or similar
3. Connect to **Ultimate SD Upscale Distributed**
4. Configure tile settings
5. Enable workers for faster processing

### Distributed Video Upscaling
Accelerate Ultimate SD Upscale by distributing video tiles across multiple workers, with speed scaling as you add more GPUs.

![Video Upscaler workflow](https://github.com/user-attachments/assets/3c3d61b1-0b5f-422e-8c58-7c1555fed765)

> [Download workflow](/workflows/distributed-upscale-video.json)

1. Load your video
2. Optional: upscale with ESRGAN or similar
3. Connect to **Ultimate SD Upscale Distributed**
4. Configure tile settings
5. Use RES4LYF (bong/res2) to get better results
6. Enable workers for faster processing

> You can run this workflow entirely on Runpod with minimal setup. [Check out the guide here.](https://github.com/robertvoy/ComfyUI-Distributed/blob/main/docs/video-upscaler-runpod-preset.md)

---

## Developer API

Control your distributed cluster programmatically without opening the browser.

* **Endpoint:** `POST /distributed/queue`
* **Functionality:** Accepts a standard ComfyUI workflow JSON, automatically distributes it to available workers, and returns the execution ID.
* **Documentation:** [See API Examples & Scripts](https://github.com/robertvoy/ComfyUI-Distributed/blob/main/docs/comfyui-distributed-api.md)

> **⚠️ Security Warning:** Do not expose your ComfyUI port to the public internet. If you need remote access, run ComfyUI behind a secure proxy (like Cloudflare or a VPN).
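
A minimal Python sketch of queueing a workflow (the top-level `prompt` request key and the `workflow_api.json` filename are assumptions here, mirroring ComfyUI's standard `/prompt` payload; see the API documentation linked above for the authoritative schema):

```python
# Minimal sketch: queue a workflow on the master via POST /distributed/queue.
# Assumes the master listens on 127.0.0.1:8188 and workflow_api.json holds a
# workflow exported in ComfyUI's API ("prompt") format.
import json
import urllib.request

with open("workflow_api.json") as f:
    workflow = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/distributed/queue",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# The endpoint responds with prompt_id, number, node_errors, and worker_count.
print(result["prompt_id"], result["worker_count"])
```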

---

## Distributed Value

Use **Distributed Value** when you want per-worker overrides (for example, different prompts/models/settings per worker).

- Output type adapts to the connected input where possible (`STRING`, `INT`, `FLOAT`, `COMBO`).
- The node shows only currently enabled workers.
- If worker enablement changes, worker fields update automatically.
- When disconnected, it resets to default string mode and clears per-worker overrides.
- On execution, master uses `default_value`; workers use their mapped override with typed coercion fallback to default.
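
Conceptually, value resolution behaves like the following sketch (hypothetical names and signature, not the node's actual code):

```python
# Hypothetical sketch of Distributed Value resolution.
def resolve_value(default_value, overrides, worker_id, coerce=str):
    """Master uses default_value; a worker uses its override when it coerces."""
    if worker_id is None:           # running on the master
        return default_value
    raw = overrides.get(worker_id)  # per-worker override, if any
    if raw is None:
        return default_value
    try:
        return coerce(raw)          # typed coercion (str/int/float per input type)
    except (TypeError, ValueError):
        return default_value        # coercion failed: fall back to the default

resolve_value(20, {"worker-1": "25"}, "worker-1", int)  # worker -> 25
resolve_value(20, {"worker-1": "25"}, None, int)        # master -> 20
```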

---

## Nodes

| Node | Description |
|------|-------------|
| **Distributed Seed** | Generates unique seeds for each worker |
| **Distributed Collector** | Collects results (image/video frames and optionally audio) from workers back to the master; `load_balance` can route the run to the least-busy participant |
| **Distributed Value** | Outputs per-worker override values with fallback to default |
| **Ultimate SD Upscale Distributed** | Distributes upscale tiles across workers |
| **Image Batch Divider** | Splits image batches for multi-GPU output |
| **Audio Batch Divider** | Splits audio batches for multi-GPU output |
| **Distributed Model Name** | Passes model paths to workers, enabling workflows to use models not present on the master in orchestrator-only mode |
| **Distributed Empty Image** | Produces an empty IMAGE batch used when the master delegates all work |

---

## FAQ

<details>
<summary>Does it combine VRAM of multiple GPUs?</summary>
No, it does not combine VRAM of multiple GPUs.
</details>

<details>
<summary>Does it speed up the generation of a single image or video?</summary>
No, it does not speed up the generation of a single image or video. Instead, it enables the generation of more images or videos simultaneously. However, it can speed up the upscaling of a single image when using the Ultimate SD Upscale Distributed feature.
</details>

<details>
<summary>Does it work with the ComfyUI desktop app?</summary>
Yes, it does now.
</details>

<details>
<summary>Can I combine my RTX 5090 with a GTX 980 to get faster results?</summary>
Yes, you can combine different GPUs, but performance is optimized when using similar GPUs. A significant performance imbalance between GPUs may cause bottlenecks.
</details>

<details>
<summary>Does this work with cloud providers?</summary>
Yes, it is compatible with cloud providers. Refer to the setup guides for detailed instructions.
</details>

<details>
<summary>Can I use my main machine just to coordinate workers without rendering?</summary>
Yes. Open the Distributed panel and uncheck the master toggle to run in orchestrator-only mode. The master will distribute work to workers but won't render locally. If all workers become unavailable, the master automatically re-enables to ensure your workflow still runs.
</details>

<details>
<summary>Can I make this work with my Docker setup?</summary>
Yes, it is compatible with Docker setups, but you will need to configure your Docker environment yourself. Unfortunately, assistance with Docker configuration is not provided.
</details>

---

## Disclaimer

This software is provided "as is" without any warranties, express or implied, including merchantability, fitness for a particular purpose, or non-infringement. The developers and copyright holders are not liable for any claims, damages, or liabilities arising from the use, modification, or distribution of the software. Users are solely responsible for ensuring compliance with applicable laws and regulations and for securing their networks against unauthorized access, hacking, data breaches, or loss. The developers assume no liability for any damages or incidents resulting from misuse, improper configuration, or external threats.

---

## Support the Project

<img width="200" align="right" src="https://github.com/user-attachments/assets/84291921-c44e-4556-94f2-a3b16500f4f9" />

If my custom nodes have added value to your workflow, consider fueling future development with a coffee!

Your support helps keep this project thriving.

Buy me a coffee at: https://buymeacoffee.com/robertvoy






================================================
FILE: __init__.py
================================================
# Import everything needed from the main module
from .distributed import (
    NODE_CLASS_MAPPINGS as DISTRIBUTED_CLASS_MAPPINGS, 
    NODE_DISPLAY_NAME_MAPPINGS as DISTRIBUTED_DISPLAY_NAME_MAPPINGS
)

# Import utilities
from .utils.config import ensure_config_exists, CONFIG_FILE
from .utils.logging import debug_log

# Import distributed upscale nodes
from .nodes.distributed_upscale import (
    NODE_CLASS_MAPPINGS as UPSCALE_CLASS_MAPPINGS,
    NODE_DISPLAY_NAME_MAPPINGS as UPSCALE_DISPLAY_NAME_MAPPINGS
)

WEB_DIRECTORY = "./web"

ensure_config_exists()

# Merge node mappings
NODE_CLASS_MAPPINGS = {**DISTRIBUTED_CLASS_MAPPINGS, **UPSCALE_CLASS_MAPPINGS}
NODE_DISPLAY_NAME_MAPPINGS = {**DISTRIBUTED_DISPLAY_NAME_MAPPINGS, **UPSCALE_DISPLAY_NAME_MAPPINGS}

__all__ = ['NODE_CLASS_MAPPINGS', 'NODE_DISPLAY_NAME_MAPPINGS']

debug_log("Loaded Distributed nodes.")
debug_log(f"Config file: {CONFIG_FILE}")
debug_log(f"Available nodes: {list(NODE_CLASS_MAPPINGS.keys())}")


================================================
FILE: api/__init__.py
================================================
from . import config_routes  # noqa: F401
from . import tunnel_routes  # noqa: F401
from . import worker_routes  # noqa: F401
from . import job_routes  # noqa: F401
from . import usdu_routes  # noqa: F401


================================================
FILE: api/config_routes.py
================================================
import json
from contextlib import asynccontextmanager

from aiohttp import web
import server

try:
    from ..utils.config import config_transaction, load_config, save_config
except ImportError:
    from ..utils.config import load_config

    try:
        from ..utils.config import save_config
    except ImportError:
        def save_config(_config):
            return True

    @asynccontextmanager
    async def config_transaction():
        config = load_config()
        original_snapshot = json.dumps(config, sort_keys=True)
        yield config
        if json.dumps(config, sort_keys=True) != original_snapshot:
            save_config(config)
from ..utils.logging import debug_log, log
from ..utils.network import handle_api_error, normalize_host


def _positive_int(value):
    return value > 0


CONFIG_SCHEMA = {
    "workers": (list, None),
    "master": (dict, None),
    "settings": (dict, None),
    "tunnel": (dict, None),
    "managed_processes": (dict, None),
    "worker_timeout_seconds": (int, _positive_int),
    "debug": (bool, None),
    "auto_launch_workers": (bool, None),
    "stop_workers_on_master_exit": (bool, None),
    "master_delegate_only": (bool, None),
    "websocket_orchestration": (bool, None),
    "has_auto_populated_workers": (bool, None),
}

_SETTINGS_FIELDS = {
    "worker_timeout_seconds",
    "debug",
    "auto_launch_workers",
    "stop_workers_on_master_exit",
    "master_delegate_only",
    "websocket_orchestration",
    "has_auto_populated_workers",
}
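
# Example bulk-update payload for POST /distributed/config (illustrative values,
# validated against CONFIG_SCHEMA above): keys listed in _SETTINGS_FIELDS land
# in config["settings"], everything else is written at the config root.
#   {"debug": true, "worker_timeout_seconds": 120, "master": {"port": 8188}}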

_WORKER_FIELDS = [
    ("enabled", None, False),
    ("name", None, False),
    ("port", None, False),
    ("host", normalize_host, True),
    ("cuda_device", None, True),
    ("extra_args", None, True),
    ("type", None, False),
]

_MASTER_FIELDS = [
    ("name", None, False),
    ("host", normalize_host, True),
    ("port", None, False),
    ("cuda_device", None, False),
    ("extra_args", None, False),
]


def _apply_field_patch(target: dict, data: dict, field_rules: list) -> None:
    """Apply a partial update to a target dict based on field rules."""
    for key, normalizer, remove_on_none in field_rules:
        if key not in data:
            continue
        value = data[key]
        if value is None and remove_on_none:
            target.pop(key, None)
        else:
            target[key] = normalizer(value) if (normalizer and value is not None) else value


@server.PromptServer.instance.routes.get("/distributed/config")
async def get_config_endpoint(request):
    config = load_config()
    return web.json_response(config)


@server.PromptServer.instance.routes.post("/distributed/config")
async def update_config_endpoint(request):
    """Bulk config update with schema validation."""
    try:
        data = await request.json()
    except Exception as e:
        return await handle_api_error(request, f"Invalid JSON payload: {e}", 400)

    if not isinstance(data, dict):
        return await handle_api_error(request, "Config payload must be an object", 400)

    validated_settings = {}
    validated_root = {}
    errors = []

    for key, value in data.items():
        if key not in CONFIG_SCHEMA:
            errors.append(f"Unknown field: {key}")
            continue

        expected_type, validator = CONFIG_SCHEMA[key]
        if not isinstance(value, expected_type):
            errors.append(f"{key}: expected {expected_type.__name__}")
            continue

        if validator and not validator(value):
            errors.append(f"{key}: value {value!r} failed validation")
            continue

        if key in _SETTINGS_FIELDS:
            validated_settings[key] = value
        else:
            validated_root[key] = value

    if errors:
        return web.json_response({
            "status": "error",
            "error": errors,
            "message": "; ".join(errors),
        }, status=400)

    try:
        async with config_transaction() as config:
            settings = config.setdefault("settings", {})
            settings.update(validated_settings)
            for key, value in validated_root.items():
                config[key] = value
            return web.json_response({"status": "success", "config": config})
    except Exception as e:
        return await handle_api_error(request, e)


@server.PromptServer.instance.routes.get("/distributed/queue_status/{job_id}")
async def queue_status_endpoint(request):
    """Check if a job queue is initialized."""
    try:
        job_id = request.match_info['job_id']
        
        # Import to ensure initialization
        from ..upscale.job_store import ensure_tile_jobs_initialized
        prompt_server = ensure_tile_jobs_initialized()
        
        async with prompt_server.distributed_tile_jobs_lock:
            exists = job_id in prompt_server.distributed_pending_tile_jobs
        
        debug_log(f"Queue status check for job {job_id}: {'exists' if exists else 'not found'}")
        return web.json_response({"exists": exists, "job_id": job_id})
    except Exception as e:
        return await handle_api_error(request, e, 500)

@server.PromptServer.instance.routes.post("/distributed/config/update_worker")
async def update_worker_endpoint(request):
    try:
        data = await request.json()
        worker_id = data.get("worker_id")
        
        if worker_id is None:
            return await handle_api_error(request, "Missing worker_id", 400)

        async with config_transaction() as config:
            worker_found = False
            workers = config.setdefault("workers", [])

            for worker in workers:
                if worker["id"] == worker_id:
                    _apply_field_patch(worker, data, _WORKER_FIELDS)
                    worker_found = True
                    break

            if not worker_found:
                # If worker not found and all required fields are provided, create new worker
                if all(key in data for key in ["name", "port", "cuda_device"]):
                    new_worker = {
                        "id": worker_id,
                        "name": data["name"],
                        "host": normalize_host(data.get("host", "localhost")),
                        "port": data["port"],
                        "cuda_device": data["cuda_device"],
                        "enabled": data.get("enabled", False),
                        "extra_args": data.get("extra_args", ""),
                        "type": data.get("type", "local")
                    }
                    workers.append(new_worker)
                else:
                    return await handle_api_error(
                        request,
                        f"Worker {worker_id} not found and missing required fields for creation",
                        404,
                    )

            return web.json_response({"status": "success"})
    except Exception as e:
        return await handle_api_error(request, e, 400)

@server.PromptServer.instance.routes.post("/distributed/config/delete_worker")
async def delete_worker_endpoint(request):
    try:
        data = await request.json()
        worker_id = data.get("worker_id")
        
        if worker_id is None:
            return await handle_api_error(request, "Missing worker_id", 400)
            
        async with config_transaction() as config:
            workers = config.get("workers", [])

            # Find and remove the worker
            worker_index = -1
            for i, worker in enumerate(workers):
                if worker["id"] == worker_id:
                    worker_index = i
                    break

            if worker_index == -1:
                return await handle_api_error(request, f"Worker {worker_id} not found", 404)

            # Remove the worker
            removed_worker = workers.pop(worker_index)

            return web.json_response({
                "status": "success",
                "message": f"Worker {removed_worker.get('name', worker_id)} deleted"
            })
    except Exception as e:
        return await handle_api_error(request, e, 400)

@server.PromptServer.instance.routes.post("/distributed/config/update_setting")
async def update_setting_endpoint(request):
    """Updates a specific key in the settings object."""
    try:
        data = await request.json()
        key = data.get("key")
        value = data.get("value")

        if not key or value is None:
            return await handle_api_error(request, "Missing 'key' or 'value' in request", 400)
        if key not in _SETTINGS_FIELDS:
            return await handle_api_error(request, f"Unknown setting: {key}", 400)

        async with config_transaction() as config:
            if 'settings' not in config:
                config['settings'] = {}

            config['settings'][key] = value

            return web.json_response({"status": "success", "message": f"Setting '{key}' updated."})
    except Exception as e:
        return await handle_api_error(request, e, 400)

@server.PromptServer.instance.routes.post("/distributed/config/update_master")
async def update_master_endpoint(request):
    """Updates master configuration."""
    try:
        data = await request.json()
        
        async with config_transaction() as config:
            if 'master' not in config:
                config['master'] = {}
            _apply_field_patch(config['master'], data, _MASTER_FIELDS)

            return web.json_response({"status": "success", "message": "Master configuration updated."})
    except Exception as e:
        return await handle_api_error(request, e, 400)


================================================
FILE: api/job_routes.py
================================================
import json
import asyncio
import io
import os
import base64
import binascii
import time

from aiohttp import web
import server
import torch
from PIL import Image

from ..utils.logging import debug_log
from ..utils.image import pil_to_tensor, ensure_contiguous
from ..utils.network import handle_api_error
from ..utils.constants import JOB_INIT_GRACE_PERIOD, MEMORY_CLEAR_DELAY
try:
    from .queue_orchestration import ensure_distributed_state, orchestrate_distributed_execution
except ImportError:
    from .queue_orchestration import orchestrate_distributed_execution

    def ensure_distributed_state():
        return None
from .queue_request import parse_queue_request_payload

prompt_server = server.PromptServer.instance

# Canonical worker result envelope accepted by POST /distributed/job_complete:
# { "job_id": str, "worker_id": str, "batch_idx": int, "image": <base64 PNG>, "is_last": bool }


def _decode_image_sync(image_path):
    """Decode image/video file and compute hash in a threadpool worker."""
    import base64
    import hashlib
    import folder_paths

    full_path = folder_paths.get_annotated_filepath(image_path)
    if not os.path.exists(full_path):
        raise FileNotFoundError(image_path)

    hash_md5 = hashlib.md5()
    with open(full_path, 'rb') as f:
        for chunk in iter(lambda: f.read(4096), b""):
            hash_md5.update(chunk)
    file_hash = hash_md5.hexdigest()

    video_extensions = {'.mp4', '.avi', '.mov', '.mkv', '.webm'}
    file_ext = os.path.splitext(full_path)[1].lower()

    if file_ext in video_extensions:
        with open(full_path, 'rb') as f:
            file_data = f.read()
        mime_types = {
            '.mp4': 'video/mp4',
            '.avi': 'video/x-msvideo',
            '.mov': 'video/quicktime',
            '.mkv': 'video/x-matroska',
            '.webm': 'video/webm'
        }
        mime_type = mime_types.get(file_ext, 'video/mp4')
        image_data = f"data:{mime_type};base64,{base64.b64encode(file_data).decode('utf-8')}"
    else:
        with Image.open(full_path) as img:
            if img.mode not in ('RGB', 'RGBA'):
                img = img.convert('RGB')
            buffer = io.BytesIO()
            img.save(buffer, format='PNG', compress_level=1)
            image_data = f"data:image/png;base64,{base64.b64encode(buffer.getvalue()).decode('utf-8')}"

    return {
        "status": "success",
        "image_data": image_data,
        "hash": file_hash,
    }


def _check_file_sync(filename, expected_hash):
    """Check file presence and hash in a threadpool worker."""
    import hashlib
    import folder_paths

    full_path = folder_paths.get_annotated_filepath(filename)
    if not os.path.exists(full_path):
        return {
            "status": "success",
            "exists": False,
        }

    hash_md5 = hashlib.md5()
    with open(full_path, 'rb') as f:
        for chunk in iter(lambda: f.read(4096), b""):
            hash_md5.update(chunk)
    file_hash = hash_md5.hexdigest()

    return {
        "status": "success",
        "exists": True,
        "hash_matches": file_hash == expected_hash,
    }


def _decode_canonical_png_tensor(image_payload):
    """Decode canonical base64 PNG payload into a contiguous IMAGE tensor."""
    if not isinstance(image_payload, str) or not image_payload.strip():
        raise ValueError("Field 'image' must be a non-empty base64 PNG string.")

    encoded = image_payload.strip()
    if encoded.startswith("data:"):
        header, sep, data_part = encoded.partition(",")
        if not sep:
            raise ValueError("Field 'image' data URL is malformed.")
        if not header.lower().startswith("data:image/png;base64"):
            raise ValueError("Field 'image' must be a PNG data URL when using data:* format.")
        encoded = data_part

    try:
        png_bytes = base64.b64decode(encoded, validate=True)
    except (binascii.Error, ValueError) as exc:
        raise ValueError("Field 'image' is not valid base64 PNG data.") from exc

    if not png_bytes:
        raise ValueError("Field 'image' decoded to empty PNG data.")

    try:
        with Image.open(io.BytesIO(png_bytes)) as img:
            img = img.convert("RGB")
            tensor = pil_to_tensor(img)
        return ensure_contiguous(tensor)
    except Exception as exc:
        raise ValueError(f"Failed to decode PNG image payload: {exc}") from exc


def _decode_audio_payload(audio_payload):
    """Decode canonical audio payload into an AUDIO dict."""
    from ..utils.audio_payload import decode_audio_payload

    return decode_audio_payload(audio_payload)


@server.PromptServer.instance.routes.post("/distributed/prepare_job")
async def prepare_job_endpoint(request):
    try:
        data = await request.json()
        multi_job_id = data.get('multi_job_id')
        if not multi_job_id:
            return await handle_api_error(request, "Missing multi_job_id", 400)

        ensure_distributed_state()
        async with prompt_server.distributed_jobs_lock:
            if multi_job_id not in prompt_server.distributed_pending_jobs:
                prompt_server.distributed_pending_jobs[multi_job_id] = asyncio.Queue()
        
        debug_log(f"Prepared queue for job {multi_job_id}")
        return web.json_response({"status": "success"})
    except Exception as e:
        return await handle_api_error(request, e)

@server.PromptServer.instance.routes.post("/distributed/clear_memory")
async def clear_memory_endpoint(request):
    debug_log("Received request to clear VRAM.")
    try:
        # Use ComfyUI's prompt server queue system like the /free endpoint does
        if hasattr(server.PromptServer.instance, 'prompt_queue'):
            server.PromptServer.instance.prompt_queue.set_flag("unload_models", True)
            server.PromptServer.instance.prompt_queue.set_flag("free_memory", True)
            debug_log("Set queue flags for memory clearing.")
        
        # Wait a bit for the queue to process
        await asyncio.sleep(MEMORY_CLEAR_DELAY)
        
        # Also do direct cleanup as backup, but with error handling
        import gc
        import comfy.model_management as mm
        
        try:
            mm.unload_all_models()
        except AttributeError as e:
            debug_log(f"Warning during model unload: {e}")
        
        try:
            mm.soft_empty_cache()
        except Exception as e:
            debug_log(f"Warning during cache clear: {e}")
        
        for _ in range(3):
            gc.collect()
        
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
            torch.cuda.ipc_collect()
        
        debug_log("VRAM cleared successfully.")
        return web.json_response({"status": "success", "message": "GPU memory cleared."})
    except Exception as e:
        # Even if there's an error, try to do basic cleanup
        import gc
        gc.collect()
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
        debug_log(f"Partial VRAM clear completed with warning: {e}")
        return web.json_response({"status": "success", "message": "GPU memory cleared (with warnings)"})


@server.PromptServer.instance.routes.post("/distributed/queue")
async def distributed_queue_endpoint(request):
    """Queue a distributed workflow, mirroring the UI orchestration pipeline."""
    try:
        raw_payload = await request.json()
    except Exception as exc:
        return await handle_api_error(request, f"Invalid JSON payload: {exc}", 400)

    try:
        payload = parse_queue_request_payload(raw_payload)
    except ValueError as exc:
        return await handle_api_error(request, exc, 400)

    try:
        prompt_id, prompt_number, worker_count, node_errors = await orchestrate_distributed_execution(
            payload.prompt,
            payload.workflow_meta,
            payload.client_id,
            enabled_worker_ids=payload.enabled_worker_ids,
            delegate_master=payload.delegate_master,
            trace_execution_id=payload.trace_execution_id,
        )
        return web.json_response({
            "prompt_id": prompt_id,
            "number": prompt_number,
            "node_errors": node_errors,
            "worker_count": worker_count,
            "auto_prepare_supported": True,
        })
    except Exception as exc:
        return await handle_api_error(request, exc, 500)

@server.PromptServer.instance.routes.post("/distributed/load_image")
async def load_image_endpoint(request):
    """Load an image or video file and return it as base64 data with hash."""
    try:
        data = await request.json()
        image_path = data.get("image_path")
        
        if not image_path:
            return await handle_api_error(request, "Missing image_path", 400)
        loop = asyncio.get_running_loop()
        payload = await loop.run_in_executor(None, _decode_image_sync, image_path)
        return web.json_response(payload)
    except FileNotFoundError:
        return await handle_api_error(request, f"File not found: {image_path}", 404)
    except Exception as e:
        return await handle_api_error(request, e, 500)

@server.PromptServer.instance.routes.post("/distributed/check_file")
async def check_file_endpoint(request):
    """Check if a file exists and matches the given hash."""
    try:
        data = await request.json()
        filename = data.get("filename")
        expected_hash = data.get("hash")
        
        if not filename or not expected_hash:
            return await handle_api_error(request, "Missing filename or hash", 400)
        loop = asyncio.get_running_loop()
        payload = await loop.run_in_executor(None, _check_file_sync, filename, expected_hash)
        return web.json_response(payload)
        
    except Exception as e:
        return await handle_api_error(request, e, 500)


@server.PromptServer.instance.routes.post("/distributed/job_complete")
async def job_complete_endpoint(request):
    try:
        data = await request.json()
    except Exception as exc:
        return await handle_api_error(request, f"Invalid JSON payload: {exc}", 400)

    if not isinstance(data, dict):
        return await handle_api_error(request, "Expected a JSON object body", 400)

    try:
        job_id = data.get("job_id")
        worker_id = data.get("worker_id")
        batch_idx = data.get("batch_idx")
        image_payload = data.get("image")
        audio_payload = data.get("audio")
        is_last = data.get("is_last")

        errors = []
        if not isinstance(job_id, str) or not job_id.strip():
            errors.append("job_id: expected non-empty string")
        if not isinstance(worker_id, str) or not worker_id.strip():
            errors.append("worker_id: expected non-empty string")
        if not isinstance(batch_idx, int) or batch_idx < 0:
            errors.append("batch_idx: expected non-negative integer")
        if not isinstance(image_payload, str) or not image_payload.strip():
            errors.append("image: expected non-empty base64 PNG string")
        if audio_payload is not None and not isinstance(audio_payload, dict):
            errors.append("audio: expected object when provided")
        if not isinstance(is_last, bool):
            errors.append("is_last: expected boolean")
        if errors:
            return await handle_api_error(request, errors, 400)

        tensor = _decode_canonical_png_tensor(image_payload)
        decoded_audio = _decode_audio_payload(audio_payload) if audio_payload is not None else None
        multi_job_id = job_id.strip()
        worker_id = worker_id.strip()

        pending = None
        queue_size = 0
        deadline = time.monotonic() + float(JOB_INIT_GRACE_PERIOD)
        while pending is None:
            async with prompt_server.distributed_jobs_lock:
                pending = prompt_server.distributed_pending_jobs.get(multi_job_id)
                if pending is not None:
                    await pending.put(
                        {
                            "tensor": tensor,
                            "worker_id": worker_id,
                            "image_index": int(batch_idx),
                            "is_last": is_last,
                            "audio": decoded_audio,
                        }
                    )
                    queue_size = pending.qsize()
                    break

            if time.monotonic() > deadline:
                return await handle_api_error(request, "job not initialized", 404)
            await asyncio.sleep(0.05)

        debug_log(
            f"job_complete received canonical envelope - job_id: {multi_job_id}, "
            f"worker: {worker_id}, batch_idx: {batch_idx}, is_last: {is_last}, "
            f"queue_size: {queue_size}"
        )

        return web.json_response({"status": "success"})
    except Exception as e:
        return await handle_api_error(request, e)


================================================
FILE: api/orchestration/__init__.py
================================================
# Orchestration helpers split out from queue_orchestration.py:
# - prompt_transform.py: graph pruning + hidden input overrides
# - media_sync.py: remote media/path normalization
# - dispatch.py: worker probe + prompt dispatch


================================================
FILE: api/orchestration/dispatch.py
================================================
import asyncio
import json
import uuid

import aiohttp

from ...utils.logging import debug_log, log
from ...utils.network import build_worker_url, get_client_session, probe_worker
try:
    from ...utils.trace_logger import trace_debug, trace_info
except ImportError:
    def trace_debug(*_args, **_kwargs):
        return None

    def trace_info(*_args, **_kwargs):
        return None

try:
    from ..schemas import parse_positive_int
except ImportError:
    def parse_positive_int(value, default):
        try:
            parsed = int(value)
            return parsed if parsed > 0 else default
        except (TypeError, ValueError):
            return default

_least_busy_rr_index = 0


async def worker_is_active(worker):
    """Ping worker's /prompt endpoint to confirm it's reachable."""
    url = build_worker_url(worker)
    return await probe_worker(url, timeout=3.0) is not None


async def worker_ws_is_active(worker):
    """Ping worker's websocket endpoint to confirm it's reachable."""
    session = await get_client_session()
    url = build_worker_url(worker, "/distributed/worker_ws")
    try:
        ws = await session.ws_connect(url, heartbeat=20, timeout=3)
        await ws.close()
        return True
    except asyncio.TimeoutError:
        debug_log(f"[Distributed] Worker WS probe timed out: {url}")
        return False
    except aiohttp.ClientConnectorError:
        debug_log(f"[Distributed] Worker WS unreachable: {url}")
        return False
    except Exception as e:
        debug_log(f"[Distributed] Worker WS probe unexpected error: {e}")
        return False


async def _probe_worker_active(worker, use_websocket, semaphore):
    async with semaphore:
        is_active = await (worker_ws_is_active(worker) if use_websocket else worker_is_active(worker))
        return worker, is_active


async def _dispatch_via_websocket(worker_url, payload, client_id, timeout=60.0):
    """Open a fresh worker websocket, dispatch one prompt, wait for ack, then close."""
    request_id = uuid.uuid4().hex
    ws_payload = {
        "type": "dispatch_prompt",
        "request_id": request_id,
        "prompt": payload.get("prompt"),
        "workflow": payload.get("workflow"),
        "client_id": client_id,
    }
    ws_url = worker_url.replace("http://", "ws://").replace("https://", "wss://")
    ws_url = f"{ws_url}/distributed/worker_ws"
    session = await get_client_session()

    async with session.ws_connect(ws_url, heartbeat=20, timeout=timeout) as ws:
        await ws.send_json(ws_payload)
        async for msg in ws:
            if msg.type == aiohttp.WSMsgType.TEXT:
                data = json.loads(msg.data or "{}")
                if data.get("type") == "dispatch_ack" and data.get("request_id") == request_id:
                    if data.get("ok"):
                        return
                    error_text = data.get("error") or "Worker rejected websocket dispatch."
                    validation_error = data.get("validation_error")
                    node_errors = data.get("node_errors")
                    if validation_error:
                        error_text = f"{error_text} | validation_error={validation_error}"
                    if node_errors:
                        error_text = f"{error_text} | node_errors={node_errors}"
                    raise RuntimeError(error_text)
            elif msg.type in (aiohttp.WSMsgType.ERROR, aiohttp.WSMsgType.CLOSED):
                raise RuntimeError(f"Worker websocket closed unexpectedly: {msg.type}")

    raise RuntimeError("Worker websocket closed before dispatch_ack was received.")


async def dispatch_worker_prompt(
    worker,
    prompt_obj,
    workflow_meta,
    client_id=None,
    use_websocket=False,
    trace_execution_id=None,
):
    """Send the prepared prompt to a worker ComfyUI instance."""
    worker_url = build_worker_url(worker)
    url = build_worker_url(worker, "/prompt")
    payload = {"prompt": prompt_obj}
    extra_data = {}
    if workflow_meta:
        extra_data.setdefault("extra_pnginfo", {})["workflow"] = workflow_meta
    if extra_data:
        payload["extra_data"] = extra_data

    if use_websocket:
        try:
            await _dispatch_via_websocket(
                worker_url,
                {
                    "prompt": prompt_obj,
                    "workflow": workflow_meta,
                },
                client_id,
            )
            return
        except Exception as exc:
            worker_id = worker.get("id")
            if trace_execution_id:
                trace_info(trace_execution_id, f"Websocket dispatch failed for worker {worker_id}: {exc}")
            else:
                log(f"[Distributed] Websocket dispatch failed for worker {worker_id}: {exc}")
            raise

    session = await get_client_session()
    async with session.post(
        url,
        json=payload,
        timeout=aiohttp.ClientTimeout(total=60),
    ) as resp:
        resp.raise_for_status()
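

# Editor's sketch (hypothetical worker and prompt, defined but never called):
# plain HTTP dispatch; pass use_websocket=True to exercise the websocket path.
async def _example_dispatch_worker_prompt():
    worker = {"id": "gpu-1", "name": "GPU 1", "host": "10.0.0.5", "port": 8189}
    prompt = {"1": {"class_type": "KSampler", "inputs": {}}}
    await dispatch_worker_prompt(worker, prompt, workflow_meta=None, client_id="ui-1")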


async def select_active_workers(
    workers,
    use_websocket,
    delegate_master,
    trace_execution_id=None,
    probe_concurrency=8,
):
    """Probe workers and return (active_workers, updated_delegate_master)."""
    probe_limit = parse_positive_int(probe_concurrency, 8)
    probe_semaphore = asyncio.Semaphore(probe_limit)

    if trace_execution_id and workers:
        trace_debug(
            trace_execution_id,
            f"Probing {len(workers)} workers with probe_concurrency={probe_limit}",
        )

    probe_results = await asyncio.gather(
        *[
            _probe_worker_active(worker, use_websocket, probe_semaphore)
            for worker in workers
        ]
    )

    active_workers = []
    for worker, is_active in probe_results:
        if is_active:
            active_workers.append(worker)
        else:
            if trace_execution_id:
                trace_info(trace_execution_id, f"Worker {worker['name']} ({worker['id']}) is offline, skipping.")
            else:
                log(f"[Distributed] Worker {worker['name']} ({worker['id']}) is offline, skipping.")

    if trace_execution_id and workers:
        trace_debug(
            trace_execution_id,
            f"Worker probe complete: active={len(active_workers)}/{len(workers)}",
        )

    if not active_workers and delegate_master:
        if trace_execution_id:
            trace_debug(trace_execution_id, "All workers offline while delegate-only requested; enabling master participation.")
        else:
            debug_log("All workers offline while delegate-only requested; enabling master participation.")
        delegate_master = False

    return active_workers, delegate_master
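

# Editor's sketch (invented hosts, never called): probing workers before
# dispatch. When every worker is offline and delegate-only was requested,
# the helper re-enables master participation, as implemented above.
async def _example_select_active_workers():
    workers = [
        {"id": "gpu-1", "name": "GPU 1", "host": "10.0.0.5", "port": 8189, "type": "local"},
        {"id": "gpu-2", "name": "GPU 2", "host": "10.0.0.6", "port": 8189, "type": "remote"},
    ]
    active, delegate_master = await select_active_workers(
        workers, use_websocket=False, delegate_master=True, probe_concurrency=4
    )
    return active, delegate_master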


def _extract_queue_remaining(payload):
    if not isinstance(payload, dict):
        return 0
    try:
        queue_remaining = int(payload.get("exec_info", {}).get("queue_remaining", 0))
    except (TypeError, ValueError):
        queue_remaining = 0
    return max(queue_remaining, 0)


async def _probe_worker_queue(worker, semaphore, probe_timeout):
    async with semaphore:
        worker_url = build_worker_url(worker)
        payload = await probe_worker(worker_url, timeout=probe_timeout)
        if payload is None:
            return None
        return {
            "worker": worker,
            "queue_remaining": _extract_queue_remaining(payload),
        }


def _select_idle_round_robin(statuses):
    global _least_busy_rr_index
    if not statuses:
        return None
    index = _least_busy_rr_index % len(statuses)
    _least_busy_rr_index += 1
    return statuses[index]


async def select_least_busy_worker(
    workers,
    trace_execution_id=None,
    probe_concurrency=8,
    probe_timeout=3.0,
):
    """Select one worker by queue depth, round-robin among idle workers."""
    if not workers:
        return None

    probe_limit = parse_positive_int(probe_concurrency, 8)
    probe_semaphore = asyncio.Semaphore(probe_limit)
    statuses = await asyncio.gather(
        *[
            _probe_worker_queue(worker, probe_semaphore, probe_timeout)
            for worker in workers
        ]
    )
    statuses = [status for status in statuses if status is not None]
    if not statuses:
        if trace_execution_id:
            trace_info(trace_execution_id, "Least-busy selection failed: no worker queue probes succeeded.")
        else:
            log("[Distributed] Least-busy selection failed: no worker queue probes succeeded.")
        return None

    idle_statuses = [status for status in statuses if status["queue_remaining"] == 0]
    if idle_statuses:
        selected = _select_idle_round_robin(idle_statuses)
    else:
        selected = min(statuses, key=lambda status: status["queue_remaining"])

    worker = selected["worker"]
    queue_remaining = selected["queue_remaining"]
    if trace_execution_id:
        trace_debug(
            trace_execution_id,
            f"Least-busy worker selected: {worker.get('name')} ({worker.get('id')}), queue_remaining={queue_remaining}",
        )
    else:
        debug_log(
            f"Least-busy worker selected: {worker.get('name')} ({worker.get('id')}), queue_remaining={queue_remaining}"
        )
    return worker
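

# Editor's sketch (invented hosts, never called): among idle workers
# (queue_remaining == 0) the module-level round-robin index alternates picks;
# otherwise the shallowest queue wins. Returns None if every probe fails.
async def _example_select_least_busy_worker():
    workers = [
        {"id": "gpu-1", "name": "GPU 1", "host": "10.0.0.5", "port": 8189},
        {"id": "gpu-2", "name": "GPU 2", "host": "10.0.0.6", "port": 8189},
    ]
    return await select_least_busy_worker(workers, probe_timeout=2.0)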


================================================
FILE: api/orchestration/media_sync.py
================================================
import asyncio
import hashlib
import mimetypes
import os
import re

import aiohttp

from ...utils.logging import debug_log, log
from ...utils.network import build_worker_url, get_client_session
from ...utils.trace_logger import trace_debug, trace_info


LIKELY_FILENAME_RE = re.compile(
    r"\.(ckpt|safetensors|pt|pth|bin|yaml|json|png|jpg|jpeg|webp|gif|bmp|mp4|avi|mov|mkv|webm|"
    r"wav|mp3|flac|m4a|aac|ogg|opus|aiff|aif|wma|latent|txt|vae|lora|embedding)"
    r"(\s*\[\w+\])?$",
    re.IGNORECASE,
)
MEDIA_FILE_RE = re.compile(
    r"\.(png|jpg|jpeg|webp|gif|bmp|mp4|avi|mov|mkv|webm|wav|mp3|flac|m4a|aac|ogg|opus|aiff|aif|wma)(\s*\[\w+\])?$",
    re.IGNORECASE,
)


def _normalize_media_reference(value):
    """Normalize one media string value to a path-like reference or None."""
    if not isinstance(value, str):
        return None
    cleaned = re.sub(r"\s*\[\w+\]$", "", value).strip().replace("\\", "/")
    if MEDIA_FILE_RE.search(cleaned):
        return cleaned
    return None
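

# Editor's sketch (invented values): expected behavior of the helper above.
def _example_normalize_media_reference():
    assert _normalize_media_reference("subdir\\clip.mp4 [input]") == "subdir/clip.mp4"
    assert _normalize_media_reference("model.safetensors") is None  # not a media extension
    assert _normalize_media_reference(42) is None  # non-strings are ignored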


def convert_paths_for_platform(obj, target_separator):
    """Recursively normalize likely file paths for the worker platform separator."""
    if target_separator not in ("/", "\\"):
        return obj

    def _convert(value):
        if isinstance(value, str):
            if ("/" in value or "\\" in value) and LIKELY_FILENAME_RE.search(value):
                trimmed = value.strip()
                has_drive = bool(re.match(r"^[A-Za-z]:(\\\\|/)", trimmed))
                is_absolute = trimmed.startswith("/") or trimmed.startswith("\\\\")
                has_protocol = bool(re.match(r"^\w+://", trimmed))

                # URLs are not local paths and should never be separator-normalized.
                if has_protocol:
                    return trimmed

                # Keep relative media-style paths in forward-slash form (Comfy-style annotated paths).
                if not has_drive and not is_absolute and not has_protocol and MEDIA_FILE_RE.search(trimmed):
                    return re.sub(r"[\\]+", "/", trimmed)

                if target_separator == "\\":
                    return re.sub(r"[\\/]+", r"\\", trimmed)
                return re.sub(r"[\\/]+", "/", trimmed)
            return value
        if isinstance(value, list):
            return [_convert(item) for item in value]
        if isinstance(value, dict):
            return {key: _convert(item) for key, item in value.items()}
        return value

    return _convert(obj)
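

# Editor's sketch (invented paths): the three branches above in action.
def _example_convert_paths_for_platform():
    # Drive-letter paths are rewritten to the worker's separator...
    assert convert_paths_for_platform("C:/models/pic.png", "\\") == "C:\\models\\pic.png"
    # ...relative media references keep Comfy's forward-slash form...
    assert convert_paths_for_platform("inputs/clip.mp4", "\\") == "inputs/clip.mp4"
    # ...and URLs are never separator-normalized.
    assert convert_paths_for_platform("https://host/pic.png", "\\") == "https://host/pic.png"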


def _find_media_references(prompt_obj):
    """Find media file references in image/video/audio/file inputs used by worker prompts."""
    media_refs = set()
    for node in prompt_obj.values():
        if not isinstance(node, dict):
            continue
        inputs = node.get("inputs", {})
        for key in ("image", "video", "audio", "file"):
            cleaned = _normalize_media_reference(inputs.get(key))
            if cleaned:
                media_refs.add(cleaned)
    return sorted(media_refs)


def _rewrite_prompt_media_inputs(prompt_obj, worker_media_paths):
    """Rewrite media string inputs to worker-local uploaded paths."""
    if not isinstance(worker_media_paths, dict) or not worker_media_paths:
        return

    for node in prompt_obj.values():
        if not isinstance(node, dict):
            continue
        inputs = node.get("inputs", {})
        if not isinstance(inputs, dict):
            continue
        for key in ("image", "video", "audio", "file"):
            value = inputs.get(key)
            cleaned = _normalize_media_reference(value)
            if not cleaned:
                continue
            worker_path = worker_media_paths.get(cleaned)
            if worker_path:
                inputs[key] = worker_path
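

# Editor's sketch (invented filenames): how references are discovered and then
# rewritten to the worker-local paths returned by the upload step below.
def _example_media_reference_roundtrip():
    prompt = {"1": {"class_type": "LoadImage", "inputs": {"image": "photos\\cat.png [input]"}}}
    assert _find_media_references(prompt) == ["photos/cat.png"]
    _rewrite_prompt_media_inputs(prompt, {"photos/cat.png": "photos/cat.png"})
    assert prompt["1"]["inputs"]["image"] == "photos/cat.png"  # annotation stripped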


def _load_media_file_sync(filename):
    """Load local media bytes and hash for worker upload sync."""
    import folder_paths

    full_path = folder_paths.get_annotated_filepath(filename)
    if not os.path.exists(full_path):
        raise FileNotFoundError(filename)

    with open(full_path, "rb") as f:
        file_bytes = f.read()

    file_hash = hashlib.md5(file_bytes).hexdigest()
    mime_type = mimetypes.guess_type(full_path)[0]
    if not mime_type:
        ext = os.path.splitext(full_path)[1].lower()
        if ext in {".mp4", ".avi", ".mov", ".mkv", ".webm"}:
            mime_type = "video/mp4"
        else:
            mime_type = "image/png"
    return file_bytes, file_hash, mime_type


async def fetch_worker_path_separator(worker, trace_execution_id=None):
    """Best-effort fetch of a worker's path separator from /distributed/system_info."""
    url = build_worker_url(worker, "/distributed/system_info")
    session = await get_client_session()
    try:
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=5)) as resp:
            if resp.status != 200:
                return None
            payload = await resp.json()
            separator = ((payload or {}).get("platform") or {}).get("path_separator")
            return separator if separator in ("/", "\\") else None
    except Exception as exc:
        if trace_execution_id:
            trace_debug(trace_execution_id, f"Failed to fetch worker system info ({worker.get('id')}): {exc}")
        else:
            debug_log(f"[Distributed] Failed to fetch worker system info ({worker.get('id')}): {exc}")
        return None


async def _upload_media_to_worker(worker, filename, file_bytes, file_hash, mime_type, trace_execution_id=None):
    """Upload one media file to worker iff missing or hash-mismatched."""
    session = await get_client_session()
    normalized = filename.replace("\\", "/")

    check_url = build_worker_url(worker, "/distributed/check_file")
    try:
        async with session.post(
            check_url,
            json={"filename": normalized, "hash": file_hash},
            timeout=aiohttp.ClientTimeout(total=6),
        ) as resp:
            if resp.status == 200:
                payload = await resp.json()
                if payload.get("exists") and payload.get("hash_matches"):
                    return False, normalized
    except Exception as exc:
        if trace_execution_id:
            trace_debug(trace_execution_id, f"Media check failed for '{normalized}' on worker {worker.get('id')}: {exc}")
        else:
            debug_log(f"[Distributed] Media check failed for '{normalized}' on worker {worker.get('id')}: {exc}")

    parts = normalized.split("/")
    clean_name = parts[-1]
    subfolder = "/".join(parts[:-1])

    form = aiohttp.FormData()
    form.add_field("image", file_bytes, filename=clean_name, content_type=mime_type)
    form.add_field("type", "input")
    form.add_field("subfolder", subfolder)
    form.add_field("overwrite", "true")

    upload_url = build_worker_url(worker, "/upload/image")
    async with session.post(
        upload_url,
        data=form,
        timeout=aiohttp.ClientTimeout(total=30),
    ) as resp:
        resp.raise_for_status()
        try:
            payload = await resp.json()
        except Exception:
            payload = {}

    name = str((payload or {}).get("name") or clean_name).strip()
    subfolder = str((payload or {}).get("subfolder") or "").strip().replace("\\", "/").strip("/")
    worker_path = f"{subfolder}/{name}" if subfolder else name
    return True, worker_path


async def sync_worker_media(worker, prompt_obj, trace_execution_id=None):
    """Sync referenced media files from master to a remote worker before dispatch."""
    media_refs = _find_media_references(prompt_obj)
    if not media_refs:
        return

    loop = asyncio.get_running_loop()
    uploaded = 0
    skipped = 0
    missing = 0
    worker_media_paths = {}
    for filename in media_refs:
        try:
            file_bytes, file_hash, mime_type = await loop.run_in_executor(
                None, _load_media_file_sync, filename
            )
        except FileNotFoundError:
            missing += 1
            if trace_execution_id:
                trace_info(trace_execution_id, f"Media file '{filename}' not found on master; worker may fail to load it.")
            else:
                log(f"[Distributed] Media file '{filename}' not found on master; worker may fail to load it.")
            continue
        except Exception as exc:
            if trace_execution_id:
                trace_info(trace_execution_id, f"Failed to load media '{filename}' for worker sync: {exc}")
            else:
                log(f"[Distributed] Failed to load media '{filename}' for worker sync: {exc}")
            continue

        try:
            changed, worker_path = await _upload_media_to_worker(
                worker,
                filename,
                file_bytes,
                file_hash,
                mime_type,
                trace_execution_id=trace_execution_id,
            )
            if worker_path:
                worker_media_paths[filename] = worker_path
            if changed:
                uploaded += 1
            else:
                skipped += 1
        except Exception as exc:
            if trace_execution_id:
                trace_info(trace_execution_id, f"Failed to upload media '{filename}' to worker {worker.get('id')}: {exc}")
            else:
                log(f"[Distributed] Failed to upload media '{filename}' to worker {worker.get('id')}: {exc}")

    _rewrite_prompt_media_inputs(prompt_obj, worker_media_paths)

    summary = (
        f"Media sync for worker {worker.get('id')}: "
        f"uploaded={uploaded}, skipped={skipped}, missing={missing}, referenced={len(media_refs)}"
    )
    if trace_execution_id:
        trace_debug(trace_execution_id, summary)
    else:
        debug_log(f"[Distributed] {summary}")


================================================
FILE: api/orchestration/prompt_transform.py
================================================
import json
from collections import deque

from ...utils.logging import debug_log


class PromptIndex:
    """Cache prompt metadata for faster worker/master prompt preparation."""

    def __init__(self, prompt_obj):
        self._prompt_json = json.dumps(prompt_obj)
        self.nodes_by_class = {}
        self.class_by_node = {}
        self.inputs_by_node = {}
        for node_id, node in _iter_prompt_nodes(prompt_obj):
            class_type = node.get("class_type")
            node_id_str = str(node_id)
            if class_type:
                self.nodes_by_class.setdefault(class_type, []).append(node_id_str)
            self.class_by_node[node_id_str] = class_type
            self.inputs_by_node[node_id_str] = node.get("inputs", {})
        self._upstream_cache = {}

    def copy_prompt(self):
        return json.loads(self._prompt_json)

    def nodes_for_class(self, class_name):
        return self.nodes_by_class.get(class_name, [])

    def has_upstream(self, start_node_id, target_class):
        cache_key = (str(start_node_id), target_class)
        if cache_key in self._upstream_cache:
            return self._upstream_cache[cache_key]

        visited = set()
        stack = [str(start_node_id)]
        while stack:
            node_id = stack.pop()
            if node_id in visited:
                continue
            visited.add(node_id)
            inputs = self.inputs_by_node.get(node_id, {})
            for value in inputs.values():
                if isinstance(value, list) and len(value) == 2:
                    upstream_id = str(value[0])
                    if self.class_by_node.get(upstream_id) == target_class:
                        self._upstream_cache[cache_key] = True
                        return True
                    if upstream_id in self.inputs_by_node:
                        stack.append(upstream_id)

        self._upstream_cache[cache_key] = False
        return False
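

# Editor's sketch (invented two-node prompt): indexing and upstream queries.
def _example_prompt_index():
    prompt = {
        "1": {"class_type": "LoadImage", "inputs": {"image": "a.png"}},
        "2": {"class_type": "DistributedCollector", "inputs": {"images": ["1", 0]}},
    }
    index = PromptIndex(prompt)
    assert index.nodes_for_class("DistributedCollector") == ["2"]
    assert index.has_upstream("2", "LoadImage") is True
    assert index.copy_prompt() == prompt  # deep copy via the cached JSON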


def _iter_prompt_nodes(prompt_obj):
    for node_id, node in prompt_obj.items():
        if isinstance(node, dict):
            yield str(node_id), node


def find_nodes_by_class(prompt_obj, class_name):
    nodes = []
    for node_id, node in _iter_prompt_nodes(prompt_obj):
        if node.get("class_type") == class_name:
            nodes.append(node_id)
    return nodes


def _find_downstream_nodes(prompt_obj, start_ids):
    """Return all nodes reachable downstream from the provided IDs."""
    adjacency = {}
    for node_id, node in _iter_prompt_nodes(prompt_obj):
        inputs = node.get("inputs", {})
        for value in inputs.values():
            if isinstance(value, list) and len(value) == 2:
                source_id = str(value[0])
                adjacency.setdefault(source_id, set()).add(str(node_id))

    connected = set(start_ids)
    queue = deque(start_ids)
    while queue:
        current = queue.popleft()
        for dependent in adjacency.get(current, ()):  # pragma: no branch - simple iteration
            if dependent not in connected:
                connected.add(dependent)
                queue.append(dependent)
    return connected


def _create_numeric_id_generator(prompt_obj):
    """Return a closure that yields new numeric string IDs."""
    max_id = 0
    for node_id in prompt_obj.keys():
        try:
            numeric = int(node_id)
        except (TypeError, ValueError):
            continue
        max_id = max(max_id, numeric)

    counter = max_id

    def _next_id():
        nonlocal counter
        counter += 1
        return str(counter)

    return _next_id


def _find_upstream_nodes(prompt_obj, start_ids):
    """Return all nodes reachable upstream from start_ids, including start nodes."""
    connected = set(str(node_id) for node_id in start_ids)
    queue = deque(connected)
    while queue:
        node_id = queue.popleft()
        node = prompt_obj.get(node_id) or {}
        inputs = node.get("inputs", {})
        for value in inputs.values():
            if isinstance(value, list) and len(value) == 2:
                source_id = str(value[0])
                if source_id in prompt_obj and source_id not in connected:
                    connected.add(source_id)
                    queue.append(source_id)
    return connected


def prune_prompt_for_worker(prompt_obj):
    """Prune worker prompt to distributed nodes and their upstream dependencies."""
    collector_ids = find_nodes_by_class(prompt_obj, "DistributedCollector")
    upscale_ids = find_nodes_by_class(prompt_obj, "UltimateSDUpscaleDistributed")
    distributed_ids = collector_ids + upscale_ids
    if not distributed_ids:
        return prompt_obj

    connected = _find_upstream_nodes(prompt_obj, distributed_ids)
    pruned_prompt = {}
    for node_id in connected:
        node = prompt_obj.get(node_id)
        if node is not None:
            pruned_prompt[node_id] = json.loads(json.dumps(node))

    # Generate IDs from the original prompt so we never reuse IDs from pruned downstream nodes.
    next_id = _create_numeric_id_generator(prompt_obj)
    for dist_id in distributed_ids:
        if dist_id not in pruned_prompt:
            continue
        downstream = _find_downstream_nodes(prompt_obj, [dist_id])
        has_removed_downstream = any(node_id != dist_id for node_id in downstream)
        if has_removed_downstream:
            preview_id = next_id()
            pruned_prompt[preview_id] = {
                "inputs": {
                    "images": [dist_id, 0],
                },
                "class_type": "PreviewImage",
                "_meta": {
                    "title": "Preview Image (auto-added)",
                },
            }

    return pruned_prompt
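

# Editor's sketch (invented three-node graph): downstream nodes are dropped and
# a PreviewImage output is auto-added for the worker.
def _example_prune_prompt_for_worker():
    prompt = {
        "1": {"class_type": "LoadImage", "inputs": {"image": "a.png"}},
        "2": {"class_type": "DistributedCollector", "inputs": {"images": ["1", 0]}},
        "3": {"class_type": "SaveImage", "inputs": {"images": ["2", 0]}},
    }
    pruned = prune_prompt_for_worker(prompt)
    assert "3" not in pruned  # downstream of the collector is removed
    assert pruned["4"]["class_type"] == "PreviewImage"  # auto-added output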


def prepare_delegate_master_prompt(prompt_obj, collector_ids):
    """Prune master prompt so it only executes post-collector nodes in delegate mode."""
    downstream = _find_downstream_nodes(prompt_obj, collector_ids)
    nodes_to_keep = set(collector_ids)
    nodes_to_keep.update(downstream)

    pruned_prompt = {}
    for node_id in nodes_to_keep:
        node = prompt_obj.get(node_id)
        if node is not None:
            pruned_prompt[node_id] = json.loads(json.dumps(node))

    pruned_ids = set(pruned_prompt.keys())
    for node_id, node in pruned_prompt.items():
        inputs = node.get("inputs")
        if not inputs:
            continue
        for input_name, input_value in list(inputs.items()):
            if isinstance(input_value, list) and len(input_value) == 2:
                source_id = str(input_value[0])
                if source_id not in pruned_ids:
                    inputs.pop(input_name, None)
                    debug_log(
                        f"Removed upstream reference '{input_name}' from node {node_id} for delegate-only master prompt."
                    )

    # Generate IDs from the original prompt to avoid ID collisions with pruned nodes.
    next_id = _create_numeric_id_generator(prompt_obj)
    for collector_id in collector_ids:
        collector_entry = pruned_prompt.get(collector_id)
        if not collector_entry:
            continue
        placeholder_id = next_id()
        pruned_prompt[placeholder_id] = {
            "class_type": "DistributedEmptyImage",
            "inputs": {
                "height": 64,
                "width": 64,
                "channels": 3,
            },
            "_meta": {
                "title": "Distributed Empty Image (auto-added)",
            },
        }
        collector_entry.setdefault("inputs", {})["images"] = [placeholder_id, 0]
        debug_log(
            f"Inserted placeholder node {placeholder_id} for collector {collector_id} in delegate-only master prompt."
        )

    return pruned_prompt
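

# Editor's sketch (invented graph): in delegate mode the upstream work is cut
# away and the collector is fed by an auto-added DistributedEmptyImage.
def _example_prepare_delegate_master_prompt():
    prompt = {
        "1": {"class_type": "LoadImage", "inputs": {"image": "a.png"}},
        "2": {"class_type": "DistributedCollector", "inputs": {"images": ["1", 0]}},
        "3": {"class_type": "SaveImage", "inputs": {"images": ["2", 0]}},
    }
    pruned = prepare_delegate_master_prompt(prompt, ["2"])
    assert "1" not in pruned  # upstream work is delegated to workers
    assert pruned["2"]["inputs"]["images"] == ["4", 0]  # placeholder feeds collector
    assert pruned["4"]["class_type"] == "DistributedEmptyImage"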


def generate_job_id_map(prompt_index, prefix):
    """Create stable per-node job IDs for distributed nodes."""
    job_map = {}
    distributed_nodes = (
        prompt_index.nodes_for_class("DistributedCollector")
        + prompt_index.nodes_for_class("UltimateSDUpscaleDistributed")
    )
    for node_id in distributed_nodes:
        job_map[node_id] = f"{prefix}_{node_id}"
    return job_map


def _override_seed_nodes(prompt_copy, prompt_index, is_master, participant_id, worker_index_map):
    """Configure DistributedSeed nodes for master or worker role."""
    for node_id in prompt_index.nodes_for_class("DistributedSeed"):
        node = prompt_copy.get(node_id)
        if not isinstance(node, dict):
            continue
        inputs = node.setdefault("inputs", {})
        inputs["is_worker"] = not is_master
        if is_master:
            inputs["worker_id"] = ""
        else:
            inputs["worker_id"] = f"worker_{worker_index_map.get(participant_id, 0)}"


def _override_collector_nodes(
    prompt_copy,
    prompt_index,
    is_master,
    participant_id,
    job_id_map,
    master_url,
    enabled_json,
    delegate_master,
):
    """Configure DistributedCollector nodes for master or worker role."""
    for node_id in prompt_index.nodes_for_class("DistributedCollector"):
        node = prompt_copy.get(node_id)
        if not isinstance(node, dict):
            continue

        if prompt_index.has_upstream(node_id, "UltimateSDUpscaleDistributed"):
            node.setdefault("inputs", {})["pass_through"] = True
            continue

        inputs = node.setdefault("inputs", {})
        inputs["multi_job_id"] = job_id_map.get(node_id, node_id)
        inputs["is_worker"] = not is_master
        inputs["enabled_worker_ids"] = enabled_json
        if is_master:
            inputs["delegate_only"] = bool(delegate_master)
            inputs.pop("master_url", None)
            inputs.pop("worker_id", None)
        else:
            inputs["master_url"] = master_url
            inputs["worker_id"] = participant_id
            inputs["delegate_only"] = False


def _override_upscale_nodes(
    prompt_copy,
    prompt_index,
    is_master,
    participant_id,
    job_id_map,
    master_url,
    enabled_json,
):
    """Configure UltimateSDUpscaleDistributed nodes for master or worker role."""
    for node_id in prompt_index.nodes_for_class("UltimateSDUpscaleDistributed"):
        node = prompt_copy.get(node_id)
        if not isinstance(node, dict):
            continue
        inputs = node.setdefault("inputs", {})
        inputs["multi_job_id"] = job_id_map.get(node_id, node_id)
        inputs["is_worker"] = not is_master
        inputs["enabled_worker_ids"] = enabled_json
        if is_master:
            inputs.pop("master_url", None)
            inputs.pop("worker_id", None)
        else:
            inputs["master_url"] = master_url
            inputs["worker_id"] = participant_id


def _override_value_nodes(prompt_copy, prompt_index, is_master, participant_id, worker_index_map):
    """Configure DistributedValue nodes for master or worker role."""
    for node_id in prompt_index.nodes_for_class("DistributedValue"):
        node = prompt_copy.get(node_id)
        if not isinstance(node, dict):
            continue
        inputs = node.setdefault("inputs", {})
        inputs["is_worker"] = not is_master
        if is_master:
            inputs["worker_id"] = ""
        else:
            inputs["worker_id"] = f"worker_{worker_index_map.get(participant_id, 0)}"


def apply_participant_overrides(
    prompt_copy,
    participant_id,
    enabled_worker_ids,
    job_id_map,
    master_url,
    delegate_master,
    prompt_index,
):
    """Return a prompt copy with hidden inputs configured for master/worker."""
    is_master = participant_id == "master"
    worker_index_map = {wid: idx for idx, wid in enumerate(enabled_worker_ids)}
    enabled_json = json.dumps(enabled_worker_ids)

    _override_seed_nodes(prompt_copy, prompt_index, is_master, participant_id, worker_index_map)
    _override_value_nodes(prompt_copy, prompt_index, is_master, participant_id, worker_index_map)
    _override_collector_nodes(
        prompt_copy,
        prompt_index,
        is_master,
        participant_id,
        job_id_map,
        master_url,
        enabled_json,
        delegate_master,
    )
    _override_upscale_nodes(
        prompt_copy,
        prompt_index,
        is_master,
        participant_id,
        job_id_map,
        master_url,
        enabled_json,
    )

    return prompt_copy
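

# Editor's sketch (invented ids): the same prompt configured for a worker role.
def _example_apply_participant_overrides():
    prompt = {"1": {"class_type": "DistributedCollector", "inputs": {}}}
    index = PromptIndex(prompt)
    worker_prompt = apply_participant_overrides(
        index.copy_prompt(),
        "worker-a",            # participant_id
        ["worker-a"],          # enabled_worker_ids
        {"1": "job_1"},        # job_id_map
        "http://master:8188",  # master_url
        False,                 # delegate_master
        index,
    )
    inputs = worker_prompt["1"]["inputs"]
    assert inputs["is_worker"] is True
    assert inputs["master_url"] == "http://master:8188"
    assert inputs["multi_job_id"] == "job_1"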


================================================
FILE: api/queue_orchestration.py
================================================
import asyncio
import time
import uuid

import server

from ..utils.async_helpers import queue_prompt_payload
from ..utils.config import load_config
from ..utils.constants import (
    ORCHESTRATION_MEDIA_SYNC_CONCURRENCY,
    ORCHESTRATION_MEDIA_SYNC_TIMEOUT,
    ORCHESTRATION_WORKER_PROBE_CONCURRENCY,
    ORCHESTRATION_WORKER_PREP_CONCURRENCY,
)
from ..utils.logging import debug_log, log
from ..utils.network import build_master_url, build_master_callback_url
from ..utils.trace_logger import trace_debug
from .schemas import parse_positive_float, parse_positive_int
from .orchestration.dispatch import (
    dispatch_worker_prompt,
    select_active_workers,
    select_least_busy_worker,
)
from .orchestration.media_sync import convert_paths_for_platform, fetch_worker_path_separator, sync_worker_media
from .orchestration.prompt_transform import (
    PromptIndex,
    apply_participant_overrides,
    find_nodes_by_class,
    generate_job_id_map,
    prepare_delegate_master_prompt,
    prune_prompt_for_worker,
)


prompt_server = server.PromptServer.instance


def _generate_execution_trace_id():
    return f"exec_{int(time.time() * 1000)}_{uuid.uuid4().hex[:6]}"


def ensure_distributed_state(server_instance=None):
    """Ensure prompt_server has the state used by distributed queue orchestration."""
    ps = server_instance or prompt_server
    if not hasattr(ps, "distributed_pending_jobs"):
        ps.distributed_pending_jobs = {}
    if not hasattr(ps, "distributed_jobs_lock"):
        ps.distributed_jobs_lock = asyncio.Lock()


# Initialize top-level distributed queue state at module import time.
ensure_distributed_state()


async def _ensure_distributed_queue(job_id):
    """Ensure a queue exists for the given distributed job ID."""
    ensure_distributed_state()
    async with prompt_server.distributed_jobs_lock:
        if job_id not in prompt_server.distributed_pending_jobs:
            prompt_server.distributed_pending_jobs[job_id] = asyncio.Queue()


def _resolve_enabled_workers(config, requested_ids=None):
    """Return a list of worker configs that should participate."""
    workers = []
    for worker in config.get("workers", []):
        worker_id = str(worker.get("id") or "").strip()
        if not worker_id:
            continue

        if requested_ids is not None:
            if worker_id not in requested_ids:
                continue
        elif not worker.get("enabled", False):
            continue

        raw_port = worker.get("port", worker.get("listen_port", 8188))
        try:
            port = int(raw_port or 8188)
        except (TypeError, ValueError):
            log(f"[Distributed] Invalid port '{raw_port}' for worker {worker_id}; defaulting to 8188.")
            port = 8188

        workers.append(
            {
                "id": worker_id,
                "name": worker.get("name", worker_id),
                "host": worker.get("host"),
                "port": port,
                "type": worker.get("type", "local"),
            }
        )
    return workers
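

# Editor's sketch (invented config): the enabled flag gates participation
# unless an explicit worker list is requested.
def _example_resolve_enabled_workers():
    config = {
        "workers": [
            {"id": "gpu-1", "name": "GPU 1", "host": "10.0.0.5", "port": "8189", "enabled": True},
            {"id": "gpu-2", "host": "10.0.0.6", "enabled": False},
        ]
    }
    assert [w["id"] for w in _resolve_enabled_workers(config)] == ["gpu-1"]
    assert [w["id"] for w in _resolve_enabled_workers(config, {"gpu-2"})] == ["gpu-2"]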


def _resolve_orchestration_limits(config):
    """Resolve bounded concurrency/timeouts for worker preparation pipeline."""
    settings = (config or {}).get("settings", {}) or {}
    worker_probe_concurrency = parse_positive_int(
        settings.get("worker_probe_concurrency"),
        ORCHESTRATION_WORKER_PROBE_CONCURRENCY,
    )
    worker_prep_concurrency = parse_positive_int(
        settings.get("worker_prep_concurrency"),
        ORCHESTRATION_WORKER_PREP_CONCURRENCY,
    )
    media_sync_concurrency = parse_positive_int(
        settings.get("media_sync_concurrency"),
        ORCHESTRATION_MEDIA_SYNC_CONCURRENCY,
    )
    media_sync_timeout_seconds = parse_positive_float(
        settings.get("media_sync_timeout_seconds"),
        ORCHESTRATION_MEDIA_SYNC_TIMEOUT,
    )
    return (
        worker_probe_concurrency,
        worker_prep_concurrency,
        media_sync_concurrency,
        media_sync_timeout_seconds,
    )


def _is_load_balance_enabled(value):
    if isinstance(value, bool):
        return value
    if isinstance(value, (int, float)):
        return bool(value)
    if isinstance(value, str):
        return value.strip().lower() in {"1", "true", "yes", "on"}
    return False


def _prompt_requests_load_balance(prompt_index):
    for node_id in prompt_index.nodes_for_class("DistributedCollector"):
        inputs = prompt_index.inputs_by_node.get(node_id, {})
        if _is_load_balance_enabled(inputs.get("load_balance", False)):
            return True
    return False


async def _prepare_worker_payload(
    worker,
    prompt_index,
    enabled_ids,
    job_id_map,
    master_url,
    config,
    delegate_master,
    trace_execution_id,
    worker_prep_semaphore,
    media_sync_semaphore,
    media_sync_timeout_seconds,
):
    """Prepare one worker prompt payload with bounded concurrency and media-sync timeout."""
    async with worker_prep_semaphore:
        worker_prompt = prompt_index.copy_prompt()
        worker_master_url = build_master_callback_url(
            worker,
            config=config,
            prompt_server_instance=prompt_server,
        )

        worker_type = str(worker.get("type") or "local").strip().lower()
        is_remote_like = bool(worker.get("host")) and worker_type != "local"
        if is_remote_like:
            path_separator = await fetch_worker_path_separator(worker, trace_execution_id=trace_execution_id)
            if path_separator:
                worker_prompt = convert_paths_for_platform(worker_prompt, path_separator)

        worker_prompt = prune_prompt_for_worker(worker_prompt)
        worker_prompt = apply_participant_overrides(
            worker_prompt,
            worker["id"],
            enabled_ids,
            job_id_map,
            worker_master_url,
            delegate_master,
            prompt_index,
        )

        if is_remote_like:
            async with media_sync_semaphore:
                try:
                    await asyncio.wait_for(
                        sync_worker_media(worker, worker_prompt, trace_execution_id=trace_execution_id),
                        timeout=media_sync_timeout_seconds,
                    )
                except asyncio.TimeoutError:
                    trace_debug(
                        trace_execution_id,
                        (
                            f"Media sync timed out after {media_sync_timeout_seconds:.1f}s "
                            f"for worker {worker.get('name')} ({worker.get('id')}); continuing dispatch."
                        ),
                    )

        return worker, worker_prompt


async def orchestrate_distributed_execution(
    prompt_obj,
    workflow_meta,
    client_id,
    enabled_worker_ids=None,
    delegate_master=None,
    trace_execution_id=None,
):
    """Core orchestration logic for the /distributed/queue endpoint.

    Returns:
        tuple[str, int, int, dict]: (prompt_id, number, worker_count, node_errors)
    """
    ensure_distributed_state()
    execution_trace_id = trace_execution_id or _generate_execution_trace_id()

    config = load_config()
    use_websocket = bool(config.get("settings", {}).get("websocket_orchestration", False))
    master_url = build_master_url(config=config, prompt_server_instance=prompt_server)
    (
        worker_probe_concurrency,
        worker_prep_concurrency,
        media_sync_concurrency,
        media_sync_timeout_seconds,
    ) = _resolve_orchestration_limits(config)
    requested_ids = enabled_worker_ids if enabled_worker_ids is not None else None
    workers = _resolve_enabled_workers(config, requested_ids)
    prompt_index = PromptIndex(prompt_obj)
    load_balance_requested = _prompt_requests_load_balance(prompt_index)
    trace_debug(
        execution_trace_id,
        (
            f"Orchestration start: requested_workers={len(workers)}, "
            f"requested_ids={requested_ids if requested_ids is not None else 'enabled_only'}, "
            f"websocket={use_websocket}, "
            f"probe_concurrency={worker_probe_concurrency}, "
            f"prep_concurrency={worker_prep_concurrency}, "
            f"media_sync_concurrency={media_sync_concurrency}, "
            f"media_sync_timeout={media_sync_timeout_seconds:.1f}s, "
            f"load_balance={load_balance_requested}"
        ),
    )

    # Respect master delegate-only configuration
    if delegate_master is None:
        delegate_master = bool(config.get("settings", {}).get("master_delegate_only", False))

    if not workers and delegate_master:
        trace_debug(
            execution_trace_id,
            "Delegate-only requested but no workers are enabled. Falling back to master execution.",
        )
        delegate_master = False

    active_workers, delegate_master = await select_active_workers(
        workers,
        use_websocket,
        delegate_master,
        trace_execution_id=execution_trace_id,
        probe_concurrency=worker_probe_concurrency,
    )

    if load_balance_requested:
        candidate_workers = list(active_workers)
        if not delegate_master:
            # Include master in load balancing only when master participation is enabled.
            candidate_workers.append(
                {
                    "id": "master",
                    "name": "Master",
                    "host": master_url,
                    "type": "local",
                }
            )

        selected_worker = None
        if candidate_workers:
            selected_worker = await select_least_busy_worker(
                candidate_workers,
                trace_execution_id=execution_trace_id,
                probe_concurrency=worker_probe_concurrency,
            )
        if selected_worker is None and candidate_workers:
            trace_debug(
                execution_trace_id,
                "Load-balance selection probe failed; using first available candidate.",
            )
            selected_worker = candidate_workers[0]

        if selected_worker is not None and str(selected_worker.get("id")) == "master":
            # Master selected as least busy; run master workload only.
            active_workers = []
            delegate_master = False
            trace_debug(
                execution_trace_id,
                "Load-balance selected master for execution (workers skipped).",
            )
        elif selected_worker is not None:
            active_workers = [selected_worker]
            # Worker selected as least busy; keep master orchestrator-only for this run.
            delegate_master = True
            trace_debug(
                execution_trace_id,
                f"Load-balance selected worker {selected_worker.get('id')} (master set to delegate-only).",
            )
        else:
            trace_debug(
                execution_trace_id,
                "Load-balance requested but no execution candidates were available.",
            )
            active_workers = []
            delegate_master = False

    enabled_ids = [worker["id"] for worker in active_workers]

    discovery_prefix = f"exec_{int(time.time() * 1000)}_{uuid.uuid4().hex[:6]}"
    job_id_map = generate_job_id_map(prompt_index, discovery_prefix)

    if not job_id_map:
        trace_debug(execution_trace_id, "No distributed nodes detected; queueing prompt on master only.")
        queue_result = await queue_prompt_payload(
            prompt_obj,
            workflow_meta,
            client_id,
            include_queue_metadata=True,
        )
        return (
            queue_result["prompt_id"],
            queue_result["number"],
            0,
            queue_result.get("node_errors", {}),
        )

    for job_id in job_id_map.values():
        await _ensure_distributed_queue(job_id)

    master_prompt = prompt_index.copy_prompt()
    master_prompt = apply_participant_overrides(
        master_prompt,
        "master",
        enabled_ids,
        job_id_map,
        master_url,
        delegate_master,
        prompt_index,
    )

    if delegate_master:
        collector_ids = find_nodes_by_class(master_prompt, "DistributedCollector")
        upscale_nodes = find_nodes_by_class(master_prompt, "UltimateSDUpscaleDistributed")
        if upscale_nodes:
            debug_log(
                "Delegate-only master mode currently does not support UltimateSDUpscaleDistributed nodes; running full prompt on master."
            )
        elif not collector_ids:
            debug_log(
                "Delegate-only master mode requested but no collectors found in master prompt. Running full prompt on master."
            )
        else:
            master_prompt = prepare_delegate_master_prompt(master_prompt, collector_ids)

    if active_workers:
        trace_debug(
            execution_trace_id,
            "Active distributed workers: "
            + ", ".join(f"{worker['name']} ({worker['id']})" for worker in active_workers),
        )
    worker_payloads = []
    if active_workers:
        worker_prep_semaphore = asyncio.Semaphore(worker_prep_concurrency)
        media_sync_semaphore = asyncio.Semaphore(media_sync_concurrency)
        worker_payloads = await asyncio.gather(
            *[
                _prepare_worker_payload(
                    worker,
                    prompt_index,
                    enabled_ids,
                    job_id_map,
                    master_url,
                    config,
                    delegate_master,
                    execution_trace_id,
                    worker_prep_semaphore,
                    media_sync_semaphore,
                    media_sync_timeout_seconds,
                )
                for worker in active_workers
            ]
        )

    if worker_payloads:
        await asyncio.gather(
            *[
                dispatch_worker_prompt(
                    worker,
                    wprompt,
                    workflow_meta,
                    client_id,
                    use_websocket=use_websocket,
                    trace_execution_id=execution_trace_id,
                )
                for worker, wprompt in worker_payloads
            ]
        )

    queue_result = await queue_prompt_payload(
        master_prompt,
        workflow_meta,
        client_id,
        include_queue_metadata=True,
    )
    prompt_id = queue_result["prompt_id"]
    prompt_number = queue_result["number"]
    node_errors = queue_result.get("node_errors", {})
    trace_debug(
        execution_trace_id,
        f"Orchestration complete: prompt_id={prompt_id}, dispatched_workers={len(worker_payloads)}, delegate_master={delegate_master}",
    )
    return prompt_id, prompt_number, len(worker_payloads), node_errors
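

# Editor's sketch (invented ids, never called): driving the orchestrator
# directly, as the /distributed/queue route does after parsing its payload.
async def _example_orchestrate_distributed_execution():
    prompt = {"1": {"class_type": "KSampler", "inputs": {}}}
    prompt_id, number, worker_count, node_errors = await orchestrate_distributed_execution(
        prompt,
        workflow_meta=None,
        client_id="ui-1",
        enabled_worker_ids=["gpu-1"],
    )
    return prompt_id, number, worker_count, node_errors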


================================================
FILE: api/queue_request.py
================================================
from dataclasses import dataclass
from typing import Any, Dict, List, Optional


@dataclass(frozen=True)
class QueueRequestPayload:
    prompt: Dict[str, Any]
    workflow_meta: Any
    client_id: str
    delegate_master: Optional[bool]
    enabled_worker_ids: List[str]
    auto_prepare: bool
    trace_execution_id: Optional[str]


def parse_queue_request_payload(data: Any) -> QueueRequestPayload:
    """Parse and validate /distributed/queue payload into a normalized shape."""
    if not isinstance(data, dict):
        raise ValueError("Expected a JSON object body")

    # Auto-prepare is always on server-side; keep the field for wire compatibility.
    auto_prepare_raw = data.get("auto_prepare", True)
    if not isinstance(auto_prepare_raw, bool):
        raise ValueError("auto_prepare must be a boolean when provided")
    auto_prepare = auto_prepare_raw

    prompt = data.get("prompt")
    if prompt is None:
        workflow_payload = data.get("workflow")
        if isinstance(workflow_payload, dict):
            candidate_prompt = workflow_payload.get("prompt")
            if isinstance(candidate_prompt, dict):
                prompt = candidate_prompt

    if not isinstance(prompt, dict):
        raise ValueError("Field 'prompt' must be an object")

    enabled_ids_raw = data.get("enabled_worker_ids")
    workers_field = data.get("workers")
    if enabled_ids_raw is None and workers_field is not None:
        if not isinstance(workers_field, list):
            raise ValueError("Field 'workers' must be a list when provided")
        enabled_ids_raw = []
        for entry in workers_field:
            worker_id = entry.get("id") if isinstance(entry, dict) else entry
            if worker_id is not None:
                enabled_ids_raw.append(str(worker_id))

    if enabled_ids_raw is None:
        raise ValueError("enabled_worker_ids required")
    if not isinstance(enabled_ids_raw, list):
        raise ValueError("enabled_worker_ids must be a list of worker IDs")
    enabled_ids = [str(worker_id).strip() for worker_id in enabled_ids_raw if str(worker_id).strip()]

    delegate_master = data.get("delegate_master")
    if delegate_master is not None and not isinstance(delegate_master, bool):
        raise ValueError("delegate_master must be a boolean when provided")

    client_id = data.get("client_id")
    if not isinstance(client_id, str) or not client_id.strip():
        raise ValueError("client_id required")
    client_id = client_id.strip()

    trace_execution_id = data.get("trace_execution_id")
    if trace_execution_id is not None:
        if not isinstance(trace_execution_id, str):
            raise ValueError("trace_execution_id must be a string when provided")
        trace_execution_id = trace_execution_id.strip() or None

    return QueueRequestPayload(
        prompt=prompt,
        workflow_meta=data.get("workflow"),
        client_id=client_id,
        delegate_master=delegate_master,
        enabled_worker_ids=enabled_ids,
        auto_prepare=auto_prepare,
        trace_execution_id=trace_execution_id,
    )
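

# Editor's sketch (invented payload): the parser's normalization in action.
def _example_parse_queue_request_payload():
    payload = parse_queue_request_payload({
        "prompt": {"1": {"class_type": "KSampler", "inputs": {}}},
        "client_id": "ui-1",
        "enabled_worker_ids": ["gpu-1", " ", "gpu-2"],
    })
    assert payload.enabled_worker_ids == ["gpu-1", "gpu-2"]  # blanks dropped
    assert payload.delegate_master is None  # optional fields default to None
    assert payload.auto_prepare is True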


================================================
FILE: api/schemas.py
================================================
def require_fields(data: dict, *fields) -> list[str]:
    """Return field names that are missing or empty in a JSON object."""
    if not isinstance(data, dict):
        return list(fields)

    missing = []
    for field in fields:
        if field not in data:
            missing.append(field)
            continue
        value = data.get(field)
        if value is None:
            missing.append(field)
            continue
        if isinstance(value, str) and not value.strip():
            missing.append(field)

    return missing


def validate_worker_id(worker_id: str, config: dict) -> bool:
    """Return True when worker_id exists in config['workers']."""
    worker_id_str = str(worker_id)
    workers = (config or {}).get("workers", [])
    return any(str(worker.get("id")) == worker_id_str for worker in workers)


def validate_positive_int(value, field_name: str) -> str | None:
    """Validate positive integers and return an error string when invalid."""
    try:
        parsed = int(value)
    except (TypeError, ValueError):
        return f"Field '{field_name}' must be a positive integer."
    if parsed <= 0:
        return f"Field '{field_name}' must be a positive integer."
    return None


def parse_positive_int(value, default: int) -> int:
    """Parse value as positive int, returning default on failure."""
    try:
        parsed = int(value)
    except (TypeError, ValueError):
        return max(1, int(default))
    return max(1, parsed)


def parse_positive_float(value, default: float) -> float:
    """Parse value as positive float, returning default on failure."""
    try:
        parsed = float(value)
    except (TypeError, ValueError):
        return max(0.0, float(default))
    return max(0.0, parsed)
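

# Editor's sketch (invented values): clamping behavior of the parse helpers.
def _example_parse_helpers():
    assert parse_positive_int("4", 8) == 4
    assert parse_positive_int("not-a-number", 8) == 8  # falls back to default
    assert parse_positive_int(0, 8) == 1  # clamped to the minimum of 1
    assert parse_positive_float("2.5", 30.0) == 2.5
    assert parse_positive_float(None, 30.0) == 30.0  # falls back to default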


================================================
FILE: api/tunnel_routes.py
================================================
from aiohttp import web
import server

from ..utils.cloudflare import cloudflare_tunnel_manager
from ..utils.config import load_config
from ..utils.logging import debug_log, log
from ..utils.network import handle_api_error


@server.PromptServer.instance.routes.get("/distributed/tunnel/status")
async def tunnel_status_endpoint(request):
    """Return Cloudflare tunnel status and last known details."""
    try:
        status = cloudflare_tunnel_manager.get_status()
        config = load_config()
        master_host = (config.get("master") or {}).get("host")
        return web.json_response({
            "status": "success",
            "tunnel": status,
            "master_host": master_host
        })
    except Exception as e:
        return await handle_api_error(request, e, 500)

@server.PromptServer.instance.routes.post("/distributed/tunnel/start")
async def tunnel_start_endpoint(request):
    """Start a Cloudflare tunnel pointing at the current ComfyUI server."""
    try:
        result = await cloudflare_tunnel_manager.start_tunnel()
        config = load_config()
        return web.json_response({
            "status": "success",
            "tunnel": result,
            "master_host": (config.get("master") or {}).get("host")
        })
    except Exception as e:
        return await handle_api_error(request, e, 500)

@server.PromptServer.instance.routes.post("/distributed/tunnel/stop")
async def tunnel_stop_endpoint(request):
    """Stop the managed Cloudflare tunnel if running."""
    try:
        result = await cloudflare_tunnel_manager.stop_tunnel()
        config = load_config()
        return web.json_response({
            "status": "success",
            "tunnel": result,
            "master_host": (config.get("master") or {}).get("host")
        })
    except Exception as e:
        return await handle_api_error(request, e, 500)


================================================
FILE: api/usdu_routes.py
================================================
import asyncio
import io
import time

from aiohttp import web
from PIL import Image
import server

from ..upscale.job_models import BaseJobState, ImageJobState, TileJobState
from ..upscale.job_store import MAX_PAYLOAD_SIZE, ensure_tile_jobs_initialized
from ..upscale.payload_parsers import _parse_tiles_from_form
from ..utils.logging import debug_log
from ..utils.network import handle_api_error


@server.PromptServer.instance.routes.post("/distributed/heartbeat")
async def heartbeat_endpoint(request):
    try:
        data = await request.json()
        worker_id = data.get('worker_id')
        multi_job_id = data.get('multi_job_id')

        if not worker_id or not multi_job_id:
            return await handle_api_error(request, "Missing worker_id or multi_job_id", 400)

        prompt_server = ensure_tile_jobs_initialized()
        async with prompt_server.distributed_tile_jobs_lock:
            if multi_job_id in prompt_server.distributed_pending_tile_jobs:
                job_data = prompt_server.distributed_pending_tile_jobs[multi_job_id]
                if isinstance(job_data, BaseJobState):
                    job_data.worker_status[worker_id] = time.time()
                    debug_log(f"Heartbeat from worker {worker_id}")
                    return web.json_response({"status": "success"})
                return await handle_api_error(request, "Worker status tracking not available", 400)
            return await handle_api_error(request, "Job not found", 404)
    except Exception as e:
        return await handle_api_error(request, e, 500)
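

# Editor's note: illustrative request body for the endpoint above (ids invented):
#
#   POST /distributed/heartbeat
#   {"worker_id": "gpu-1", "multi_job_id": "exec_1700000000000_ab12cd_5"}
#
# A 200 response refreshes worker_status[worker_id]; unknown jobs return 404.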


@server.PromptServer.instance.routes.post("/distributed/submit_tiles")
async def submit_tiles_endpoint(request):
    """Endpoint for workers to submit processed tiles in static mode."""
    try:
        content_length = request.headers.get('content-length')
        if content_length and int(content_length) > MAX_PAYLOAD_SIZE:
            return await handle_api_error(request, f"Payload too large: {content_length} bytes", 413)

        data = await request.post()
        multi_job_id = data.get('multi_job_id')
        worker_id = data.get('worker_id')
        is_last = data.get('is_last', 'False').lower() == 'true'

        if multi_job_id is None or worker_id is None:
            return await handle_api_error(request, "Missing multi_job_id or worker_id", 400)

        prompt_server = ensure_tile_jobs_initialized()

        try:
            batch_size = int(data.get('batch_size', 0))
        except (TypeError, ValueError):
            return await handle_api_error(request, "Invalid batch_size", 400)

        # Handle completion signal
        if batch_size == 0 and is_last:
            async with prompt_server.distributed_tile_jobs_lock:
                if multi_job_id in prompt_server.distributed_pending_tile_jobs:
                    job_data = prompt_server.distributed_pending_tile_jobs[multi_job_id]
                    if not isinstance(job_data, TileJobState):
                        return await handle_api_error(request, "Job not configured for tile submissions", 400)
                    await job_data.queue.put({
                        'worker_id': worker_id,
                        'is_last': True,
                        'tiles': [],
                    })
                    debug_log(f"Received completion signal from worker {worker_id}")
                    return web.json_response({"status": "success"})

        try:
            tiles = _parse_tiles_from_form(data)
        except ValueError as e:
            return await handle_api_error(request, str(e), 400)

        # Submit tiles to queue
        async with prompt_server.distributed_tile_jobs_lock:
            if multi_job_id in prompt_server.distributed_pending_tile_jobs:
                job_data = prompt_server.distributed_pending_tile_jobs[multi_job_id]
                if not isinstance(job_data, TileJobState):
                    return await handle_api_error(request, "Job not configured for tile submissions", 400)

                q = job_data.queue
                if batch_size > 0 or len(tiles) > 0:
                    await q.put({
                        'worker_id': worker_id,
                        'tiles': tiles,
                        'is_last': is_last,
                    })
                    debug_log(f"Received {len(tiles)} tiles from worker {worker_id} (is_last={is_last})")
                else:
                    await q.put({
                        'worker_id': worker_id,
                        'is_last': True,
                        'tiles': [],
                    })

                return web.json_response({"status": "success"})
            return await handle_api_error(request, "Job not found", 404)
    except Exception as e:
        return await handle_api_error(request, e, 500)


@server.PromptServer.instance.routes.post("/distributed/submit_image")
async def submit_image_endpoint(request):
    """Endpoint for workers to submit processed images in dynamic mode."""
    try:
        content_length = request.headers.get('content-length')
        if content_length and int(content_length) > MAX_PAYLOAD_SIZE:
            return await handle_api_error(request, f"Payload too large: {content_length} bytes", 413)

        data = await request.post()
        multi_job_id = data.get('multi_job_id')
        worker_id = data.get('worker_id')
        is_last = data.get('is_last', 'False').lower() == 'true'

        if multi_job_id is None or worker_id is None:
            return await handle_api_error(request, "Missing multi_job_id or worker_id", 400)

        prompt_server = ensure_tile_jobs_initialized()

        # Handle image submission
        if 'full_image' in data and 'image_idx' in data:
            image_idx = int(data.get('image_idx'))
            img_data = data['full_image'].file.read()
            img = Image.open(io.BytesIO(img_data)).convert("RGB")

            debug_log(f"Received full image {image_idx} from worker {worker_id}")

            async with prompt_server.distributed_tile_jobs_lock:
                if multi_job_id in prompt_server.distributed_pending_tile_jobs:
                    job_data = prompt_server.distributed_pending_tile_jobs[multi_job_id]
                    if not isinstance(job_data, ImageJobState):
                        return await handle_api_error(request, "Job not configured for image submissions", 400)
                    await job_data.queue.put({
                        'worker_id': worker_id,
                        'image_idx': image_idx,
                        'image': img,
                        'is_last': is_last,
                    })
                    return web.json_response({"status": "success"})

        # Handle completion signal (no image data)
        elif is_last:
            async with prompt_server.distributed_tile_jobs_lock:
                if multi_job_id in prompt_server.distributed_pending_tile_jobs:
                    job_data = prompt_server.distributed_pending_tile_jobs[multi_job_id]
                    if not isinstance(job_data, ImageJobState):
                        return await handle_api_error(request, "Job not configured for image submissions", 400)
                    await job_data.queue.put({
                        'worker_id': worker_id,
                        'is_last': True,
                    })
                    debug_log(f"Received completion signal from worker {worker_id}")
                    return web.json_response({"status": "success"})
        else:
            return await handle_api_error(request, "Missing image data or invalid request", 400)

        return await handle_api_error(request, "Job not found", 404)
    except Exception as e:
        return await handle_api_error(request, e, 500)


@server.PromptServer.instance.routes.post("/distributed/request_image")
async def request_image_endpoint(request):
    """Endpoint for workers to request tasks (images in dynamic mode, tiles in static mode)."""
    try:
        data = await request.json()
        worker_id = data.get('worker_id')
        multi_job_id = data.get('multi_job_id')

        if not worker_id or not multi_job_id:
            return await handle_api_error(request, "Missing worker_id or multi_job_id", 400)

        prompt_server = ensure_tile_jobs_initialized()
        async with prompt_server.distributed_tile_jobs_lock:
            if multi_job_id in prompt_server.distributed_pending_tile_jobs:
                job_data = prompt_server.distributed_pending_tile_jobs[multi_job_id]
                if not isinstance(job_data, BaseJobState):
                    return await handle_api_error(request, "Invalid job data structure", 500)

                mode = job_data.mode
                if isinstance(job_data, ImageJobState):
                    pending_queue = job_data.pending_images
                elif isinstance(job_data, TileJobState):
                    pending_queue = job_data.pending_tasks
                else:
                    return await handle_api_error(request, "Invalid job configuration", 400)

                try:
                    task_idx = await asyncio.wait_for(pending_queue.get(), timeout=0.1)
                    job_data.assigned_to_workers.setdefault(worker_id, []).append(task_idx)
                    job_data.worker_status[worker_id] = time.time()
                    remaining = pending_queue.qsize()

                    if mode == 'dynamic':
                        debug_log(f"UltimateSDUpscale API - Assigned image {task_idx} to worker {worker_id}")
                        return web.json_response({"image_idx": task_idx, "estimated_remaining": remaining})
                    debug_log(f"UltimateSDUpscale API - Assigned tile {task_idx} to worker {worker_id}")
                    return web.json_response({
                        "tile_idx": task_idx,
                        "estimated_remaining": remaining,
                        "batched_static": job_data.batched_static,
                    })
                except asyncio.TimeoutError:
                    if mode == 'dynamic':
                        return web.json_response({"image_idx": None})
                    return web.json_response({"tile_idx": None})
            return await handle_api_error(request, "Job not found", 404)
    except Exception as e:
        return await handle_api_error(request, e, 500)


@server.PromptServer.instance.routes.get("/distributed/job_status")
async def job_status_endpoint(request):
    """Endpoint to check if a job is ready."""
    multi_job_id = request.query.get('multi_job_id')
    if not multi_job_id:
        return web.json_response({"ready": False})
    prompt_server = ensure_tile_jobs_initialized()
    async with prompt_server.distributed_tile_jobs_lock:
        job_data = prompt_server.distributed_pending_tile_jobs.get(multi_job_id)
        ready = bool(isinstance(job_data, BaseJobState) and job_data.queue is not None)
        return web.json_response({"ready": ready})


================================================
FILE: api/worker_routes.py
================================================
import json
import asyncio
import os
import time
import platform
import subprocess
import socket

import torch
import aiohttp
from aiohttp import web
import server

from ..utils.config import load_config
from ..utils.logging import debug_log, log
from ..utils.network import (
    build_worker_url,
    get_client_session,
    handle_api_error,
    normalize_host,
    probe_worker,
)
from ..utils.constants import CHUNK_SIZE
from ..workers import get_worker_manager
from .schemas import require_fields, validate_worker_id
from ..workers.detection import (
    get_machine_id,
    is_docker_environment,
    is_runpod_environment,
)
try:
    from ..utils.async_helpers import PromptValidationError, queue_prompt_payload
except ImportError:
    from ..utils.async_helpers import queue_prompt_payload

    class PromptValidationError(RuntimeError):
        def __init__(self, message, validation_error=None, node_errors=None):
            super().__init__(str(message))
            self.validation_error = validation_error if isinstance(validation_error, dict) else {}
            self.node_errors = node_errors if isinstance(node_errors, dict) else {}


@server.PromptServer.instance.routes.get("/distributed/worker_ws")
async def worker_ws_endpoint(request):
    """WebSocket endpoint for worker prompt dispatch."""
    ws = web.WebSocketResponse(heartbeat=30)
    await ws.prepare(request)

    async for msg in ws:
        if msg.type == aiohttp.WSMsgType.TEXT:
            try:
                data = json.loads(msg.data or "{}")
            except json.JSONDecodeError:
                await ws.send_json({
                    "type": "dispatch_ack",
                    "request_id": None,
                    "ok": False,
                    "error": "Invalid JSON payload.",
                })
                continue

            if data.get("type") != "dispatch_prompt":
                await ws.send_json({
                    "type": "dispatch_ack",
                    "request_id": data.get("request_id"),
                    "ok": False,
                    "error": "Unsupported websocket message type.",
                })
                continue

            prompt = data.get("prompt")
            if not isinstance(prompt, dict):
                await ws.send_json({
                    "type": "dispatch_ack",
                    "request_id": data.get("request_id"),
                    "ok": False,
                    "error": "Field 'prompt' must be an object.",
                })
                continue

            try:
                prompt_id = await queue_prompt_payload(
                    prompt,
                    workflow_meta=data.get("workflow"),
                    client_id=data.get("client_id"),
                )
                await ws.send_json({
                    "type": "dispatch_ack",
                    "request_id": data.get("request_id"),
                    "ok": True,
                    "prompt_id": prompt_id,
                })
            except PromptValidationError as exc:
                await ws.send_json({
                    "type": "dispatch_ack",
                    "request_id": data.get("request_id"),
                    "ok": False,
                    "error": str(exc),
                    "validation_error": exc.validation_error,
                    "node_errors": exc.node_errors,
                })
            except Exception as exc:
                await ws.send_json({
                    "type": "dispatch_ack",
                    "request_id": data.get("request_id"),
                    "ok": False,
                    "error": str(exc),
                })
        elif msg.type == aiohttp.WSMsgType.ERROR:
            log(f"[Distributed] Worker websocket error: {ws.exception()}")

    return ws


@server.PromptServer.instance.routes.post("/distributed/worker/clear_launching")
async def clear_launching_state(request):
    """Clear the launching flag when worker is confirmed running."""
    try:
        wm = get_worker_manager()
        data = await request.json()
        missing = require_fields(data, "worker_id")
        if missing:
            return await handle_api_error(request, f"Missing required field(s): {', '.join(missing)}", 400)

        worker_id = str(data.get("worker_id")).strip()
        config = load_config()
        if not validate_worker_id(worker_id, config):
            return await handle_api_error(request, f"Worker {worker_id} not found", 404)
        
        # Clear launching flag in managed processes
        if worker_id in wm.processes:
            if 'launching' in wm.processes[worker_id]:
                del wm.processes[worker_id]['launching']
                wm.save_processes()
                debug_log(f"Cleared launching state for worker {worker_id}")
        
        return web.json_response({"status": "success"})
    except Exception as e:
        return await handle_api_error(request, e, 500)


def get_network_ips():
    """Get all network IPs, trying multiple methods."""
    ips = []
    hostname = socket.gethostname()

    # Method 1: Try socket.getaddrinfo
    try:
        addr_info = socket.getaddrinfo(hostname, None)
        for info in addr_info:
            ip = info[4][0]
            if ip and ip not in ips and not ip.startswith('::'):  # Skip IPv6 for now
                ips.append(ip)
    except (socket.gaierror, OSError):
        pass

    # Method 2: Try to connect to external server and get local IP
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.connect(("8.8.8.8", 80))  # Google DNS
        local_ip = s.getsockname()[0]
        s.close()
        if local_ip not in ips:
            ips.append(local_ip)
    except (OSError, socket.error):
        pass

    # Method 3: Platform-specific commands
    try:
        if platform.system() == "Windows":
            # Windows ipconfig
            result = subprocess.run(["ipconfig"], capture_output=True, text=True)
            lines = result.stdout.split('\n')
            for line in lines:
                if 'IPv4' in line:
                    ip = line.split(':')[-1].strip()
                    if ip and ip not in ips:
                        ips.append(ip)
        else:
            # Unix/Linux/Mac ifconfig or ip addr
            try:
                result = subprocess.run(["ip", "addr"], capture_output=True, text=True)
            except (FileNotFoundError, OSError):
                try:
                    result = subprocess.run(["ifconfig"], capture_output=True, text=True)
                except (FileNotFoundError, OSError):
                    result = None

            import re
            ip_pattern = re.compile(r'inet\s+(\d+\.\d+\.\d+\.\d+)')
            if result is not None:
                for match in ip_pattern.finditer(result.stdout):
                    ip = match.group(1)
                    if ip and ip not in ips:
                        ips.append(ip)
    except (OSError, subprocess.SubprocessError):
        pass

    return ips


def get_recommended_ip(ips):
    """Choose the best IP for master-worker communication."""
    # Priority order:
    # 1. Private network ranges (192.168.x.x, 10.x.x.x, 172.16-31.x.x)
    # 2. Other non-localhost IPs
    # 3. Localhost as last resort

    private_ips = []
    public_ips = []

    for ip in ips:
        if ip.startswith('127.') or ip == 'localhost':
            continue
        elif (ip.startswith('192.168.')
                or ip.startswith('10.')
                or (ip.startswith('172.') and 16 <= int(ip.split('.')[1]) <= 31)):
            private_ips.append(ip)
        else:
            public_ips.append(ip)

    # Prefer private IPs
    if private_ips:
        # Prefer 192.168 range as it's most common
        for ip in private_ips:
            if ip.startswith('192.168.'):
                return ip
        return private_ips[0]
    elif public_ips:
        return public_ips[0]
    elif ips:
        return ips[0]
    else:
        return None


def _get_cuda_info():
    """Detect CUDA device index and total physical GPU count.

    Returns (cuda_device, cuda_device_count, physical_device_count).
    All three are 0/None if CUDA is unavailable.
    """
    if not torch.cuda.is_available():
        return None, 0, 0
    try:
        cuda_device_count = torch.cuda.device_count()
        cuda_visible = os.environ.get('CUDA_VISIBLE_DEVICES', '')
        if cuda_visible and cuda_visible.strip():
            visible_devices = [int(d.strip()) for d in cuda_visible.split(',') if d.strip().isdigit()]
            if visible_devices:
                cuda_device = visible_devices[0]
                try:
                    result = subprocess.run(
                        ['nvidia-smi', '--query-gpu=name', '--format=csv,noheader'],
                        capture_output=True,
                        text=True,
                        timeout=5,
                    )
                    physical_device_count = (
                        len(result.stdout.strip().split('\n'))
                        if result.returncode == 0
                        else max(visible_devices) + 1
                    )
                except (FileNotFoundError, OSError, subprocess.SubprocessError):
                    physical_device_count = max(visible_devices) + 1
                return cuda_device, cuda_device_count, physical_device_count
            else:
                return 0, cuda_device_count, cuda_device_count
        else:
            cuda_device = torch.cuda.current_device()
            return cuda_device, cuda_device_count, cuda_device_count
    except Exception as e:
        debug_log(f"CUDA detection error: {e}")
        return None, 0, 0


def _collect_network_info_sync():
    """Collect network/cuda info in a worker thread to avoid blocking route handlers."""
    cuda_device, cuda_device_count, physical_device_count = _get_cuda_info()
    hostname = socket.gethostname()
    all_ips = get_network_ips()
    recommended_ip = get_recommended_ip(all_ips)
    return {
        "hostname": hostname,
        "all_ips": all_ips,
        "recommended_ip": recommended_ip,
        "cuda_device": cuda_device,
        "cuda_device_count": physical_device_count if physical_device_count > 0 else cuda_device_count,
    }


def _read_worker_log_sync(log_file, lines_to_read):
    """Read worker log content from disk in a threadpool worker."""
    file_size = os.path.getsize(log_file)

    with open(log_file, 'r', encoding='utf-8', errors='replace') as f:
        if lines_to_read > 0 and file_size > 1024 * 1024:
            # Read last N lines efficiently from end of file.
            lines = []
            f.seek(0, 2)
            file_length = f.tell()
            chunk_size = min(CHUNK_SIZE, file_length)

            while len(lines) < lines_to_read and f.tell() > 0:
                current_pos = max(0, f.tell() - chunk_size)
                f.seek(current_pos)
                chunk = f.read(chunk_size)
                chunk_lines = chunk.splitlines()
                if current_pos > 0:
                    chunk_lines = chunk_lines[1:]
                lines = chunk_lines + lines
                f.seek(current_pos)

            content = '\n'.join(lines[-lines_to_read:])
            truncated = len(lines) > lines_to_read
        else:
            content = f.read()
            truncated = False

    return {
        "content": content,
        "file_size": file_size,
        "truncated": truncated,
        "lines_shown": lines_to_read if truncated else content.count('\n') + 1,
    }


def _parse_positive_int_query(value, default, minimum=1, maximum=10000):
    """Parse bounded positive integer query params with sane fallback."""
    try:
        parsed = int(value)
    except (TypeError, ValueError):
        return default
    parsed = max(minimum, parsed)
    if maximum is not None:
        parsed = min(maximum, parsed)
    return parsed


def _find_worker_by_id(config, worker_id):
    worker_id_str = str(worker_id).strip()
    for worker in config.get("workers", []):
        if str(worker.get("id")).strip() == worker_id_str:
            return worker
    return None


@server.PromptServer.instance.routes.get("/distributed/local_log")
async def get_local_log_endpoint(request):
    """Return this instance's in-memory ComfyUI log buffer."""
    try:
        from app.logger import get_logs
    except Exception as e:
        return await handle_api_error(request, f"Failed to import app.logger: {e}", 500)

    try:
        lines_to_read = _parse_positive_int_query(request.query.get("lines"), default=300, maximum=3000)
        logs = get_logs()
        if logs is None:
            return web.json_response(
                {
                    "status": "success",
                    "content": "",
                    "entries": 0,
                    "source": "memory",
                    "truncated": False,
                    "lines_shown": 0,
                }
            )

        entries = list(logs)
        selected_entries = entries[-lines_to_read:]
        content = "".join(
            entry.get("m", "") if isinstance(entry, dict) else str(entry)
            for entry in selected_entries
        )
        lines_shown = content.count("\n") + (1 if content else 0)

        return web.json_response(
            {
                "status": "success",
                "content": content,
                "entries": len(selected_entries),
                "source": "memory",
                "truncated": len(entries) > len(selected_entries),
                "lines_shown": lines_shown,
            }
        )
    except Exception as e:
        return await handle_api_error(request, e, 500)


@server.PromptServer.instance.routes.get("/distributed/network_info")
async def get_network_info_endpoint(request):
    """Get network interfaces and recommend best IP for master."""
    try:
        loop = asyncio.get_running_loop()
        info = await loop.run_in_executor(None, _collect_network_info_sync)
        
        return web.json_response({
            "status": "success",
            **info,
            "message": "Auto-detected network configuration"
        })
    except Exception as e:
        return await handle_api_error(request, e, 500)

@server.PromptServer.instance.routes.get("/distributed/system_info")
async def get_system_info_endpoint(request):
    """Get system information including machine ID for local worker detection."""
    try:
        return web.json_response({
            "status": "success",
            "hostname": socket.gethostname(),
            "machine_id": get_machine_id(),
            "platform": {
                "system": platform.system(),
                "machine": platform.machine(),
                "node": platform.node(),
                "path_separator": os.sep,  # Add path separator
                "os_name": os.name  # Add OS name (posix, nt, etc.)
            },
            "is_docker": is_docker_environment(),
            "is_runpod": is_runpod_environment(),
            "runpod_pod_id": os.environ.get('RUNPOD_POD_ID')
        })
    except Exception as e:
        return await handle_api_error(request, e, 500)

@server.PromptServer.instance.routes.post("/distributed/launch_worker")
async def launch_worker_endpoint(request):
    """Launch a worker process from the UI."""
    try:
        wm = get_worker_manager()
        data = await request.json()
        missing = require_fields(data, "worker_id")
        if missing:
            return await handle_api_error(request, f"Missing required field(s): {', '.join(missing)}", 400)

        worker_id = str(data.get("worker_id")).strip()
        
        # Find worker config
        config = load_config()
        if not validate_worker_id(worker_id, config):
            return await handle_api_error(request, f"Worker {worker_id} not found", 404)
        worker = next((w for w in config.get("workers", []) if str(w.get("id")) == worker_id), None)
        if not worker:
            return await handle_api_error(request, f"Worker {worker_id} not found", 404)
        
        # Ensure consistent string ID
        worker_id_str = worker_id
        
        # Check if already running (managed by this instance)
        if worker_id_str in wm.processes:
            proc_info = wm.processes[worker_id_str]
            process = proc_info.get('process')
            
            # Check if still running
            is_running = False
            if process:
                is_running = process.poll() is None
            else:
                # Restored process without subprocess object
                is_running = wm._is_process_running(proc_info['pid'])
            
            if is_running:
                return await handle_api_error(request, "Worker already running (managed by UI)", 409)
            else:
                # Process is dead, remove it
                del wm.processes[worker_id_str]
                wm.save_processes()
        
        # Launch the worker
        try:
            loop = asyncio.get_running_loop()
            pid = await loop.run_in_executor(None, wm.launch_worker, worker)
            log_file = wm.processes[worker_id_str].get('log_file')
            return web.json_response({
                "status": "success",
                "pid": pid,
                "message": f"Worker {worker['name']} launched",
                "log_file": log_file
            })
        except Exception as e:
            return await handle_api_error(request, f"Failed to launch worker: {str(e)}", 500)
            
    except Exception as e:
        return await handle_api_error(request, e, 400)


@server.PromptServer.instance.routes.post("/distributed/stop_worker")
async def stop_worker_endpoint(request):
    """Stop a worker process that was launched from the UI."""
    try:
        wm = get_worker_manager()
        data = await request.json()
        missing = require_fields(data, "worker_id")
        if missing:
            return await handle_api_error(request, f"Missing required field(s): {', '.join(missing)}", 400)

        worker_id = str(data.get("worker_id")).strip()
        config = load_config()
        if not validate_worker_id(worker_id, config):
            return await handle_api_error(request, f"Worker {worker_id} not found", 404)

        success, message = wm.stop_worker(worker_id)
        
        if success:
            return web.json_response({"status": "success", "message": message})
        else:
            return await handle_api_error(
                request,
                message,
                404 if "not managed" in message else 409,
            )
            
    except Exception as e:
        return await handle_api_error(request, e, 400)


@server.PromptServer.instance.routes.get("/distributed/managed_workers")
async def get_managed_workers_endpoint(request):
    """Get list of workers managed by this UI instance."""
    try:
        managed = get_worker_manager().get_managed_workers()
        return web.json_response({
            "status": "success",
            "managed_workers": managed
        })
    except Exception as e:
        return await handle_api_error(request, e, 500)


@server.PromptServer.instance.routes.get("/distributed/local-worker-status")
async def get_local_worker_status_endpoint(request):
    """Check status of all local workers (localhost/no host specified)."""
    try:
        config = load_config()
        worker_statuses = {}
        
        for worker in config.get("workers", []):
            # Only check local workers
            host = normalize_host(worker.get("host")) or ""
            if not host or host in ["localhost", "127.0.0.1"]:
                worker_id = worker["id"]
                port = worker["port"]
                
                # Check if worker is enabled
                if not worker.get("enabled", False):
                    worker_statuses[worker_id] = {
                        "online": False,
                        "enabled": False,
                        "processing": False,
                        "queue_count": 0
                    }
                    continue
                
                # Try to connect to worker
                try:
                    worker_url = build_worker_url(worker)
                    data = await probe_worker(worker_url, timeout=2.0)
                    if data is None:
                        worker_statuses[worker_id] = {
                            "online": False,
                            "enabled": True,
                            "processing": False,
                            "queue_count": 0,
                            "error": "Unavailable",
                        }
                        continue
                    queue_remaining = data.get("exec_info", {}).get("queue_remaining", 0)
                    worker_statuses[worker_id] = {
                        "online": True,
                        "enabled": True,
                        "processing": queue_remaining > 0,
                        "queue_count": queue_remaining
                    }
                except asyncio.TimeoutError:
                    worker_statuses[worker_id] = {
                        "online": False,
                        "enabled": True,
                        "processing": False,
                        "queue_count": 0,
                        "error": "Timeout"
                    }
                except Exception as e:
                    worker_statuses[worker_id] = {
                        "online": False,
                        "enabled": True,
                        "processing": False,
                        "queue_count": 0,
                        "error": str(e)
                    }
        
        return web.json_response({
            "status": "success",
            "worker_statuses": worker_statuses
        })
    except Exception as e:
        debug_log(f"Error checking local worker status: {e}")
        return await handle_api_error(request, e, 500)


@server.PromptServer.instance.routes.get("/distributed/worker_log/{worker_id}")
async def get_worker_log_endpoint(request):
    """Get log content for a specific worker."""
    try:
        wm = get_worker_manager()
        worker_id = request.match_info['worker_id']
        
        # Ensure worker_id is string
        worker_id = str(worker_id)
        
        # Check if we manage this worker
        if worker_id not in wm.processes:
            return await handle_api_error(request, f"Worker {worker_id} not managed by UI", 404)
        
        proc_info = wm.processes[worker_id]
        log_file = proc_info.get('log_file')
        
        if not log_file or not os.path.exists(log_file):
            return await handle_api_error(request, "Log file not found", 404)
        
        # Read last N lines (or full file if small)
        lines_to_read = _parse_positive_int_query(request.query.get('lines'), default=1000)
        
        try:
            loop = asyncio.get_running_loop()
            payload = await loop.run_in_executor(None, _read_worker_log_sync, log_file, lines_to_read)
            
            return web.json_response({
                "status": "success",
                "content": payload["content"],
                "log_file": log_file,
                "file_size": payload["file_size"],
                "truncated": payload["truncated"],
                "lines_shown": payload["lines_shown"],
            })
            
        except Exception as e:
            return await handle_api_error(request, f"Error reading log file: {str(e)}", 500)
            
    except Exception as e:
        return await handle_api_error(request, e, 500)


@server.PromptServer.instance.routes.get("/distributed/remote_worker_log/{worker_id}")
async def get_remote_worker_log_endpoint(request):
    """Proxy a remote worker log request to the worker's local in-memory log endpoint."""
    try:
        worker_id = str(request.match_info["worker_id"]).strip()
        config = load_config()
        worker = _find_worker_by_id(config, worker_id)
        if not worker:
            return await handle_api_error(request, f"Worker {worker_id} not found", 404)

        # Remote log proxy is only meaningful for remote/cloud workers.
        host = normalize_host(worker.get("host")) or ""
        if not host:
            return await handle_api_error(
                request,
                f"Worker {worker_id} is local; use /distributed/worker_log/{worker_id} instead.",
                400,
            )

        lines_to_read = _parse_positive_int_query(request.query.get("lines"), default=300, maximum=3000)
        worker_url = build_worker_url(worker, "/distributed/local_log")
        session = await get_client_session()
        async with session.get(
            worker_url,
            params={"lines": str(lines_to_read)},
            timeout=aiohttp.ClientTimeout(total=5),
        ) as resp:
            if resp.status >= 400:
                body = await resp.text()
                return await handle_api_error(
                    request,
                    f"Remote worker {worker_id} returned HTTP {resp.status}: {body[:400]}",
                    resp.status,
                )

            try:
                data = await resp.json()
            except Exception as e:
                return await handle_api_error(
                    request,
                    f"Remote worker {worker_id} returned invalid JSON: {e}",
                    502,
                )

        return web.json_response(data)
    except Exception as e:
        return await handle_api_error(request, e, 500)


================================================
FILE: conftest.py
================================================
# conftest.py — project-level pytest configuration.
#
# Problem: custom_nodes/ComfyUI-Distributed/__init__.py uses relative imports
# (from .distributed import ...) that fail when pytest tries to import it as a
# standalone module during Package.setup() for the root package node.
#
# Fix: patch Package.setup() to skip the root-package's __init__.py import.
# All actual package context is provided by each test module via
# importlib.util.spec_from_file_location with synthetic stub packages.

from _pytest.python import Package

_orig_pkg_setup = Package.setup


def _patched_pkg_setup(self) -> None:
    # Skip the root package setup — its __init__.py uses relative imports
    # that require a parent package (ComfyUI's plugin loader) which is not
    # available in the test environment.
    if self.path == self.config.rootpath:
        return
    _orig_pkg_setup(self)


Package.setup = _patched_pkg_setup

collect_ignore = [
    "__init__.py",
    "distributed.py",
    "distributed_upscale.py",
]


================================================
FILE: distributed.py
================================================
"""
ComfyUI-Distributed: thin entry point.
All implementation lives in workers/, nodes/, api/.
"""
import atexit
import os

import server

from .utils.config import ensure_config_exists
from .utils.logging import debug_log
from .utils.network import cleanup_client_session
from .workers import get_worker_manager
from .workers.startup import delayed_auto_launch, register_async_signals, sync_cleanup
from .upscale.job_store import ensure_tile_jobs_initialized
from .nodes import (
    NODE_CLASS_MAPPINGS,
    NODE_DISPLAY_NAME_MAPPINGS,
    ImageBatchDivider,
    DistributedCollectorNode,
    DistributedSeed,
    DistributedModelName,
    DistributedValue,
    AudioBatchDivider,
    DistributedEmptyImage,
    AnyType,
    ByPassTypeTuple,
    any_type,
)
from . import api  # noqa: F401 - triggers all @routes.* registrations
from .api.queue_orchestration import ensure_distributed_state

ensure_config_exists()

# Aiohttp session cleanup
async def _cleanup_session():
    await cleanup_client_session()


atexit.register(lambda: None)  # placeholder; real cleanup in sync_cleanup

# Initialize distributed job state on prompt_server
prompt_server = server.PromptServer.instance
ensure_distributed_state(prompt_server)
ensure_tile_jobs_initialized()

# Worker startup
if not os.environ.get('COMFYUI_IS_WORKER'):
    atexit.register(sync_cleanup)
    delayed_auto_launch()
    register_async_signals()


================================================
FILE: docs/comfyui-distributed-api.md
================================================
# ComfyUI-Distributed API (Experimental)

This document describes the **public HTTP API** added to ComfyUI-Distributed to allow queueing *distributed* workflows from external tools (scripts, services, CI jobs, render farms, etc.) without using the ComfyUI web UI.

## Demo

- Video walkthrough: https://youtu.be/yiQlPd0MzLk

## Examples Repository

- Examples repo: https://github.com/umanets/ComfyUI-Distributed-API-examples.git

---

## Overview

### What this adds

- `POST /distributed/queue` — queues a workflow using the same distributed orchestration rules as the UI:
  - Detects distributed nodes in the prompt (`DistributedCollector`, `UltimateSDUpscaleDistributed`).
  - Resolves enabled/selected workers.
  - Pings workers (`GET /prompt`) to include only reachable ones.
  - Dispatches the workflow to workers (`POST /prompt`).
  - Queues the master workflow in ComfyUI’s prompt queue.
  - If any `DistributedCollector` has `load_balance=true`, selects one least-busy participant for this run.

### What it does *not* add

- Authentication/authorization.
- A separate “job status” API for distributed results (you still use ComfyUI’s normal prompt history / websocket flow, and the existing `/distributed/queue_status/{job_id}` behavior for collector queues).

---

## Endpoint: `POST /distributed/queue`

Queue a workflow for distributed execution.

### URL

- `http://<master-host>:<master-port>/distributed/queue`

### Headers

- `Content-Type: application/json`

### Request Body

```json
{
  "prompt": { "<node_id>": { "class_type": "...", "inputs": { } } },
  "workflow": { },
  "client_id": "external-client",
  "delegate_master": false,
  "enabled_worker_ids": ["1", "2"],
  "workers": ["1", "2"],
  "auto_prepare": true,
  "trace_execution_id": "exec_1700000000_ab12cd"
}
```

#### Fields

- `prompt` (object; required unless `workflow.prompt` is present)
  - The ComfyUI prompt/workflow graph, same shape as used by `POST /prompt`.
- `workflow` (optional, object)
  - Workflow metadata that ComfyUI normally stores in `extra_pnginfo.workflow`.
  - If you don’t care about UI metadata, you can omit it.
- `client_id` (required, string)
  - Passed through as `extra_data.client_id` (useful if you consume ComfyUI websocket events).
- `delegate_master` (optional, boolean)
  - If `true`, attempts “workers-only” execution for workflows based on `DistributedCollector`.
  - Current limitation: delegate-only mode **does not support** `UltimateSDUpscaleDistributed` and will fall back to running the full prompt on master.
- `enabled_worker_ids` (required, array of strings)
  - The explicit worker IDs to consider for this run.
- `workers` (optional, array of strings or objects with `id`)
  - Transitional alias for `enabled_worker_ids` used by older clients.
- `auto_prepare` (optional, boolean)
  - Kept for wire compatibility.
  - Backend orchestration always runs with auto-prepare semantics.
  - If top-level `prompt` is omitted, backend will attempt `workflow.prompt`.
- `trace_execution_id` (optional, string)
  - Passed through to orchestration logs.
  - Server log lines include the marker as `[exec:<trace_execution_id>]`.

##### How to get `enabled_worker_ids`

Worker IDs come from the plugin config (`GET /distributed/config`) under `workers[].id`.

Example (bash + `jq`):

```bash
curl -s "http://127.0.0.1:8188/distributed/config" \
  | jq -r '.workers[] | "id=\(.id)\tname=\(.name)\tenabled=\(.enabled)\thost=\(.host)\tport=\(.port)\ttype=\(.type)"'
```

Example (PowerShell):

```powershell
$cfg = Invoke-RestMethod "http://127.0.0.1:8188/distributed/config"
$cfg.workers | Select-Object id,name,enabled,host,port,type | Format-Table -AutoSize
```
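
Example (Python, `requests`):

```python
import requests

# Fetch the plugin config and list worker ids (same data as the examples above).
cfg = requests.get("http://127.0.0.1:8188/distributed/config", timeout=10).json()
for w in cfg.get("workers", []):
    print(w.get("id"), w.get("name"), w.get("enabled"), w.get("host"), w.get("port"), w.get("type"))
```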

### Response Body

```json
{
  "prompt_id": "<uuid>",
  "worker_count": 2,
  "auto_prepare_supported": true
}
```

- `prompt_id` — the master prompt id queued into ComfyUI.
- `worker_count` — number of workers that received a dispatched prompt (only those that passed the health check).
- `auto_prepare_supported` — always `true` on current builds, since backend orchestration always runs with auto-prepare semantics.

### Status Codes

- `200` — queued.
- `400` — invalid JSON or invalid body.
- `500` — orchestration/dispatch failure (see server logs for details).

---

## Worker requirements (important)

For a worker to participate, it must be reachable from the master:

- Health check: `GET <worker-base>/prompt` must return HTTP 200.
- Dispatch: `POST <worker-base>/prompt` must accept the workflow.

Also, for collector-based flows:

- Workers will send results back to the master via `POST /distributed/job_complete` (that route must be reachable from workers).
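
As a quick sanity check before queueing, you can probe each worker the same way the orchestrator does. A minimal Python sketch (the base URLs below are placeholders for your own workers):

```python
import requests

# Base URLs of the workers the master must be able to reach (placeholders).
workers = ["http://192.168.1.50:8188", "https://my-pod.proxy.runpod.net"]

for base in workers:
    try:
        # Same health check the orchestrator performs: GET <worker-base>/prompt.
        ok = requests.get(f"{base}/prompt", timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    print(base, "reachable" if ok else "unreachable")
```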

---

## Endpoint: `POST /distributed/job_complete`

Submit one completed worker image back to the master collector queue.

### URL

- `http://<master-host>:<master-port>/distributed/job_complete`

### Request Body

```json
{
  "job_id": "exec_1234567890_17",
  "worker_id": "worker-1",
  "batch_idx": 0,
  "image": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA...",
  "is_last": false
}
```

### Canonical envelope (required fields)

- `job_id` (string, required)
- `worker_id` (string, required)
- `batch_idx` (integer >= 0, required)
- `image` (string, required)
  - PNG payload as either:
    - data URL: `data:image/png;base64,...`
    - raw base64 PNG bytes
- `is_last` (boolean, required)

Legacy multipart/tensor payload formats are no longer accepted on this endpoint.
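
For reference, here is a minimal Python sketch that builds and posts one canonical envelope from a PNG file on disk (the file path and ids are placeholders):

```python
import base64
import requests

# Placeholder inputs; in a real worker these come from the dispatched job.
job_id, worker_id, batch_idx, is_last = "exec_1234567890_17", "worker-1", 0, True

with open("result.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "job_id": job_id,
    "worker_id": worker_id,
    "batch_idx": batch_idx,
    "image": f"data:image/png;base64,{encoded}",
    "is_last": is_last,
}

resp = requests.post("http://127.0.0.1:8188/distributed/job_complete", json=payload, timeout=60)
resp.raise_for_status()
```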

### CORS note

If you call the API from a browser (not from a backend), ensure the master ComfyUI is started with `--enable-cors-header`.

---

## Log Endpoints

### `GET /distributed/worker_log/{worker_id}`

Read log files for workers launched locally by the master UI process manager.

- Intended for managed local workers.
- Query param: `lines` (optional, default `1000`).

### `GET /distributed/local_log`

Read this ComfyUI instance's in-memory runtime log buffer.

- Available on any ComfyUI-Distributed instance (master or worker).
- Query param: `lines` (optional, default `300`, max `3000`).

### `GET /distributed/remote_worker_log/{worker_id}`

Proxy endpoint on master that fetches logs from a configured remote/cloud worker's
`/distributed/local_log`.

- Intended for remote/cloud workers in master config.
- Query param: `lines` (optional, default `300`, max `3000`).
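
For example, a small Python helper that tails a remote worker's in-memory log through the master proxy (the worker id and host are placeholders):

```python
import requests

# Fetch the last 200 log lines of configured worker "1" via the master proxy.
resp = requests.get(
    "http://127.0.0.1:8188/distributed/remote_worker_log/1",
    params={"lines": 200},
    timeout=10,
)
resp.raise_for_status()
print(resp.json().get("content", ""))
```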

---

## Examples

### 1) Minimal `curl`

```bash
curl -X POST "http://127.0.0.1:8188/distributed/queue" \
  -H "Content-Type: application/json" \
  -d @payload.json
```

Where `payload.json` contains at least:

```json
{
  "prompt": {
    "1": {"class_type": "KSampler", "inputs": {} }
  },
  "enabled_worker_ids": [],
  "client_id": "external-client"
}
```

### 2) Python (`requests`)

```python
import requests

url = "http://127.0.0.1:8188/distributed/queue"
payload = {
    "prompt": {...},
    "workflow": {...},
    "client_id": "external-client",
    "delegate_master": False,
    "enabled_worker_ids": ["1", "2"],
}

r = requests.post(url, json=payload, timeout=60)
r.raise_for_status()
print(r.json())
```

### 3) JavaScript (`fetch`)

```js
const url = "http://127.0.0.1:8188/distributed/queue";

const payload = {
  prompt: {/* ... */},
  workflow: {/* ... */},
  client_id: "external-client",
  delegate_master: false,
  enabled_worker_ids: ["1", "2"],
};

const resp = await fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload),
});

if (!resp.ok) throw new Error(await resp.text());
console.log(await resp.json());
```
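
### 4) Polling for completion (sketch)

The queue endpoint returns a `prompt_id`; results then surface through ComfyUI's normal prompt history. A minimal polling sketch, assuming the standard ComfyUI `GET /history/{prompt_id}` endpoint (the ids are placeholders):

```python
import time
import requests

def wait_for_prompt(base, prompt_id, timeout=600, interval=2.0):
    """Poll ComfyUI's history until the prompt appears or we time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        hist = requests.get(f"{base}/history/{prompt_id}", timeout=10).json()
        if prompt_id in hist:
            return hist[prompt_id]  # contains outputs, status, etc.
        time.sleep(interval)
    raise TimeoutError(f"prompt {prompt_id} not finished after {timeout}s")

result = wait_for_prompt("http://127.0.0.1:8188", "<prompt_id from /distributed/queue>")
```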

---

## Operational notes / gotchas

- If the workflow contains **no distributed nodes**, the endpoint falls back to normal master queueing and returns `worker_count: 0`.
- Worker selection is “best-effort”: offline workers are skipped.
- For public URLs/tunnels: prefer configuring `master.host` with an explicit scheme (`https://...`) to avoid ambiguity.

---

## Changelog (this feature)

- Added `POST /distributed/queue` endpoint.
- Added orchestration module used by the endpoint.


================================================
FILE: docs/model-download-script.md
================================================
## Automating ComfyUI Model Downloads
> This guide walks you through creating a shell script that automatically downloads the models your ComfyUI workflow needs, using a Large Language Model (LLM) with internet access to generate it.

1. In ComfyUI (on your local machine), export your workflow as an API workflow
2. Copy the below prompt and upload the API workflow to an LLM **that has access to the internet**

<details>
<summary><strong>📋 Click to expand the full prompt</strong></summary>

```
Create a sh script that will download the models from this workflow into the correct folders. For reference, these are the paths:
base_path: /workspace/ComfyUI
checkpoints: models/checkpoints/
clip: models/clip/
clip_vision: models/clip_vision/
controlnet: models/controlnet/
diffusion_models: models/diffusion_models/
embeddings: models/embeddings/
florence2: models/florence2/
ipadapter: models/ipadapter/
loras: models/loras/
style_models: models/style_models/
text_encoders: models/text_encoders/
unet: models/unet/
upscale_models: models/upscale_models/
vae: models/vae/
---
Important:
Make sure you find the correct URLs for the models online.
Use comfy cli to download the models: `comfy model download --url <URL> [--relative-path <PATH>] [--set-civitai-api-token <TOKEN>] [--set-hf-api-token <TOKEN>]`
Make sure you add `--set-civitai-api-token $CIVITAI_API_TOKEN` for CivitAI download and `--set-hf-api-token $HF_API_TOKEN` for Hugging Face downloads.
---
Example:
#!/bin/bash
# Download from CivitAI
comfy model download --url https://civitai.com/api/download/models/1759168 --relative-path /workspace/ComfyUI/models/checkpoints --set-civitai-api-token $CIVITAI_API_TOKEN
# Download model from Hugging Face
comfy model download --url https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors --relative-path /workspace/ComfyUI/models/unet --set-hf-api-token $HF_API_TOKEN
# If a model in the workflow was in a subfolder
comfy model download --url https://civitai.com/api/download/models/1759168 --relative-path /workspace/ComfyUI/models/checkpoints/SDXL --set-civitai-api-token $CIVITAI_API_TOKEN
```

</details>

3. Review the LLM's output to make sure all download links are correct, then save it as a .sh file, for example `download_models.sh`
4. Launch the [ComfyUI Distributed Pod](https://console.runpod.io/deploy?template=m21ynvo8yo&ref=ak218p52) with these Environment Variables:
   - `CIVITAI_API_TOKEN`: [get your token here](https://civitai.com/user/account)
   - `HF_API_TOKEN`: [get your token here](https://huggingface.co/settings/tokens)
5. Upload the .sh file to your Runpod instance, into `/workspace`
6. Then run these commands:
   - `chmod 755 /workspace/download_models.sh`
   - `/workspace/download_models.sh`
7. Confirm each model's name (you may need to rename files to match the names on your local machine)


================================================
FILE: docs/video-upscaler-runpod-preset.md
================================================
![Clipboard Image](https://github.com/user-attachments/assets/5dc5224f-3f47-442c-b94a-116afeb28132)

**Accelerated Creative Video Upscaler On Runpod:**

1. Use the [ComfyUI Distributed Pod](https://console.runpod.io/deploy?template=m21ynvo8yo&ref=0bw29uf3ug0p) template.
2. Filter instances by CUDA 12.8 (add filter in Additional Filters at the top of the page).
3. Choose 4x RTX 5090s.
4. Press Edit Template to configure the pod's Environment Variables:
	- CIVITAI_API_TOKEN: Not necessary for this workflow.
	- HF_API_TOKEN: [get your token here](https://huggingface.co/settings/tokens)
	- SAGE_ATTENTION: optional optimisation (set to true/false). Recommended for this workflow.
	- PRESET_VIDEO_UPSCALER: set to true. This will download everything you need.
5. Deploy your pod.
6. Once pod setup is complete, connect to ComfyUI running on your pod.
7. In ComfyUI, open the GPU panel on the left.
> If you set SAGE_ATTENTION to true, add "--use-sage-attention" to Extra Args on the workers.
8. Launch the workers.
9. [Load the workflow.](https://github.com/robertvoy/ComfyUI-Distributed/blob/main/workflows/distributed-upscale-video.json)
10. Upload a video, add a prompt, and run the workflow.
11. Right-click the Video Combine node and click Save Preview to save the video.


================================================
FILE: docs/worker-setup-guides.md
================================================
## Worker Setup Guide

**Master**: The main ComfyUI instance that coordinates and distributes work. This is where you load workflows, manage the queue, and view results.

**Worker**: A ComfyUI instance that receives and processes tasks from the master. Workers handle just the GPU computation and send results back to the master. You can have multiple workers connected to a single master, each utilizing their own GPU.

<img width="600" src="https://github.com/user-attachments/assets/609c42aa-8a1c-4a3f-939e-f3552fa1d54f" />

### Master participation modes

The master can either contribute GPU work or stay in **orchestrator-only** mode:

- **Participating**: Master renders alongside workers, useful when you want every available GPU.
- **Orchestrator-only**: Master sends jobs to selected workers but skips local rendering. Enable this by opening the Distributed panel and unchecking the master toggle. The master card will display *“Master disabled: running as orchestrator only.”*
- **Fallback**: If orchestrator-only is enabled but no workers remain selected, the master automatically re-enables execution to guarantee the workflow still runs. The UI shows a green *“Master fallback execution active”* badge so you know work is executing locally again.

### Types of Workers

- **Local workers**: Additional GPUs on the same machine as the master
- **Remote workers**: GPUs on different computers within your network
- **Cloud workers**: GPUs hosted on cloud services like Runpod

## Local workers

<img align="right" width="200" src="https://github.com/user-attachments/assets/651e4912-7c23-4e32-bd88-250f5175e129" />

> These are added automatically on first launch, but you can add them manually if you need to.


📺 [Watch Tutorial](https://youtu.be/p6eE3IlAbOs?si=K7Km0_flmPHwRQwz&t=43)

1. **Open** the Distributed GPU panel.
2. **Click** "Add Worker" in the UI.
3. **Configure** your local worker:
   - **Name**: A descriptive name for the worker (e.g., "Studio PC 1")
   - **Port**: A unique port number for this worker (e.g., 8189, 8190...).
   - **CUDA Device**: The GPU index from `nvidia-smi` (e.g., 0, 1).
   - **Extra Args**: Optional ComfyUI arguments for this specific worker.
4. **Save** and launch the local worker.

## Remote workers

<img align="right" width="200"  src="https://github.com/user-attachments/assets/84291921-c44e-4556-94f2-a3b16500f4f9" />


> ComfyUI instances running on completely different computers on your network. These allow you to harness GPU power from other machines. Remote workers must be manually started on their respective computers and are connected via IP address.

📺 [Watch Tutorial](https://youtu.be/p6eE3IlAbOs?si=Oxj3EzPyf4jKDvfG&t=140)

**On the Remote Worker Machine:**
1. **Launch** ComfyUI with the `--listen --enable-cors-header` arguments. ⚠️ **Required!**
   - This ComfyUI instance will serve as a worker for your main master.
2. *Optionally* add additional local workers on this machine if it has multiple GPUs:
   - Access the Distributed GPU panel in this ComfyUI instance
   - Add workers for any additional GPUs (if they haven't been added automatically)
   - Make sure they have `--listen` set in `Extra Args`
   - Launch them
3. **Open** the ComfyUI port (e.g., 8188) and any additional worker ports (e.g., 8189, 8190) in the firewall.
  
**On the Main Machine:**
1. **Launch** ComfyUI with `--enable-cors-header` launch argument.
2. **Open** the Distributed GPU panel (sidebar on the left).
3. **Click** "Add Worker."
4. **Choose** "Remote".
5. **Configure** your remote worker:
   - **Name**: A descriptive name for the worker (e.g., "Server Rack GPU 0")
   - **Host**: The remote worker's IP address.
   - **Port**: The port number used when launching ComfyUI on the remote machine (e.g., 8188).
6. **Save** the remote worker configuration.
  
## Cloud workers

<img align="right" width="200"  src="https://github.com/user-attachments/assets/a053f3ae-22f0-4e1c-8f2e-f26a1f660adf" />

> ComfyUI instances running on a cloud service like Runpod. 

### Deploy Cloud Worker on Runpod

📺 [Watch Tutorial](https://www.youtube.com/watch?v=wxKKWMQhYTk)

**On Runpod:**
> If using your own template, make sure you launch ComfyUI with the `--enable-cors-header` argument and `git clone` ComfyUI-Distributed into `custom_nodes`. ⚠️ **Required!**

1. Register a [Runpod](https://get.runpod.io/0bw29uf3ug0p) account.
2. On Runpod, go to Storage > New Network Volume and create a volume that will store the models you need. Start with 40 GB; you can always add more later. Learn more [about Network Volumes](https://docs.runpod.io/pods/storage/create-network-volumes).
3. Use the [ComfyUI Distributed Pod](https://console.runpod.io/deploy?template=m21ynvo8yo&ref=0bw29uf3ug0p) template.
4. Make sure your Network Volume is mounted and choose a suitable GPU.
> ⚠️ To use the ComfyUI Distributed Pod template, you will need to filter instances by CUDA 12.8 (add filter in Additional Filters).
5. Press Edit Template to configure the pod's Environment Variables:
	- CIVITAI_API_TOKEN: [get your token here](https://civitai.com/user/account)
	- HF_API_TOKEN: [get your token here](https://huggingface.co/settings/tokens)
	- SAGE_ATTENTION: optional optimisation (set to true/false)
6. Deploy your pod.
7. Connect to your pod using JupyterLab. This gives us access to the pod's file system.
8. Download models into /workspace/ComfyUI/models/ (these will remain on your network drive even after you terminate the pod). Example commands below:
```
# Download from CivitAI
comfy model download --url https://civitai.com/api/download/models/1759168 --relative-path /workspace/ComfyUI/models/checkpoints --set-civitai-api-token $CIVITAI_API_TOKEN
# Download model from Hugging Face
comfy model download --url https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors --relative-path /workspace/ComfyUI/models/unet --set-hf-api-token $HF_API_TOKEN
```
> ℹ️ Use [this guide](model-download-script.md) to make this process easy. It will generate a shell script that automatically downloads the models for a given workflow.
9. Access ComfyUI through the Runpod URL.
10. Download any additional custom nodes you need using the ComfyUI Manager.

**On the Main Machine:**
1. **Launch** a Cloudflare tunnel.
   - Download from here: [https://github.com/cloudflare/cloudflared/releases](https://github.com/cloudflare/cloudflared/releases)
	- Then run, for example: `cloudflared-windows-amd64.exe tunnel --url http://localhost:8188`
> ℹ️ Cloudflare tunnels create secure connections without exposing ports directly to the internet and are required for Cloud Workers.
2. **Copy** the Cloudflare address
3. **Launch** ComfyUI with `--enable-cors-header` launch argument.
4. **Open** the Distributed GPU panel (sidebar on the left).
5. **Edit** the Master's settings to change the host address to the Cloudflare address.
6. **Click** "Add Worker."
7. **Choose** "Cloud".
8. **Configure** your cloud worker:
	- **Host**: The ComfyUI Runpod address. For example: `wcegfo9tbbml9l-8188.proxy.runpod.net`
	- **Port**: 443
9. **Save** the cloud worker configuration.

---

### Deploy Cloud Worker on Other Platforms

**On the Cloud Worker machine:**
   - Your cloud worker container needs the same models and custom nodes as the workflow you want to run from your local machine.
   - If your cloud platform doesn't provide a secure connection, use Cloudflare to create a tunnel for the worker. Each GPU needs its own tunnel for its respective port.
	- For example: `./cloudflared tunnel --url http://localhost:8188`
1. **Launch** ComfyUI with the `--listen --enable-cors-header` arguments. ⚠️ **Required!**
2. **Add** workers in the UI panel if the cloud machine has more than one GPU.
   - Make sure that they also have `--listen` set in `Extra Args`.
   - Then launch them.
  
**On the Main Machine:**
1. **Launch** a Cloudflare tunnel on your local machine.
   - Download from here: [https://github.com/cloudflare/cloudflared/releases](https://github.com/cloudflare/cloudflared/releases)
   - Then run, for example: `cloudflared-windows-amd64.exe tunnel --url http://localhost:8188`
2. **Copy** the Cloudflare address
3. **Launch** ComfyUI with `--enable-cors-header` launch argument.
4. **Open** the Distributed GPU panel (sidebar on the left).
5. **Edit** the Master's host address and replace it with the Cloudflare address.
6. **Click** "Add Worker."
7. **Choose** "Cloud".
8. **Configure** your cloud worker:
   - **Host**: The remote worker's IP address/domain
   - **Port**: 443
9. **Save** the cloud worker configuration.


================================================
FILE: nodes/__init__.py
================================================
from .utilities import (
    DistributedSeed,
    DistributedModelName,
    DistributedValue,
    ImageBatchDivider,
    AudioBatchDivider,
    DistributedEmptyImage,
    AnyType,
    ByPassTypeTuple,
    any_type,
)
from .collector import DistributedCollectorNode

NODE_CLASS_MAPPINGS = {
    "DistributedCollector": DistributedCollectorNode,
    "DistributedSeed": DistributedSeed,
    "DistributedModelName": DistributedModelName,
    "DistributedValue": DistributedValue,
    "ImageBatchDivider": ImageBatchDivider,
    "AudioBatchDivider": AudioBatchDivider,
    "DistributedEmptyImage": DistributedEmptyImage,
}
NODE_DISPLAY_NAME_MAPPINGS = {
    "DistributedCollector": "Distributed Collector",
    "DistributedSeed": "Distributed Seed",
    "DistributedModelName": "Distributed Model Name",
    "DistributedValue": "Distributed Value",
    "ImageBatchDivider": "Image Batch Divider",
    "AudioBatchDivider": "Audio Batch Divider",
    "DistributedEmptyImage": "Distributed Empty Image",
}


================================================
FILE: nodes/collector.py
================================================
import torch
import io
import json
import asyncio
import time
import base64

import aiohttp
import server as _server
import comfy.model_management
from comfy.utils import ProgressBar

from ..utils.logging import debug_log, log
from ..utils.config import get_worker_timeout_seconds, load_config, is_master_delegate_only
from ..utils.constants import HEARTBEAT_INTERVAL
from ..utils.image import tensor_to_pil, pil_to_tensor, ensure_contiguous
from ..utils.network import build_worker_url, get_client_session, probe_worker
from ..utils.audio_payload import encode_audio_payload
from ..utils.async_helpers import run_async_in_server_loop

prompt_server = _server.PromptServer.instance


class DistributedCollectorNode:
    EMPTY_AUDIO = {"waveform": torch.zeros(1, 2, 1), "sample_rate": 44100}

    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "images": ("IMAGE",),
                "load_balance": (
                    "BOOLEAN",
                    {
                        "default": False,
                        "tooltip": "Run this workflow on one least-busy participant (master included when participating).",
                    },
                ),
            },
            "optional": { "audio": ("AUDIO",) },
            "hidden": {
                "multi_job_id": ("STRING", {"default": ""}),
                "is_worker": ("BOOLEAN", {"default": False}),
                "master_url": ("STRING", {"default": ""}),
                "enabled_worker_ids": ("STRING", {"default": "[]"}),
                "worker_batch_size": ("INT", {"default": 1, "min": 1, "max": 1024}),
                "worker_id": ("STRING", {"default": ""}),
                "pass_through": ("BOOLEAN", {"default": False}),
                "delegate_only": ("BOOLEAN", {"default": False}),
            },
        }

    RETURN_TYPES = ("IMAGE", "AUDIO")
    RETURN_NAMES = ("images", "audio")
    FUNCTION = "run"
    CATEGORY = "image"
    
    def run(self, images, load_balance=False, audio=None, multi_job_id="", is_worker=False, master_url="", enabled_worker_ids="[]", worker_batch_size=1, worker_id="", pass_through=False, delegate_only=False):
        # Create empty audio if not provided
        empty_audio = {"waveform": torch.zeros(1, 2, 1), "sample_rate": 44100}

        if not multi_job_id or pass_through:
            if pass_through:
                debug_log("Collector: pass-through mode enabled, returning images unchanged")
            return (images, audio if audio is not None else empty_audio)

        # Use async helper to run in server loop
        result = run_async_in_server_loop(
            self.execute(
                images,
                audio,
                load_balance,
                multi_job_id,
                is_worker,
                master_url,
                enabled_worker_ids,
                worker_batch_size,
                worker_id,
                delegate_only,
            )
        )
        return result

    async def send_batch_to_master(self, image_batch, audio, multi_job_id, master_url, worker_id):
        """Send image batch to master via canonical JSON envelopes."""
        batch_size = image_batch.shape[0]
        if batch_size == 0:
            return

        encoded_audio = encode_audio_payload(audio)

        session = await get_client_session()
        url = f"{master_url}/distributed/job_complete"
        for batch_idx in range(batch_size):
            img = tensor_to_pil(image_batch[batch_idx:batch_idx+1], 0)
            byte_io = io.BytesIO()
            img.save(byte_io, format='PNG', compress_level=0)
            encoded_image = base64.b64encode(byte_io.getvalue()).decode('utf-8')
            payload = {
                "job_id": str(multi_job_id),
                "worker_id": str(worker_id),
                "batch_idx": int(batch_idx),
                "image": f"data:image/png;base64,{encoded_image}",
                "is_last": bool(batch_idx == batch_size - 1),
            }
            if payload["is_last"] and encoded_audio is not None:
                payload["audio"] = encoded_audio

            try:
                async with session.post(
                    url,
                    json=payload,
                    timeout=aiohttp.ClientTimeout(total=60),
                ) as response:
                    response.raise_for_status()
            except Exception as e:
                log(f"Worker - Failed to send canonical image envelope to master: {e}")
                debug_log(f"Worker - Full error details: URL={url}")
                raise  # Re-raise to handle at caller level

    def _combine_audio(self, master_audio, worker_audio, empty_audio, worker_order=None):
        """Combine audio from master and workers into a single audio output.

        Ordering: master first, then workers in `worker_order` (if provided),
        then any unexpected worker ids in sorted order.
        """
        audio_pieces = []
        sample_rate = 44100

        # Add master audio first if present
        if master_audio is not None:
            waveform = master_audio.get("waveform")
            if waveform is not None and waveform.numel() > 0:
                audio_pieces.append(waveform)
                sample_rate = master_audio.get("sample_rate", 44100)

        # Add worker audio in configured enabled-worker order first.
        ordered_worker_ids = [str(worker_id) for worker_id in (worker_order or [])]
        seen = set()
        for worker_id_str in ordered_worker_ids:
            seen.add(worker_id_str)
            w_audio = worker_audio.get(worker_id_str)
            if w_audio is not None:
                waveform = w_audio.get("waveform")
                if waveform is not None and waveform.numel() > 0:
                    audio_pieces.append(waveform)
                    # Use first available sample rate
                    if sample_rate == 44100:
                        sample_rate = w_audio.get("sample_rate", 44100)

        # Append any audio from unexpected worker ids deterministically.
        for worker_id_str in sorted(worker_audio.keys()):
            if worker_id_str in seen:
                continue
            w_audio = worker_audio[worker_id_str]
            if w_audio is not None:
                waveform = w_audio.get("waveform")
                if waveform is not None and waveform.numel() > 0:
                    audio_pieces.append(waveform)
                    if sample_rate == 44100:
                        sample_rate = w_audio.get("sample_rate", 44100)

        if not audio_pieces:
            return empty_audio

        try:
            # Concatenate along the samples dimension (dim=-1)
            # Ensure all pieces have same batch and channel dimensions
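            # e.g. torch.cat of (1, 2, 44100) and (1, 2, 22050) gives (1, 2, 66150)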
            combined_waveform = torch.cat(audio_pieces, dim=-1)
            debug_log(f"Master - Combined audio: {len(audio_pieces)} pieces, final shape={combined_waveform.shape}")
            return {"waveform": combined_waveform, "sample_rate": sample_rate}
        except Exception as e:
            log(f"[Distributed] Master - Audio combination failed, returning silence: {e}")
            return empty_audio

    def _store_worker_result(self, worker_images: dict, item: dict) -> int:
        """Store one canonical queue item in worker_images in-place.

        Canonical format:
        - item has 'worker_id', 'image_index', and 'tensor'
        Returns 1 when stored, otherwise 0.
        """
        worker_id = item['worker_id']
        tensor = item.get('tensor')
        image_index = item.get('image_index')
        if tensor is None or image_index is None:
            return 0

        worker_images.setdefault(worker_id, {})
        worker_images[worker_id][image_index] = tensor
        return 1

    def _reorder_and_combine_tensors(
        self,
        worker_images: dict,
        worker_order: list,
        master_batch_size: int,
        images_on_cpu,
        delegate_mode: bool,
        fallback_images,
    ) -> torch.Tensor:
        """Assemble final tensor: master first, then workers in enabled order."""
        ordered_tensors = []
        if not delegate_mode and images_on_cpu is not None:
            for i in range(master_batch_size):
                ordered_tensors.append(images_on_cpu[i:i+1])

        ordered_worker_ids = [str(worker_id) for worker_id in (worker_order or [])]
        seen = set()
        for worker_id_str in ordered_worker_ids:
            seen.add(worker_id_str)
            if worker_id_str not in worker_images:
                continue
            for idx in sorted(worker_images[worker_id_str].keys()):
                ordered_tensors.append(worker_images[worker_id_str][idx])

        # Append any unexpected worker ids deterministically.
        for worker_id_str in sorted(worker_images.keys()):
            if worker_id_str in seen:
                continue
            for idx in sorted(worker_images[worker_id_str].keys()):
                ordered_tensors.append(worker_images[worker_id_str][idx])

        cpu_tensors = []
        for t in ordered_tensors:
            if t.is_cuda:
                t = t.cpu()
            t = ensure_contiguous(t)
            cpu_tensors.append(t)

        if cpu_tensors:
            return ensure_contiguous(torch.cat(cpu_tensors, dim=0))
        elif fallback_images is not None:
            return ensure_contiguous(fallback_images)
        else:
            raise ValueError("No image data collected from master or workers")

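    # Master-side collection, in brief:
    #   1. Deduplicate enabled worker ids (order preserved).
    #   2. Create the per-job asyncio.Queue *before* any expensive local work,
    #      so an early worker job_complete cannot race the queue's creation.
    #   3. Drain envelopes with short sliced waits; on prolonged silence,
    #      probe each missing worker's /prompt endpoint and keep waiting if
    #      any of them still reports queued work.
    #   4. Reassemble: master images first, then workers in enabled order.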
    async def execute(self, images, audio, load_balance=False, multi_job_id="", is_worker=False, master_url="", enabled_worker_ids="[]", worker_batch_size=1, worker_id="", delegate_only=False):
        if is_worker:
            # Worker mode: send images and audio to master in a single batch
            debug_log(f"Worker - Job {multi_job_id} complete. Sending {images.shape[0]} image(s) to master")
            await self.send_batch_to_master(images, audio, multi_job_id, master_url, worker_id)
            return (images, audio if audio is not None else self.EMPTY_AUDIO)
        else:
            delegate_mode = delegate_only or is_master_delegate_only()
            # Master mode: collect images and audio from workers
            enabled_workers_raw = json.loads(enabled_worker_ids)
            enabled_workers = []
            seen_enabled = set()
            for worker_id in enabled_workers_raw:
                worker_id_str = str(worker_id)
                if worker_id_str in seen_enabled:
                    continue
                seen_enabled.add(worker_id_str)
                enabled_workers.append(worker_id_str)
            expected_workers = set(enabled_workers)
            num_workers = len(expected_workers)
            if num_workers == 0:
                return (images, audio if audio is not None else self.EMPTY_AUDIO)

            # Create the queue before any expensive local work to avoid job_complete race.
            async with prompt_server.distributed_jobs_lock:
                if multi_job_id not in prompt_server.distributed_pending_jobs:
                    prompt_server.distributed_pending_jobs[multi_job_id] = asyncio.Queue()
                    debug_log(f"Master - Initialized queue early for job {multi_job_id}")
                else:
                    existing_size = prompt_server.distributed_pending_jobs[multi_job_id].qsize()
                    debug_log(f"Master - Using existing queue for job {multi_job_id} (current size: {existing_size})")

            if delegate_mode:
                master_batch_size = 0
                images_on_cpu = None
                master_audio = None
                debug_log(f"Master - Job {multi_job_id}: Delegate-only mode enabled, collecting exclusively from {num_workers} workers")
            else:
                images_on_cpu = images.cpu()
                master_batch_size = images.shape[0]
                master_audio = audio  # Keep master's audio for later
                debug_log(f"Master - Job {multi_job_id}: Master has {master_batch_size} images, collecting from {num_workers} workers...")

                # Ensure master images are contiguous
                images_on_cpu = ensure_contiguous(images_on_cpu)

            # Initialize storage for collected images and audio
            worker_images = {}  # Dict to store images by worker_id and index
            worker_audio = {}   # Dict to store audio by worker_id
            
            # Collect images until all workers report they're done
            collected_count = 0
            workers_done = set()
            
            # Use unified worker timeout from config/UI with simple sliced waits
            base_timeout = float(get_worker_timeout_seconds())
            slice_timeout = min(max(0.1, HEARTBEAT_INTERVAL / 20.0), base_timeout)
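            # Example: with HEARTBEAT_INTERVAL == 10 the poll slice is 0.5s,
            # clamped to at least 0.1s and never longer than the full timeout.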
            last_activity = time.time()
            
            # Get queue size before starting
            async with prompt_server.distributed_jobs_lock:
                q = prompt_server.distributed_pending_jobs[multi_job_id]
                initial_size = q.qsize()

            # Initialize progress bar for workers (total = num_workers)
            p = ProgressBar(num_workers)

            def mark_worker_done(done_worker_id):
                done_worker_id = str(done_worker_id)
                if done_worker_id not in expected_workers:
                    debug_log(
                        f"Master - Ignoring completion from unexpected worker {done_worker_id} for job {multi_job_id}"
                    )
                    return
                if done_worker_id in workers_done:
                    debug_log(
                        f"Master - Ignoring duplicate completion from worker {done_worker_id} for job {multi_job_id}"
                    )
                    return
                workers_done.add(done_worker_id)
                p.update(1)  # +1 per completed expected worker

            try:
                while len(workers_done) < num_workers:
                    # Check for user interruption to abort collection promptly
                    comfy.model_management.throw_exception_if_processing_interrupted()
                    try:
                        # Get the queue again each time to ensure we have the right reference
                        async with prompt_server.distributed_jobs_lock:
                            q = prompt_server.distributed_pending_jobs[multi_job_id]
                            current_size = q.qsize()
                        
                        result = await asyncio.wait_for(q.get(), timeout=slice_timeout)
                        worker_id = result['worker_id']
                        is_last = result.get('is_last', False)
                        count = self._store_worker_result(worker_images, result)
                        collected_count += count
                        debug_log(
                            f"Master - Got canonical result from worker {worker_id}, "
                            f"image {result.get('image_index', 0)}, is_last={is_last}"
                        )

                        # Collect audio data if present
                        result_audio = result.get('audio')
                        if result_audio is not None:
                            worker_audio[worker_id] = result_audio
                            debug_log(f"Master - Got audio from worker {worker_id}")

                        # Record activity and refresh timeout baseline
                        last_activity = time.time()
                        base_timeout = float(get_worker_timeout_seconds())

                        if is_last:
                            mark_worker_done(worker_id)
                        
                    except asyncio.TimeoutError:
                        # If we still have time, continue polling; otherwise handle timeout
                        if (time.time() - last_activity) < base_timeout:
                            comfy.model_management.throw_exception_if_processing_interrupted()
                            continue
                        # Re-check for user interruption after timeout expiry
                        comfy.model_management.throw_exception_if_processing_interrupted()
                        missing_workers = set(str(w) for w in enabled_workers) - workers_done
                        elapsed = time.time() - last_activity
                        for missing_worker_id in sorted(missing_workers):
                            log(
                                "Master - Heartbeat timeout: "
                                f"worker={missing_worker_id}, elapsed={elapsed:.1f}s"
                            )
                        log(
                            f"Master - Heartbeat timeout. Still waiting for workers: {list(missing_workers)} "
                            f"(elapsed={elapsed:.1f}s)"
                        )

                        # Probe missing workers' /prompt endpoints to check if they are actively processing
                        any_busy = False
                        try:
                            cfg = load_config()
                            cfg_workers = cfg.get('workers', [])
                            for wid in list(missing_workers):
                                wrec = next((w for w in cfg_workers if str(w.get('id')) == str(wid)), None)
                                if not wrec:
                                    debug_log(f"Collector probe: worker {wid} not found in config")
                                    continue
                                worker_url = build_worker_url(wrec)
                                try:
                                    payload = await probe_worker(worker_url, timeout=2.0)
                                    queue_remaining = None
                                    if payload is not None:
                                        queue_remaining = int(payload.get('exec_info', {}).get('queue_remaining', 0))
                                    debug_log(
                                        "Collector probe: worker "
                                        f"{wid} online={payload is not None} queue_remaining={queue_remaining}"
                                    )
                                    if payload is not None and queue_remaining and queue_remaining > 0:
                                        any_busy = True
                                        log(
                                            f"Master - Probe grace: worker {wid} appears busy "
                                            f"(queue_remaining={queue_remaining}). Continuing to wait."
                                        )
                                        break
                                except Exception as e:
                                    debug_log(f"Collector probe failed for worker {wid}: {e}")
                        except Exception as e:
                            debug_log(f"Collector probe setup error: {e}")

                        if any_busy:
                            # Refresh last_activity and continue waiting
                            last_activity = time.time()
                            # Refresh base timeout in case the user changed it in UI
                            base_timeout = float(get_worker_timeout_seconds())
                            continue
                        
                        # Check queue size again with lock
                        async with prompt_server.distributed_jobs_lock:
                            if multi_job_id in prompt_server.distributed_pending_jobs:
                                final_q = prompt_server.distributed_pending_jobs[multi_job_id]
                                final_size = final_q.qsize()
                                
                                # Try to drain any remaining items
                                remaining_items = []
                                while not final_q.empty():
                                    try:
                                        item = final_q.get_nowait()
                                        remaining_items.append(item)
                                    except asyncio.QueueEmpty:
                                        break
                                
                                if remaining_items:
                                    # Process them
                                    for item in remaining_items:
                                        worker_id = item['worker_id']
                                        is_last = item.get('is_last', False)

                                        collected_count += self._store_worker_result(worker_images, item)
                                        
                                        if is_last:
                                            mark_worker_done(worker_id)
                            else:
                                log(f"Master - Queue {multi_job_id} no longer exists!")
                        break
            except comfy.model_management.InterruptProcessingException:
                # Cleanup queue on interruption and re-raise to abort prompt cleanly
                async with prompt_server.distributed_jobs_lock:
                    if multi_job_id in prompt_server.distributed_pending_jobs:
                        del prompt_server.distributed_pending_jobs[multi_job_id]
                raise
            
            total_collected = sum(len(imgs) for imgs in worker_images.values())
            
            # Clean up job queue
            async with prompt_server.distributed_jobs_lock:
                if multi_job_id in prompt_server.distributed_pending_jobs:
                    del prompt_server.distributed_pending_jobs[multi_job_id]

            try:
                combined = self._reorder_and_combine_tensors(
                    worker_images, enabled_workers, master_batch_size, images_on_cpu, delegate_mode, images
                )
                debug_log(f"Master - Job {multi_job_id} complete. Combined {combined.shape[0]} images total "
                          f"(master: {master_batch_size}, workers: {combined.shape[0] - master_batch_size})")

                # Combine audio from master and workers
                combined_audio = self._combine_audio(master_audio, worker_audio, self.EMPTY_AUDIO, enabled_workers)

                return (combined, combined_audio)
            except Exception as e:
                log(f"Master - Error combining images: {e}")
                # Return just the master images as fallback
                return (images, audio if audio is not None else self.EMPTY_AUDIO)


================================================
FILE: nodes/distributed_upscale.py
================================================
import json
import math
from functools import wraps

import comfy.samplers

from ..utils.logging import debug_log, log
from ..utils.async_helpers import run_async_in_server_loop
from ..upscale.job_store import ensure_tile_jobs_initialized

from ..upscale.tile_ops import TileOpsMixin
from ..upscale.result_collector import ResultCollectorMixin
from ..upscale.worker_comms import WorkerCommsMixin
from ..upscale.job_state import JobStateMixin
from ..upscale.modes.single_gpu import SingleGpuModeMixin
from ..upscale.modes.static import StaticModeMixin
from ..upscale.modes.dynamic import DynamicModeMixin

def sync_wrapper(async_func):
    """Decorator to wrap async methods for synchronous execution."""
    @wraps(async_func)
    def sync_func(self, *args, **kwargs):
        # Use run_async_in_server_loop for ComfyUI compatibility
        return run_async_in_server_loop(
            async_func(self, *args, **kwargs),
            timeout=600.0  # 10 minute timeout for long operations
        )
    return sync_func
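
# Usage sketch (hypothetical method, for illustration only): any async method
# on a node class can be exposed synchronously via the decorator above:
#
#     class Example:
#         @sync_wrapper
#         async def fetch(self):
#             return "done"
#
#     Example().fetch()  # blocks until the coroutine completes on the server loop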

def _parse_enabled_worker_ids(enabled_worker_ids):
    """Parse enabled worker IDs from either JSON or list input."""
    if isinstance(enabled_worker_ids, list):
        return [str(worker_id) for worker_id in enabled_worker_ids]
    if not enabled_worker_ids:
        return []
    if isinstance(enabled_worker_ids, str):
        try:
            parsed = json.loads(enabled_worker_ids)
        except json.JSONDecodeError:
            log("USDU Dist: Invalid enabled_worker_ids JSON; defaulting to no workers.")
            return []
        if isinstance(parsed, list):
            return [str(wid) for wid in parsed]
    return []
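
# Illustrative behaviour of the parser above:
#   _parse_enabled_worker_ids('[1, "2"]')  -> ["1", "2"]
#   _parse_enabled_worker_ids([3, 4])      -> ["3", "4"]
#   _parse_enabled_worker_ids("not json")  -> []  (invalid JSON is logged)
#   _parse_enabled_worker_ids("")          -> []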

class UltimateSDUpscaleDistributed(
    DynamicModeMixin,
    StaticModeMixin,
    SingleGpuModeMixin,
    ResultCollectorMixin,
    WorkerCommsMixin,
    JobStateMixin,
    TileOpsMixin,
):
    """
    Distributed version of Ultimate SD Upscale (No Upscale).
    
    Supports three processing modes:
    1. Single GPU: No workers available, process everything locally
    2. Static Mode: Small batches, distributes tiles across workers (flattened)
    3. Dynamic Mode: Large batches, assigns whole images to workers dynamically
    
    Features:
    - Multi-mode batch handling for efficient video/image upscaling
    - Tiled VAE support for memory efficiency
    - Dynamic load balancing for large batches
    - Backward compatible with single-image workflows
    
    Environment Variables:
    - COMFYUI_MAX_BATCH: Chunk size for tile sending (default 20)
    - COMFYUI_MAX_PAYLOAD_SIZE: Max API payload bytes (default 50MB)
    
    Threshold: dynamic_threshold input controls mode switch (default 8)
    """

    def __init__(self):
        """Initialize the node and ensure persistent storage exists."""
        # Pre-initialize the persistent storage on node creation
        ensure_tile_jobs_initialized()
        debug_log("UltimateSDUpscaleDistributed - Node initialized")

    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "upscaled_image": ("IMAGE",),
                "model": ("MODEL",),
                "positive": ("CONDITIONING",),
                "negative": ("CONDITIONING",),
                "vae": ("VAE",),
                "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
                "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
                "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}),
                "sampler_name": (comfy.samplers.KSampler.SAMPLERS,),
                "scheduler": (comfy.samplers.KSampler.SCHEDULERS,),
                "denoise": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
                "tile_width": ("INT", {"default": 512, "min": 64, "max": 2048, "step": 8}),
                "tile_height": ("INT", {"default": 512, "min": 64, "max": 2048, "step": 8}),
                "padding": ("INT", {"default": 32, "min": 0, "max": 256, "step": 8}),
                "mask_blur": ("INT", {"default": 8, "min": 0, "max": 256}),
                "force_uniform_tiles": ("BOOLEAN", {"default": True}),
                "tiled_decode": ("BOOLEAN", {"default": False}),
            },
            "hidden": {
                "multi_job_id": ("STRING", {"default": ""}),
                "is_worker": ("BOOLEAN", {"default": False}),
                "master_url": ("STRING", {"default": ""}),
                "enabled_worker_ids": ("STRING", {"default": "[]"}),
                "worker_id": ("STRING", {"default": ""}),
                "tile_indices": ("STRING", {"default": ""}),  # Unused - kept for compatibility
                "dynamic_threshold": ("INT", {"default": 8, "min": 1, "max": 64}),
            },
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "run"
    CATEGORY = "image/upscaling"

    @classmethod
    def IS_CHANGED(cls, **kwargs):
        """Force re-execution."""
        return float("nan")  # Always re-execute

    def run(self, upscaled_image, model, positive, negative, vae, seed, steps, cfg, 
            sampler_name, scheduler, denoise, tile_width, tile_height, padding, 
            mask_blur, force_uniform_tiles, tiled_decode,
            multi_job_id="", is_worker=False, master_url="", enabled_worker_ids="[]", 
            worker_id="", tile_indices="", dynamic_threshold=8):
        """Entry point - runs SYNCHRONOUSLY like Ultimate SD Upscaler."""
        # Strict WAN/FLOW batching: reject batch sizes that are not of the form 4n+1 (a batch of 1 is allowed)
        try:
            batch_size = int(getattr(upscaled_image, 'shape', [1])[0])
        except Exception:
            batch_size = 1
        # Enforce 4n+1 batches globally for any model when batch > 1 (master only)
        if not is_worker and batch_size != 1 and (batch_size % 4 != 1):
            raise ValueError(
                f"Batch size {batch_size} is not of the form 4n+1. "
                "This node requires batch sizes of 1 or 4n+1 (1, 5, 9, 13, ...). "
                "Please adjust the batch size."
            )
        if not multi_job_id:
            # No distributed processing, run single GPU version
            return self.process_single_gpu(upscaled_image, model, positive, negative, vae,
                                          seed, steps, cfg, sampler_name, scheduler, denoise,
                                          tile_width, tile_height, padding, mask_blur, force_uniform_tiles, tiled_decode)
        
        if is_worker:
            # Worker mode: process tiles synchronously
            return self.process_worker(upscaled_image, model, positive, negative, vae,
                                      seed, steps, cfg, sampler_name, scheduler, denoise,
                                      tile_width, tile_height, padding, mask_blur,
                                      force_uniform_tiles, tiled_decode, multi_job_id, master_url,
                                      worker_id, enabled_worker_ids, dynamic_threshold)
        else:
            # Master mode: distribute and collect synchronously
            return self.process_master(upscaled_image, model, positive, negative, vae,
                                     seed, steps, cfg, sampler_name, scheduler, denoise,
                                     tile_width, tile_height, padding, mask_blur,
                                     force_uniform_tiles, tiled_decode, multi_job_id, enabled_worker_ids, 
                                     dynamic_threshold)

    def process_worker(self, upscaled_image, model, positive, negative, vae,
                      seed, steps, cfg, sampler_name, scheduler, denoise,
                      tile_width, tile_height, padding, mask_blur,
                      force_uniform_tiles, tiled_decode, multi_job_id, master_url,
                      worker_id, enabled_worker_ids, dynamic_threshold):
        """Unified worker processing - handles both static and dynamic modes."""
        # Get batch size to determine mode
        batch_size = upscaled_image.shape[0]
        
        # Ensure mode consistency across master/workers via shared threshold
        # Determine mode (must match master's logic)
        enabled_workers = json.loads(enabled_worker_ids)
        num_workers = len(enabled_workers)
        # Compute number of tiles for this image to decide if tile distribution makes sense
        _, height, width, _ = upscaled_image.shape
        all_tiles = self.calculate_tiles(width, height, self.round_to_multiple(tile_width), self.round_to_multiple(tile_height), force_uniform_tiles)
        num_tiles_per_image = len(all_tiles)

        mode = self._determine_processing_mode(batch_size, num_workers, dynamic_threshold)
        # For USDU-style processing, we want tile distribution whenever workers are available
        # and there is more than one tile to process, even if batch == 1.
        if num_workers > 0 and num_tiles_per_image > 1:
            mode = "static"
            
        debug_log(f"USDU Dist Worker - Batch size {batch_size}")
        
        if mode == "dynamic":
            return self.process_worker_dynamic(upscaled_image, model, positive, negative, vae,
                                             seed, steps, cfg, sampler_name, scheduler, denoise,
                                             tile_width, tile_height, padding, mask_blur,
                                             force_uniform_tiles, tiled_decode, multi_job_id, master_url,
                                             worker_id, enabled_worker_ids, dynamic_threshold)
        
        # Static mode - enhanced with health monitoring and retry logic
        return self._process_worker_static_sync(upscaled_image, model, positive, negative, vae,
                                               seed, steps, cfg, sampler_name, scheduler, denoise,
                                               tile_width, tile_height, padding, mask_blur,
                                               force_uniform_tiles, tiled_decode, multi_job_id, master_url,
                                               worker_id, enabled_workers)

    def process_master(self, upscaled_image, model, positive, negative, vae,
                      seed, steps, cfg, sampler_name, scheduler, denoise,
                      tile_width, tile_height, padding, mask_blur,
                      force_uniform_tiles, tiled_decode, multi_job_id, enabled_worker_ids, 
                      dynamic_threshold):
        """Unified master processing with enhanced monitoring and failure handling."""
        # Round tile dimensions
        tile_width = self.round_to_multiple(tile_width)
        tile_height = self.round_to_multiple(tile_height)
        
        # Get image dimensions and batch size
        batch_size, height, width, _ = upscaled_image.shape
        
        # Calculate all tiles and grid
        all_tiles = self.calculate_tiles(width, height, tile_width, tile_height, force_uniform_tiles)
        num_tiles_per_image = len(all_tiles)
        rows = math.ceil(height / tile_height)
        cols = math.ceil(width / tile_width)
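        # e.g. a 2048x1536 canvas with 512x512 tiles yields rows=3, cols=4 (a 3x4 grid)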
        log(
            f"USDU Dist: Canvas {width}x{height} | Tile {tile_width}x{tile_height} | Grid {rows}x{cols} ({num_tiles_per_image} tiles/image) | Batch {batch_size}"
        )
        
        # Parse enabled workers
        enabled_workers = json.loads(enabled_worker_ids)
        num_workers = len(enabled_workers)
        
        # Determine processing mode
        mode = self._determine_processing_mode(batch_size, num_workers, dynamic_threshold)
        # Prefer tile-based static distribution when workers are available and there are multiple tiles,
        # even for batch == 1, to spread tiles across GPUs like the legacy dynamic tile queue.
        if num_workers > 0 and num_tiles_per_image > 1:
            mode = "static"
        
        log(f"USDU Dist: Workers {num_workers} | Mode {mode} | Threshold {dynamic_threshold}")

        if mode == "single_gpu":
            # No workers, process all tiles locally
            return self.process_single_gpu(upscaled_image, model, positive, negative, vae,
                                         seed, steps, cfg, sampler_name, scheduler, denoise,
                                         tile_width, tile_height, padding, mask_blur, force_uniform_tiles, tiled_decode)
        
        elif mode == "dynamic":
            # Dynamic mode for large batches
            return self.process_master_dynamic(upscaled_image, model, positive, negative, vae,
                                             seed, steps, cfg, sampler_name, scheduler, denoise,
                                             tile_width, tile_height, padding, mask_blur,
                                             force_uniform_tiles, tiled_decode, multi_job_id, enabled_workers)
        
        # Static mode - enhanced with unified job management
        return self._process_master_static_sync(upscaled_image, model, positive, negative, vae,
                                               seed, steps, cfg, sampler_name, scheduler, denoise,
                                               tile_width, tile_height, padding, mask_blur,
                                               force_uniform_tiles, tiled_decode, multi_job_id, enabled_workers,
                                               dynamic_threshold)

================================================
SYMBOL INDEX (919 symbols across 94 files)
================================================

FILE: api/config_routes.py
  function save_config (line 15) | def save_config(_config):
  function config_transaction (line 19) | async def config_transaction():
  function _positive_int (line 29) | def _positive_int(value):
  function _apply_field_patch (line 77) | def _apply_field_patch(target: dict, data: dict, field_rules: list) -> N...
  function get_config_endpoint (line 90) | async def get_config_endpoint(request):
  function update_config_endpoint (line 96) | async def update_config_endpoint(request):
  function queue_status_endpoint (line 148) | async def queue_status_endpoint(request):
  function update_worker_endpoint (line 166) | async def update_worker_endpoint(request):
  function delete_worker_endpoint (line 210) | async def delete_worker_endpoint(request):
  function update_setting_endpoint (line 242) | async def update_setting_endpoint(request):
  function update_master_endpoint (line 265) | async def update_master_endpoint(request):

FILE: api/job_routes.py
  function ensure_distributed_state (line 23) | def ensure_distributed_state():
  function _decode_image_sync (line 33) | def _decode_image_sync(image_path):
  function _check_file_sync (line 79) | def _check_file_sync(filename, expected_hash):
  function _decode_canonical_png_tensor (line 104) | def _decode_canonical_png_tensor(image_payload):
  function _decode_audio_payload (line 135) | def _decode_audio_payload(audio_payload):
  function prepare_job_endpoint (line 143) | async def prepare_job_endpoint(request):
  function clear_memory_endpoint (line 161) | async def clear_memory_endpoint(request):
  function distributed_queue_endpoint (line 207) | async def distributed_queue_endpoint(request):
  function load_image_endpoint (line 239) | async def load_image_endpoint(request):
  function check_file_endpoint (line 256) | async def check_file_endpoint(request):
  function job_complete_endpoint (line 274) | async def job_complete_endpoint(request):

FILE: api/orchestration/dispatch.py
  function trace_debug (line 12) | def trace_debug(*_args, **_kwargs):
  function trace_info (line 15) | def trace_info(*_args, **_kwargs):
  function parse_positive_int (line 21) | def parse_positive_int(value, default):
  function worker_is_active (line 31) | async def worker_is_active(worker):
  function worker_ws_is_active (line 37) | async def worker_ws_is_active(worker):
  function _probe_worker_active (line 56) | async def _probe_worker_active(worker, use_websocket, semaphore):
  function _dispatch_via_websocket (line 62) | async def _dispatch_via_websocket(worker_url, payload, client_id, timeou...
  function dispatch_worker_prompt (line 98) | async def dispatch_worker_prompt(
  function select_active_workers (line 144) | async def select_active_workers(
  function _extract_queue_remaining (line 194) | def _extract_queue_remaining(payload):
  function _probe_worker_queue (line 204) | async def _probe_worker_queue(worker, semaphore, probe_timeout):
  function _select_idle_round_robin (line 216) | def _select_idle_round_robin(statuses):
  function select_least_busy_worker (line 225) | async def select_least_busy_worker(

FILE: api/orchestration/media_sync.py
  function _normalize_media_reference (line 26) | def _normalize_media_reference(value):
  function convert_paths_for_platform (line 36) | def convert_paths_for_platform(obj, target_separator):
  function _find_media_references (line 70) | def _find_media_references(prompt_obj):
  function _rewrite_prompt_media_inputs (line 84) | def _rewrite_prompt_media_inputs(prompt_obj, worker_media_paths):
  function _load_media_file_sync (line 105) | def _load_media_file_sync(filename):
  function fetch_worker_path_separator (line 127) | async def fetch_worker_path_separator(worker, trace_execution_id=None):
  function _upload_media_to_worker (line 146) | async def _upload_media_to_worker(worker, filename, file_bytes, file_has...
  function sync_worker_media (line 196) | async def sync_worker_media(worker, prompt_obj, trace_execution_id=None):

FILE: api/orchestration/prompt_transform.py
  class PromptIndex (line 7) | class PromptIndex:
    method __init__ (line 10) | def __init__(self, prompt_obj):
    method copy_prompt (line 24) | def copy_prompt(self):
    method nodes_for_class (line 27) | def nodes_for_class(self, class_name):
    method has_upstream (line 30) | def has_upstream(self, start_node_id, target_class):
  function _iter_prompt_nodes (line 56) | def _iter_prompt_nodes(prompt_obj):
  function find_nodes_by_class (line 62) | def find_nodes_by_class(prompt_obj, class_name):
  function _find_downstream_nodes (line 70) | def _find_downstream_nodes(prompt_obj, start_ids):
  function _create_numeric_id_generator (line 91) | def _create_numeric_id_generator(prompt_obj):
  function _find_upstream_nodes (line 111) | def _find_upstream_nodes(prompt_obj, start_ids):
  function prune_prompt_for_worker (line 128) | def prune_prompt_for_worker(prompt_obj):
  function prepare_delegate_master_prompt (line 165) | def prepare_delegate_master_prompt(prompt_obj, collector_ids):
  function generate_job_id_map (line 217) | def generate_job_id_map(prompt_index, prefix):
  function _override_seed_nodes (line 228) | def _override_seed_nodes(prompt_copy, prompt_index, is_master, participa...
  function _override_collector_nodes (line 242) | def _override_collector_nodes(
  function _override_upscale_nodes (line 276) | def _override_upscale_nodes(
  function _override_value_nodes (line 302) | def _override_value_nodes(prompt_copy, prompt_index, is_master, particip...
  function apply_participant_overrides (line 316) | def apply_participant_overrides(

FILE: api/queue_orchestration.py
  function _generate_execution_trace_id (line 38) | def _generate_execution_trace_id():
  function ensure_distributed_state (line 42) | def ensure_distributed_state(server_instance=None):
  function _ensure_distributed_queue (line 55) | async def _ensure_distributed_queue(job_id):
  function _resolve_enabled_workers (line 63) | def _resolve_enabled_workers(config, requested_ids=None):
  function _resolve_orchestration_limits (line 96) | def _resolve_orchestration_limits(config):
  function _is_load_balance_enabled (line 123) | def _is_load_balance_enabled(value):
  function _prompt_requests_load_balance (line 133) | def _prompt_requests_load_balance(prompt_index):
  function _prepare_worker_payload (line 141) | async def _prepare_worker_payload(
  function orchestrate_distributed_execution (line 200) | async def orchestrate_distributed_execution(

FILE: api/queue_request.py
  class QueueRequestPayload (line 6) | class QueueRequestPayload:
  function parse_queue_request_payload (line 16) | def parse_queue_request_payload(data: Any) -> QueueRequestPayload:

FILE: api/schemas.py
  function require_fields (line 1) | def require_fields(data: dict, *fields) -> list[str]:
  function validate_worker_id (line 21) | def validate_worker_id(worker_id: str, config: dict) -> bool:
  function validate_positive_int (line 28) | def validate_positive_int(value, field_name: str) -> str | None:
  function parse_positive_int (line 39) | def parse_positive_int(value, default: int) -> int:
  function parse_positive_float (line 48) | def parse_positive_float(value, default: float) -> float:

FILE: api/tunnel_routes.py
  function tunnel_status_endpoint (line 11) | async def tunnel_status_endpoint(request):
  function tunnel_start_endpoint (line 26) | async def tunnel_start_endpoint(request):
  function tunnel_stop_endpoint (line 40) | async def tunnel_stop_endpoint(request):

FILE: api/usdu_routes.py
  function heartbeat_endpoint (line 17) | async def heartbeat_endpoint(request):
  function submit_tiles_endpoint (line 41) | async def submit_tiles_endpoint(request):
  function submit_image_endpoint (line 109) | async def submit_image_endpoint(request):
  function request_image_endpoint (line 169) | async def request_image_endpoint(request):
  function job_status_endpoint (line 219) | async def job_status_endpoint(request):

FILE: api/worker_routes.py
  class PromptValidationError (line 36) | class PromptValidationError(RuntimeError):
    method __init__ (line 37) | def __init__(self, message, validation_error=None, node_errors=None):
  function worker_ws_endpoint (line 44) | async def worker_ws_endpoint(request):
  function clear_launching_state (line 116) | async def clear_launching_state(request):
  function get_network_ips (line 142) | def get_network_ips():
  function get_recommended_ip (line 202) | def get_recommended_ip(ips):
  function _get_cuda_info (line 237) | def _get_cuda_info():
  function _collect_network_info_sync (line 277) | def _collect_network_info_sync():
  function _read_worker_log_sync (line 292) | def _read_worker_log_sync(log_file, lines_to_read):
  function _parse_positive_int_query (line 328) | def _parse_positive_int_query(value, default, minimum=1, maximum=10000):
  function _find_worker_by_id (line 340) | def _find_worker_by_id(config, worker_id):
  function get_local_log_endpoint (line 349) | async def get_local_log_endpoint(request):
  function get_network_info_endpoint (line 394) | async def get_network_info_endpoint(request):
  function get_system_info_endpoint (line 409) | async def get_system_info_endpoint(request):
  function launch_worker_endpoint (line 433) | async def launch_worker_endpoint(request):
  function stop_worker_endpoint (line 494) | async def stop_worker_endpoint(request):
  function get_managed_workers_endpoint (line 524) | async def get_managed_workers_endpoint(request):
  function get_local_worker_status_endpoint (line 537) | async def get_local_worker_status_endpoint(request):
  function get_worker_log_endpoint (line 607) | async def get_worker_log_endpoint(request):
  function get_remote_worker_log_endpoint (line 650) | async def get_remote_worker_log_endpoint(request):

FILE: conftest.py
  function _patched_pkg_setup (line 16) | def _patched_pkg_setup(self) -> None:

FILE: distributed.py
  function _cleanup_session (line 36) | async def _cleanup_session():

FILE: nodes/collector.py
  class DistributedCollectorNode (line 24) | class DistributedCollectorNode:
    method INPUT_TYPES (line 28) | def INPUT_TYPES(s):
    method run (line 58) | def run(self, images, load_balance=False, audio=None, multi_job_id="",...
    method send_batch_to_master (line 84) | async def send_batch_to_master(self, image_batch, audio, multi_job_id,...
    method _combine_audio (line 121) | def _combine_audio(self, master_audio, worker_audio, empty_audio, work...
    method _store_worker_result (line 176) | def _store_worker_result(self, worker_images: dict, item: dict) -> int:
    method _reorder_and_combine_tensors (line 193) | def _reorder_and_combine_tensors(
    method execute (line 238) | async def execute(self, images, audio, load_balance=False, multi_job_i...

FILE: nodes/distributed_upscale.py
  function sync_wrapper (line 19) | def sync_wrapper(async_func):
  function _parse_enabled_worker_ids (line 30) | def _parse_enabled_worker_ids(enabled_worker_ids):
  class UltimateSDUpscaleDistributed (line 46) | class UltimateSDUpscaleDistributed(
    method __init__ (line 77) | def __init__(self):
    method INPUT_TYPES (line 84) | def INPUT_TYPES(s):
    method IS_CHANGED (line 121) | def IS_CHANGED(cls, **kwargs):
    method run (line 125) | def run(self, upscaled_image, model, positive, negative, vae, seed, st...
    method process_worker (line 164) | def process_worker(self, upscaled_image, model, positive, negative, vae,
    method process_master (line 204) | def process_master(self, upscaled_image, model, positive, negative, vae,
    method _determine_processing_mode (line 259) | def _determine_processing_mode(self, batch_size: int, num_workers: int...

FILE: nodes/utilities.py
  function _chunk_bounds (line 7) | def _chunk_bounds(total_items: int, n_splits: int) -> list[tuple[int, in...
  class DistributedSeed (line 23) | class DistributedSeed:
    method INPUT_TYPES (line 31) | def INPUT_TYPES(cls):
    method distribute (line 52) | def distribute(self, seed, is_worker=False, worker_id=""):
  class AnyType (line 79) | class AnyType(str):
    method __ne__ (line 80) | def __ne__(self, __value: object) -> bool:
  class DistributedValue (line 86) | class DistributedValue:
    method INPUT_TYPES (line 95) | def INPUT_TYPES(cls):
    method _coerce (line 113) | def _coerce(value, value_type):
    method _coerce_safe (line 122) | def _coerce_safe(value, value_type):
    method distribute (line 129) | def distribute(self, default_value, worker_values="{}", is_worker=Fals...
  class DistributedModelName (line 164) | class DistributedModelName:
    method INPUT_TYPES (line 166) | def INPUT_TYPES(cls):
    method _stringify (line 183) | def _stringify(self, value):
    method _update_workflow (line 193) | def _update_workflow(self, extra_pnginfo, unique_id, values):
    method log_input (line 211) | def log_input(self, text, unique_id=None, extra_pnginfo=None):
  class ByPassTypeTuple (line 226) | class ByPassTypeTuple(tuple):
    method __getitem__ (line 227) | def __getitem__(self, index):
  class ImageBatchDivider (line 235) | class ImageBatchDivider:
    method INPUT_TYPES (line 237) | def INPUT_TYPES(s):
    method divide_batch (line 258) | def divide_batch(self, images, divide_by):
  class AudioBatchDivider (line 271) | class AudioBatchDivider:
    method INPUT_TYPES (line 275) | def INPUT_TYPES(s):
    method divide_audio (line 296) | def divide_audio(self, audio, divide_by):
  class DistributedEmptyImage (line 332) | class DistributedEmptyImage:
    method INPUT_TYPES (line 336) | def INPUT_TYPES(cls):
    method create (line 349) | def create(self, height, width, channels):

FILE: tests/api/test_config_routes.py
  class _FakeResponse (line 10) | class _FakeResponse:
    method __init__ (line 11) | def __init__(self, payload, status=200):
  class _FakeRequest (line 16) | class _FakeRequest:
    method __init__ (line 17) | def __init__(self, payload=None):
    method json (line 20) | async def json(self):
  function _load_config_routes_module (line 24) | def _load_config_routes_module():
  class ConfigRoutesTests (line 108) | class ConfigRoutesTests(unittest.IsolatedAsyncioTestCase):
    method test_get_config_returns_core_sections (line 109) | async def test_get_config_returns_core_sections(self):
    method test_update_config_valid_field_persists (line 119) | async def test_update_config_valid_field_persists(self):
    method test_update_config_unknown_field_returns_400 (line 130) | async def test_update_config_unknown_field_returns_400(self):
    method test_update_config_wrong_type_returns_400 (line 138) | async def test_update_config_wrong_type_returns_400(self):

FILE: tests/api/test_distributed_queue.py
  class _FakeResponse (line 15) | class _FakeResponse:
    method __init__ (line 16) | def __init__(self, payload, status=200):
  class _FakeRequest (line 21) | class _FakeRequest:
    method __init__ (line 22) | def __init__(self, payload):
    method json (line 25) | async def json(self):
  function _load_job_routes_module (line 29) | def _load_job_routes_module():
  class DistributedQueueEndpointTests (line 223) | class DistributedQueueEndpointTests(unittest.IsolatedAsyncioTestCase):
    method test_distributed_queue_happy_path_returns_prompt_metadata (line 224) | async def test_distributed_queue_happy_path_returns_prompt_metadata(se...
    method test_distributed_queue_missing_prompt_returns_400 (line 246) | async def test_distributed_queue_missing_prompt_returns_400(self):
    method test_distributed_queue_missing_enabled_worker_ids_returns_400 (line 257) | async def test_distributed_queue_missing_enabled_worker_ids_returns_40...
  class JobCompleteAudioPayloadTests (line 269) | class JobCompleteAudioPayloadTests(unittest.IsolatedAsyncioTestCase):
    method _encoded_audio_payload (line 270) | def _encoded_audio_payload(self):
    method test_job_complete_accepts_audio_payload (line 279) | async def test_job_complete_accepts_audio_payload(self):
    method test_decode_audio_payload_rejects_bad_shape (line 305) | def test_decode_audio_payload_rejects_bad_shape(self):
    method test_decode_audio_payload_rejects_bad_dtype (line 315) | def test_decode_audio_payload_rejects_bad_dtype(self):

FILE: tests/api/test_media_sync.py
  function _load_media_sync_module (line 8) | def _load_media_sync_module():
  class ConvertPathsForPlatformTests (line 89) | class ConvertPathsForPlatformTests(unittest.TestCase):
    method test_forward_slash_target_normalises_backslashes (line 90) | def test_forward_slash_target_normalises_backslashes(self):
    method test_backslash_target_normalises_forward_slashes (line 95) | def test_backslash_target_normalises_forward_slashes(self):
    method test_relative_media_paths_always_stay_forward_slash (line 101) | def test_relative_media_paths_always_stay_forward_slash(self):
    method test_relative_audio_paths_stay_forward_slash (line 107) | def test_relative_audio_paths_stay_forward_slash(self):
    method test_annotated_relative_media_path_stays_forward_slash (line 112) | def test_annotated_relative_media_path_stays_forward_slash(self):
    method test_non_filename_strings_are_untouched (line 118) | def test_non_filename_strings_are_untouched(self):
    method test_url_strings_are_untouched (line 124) | def test_url_strings_are_untouched(self):
    method test_invalid_separator_returns_obj_unchanged (line 129) | def test_invalid_separator_returns_obj_unchanged(self):
    method test_nested_dict_is_processed_recursively (line 134) | def test_nested_dict_is_processed_recursively(self):
    method test_list_items_are_processed_recursively (line 139) | def test_list_items_are_processed_recursively(self):
    method test_non_string_scalar_values_are_untouched (line 145) | def test_non_string_scalar_values_are_untouched(self):
    method test_absolute_unix_path_to_windows (line 151) | def test_absolute_unix_path_to_windows(self):
    method test_already_normalised_path_is_idempotent (line 156) | def test_already_normalised_path_is_idempotent(self):
  class FindMediaReferencesTests (line 166) | class FindMediaReferencesTests(unittest.TestCase):
    method test_finds_image_input (line 167) | def test_finds_image_input(self):
    method test_finds_video_input (line 172) | def test_finds_video_input(self):
    method test_finds_file_input_for_load_video (line 177) | def test_finds_file_input_for_load_video(self):
    method test_finds_audio_input (line 182) | def test_finds_audio_input(self):
    method test_strips_annotation_suffix (line 187) | def test_strips_annotation_suffix(self):
    method test_normalises_backslashes_in_path (line 193) | def test_normalises_backslashes_in_path(self):
    method test_ignores_non_media_text_inputs (line 198) | def test_ignores_non_media_text_inputs(self):
    method test_ignores_node_link_values (line 203) | def test_ignores_node_link_values(self):
    method test_deduplicates_same_file_across_nodes (line 209) | def test_deduplicates_same_file_across_nodes(self):
    method test_returns_sorted_list (line 217) | def test_returns_sorted_list(self):
    method test_ignores_non_dict_nodes (line 225) | def test_ignores_non_dict_nodes(self):
    method test_empty_prompt_returns_empty_list (line 230) | def test_empty_prompt_returns_empty_list(self):
    method test_multiple_media_types_all_found (line 233) | def test_multiple_media_types_all_found(self):
  class RewritePromptMediaInputsTests (line 245) | class RewritePromptMediaInputsTests(unittest.TestCase):
    method test_rewrites_video_file_input_to_worker_path (line 246) | def test_rewrites_video_file_input_to_worker_path(self):
    method test_rewrites_audio_input_and_strips_annotation_when_matching (line 253) | def test_rewrites_audio_input_and_strips_annotation_when_matching(self):

FILE: tests/api/test_usdu_routes.py
  class _FakeResponse (line 13) | class _FakeResponse:
    method __init__ (line 14) | def __init__(self, payload, status=200):
  class _FakeRequest (line 19) | class _FakeRequest:
    method __init__ (line 20) | def __init__(self, json_payload=None, post_payload=None, headers=None,...
    method json (line 26) | async def json(self):
    method post (line 29) | async def post(self):
  class _Routes (line 33) | class _Routes:
    method post (line 34) | def post(self, _path):
    method get (line 40) | def get(self, _path):
  function _load_usdu_routes_module (line 47) | def _load_usdu_routes_module():
  class _UploadField (line 177) | class _UploadField:
    method __init__ (line 178) | def __init__(self, data):
  function _tiny_png_bytes (line 182) | def _tiny_png_bytes():
  class USDURoutesTests (line 189) | class USDURoutesTests(unittest.IsolatedAsyncioTestCase):
    method asyncSetUp (line 190) | async def asyncSetUp(self):
    method test_heartbeat_updates_worker_status (line 196) | async def test_heartbeat_updates_worker_status(self):
    method test_heartbeat_missing_fields_returns_400 (line 208) | async def test_heartbeat_missing_fields_returns_400(self):
    method test_request_image_dynamic_assigns_next_image (line 215) | async def test_request_image_dynamic_assigns_next_image(self):
    method test_request_image_static_assigns_tile_and_batched_flag (line 230) | async def test_request_image_static_assigns_tile_and_batched_flag(self):
    method test_submit_tiles_completion_signal_enqueues_last_marker (line 245) | async def test_submit_tiles_completion_signal_enqueues_last_marker(self):
    method test_submit_image_enqueues_processed_image_payload (line 268) | async def test_submit_image_enqueues_processed_image_payload(self):
    method test_job_status_endpoint_reports_ready (line 292) | async def test_job_status_endpoint_reports_ready(self):

FILE: tests/api/test_worker_routes.py
  class _FakeResponse (line 12) | class _FakeResponse:
    method __init__ (line 13) | def __init__(self, payload, status=200):
  class _FakeRequest (line 18) | class _FakeRequest:
    method __init__ (line 19) | def __init__(self, payload=None, match_info=None, query=None):
    method json (line 24) | async def json(self):
  class _FakeHTTPClientResponse (line 28) | class _FakeHTTPClientResponse:
    method __init__ (line 29) | def __init__(self, payload, status=200):
    method __aenter__ (line 33) | async def __aenter__(self):
    method __aexit__ (line 36) | async def __aexit__(self, _exc_type, _exc, _tb):
    method json (line 39) | async def json(self):
    method text (line 42) | async def text(self):
  class _FakeHTTPClientSession (line 46) | class _FakeHTTPClientSession:
    method __init__ (line 47) | def __init__(self, payload, status=200):
    method get (line 52) | def get(self, url, params=None, timeout=None):
  class _DummyWorkerManager (line 57) | class _DummyWorkerManager:
    method __init__ (line 58) | def __init__(self):
    method launch_worker (line 61) | def launch_worker(self, worker):
    method _is_process_running (line 70) | def _is_process_running(self, _pid):
    method save_processes (line 73) | def save_processes(self):
    method stop_worker (line 76) | def stop_worker(self, _worker_id):
    method get_managed_workers (line 79) | def get_managed_workers(self):
  class _ImmediateLoop (line 83) | class _ImmediateLoop:
    method run_in_executor (line 84) | async def run_in_executor(self, _executor, func, *args):
  function _load_worker_routes_module (line 88) | def _load_worker_routes_module():
  class WorkerRoutesTests (line 271) | class WorkerRoutesTests(unittest.IsolatedAsyncioTestCase):
    method test_launch_worker_valid_id_returns_200 (line 272) | async def test_launch_worker_valid_id_returns_200(self):
    method test_launch_worker_unknown_id_returns_404 (line 288) | async def test_launch_worker_unknown_id_returns_404(self):
    method test_worker_log_returns_content_json (line 301) | async def test_worker_log_returns_content_json(self):
    method test_local_log_reads_memory_buffer (line 328) | async def test_local_log_reads_memory_buffer(self):
    method test_remote_worker_log_proxies_to_worker_local_log_endpoint (line 353) | async def test_remote_worker_log_proxies_to_worker_local_log_endpoint(...
    method test_remote_worker_log_rejects_local_workers (line 390) | async def test_remote_worker_log_rejects_local_workers(self):

FILE: tests/test_async_helpers.py
  class _PromptQueue (line 8) | class _PromptQueue:
    method __init__ (line 9) | def __init__(self):
    method put (line 12) | def put(self, item):
  function _load_async_helpers_module (line 16) | def _load_async_helpers_module():
  class QueuePromptPayloadTests (line 64) | class QueuePromptPayloadTests(unittest.IsolatedAsyncioTestCase):
    method test_queue_prompt_payload_includes_create_time_and_client_metadata (line 65) | async def test_queue_prompt_payload_includes_create_time_and_client_me...

FILE: tests/test_batch_dividers.py
  function _load_utilities_module (line 10) | def _load_utilities_module():
  class ImageBatchDividerTests (line 48) | class ImageBatchDividerTests(unittest.TestCase):
    method test_divides_images_into_contiguous_chunks (line 49) | def test_divides_images_into_contiguous_chunks(self):
    method test_unused_image_outputs_are_empty (line 62) | def test_unused_image_outputs_are_empty(self):
  class AudioBatchDividerTests (line 73) | class AudioBatchDividerTests(unittest.TestCase):
    method test_divides_audio_samples_into_contiguous_chunks (line 74) | def test_divides_audio_samples_into_contiguous_chunks(self):
    method test_unused_audio_outputs_are_empty (line 87) | def test_unused_audio_outputs_are_empty(self):

FILE: tests/test_config.py
  function _load_config_module (line 12) | def _load_config_module():
  class MergeWithDefaultsTests (line 47) | class MergeWithDefaultsTests(unittest.TestCase):
    method test_non_dict_input_returns_defaults (line 48) | def test_non_dict_input_returns_defaults(self):
    method test_fills_missing_keys_with_defaults (line 52) | def test_fills_missing_keys_with_defaults(self):
    method test_loaded_value_overrides_default (line 56) | def test_loaded_value_overrides_default(self):
    method test_nested_dict_merges_recursively (line 61) | def test_nested_dict_merges_recursively(self):
    method test_preserves_unknown_keys_for_forward_compatibility (line 68) | def test_preserves_unknown_keys_for_forward_compatibility(self):
    method test_none_loaded_value_overrides_default (line 72) | def test_none_loaded_value_overrides_default(self):
    method test_non_dict_nested_loaded_value_replaces_dict_default (line 77) | def test_non_dict_nested_loaded_value_replaces_dict_default(self):
  class LoadConfigTests (line 89) | class LoadConfigTests(unittest.TestCase):
    method setUp (line 90) | def setUp(self):
    method tearDown (line 93) | def tearDown(self):
    method test_returns_defaults_when_file_missing (line 96) | def test_returns_defaults_when_file_missing(self):
    method test_loads_valid_json_file (line 103) | def test_loads_valid_json_file(self):
    method test_merges_loaded_file_with_defaults (line 121) | def test_merges_loaded_file_with_defaults(self):
    method test_falls_back_to_defaults_on_invalid_json (line 137) | def test_falls_back_to_defaults_on_invalid_json(self):
    method test_second_call_returns_cached_object (line 149) | def test_second_call_returns_cached_object(self):
    method test_invalidate_cache_forces_reload (line 161) | def test_invalidate_cache_forces_reload(self):
  class SaveConfigTests (line 185) | class SaveConfigTests(unittest.TestCase):
    method setUp (line 186) | def setUp(self):
    method tearDown (line 189) | def tearDown(self):
    method test_saves_and_reloads_correctly (line 192) | def test_saves_and_reloads_correctly(self):
    method test_returns_false_when_path_unwritable (line 203) | def test_returns_false_when_path_unwritable(self):
    method test_save_invalidates_cache (line 208) | def test_save_invalidates_cache(self):
    method test_written_file_is_valid_json (line 218) | def test_written_file_is_valid_json(self):
  class GetWorkerTimeoutSecondsTests (line 233) | class GetWorkerTimeoutSecondsTests(unittest.TestCase):
    method test_returns_configured_value (line 234) | def test_returns_configured_value(self):
    method test_clamps_zero_to_one (line 240) | def test_clamps_zero_to_one(self):
    method test_clamps_negative_to_one (line 246) | def test_clamps_negative_to_one(self):
    method test_falls_back_to_provided_default_when_key_missing (line 252) | def test_falls_back_to_provided_default_when_key_missing(self):
    method test_fallback_also_clamped_to_one (line 260) | def test_fallback_also_clamped_to_one(self):
  class IsMasterDelegateOnlyTests (line 272) | class IsMasterDelegateOnlyTests(unittest.TestCase):
    method test_returns_false_by_default (line 273) | def test_returns_false_by_default(self):
    method test_returns_true_when_enabled (line 278) | def test_returns_true_when_enabled(self):
    method test_returns_false_on_exception (line 284) | def test_returns_false_on_exception(self):

FILE: tests/test_detection.py
  function _load_detection_module (line 9) | def _load_detection_module():
  class IsDockerEnvironmentTests (line 75) | class IsDockerEnvironmentTests(unittest.TestCase):
    method test_true_when_dockerenv_file_exists (line 76) | def test_true_when_dockerenv_file_exists(self):
    method test_true_when_docker_container_env_var_is_set (line 82) | def test_true_when_docker_container_env_var_is_set(self):
    method test_true_when_platform_node_contains_docker (line 88) | def test_true_when_platform_node_contains_docker(self):
    method test_false_when_none_of_the_signals_are_present (line 94) | def test_false_when_none_of_the_signals_are_present(self):
    method test_docker_node_name_is_case_insensitive (line 100) | def test_docker_node_name_is_case_insensitive(self):
    method test_docker_env_var_empty_string_is_falsy (line 106) | def test_docker_env_var_empty_string_is_falsy(self):
  class IsRunpodEnvironmentTests (line 118) | class IsRunpodEnvironmentTests(unittest.TestCase):
    method test_true_when_runpod_pod_id_is_set (line 119) | def test_true_when_runpod_pod_id_is_set(self):
    method test_true_when_runpod_api_key_is_set (line 123) | def test_true_when_runpod_api_key_is_set(self):
    method test_true_when_both_vars_are_set (line 127) | def test_true_when_both_vars_are_set(self):
    method test_false_when_neither_var_is_set (line 135) | def test_false_when_neither_var_is_set(self):
    method test_true_when_pod_id_is_empty_string (line 139) | def test_true_when_pod_id_is_empty_string(self):
  class IsLocalWorkerTests (line 149) | class IsLocalWorkerTests(unittest.IsolatedAsyncioTestCase):
    method test_true_for_localhost_host (line 150) | async def test_true_for_localhost_host(self):
    method test_true_for_127_0_0_1 (line 154) | async def test_true_for_127_0_0_1(self):
    method test_true_for_0_0_0_0 (line 158) | async def test_true_for_0_0_0_0(self):
    method test_true_when_type_is_local (line 162) | async def test_true_when_type_is_local(self):
    method test_false_for_remote_host (line 166) | async def test_false_for_remote_host(self):
    method test_true_when_no_host_key (line 170) | async def test_true_when_no_host_key(self):
  class GetMachineIdTests (line 180) | class GetMachineIdTests(unittest.TestCase):
    method test_returns_a_string (line 181) | def test_returns_a_string(self):
    method test_returns_non_empty_string (line 185) | def test_returns_non_empty_string(self):
    method test_stable_across_calls (line 189) | def test_stable_across_calls(self):
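
  Read together, these test names pin the detection logic down tightly. A minimal
  sketch of what workers/detection.py plausibly checks (signal order and exact
  environment-variable handling are assumptions):

    import os
    import platform

    def is_docker_environment() -> bool:
        if os.path.exists("/.dockerenv"):                # marker file
            return True
        if os.environ.get("DOCKER_CONTAINER"):           # empty string is falsy
            return True
        return "docker" in platform.node().lower()       # case-insensitive hostname

    def is_runpod_environment() -> bool:
        # Presence of either variable is enough; an empty RUNPOD_POD_ID still counts.
        return "RUNPOD_POD_ID" in os.environ or "RUNPOD_API_KEY" in os.environ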

FILE: tests/test_dispatch_selection.py
  function _load_dispatch_module (line 10) | def _load_dispatch_module():
  class DispatchSelectionTests (line 103) | class DispatchSelectionTests(unittest.IsolatedAsyncioTestCase):
    method test_select_active_workers_filters_offline (line 104) | async def test_select_active_workers_filters_offline(self):
    method test_select_active_workers_disables_delegate_when_all_offline (line 125) | async def test_select_active_workers_disables_delegate_when_all_offlin...
    method test_select_active_workers_uses_websocket_probe_when_enabled (line 142) | async def test_select_active_workers_uses_websocket_probe_when_enabled...
    method test_probe_concurrency_is_bounded (line 167) | async def test_probe_concurrency_is_bounded(self):
    method test_select_least_busy_worker_round_robins_idle_workers (line 190) | async def test_select_least_busy_worker_round_robins_idle_workers(self):
    method test_select_least_busy_worker_chooses_smallest_queue_when_all_busy (line 216) | async def test_select_least_busy_worker_chooses_smallest_queue_when_al...
    method test_select_least_busy_worker_returns_none_when_all_probes_fail (line 237) | async def test_select_least_busy_worker_returns_none_when_all_probes_f...

FILE: tests/test_distributed_value.py
  class DistributedValueTests (line 5) | class DistributedValueTests(unittest.TestCase):
    method _make_node (line 8) | def _make_node(self):
    method setUp (line 47) | def setUp(self):
    method test_master_returns_default (line 50) | def test_master_returns_default(self):
    method test_master_coerces_default_int (line 59) | def test_master_coerces_default_int(self):
    method test_master_coerces_default_float (line 70) | def test_master_coerces_default_float(self):
    method test_worker_returns_specific_value (line 81) | def test_worker_returns_specific_value(self):
    method test_worker_second_index (line 91) | def test_worker_second_index(self):
    method test_worker_falls_back_to_default_when_key_missing (line 101) | def test_worker_falls_back_to_default_when_key_missing(self):
    method test_worker_falls_back_to_default_on_empty_value (line 112) | def test_worker_falls_back_to_default_on_empty_value(self):
    method test_worker_falls_back_on_invalid_json (line 122) | def test_worker_falls_back_on_invalid_json(self):
    method test_worker_falls_back_on_invalid_worker_id (line 131) | def test_worker_falls_back_on_invalid_worker_id(self):
    method test_worker_id_as_direct_integer (line 141) | def test_worker_id_as_direct_integer(self):
    method test_type_int_coerces_value (line 151) | def test_type_int_coerces_value(self):
    method test_type_float_coerces_value (line 162) | def test_type_float_coerces_value(self):
    method test_type_combo_stays_string (line 173) | def test_type_combo_stays_string(self):
    method test_type_string_default_stays_string (line 184) | def test_type_string_default_stays_string(self):
    method test_int_coerce_handles_float_string (line 195) | def test_int_coerce_handles_float_string(self):
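
  These tests describe a node that resolves to a per-worker override or falls back
  to its default, coercing to the declared type. A hedged sketch of that
  resolution logic; the function and variable names are invented, not the real
  implementation:

    import json

    def _coerce(value, value_type):
        if value_type == "INT":
            return int(float(value))       # tolerates "3.5"-style strings
        if value_type == "FLOAT":
            return float(value)
        return str(value)                  # combo and string values stay strings

    def resolve_value(default, worker_values_json, worker_id, value_type):
        try:
            store = json.loads(worker_values_json) if worker_values_json else {}
            value = store.get(str(int(worker_id)), "")   # int or numeric-string ids
        except (TypeError, ValueError):
            value = ""                     # invalid JSON / worker id -> fall back
        if value in ("", None):
            value = default                # master and missing keys use the default
        return _coerce(value, value_type)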

FILE: tests/test_job_timeout.py
  function _load_job_timeout_module (line 11) | def _load_job_timeout_module():
  class JobTimeoutRequeueTests (line 129) | class JobTimeoutRequeueTests(unittest.IsolatedAsyncioTestCase):
    method asyncSetUp (line 130) | async def asyncSetUp(self):
    method test_requeues_only_incomplete_dynamic_tasks_for_timed_out_worker (line 140) | async def test_requeues_only_incomplete_dynamic_tasks_for_timed_out_wo...
    method test_busy_probe_graces_worker_and_skips_requeue (line 159) | async def test_busy_probe_graces_worker_and_skips_requeue(self):
    method test_completed_dynamic_task_is_not_requeued (line 178) | async def test_completed_dynamic_task_is_not_requeued(self):

FILE: tests/test_network_helpers.py
  function _load_network_module (line 8) | def _load_network_module():
  class NetworkHelpersTests (line 58) | class NetworkHelpersTests(unittest.TestCase):
    method test_normalize_host_strips_protocol_and_path (line 59) | def test_normalize_host_strips_protocol_and_path(self):
    method test_normalize_host_keeps_none (line 62) | def test_normalize_host_keeps_none(self):
    method test_build_worker_url_defaults_to_server_address (line 65) | def test_build_worker_url_defaults_to_server_address(self):
    method test_build_worker_url_cloud_defaults_to_https (line 69) | def test_build_worker_url_cloud_defaults_to_https(self):
    method test_build_worker_url_keeps_explicit_scheme (line 73) | def test_build_worker_url_keeps_explicit_scheme(self):
    method test_build_master_url_uses_https_for_cloud_host (line 77) | def test_build_master_url_uses_https_for_cloud_host(self):
    method test_build_master_url_keeps_explicit_scheme (line 85) | def test_build_master_url_keeps_explicit_scheme(self):
    method test_build_master_url_ignores_stale_saved_port_and_uses_runtime_port (line 93) | def test_build_master_url_ignores_stale_saved_port_and_uses_runtime_po...
    method test_build_master_url_keeps_explicit_port_in_host (line 101) | def test_build_master_url_keeps_explicit_port_in_host(self):
    method test_build_master_url_falls_back_to_server_address (line 109) | def test_build_master_url_falls_back_to_server_address(self):
    method test_build_master_callback_url_uses_loopback_for_local_worker (line 117) | def test_build_master_callback_url_uses_loopback_for_local_worker(self):
    method test_build_master_callback_url_keeps_public_master_url_for_remote_worker (line 126) | def test_build_master_callback_url_keeps_public_master_url_for_remote_...

FILE: tests/test_payload_parsers.py
  function _load_payload_parsers_module (line 16) | def _load_payload_parsers_module():
  function _make_png_bytes (line 34) | def _make_png_bytes(width=64, height=64, color=(128, 64, 32)):
  class _MockFileField (line 42) | class _MockFileField:
    class _MockFile (line 45) | class _MockFile:
      method __init__ (line 46) | def __init__(self, data: bytes):
      method read (line 49) | def read(self) -> bytes:
    method __init__ (line 52) | def __init__(self, data: bytes):
  function _make_form (line 56) | def _make_form(n_tiles, *, padding=None, extra_meta=None, image_color=(1...
  class ParseTilesFromFormTests (line 85) | class ParseTilesFromFormTests(unittest.TestCase):
    method test_single_tile_returns_one_entry (line 89) | def test_single_tile_returns_one_entry(self):
    method test_multiple_tiles_all_returned (line 93) | def test_multiple_tiles_all_returned(self):
    method test_tile_image_is_pil_image (line 97) | def test_tile_image_is_pil_image(self):
    method test_tile_metadata_fields_are_parsed (line 101) | def test_tile_metadata_fields_are_parsed(self):
    method test_padding_is_parsed_from_form (line 110) | def test_padding_is_parsed_from_form(self):
    method test_default_padding_is_zero (line 114) | def test_default_padding_is_zero(self):
    method test_invalid_padding_string_falls_back_to_zero (line 120) | def test_invalid_padding_string_falls_back_to_zero(self):
    method test_optional_batch_idx_included_when_present (line 126) | def test_optional_batch_idx_included_when_present(self):
    method test_optional_global_idx_included_when_present (line 131) | def test_optional_global_idx_included_when_present(self):
    method test_batch_idx_and_global_idx_absent_when_not_in_metadata (line 136) | def test_batch_idx_and_global_idx_absent_when_not_in_metadata(self):
    method test_tile_indices_match_metadata_order (line 141) | def test_tile_indices_match_metadata_order(self):
    method test_x_coordinates_reflect_metadata (line 146) | def test_x_coordinates_reflect_metadata(self):
    method test_missing_tiles_metadata_raises_value_error (line 153) | def test_missing_tiles_metadata_raises_value_error(self):
    method test_invalid_json_metadata_raises_value_error (line 157) | def test_invalid_json_metadata_raises_value_error(self):
    method test_non_list_metadata_raises_value_error (line 162) | def test_non_list_metadata_raises_value_error(self):
    method test_missing_tile_file_field_raises_value_error (line 167) | def test_missing_tile_file_field_raises_value_error(self):
    method test_tile_field_without_file_attr_raises_value_error (line 175) | def test_tile_field_without_file_attr_raises_value_error(self):
    method test_non_image_bytes_raises_value_error (line 183) | def test_non_image_bytes_raises_value_error(self):
    method test_invalid_metadata_value_type_raises_value_error (line 197) | def test_invalid_metadata_value_type_raises_value_error(self):

FILE: tests/test_prompt_transform.py
  function _load_prompt_transform_module (line 9) | def _load_prompt_transform_module():
  function _linear_prompt (line 55) | def _linear_prompt():
  function _collector_only_prompt (line 66) | def _collector_only_prompt():
  function _delegate_prompt (line 74) | def _delegate_prompt():
  function _apply (line 84) | def _apply(prompt, participant_id, enabled_worker_ids=None, delegate_mas...
  class PromptIndexTests (line 104) | class PromptIndexTests(unittest.TestCase):
    method test_nodes_by_class_groups_correctly (line 105) | def test_nodes_by_class_groups_correctly(self):
    method test_nodes_for_class_unknown_returns_empty (line 115) | def test_nodes_for_class_unknown_returns_empty(self):
    method test_nodes_without_class_type_are_indexed_under_none (line 119) | def test_nodes_without_class_type_are_indexed_under_none(self):
    method test_copy_prompt_is_a_deep_copy (line 125) | def test_copy_prompt_is_a_deep_copy(self):
    method test_has_upstream_direct_connection (line 132) | def test_has_upstream_direct_connection(self):
    method test_has_upstream_transitive_connection (line 137) | def test_has_upstream_transitive_connection(self):
    method test_has_upstream_returns_false_when_no_path (line 142) | def test_has_upstream_returns_false_when_no_path(self):
    method test_has_upstream_result_is_cached (line 147) | def test_has_upstream_result_is_cached(self):
    method test_has_upstream_does_not_infinite_loop_on_cycle (line 154) | def test_has_upstream_does_not_infinite_loop_on_cycle(self):
  class FindNodesByClassTests (line 170) | class FindNodesByClassTests(unittest.TestCase):
    method test_finds_matching_nodes (line 171) | def test_finds_matching_nodes(self):
    method test_returns_empty_when_no_match (line 179) | def test_returns_empty_when_no_match(self):
    method test_skips_non_dict_nodes (line 183) | def test_skips_non_dict_nodes(self):
  class PrunePromptForWorkerTests (line 193) | class PrunePromptForWorkerTests(unittest.TestCase):
    method test_no_distributed_nodes_returns_prompt_unchanged (line 194) | def test_no_distributed_nodes_returns_prompt_unchanged(self):
    method test_keeps_collector_and_upstream (line 202) | def test_keeps_collector_and_upstream(self):
    method test_removes_downstream_of_collector (line 208) | def test_removes_downstream_of_collector(self):
    method test_injects_preview_image_when_downstream_exists (line 213) | def test_injects_preview_image_when_downstream_exists(self):
    method test_no_preview_image_when_no_downstream (line 220) | def test_no_preview_image_when_no_downstream(self):
    method test_unrelated_nodes_are_pruned (line 225) | def test_unrelated_nodes_are_pruned(self):
    method test_result_is_a_copy_not_same_object (line 234) | def test_result_is_a_copy_not_same_object(self):
    method test_upscale_node_is_treated_as_distributed (line 242) | def test_upscale_node_is_treated_as_distributed(self):
  class PrepareDelegateMasterPromptTests (line 258) | class PrepareDelegateMasterPromptTests(unittest.TestCase):
    method test_keeps_collector_and_downstream (line 259) | def test_keeps_collector_and_downstream(self):
    method test_removes_dangling_upstream_refs (line 267) | def test_removes_dangling_upstream_refs(self):
    method test_injects_empty_image_placeholder (line 280) | def test_injects_empty_image_placeholder(self):
    method test_one_placeholder_per_collector (line 288) | def test_one_placeholder_per_collector(self):
    method test_result_is_independent_copy (line 299) | def test_result_is_independent_copy(self):
  class GenerateJobIdMapTests (line 311) | class GenerateJobIdMapTests(unittest.TestCase):
    method test_maps_collector_nodes (line 312) | def test_maps_collector_nodes(self):
    method test_maps_upscale_nodes (line 322) | def test_maps_upscale_nodes(self):
    method test_empty_prompt_returns_empty_map (line 330) | def test_empty_prompt_returns_empty_map(self):
    method test_stable_ids_across_calls (line 334) | def test_stable_ids_across_calls(self):
  class ApplyOverridesCollectorTests (line 346) | class ApplyOverridesCollectorTests(unittest.TestCase):
    method _collector_prompt (line 347) | def _collector_prompt(self):
    method test_worker_sets_is_worker_true (line 350) | def test_worker_sets_is_worker_true(self):
    method test_worker_sets_master_url (line 354) | def test_worker_sets_master_url(self):
    method test_worker_sets_worker_id (line 358) | def test_worker_sets_worker_id(self):
    method test_worker_sets_delegate_only_false (line 362) | def test_worker_sets_delegate_only_false(self):
    method test_master_sets_is_worker_false (line 366) | def test_master_sets_is_worker_false(self):
    method test_master_clears_stale_master_url (line 370) | def test_master_clears_stale_master_url(self):
    method test_master_clears_stale_worker_id (line 375) | def test_master_clears_stale_worker_id(self):
    method test_master_with_delegate_master_sets_delegate_only_true (line 380) | def test_master_with_delegate_master_sets_delegate_only_true(self):
    method test_master_without_delegate_master_sets_delegate_only_false (line 384) | def test_master_without_delegate_master_sets_delegate_only_false(self):
    method test_enabled_worker_ids_serialized_as_json (line 388) | def test_enabled_worker_ids_serialized_as_json(self):
    method test_multi_job_id_is_set_from_job_map (line 393) | def test_multi_job_id_is_set_from_job_map(self):
  class ApplyOverridesSeedTests (line 413) | class ApplyOverridesSeedTests(unittest.TestCase):
    method _seed_prompt (line 414) | def _seed_prompt(self):
    method test_worker_sets_is_worker_true (line 417) | def test_worker_sets_is_worker_true(self):
    method test_worker_id_reflects_index_in_enabled_list (line 421) | def test_worker_id_reflects_index_in_enabled_list(self):
    method test_master_sets_is_worker_false (line 425) | def test_master_sets_is_worker_false(self):
    method test_master_sets_empty_worker_id (line 429) | def test_master_sets_empty_worker_id(self):
  class ApplyOverridesUpscaleTests (line 438) | class ApplyOverridesUpscaleTests(unittest.TestCase):
    method _upscale_prompt (line 439) | def _upscale_prompt(self):
    method test_worker_sets_is_worker_true (line 442) | def test_worker_sets_is_worker_true(self):
    method test_worker_sets_master_url_and_worker_id (line 446) | def test_worker_sets_master_url_and_worker_id(self):
    method test_master_clears_master_url_and_worker_id (line 451) | def test_master_clears_master_url_and_worker_id(self):
    method test_collector_downstream_of_upscale_gets_pass_through (line 457) | def test_collector_downstream_of_upscale_gets_pass_through(self):
  class ApplyOverridesValueTests (line 471) | class ApplyOverridesValueTests(unittest.TestCase):
    method _value_prompt (line 472) | def _value_prompt(self):
    method test_worker_sets_is_worker_true (line 475) | def test_worker_sets_is_worker_true(self):
    method test_worker_id_reflects_index_in_enabled_list (line 479) | def test_worker_id_reflects_index_in_enabled_list(self):
    method test_master_sets_is_worker_false (line 483) | def test_master_sets_is_worker_false(self):
    method test_master_sets_empty_worker_id (line 487) | def test_master_sets_empty_worker_id(self):
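
  The PromptIndexTests above describe a memoised upstream-reachability check that
  survives cycles. A sketch of that pattern over the ComfyUI prompt format, where
  node inputs reference upstream nodes as [node_id, output_index] pairs (function
  and cache names are assumptions):

    def make_has_upstream(prompt):
        cache = {}

        def has_upstream(node_id, target_id):
            key = (node_id, target_id)
            if key in cache:
                return cache[key]
            cache[key] = False             # pre-seed so cycles terminate
            node = prompt.get(node_id)
            result = False
            if isinstance(node, dict):
                for value in node.get("inputs", {}).values():
                    if isinstance(value, list) and len(value) == 2:
                        upstream = str(value[0])
                        if upstream == target_id or has_upstream(upstream, target_id):
                            result = True
                            break
            cache[key] = result
            return result

        return has_upstream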

FILE: tests/test_queue_request.py
  function _load_queue_request_module (line 6) | def _load_queue_request_module():
  class QueueRequestPayloadTests (line 19) | class QueueRequestPayloadTests(unittest.TestCase):
    method _base_payload (line 20) | def _base_payload(self):
    method test_normalizes_enabled_worker_ids (line 27) | def test_normalizes_enabled_worker_ids(self):
    method test_supports_legacy_workers_field (line 37) | def test_supports_legacy_workers_field(self):
    method test_supports_auto_prepare_prompt_fallback (line 46) | def test_supports_auto_prepare_prompt_fallback(self):
    method test_normalizes_trace_execution_id (line 61) | def test_normalizes_trace_execution_id(self):
    method test_blank_trace_execution_id_normalizes_to_none (line 69) | def test_blank_trace_execution_id_normalizes_to_none(self):
    method test_auto_prepare_defaults_true (line 77) | def test_auto_prepare_defaults_true(self):
    method test_workers_field_must_be_list (line 81) | def test_workers_field_must_be_list(self):
    method test_trace_execution_id_must_be_string (line 88) | def test_trace_execution_id_must_be_string(self):
    method test_auto_prepare_false_still_falls_back_to_workflow_prompt (line 94) | def test_auto_prepare_false_still_falls_back_to_workflow_prompt(self):
    method test_auto_prepare_must_be_boolean (line 105) | def test_auto_prepare_must_be_boolean(self):
    method test_invalid_delegate_master_type_raises (line 111) | def test_invalid_delegate_master_type_raises(self):
    method test_invalid_enabled_worker_ids_type_raises (line 117) | def test_invalid_enabled_worker_ids_type_raises(self):
    method test_invalid_top_level_payload_raises (line 123) | def test_invalid_top_level_payload_raises(self):
    method test_missing_prompt_raises (line 127) | def test_missing_prompt_raises(self):
    method test_missing_enabled_worker_ids_raises (line 137) | def test_missing_enabled_worker_ids_raises(self):
    method test_missing_client_id_raises (line 143) | def test_missing_client_id_raises(self):

FILE: tests/test_static_mode.py
  function _load_static_mode_module (line 11) | def _load_static_mode_module():
  class _FakeStaticWorker (line 134) | class _FakeStaticWorker(static_mode.StaticModeMixin):
    method __init__ (line 135) | def __init__(self):
    method round_to_multiple (line 142) | def round_to_multiple(self, value):
    method calculate_tiles (line 145) | def calculate_tiles(self, _width, _height, _tile_width, _tile_height, ...
    method _poll_job_ready (line 148) | def _poll_job_ready(self, *_args, **_kwargs):
    method _request_tile_from_master (line 151) | async def _request_tile_from_master(self, *_args, **_kwargs):
    method _send_heartbeat_to_master (line 155) | async def _send_heartbeat_to_master(self, *_args, **_kwargs):
    method send_tiles_batch_to_master (line 158) | async def send_tiles_batch_to_master(
    method _extract_and_process_tile (line 174) | def _extract_and_process_tile(self, upscaled_image, *_args, **_kwargs):
    method create_tile_mask (line 179) | def create_tile_mask(self, *_args, **_kwargs):
    method blend_tile (line 183) | def blend_tile(self, base_image, *_args, **_kwargs):
  function _call_worker_static (line 187) | def _call_worker_static(fake_worker):
  class StaticModeWorkerFlowTests (line 214) | class StaticModeWorkerFlowTests(unittest.TestCase):
    method test_worker_static_aborts_when_job_not_ready (line 215) | def test_worker_static_aborts_when_job_not_ready(self):
    method test_worker_static_requests_tiles_and_flushes_final_batch (line 226) | def test_worker_static_requests_tiles_and_flushes_final_batch(self):
    method test_flush_empty_final_still_sends_completion_signal (line 241) | def test_flush_empty_final_still_sends_completion_signal(self):

FILE: tests/test_worker_process_runtime.py
  function _load_process_module (line 10) | def _load_process_module(module_filename: str):
  class ComfyRootDiscoveryTests (line 58) | class ComfyRootDiscoveryTests(unittest.TestCase):
    method test_prefers_loaded_comfyui_module_path (line 59) | def test_prefers_loaded_comfyui_module_path(self):
  class LaunchCommandBuilderTests (line 72) | class LaunchCommandBuilderTests(unittest.TestCase):
    method test_inherits_runtime_layout_args_for_desktop (line 73) | def test_inherits_runtime_layout_args_for_desktop(self):

FILE: upscale/conditioning.py
  function clone_control_chain (line 4) | def clone_control_chain(control, clone_hint=True):
  function clone_conditioning (line 17) | def clone_conditioning(cond_list, clone_hints=True):

FILE: upscale/job_models.py
  class BaseJobState (line 6) | class BaseJobState:
  class TileJobState (line 11) | class TileJobState(BaseJobState):
  class ImageJobState (line 28) | class ImageJobState(BaseJobState):
    method pending_tasks (line 44) | def pending_tasks(self):
    method completed_tasks (line 48) | def completed_tasks(self):
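
  The listing implies ImageJobState derives its task views from stored completion
  data. A loose illustration of that shape only; the field names are guesses, not
  the real dataclass:

    from dataclasses import dataclass, field

    @dataclass
    class ImageJobStateSketch:
        total_images: int = 0
        completed: dict = field(default_factory=dict)    # image index -> result

        @property
        def pending_tasks(self):
            return [i for i in range(self.total_images) if i not in self.completed]

        @property
        def completed_tasks(self):
            return sorted(self.completed)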

FILE: upscale/job_state.py
  class JobStateMixin (line 9) | class JobStateMixin:
    method _get_job_data (line 10) | async def _get_job_data(self, multi_job_id):
    method _get_all_completed_tasks (line 16) | async def _get_all_completed_tasks(self, multi_job_id):
    method _get_next_image_index (line 25) | async def _get_next_image_index(self, multi_job_id):
    method _get_next_tile_index (line 42) | async def _get_next_tile_index(self, multi_job_id):
    method _get_total_completed_count (line 59) | async def _get_total_completed_count(self, multi_job_id):
    method _get_all_completed_images (line 70) | async def _get_all_completed_images(self, multi_job_id):
    method _get_pending_count (line 79) | async def _get_pending_count(self, multi_job_id):
    method _drain_worker_results_queue (line 90) | async def _drain_worker_results_queue(self, multi_job_id):
    method _check_and_requeue_timed_out_workers (line 133) | async def _check_and_requeue_timed_out_workers(self, multi_job_id, bat...

FILE: upscale/job_store.py
  function ensure_tile_jobs_initialized (line 15) | def ensure_tile_jobs_initialized():
  function _init_job_queue (line 34) | async def _init_job_queue(
  function init_dynamic_job (line 83) | async def init_dynamic_job(
  function init_static_job_batched (line 100) | async def init_static_job_batched(
  function _drain_results_queue (line 117) | async def _drain_results_queue(multi_job_id):
  function _get_completed_count (line 155) | async def _get_completed_count(multi_job_id):
  function _mark_task_completed (line 165) | async def _mark_task_completed(multi_job_id, task_id, result):
  function _cleanup_job (line 174) | async def _cleanup_job(multi_job_id):

FILE: upscale/job_timeout.py
  function _find_worker_record (line 11) | def _find_worker_record(worker_id):
  function _check_and_requeue_timed_out_workers (line 17) | async def _check_and_requeue_timed_out_workers(multi_job_id, total_tasks):

FILE: upscale/modes/dynamic.py
  class DynamicModeMixin (line 12) | class DynamicModeMixin:
    method process_master_dynamic (line 22) | def process_master_dynamic(self, upscaled_image, model, positive, nega...
    method process_worker_dynamic (line 213) | def process_worker_dynamic(self, upscaled_image, model, positive, nega...

FILE: upscale/modes/single_gpu.py
  class SingleGpuModeMixin (line 7) | class SingleGpuModeMixin:
    method process_single_gpu (line 8) | def process_single_gpu(self, upscaled_image, model, positive, negative...

FILE: upscale/modes/static.py
  class StaticModeMixin (line 23) | class StaticModeMixin:
    method _poll_job_ready (line 33) | def _poll_job_ready(self, multi_job_id, master_url, worker_id=None, ma...
    method _extract_and_process_tile (line 49) | def _extract_and_process_tile(
    method _flush_tiles_to_master (line 85) | def _flush_tiles_to_master(
    method _master_process_one_tile (line 122) | def _master_process_one_tile(
    method _process_worker_static_sync (line 191) | def _process_worker_static_sync(self, upscaled_image, model, positive,...
    method _async_collect_and_monitor_static (line 316) | async def _async_collect_and_monitor_static(self, multi_job_id, total_...
    method _process_master_static_sync (line 371) | def _process_master_static_sync(self, upscaled_image, model, positive,...

FILE: upscale/payload_parsers.py
  function _parse_tiles_from_form (line 7) | def _parse_tiles_from_form(data):
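
  The contract for this parser is spelled out by ParseTilesFromFormTests in
  tests/test_payload_parsers.py: a JSON list of tile metadata, one uploaded file
  field per tile, a padding value that defaults to 0, and optional batch/global
  indices. A sketch under those assumptions (the form field names are invented):

    import json
    from io import BytesIO
    from PIL import Image

    def parse_tiles_from_form_sketch(form):
        raw = form.get("tiles")
        if raw is None:
            raise ValueError("missing tiles metadata")
        try:
            metadata = json.loads(raw)
        except json.JSONDecodeError as exc:
            raise ValueError(f"invalid tiles metadata: {exc}")
        if not isinstance(metadata, list):
            raise ValueError("tiles metadata must be a list")
        try:
            padding = int(form.get("padding", 0))
        except (TypeError, ValueError):
            padding = 0                                  # bad strings fall back to 0
        tiles = []
        for idx, meta in enumerate(metadata):
            fld = form.get(f"tile_{idx}")
            if fld is None or not hasattr(fld, "file"):
                raise ValueError(f"missing tile file field tile_{idx}")
            try:
                image = Image.open(BytesIO(fld.file.read()))
            except Exception as exc:
                raise ValueError(f"tile_{idx} is not an image: {exc}")
            entry = {"image": image, "x": meta["x"], "y": meta["y"], "padding": padding}
            for opt in ("batch_idx", "global_idx"):
                if opt in meta:                          # only when present in metadata
                    entry[opt] = meta[opt]
            tiles.append(entry)
        return tiles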

FILE: upscale/result_collector.py
  class ResultCollectorMixin (line 12) | class ResultCollectorMixin:
    method _log_worker_timeout_status (line 22) | def _log_worker_timeout_status(self, job_data, current_time: float, mu...
    method _async_collect_results (line 36) | async def _async_collect_results(self, multi_job_id, num_workers, mode...
    method _async_collect_worker_tiles (line 180) | async def _async_collect_worker_tiles(self, multi_job_id, num_workers):
    method _mark_image_completed (line 184) | async def _mark_image_completed(self, multi_job_id, image_idx, image_p...
    method _async_collect_dynamic_images (line 194) | async def _async_collect_dynamic_images(self, multi_job_id, remaining_...

FILE: upscale/tile_ops.py
  class TileOpsMixin (line 13) | class TileOpsMixin:
    method round_to_multiple (line 14) | def round_to_multiple(self, value: int, multiple: int = 8) -> int:
    method calculate_tiles (line 18) | def calculate_tiles(self, image_width: int, image_height: int,
    method extract_tile_with_padding (line 34) | def extract_tile_with_padding(self, image: torch.Tensor, x: int, y: int,
    method extract_batch_tile_with_padding (line 96) | def extract_batch_tile_with_padding(self, images: torch.Tensor, x: int...
    method process_tile (line 157) | def process_tile(self, tile_tensor: torch.Tensor, model, positive, neg...
    method process_tiles_batch (line 239) | def process_tiles_batch(self, tile_batch: torch.Tensor, model, positiv...
    method create_tile_mask (line 289) | def create_tile_mask(self, image_width: int, image_height: int,
    method blend_tile (line 310) | def blend_tile(self, base_image: Image.Image, tile_image: Image.Image,
    method _slice_conditioning (line 351) | def _slice_conditioning(self, positive, negative, batch_idx):
    method _process_and_blend_tile (line 374) | def _process_and_blend_tile(self, tile_idx, tile_pos, upscaled_image, ...
    method _process_single_tile (line 402) | def _process_single_tile(self, global_idx, num_tiles_per_image, upscal...
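
  round_to_multiple and calculate_tiles are the geometric core of the tiled
  upscaler: diffusion models want dimensions snapped to a multiple (typically 8),
  and the tile grid must cover the image without overrunning it. A minimal sketch
  of that arithmetic (the real methods may add overlap and padding handling):

    import math

    def round_to_multiple(value: int, multiple: int = 8) -> int:
        return max(multiple, round(value / multiple) * multiple)

    def calculate_tiles(image_width, image_height, tile_width, tile_height):
        cols = math.ceil(image_width / tile_width)
        rows = math.ceil(image_height / tile_height)
        positions = []
        for row in range(rows):
            for col in range(cols):
                # clamp the last row/column so every tile stays inside the image
                x = max(0, min(col * tile_width, image_width - tile_width))
                y = max(0, min(row * tile_height, image_height - tile_height))
                positions.append((x, y))
        return positions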

FILE: upscale/worker_comms.py
  class WorkerCommsMixin (line 11) | class WorkerCommsMixin:
    method _send_heartbeat_to_master (line 12) | async def _send_heartbeat_to_master(self, multi_job_id, master_url, wo...
    method send_tiles_batch_to_master (line 16) | async def send_tiles_batch_to_master(self, processed_tiles, multi_job_...
    method _send_tiles_completion_signal (line 110) | async def _send_tiles_completion_signal(self, multi_job_id, master_url...
    method _request_work_item_from_master (line 124) | async def _request_work_item_from_master(
    method _request_image_from_master (line 171) | async def _request_image_from_master(self, multi_job_id, master_url, w...
    method _request_tile_from_master (line 180) | async def _request_tile_from_master(self, multi_job_id, master_url, wo...
    method _send_full_image_to_master (line 190) | async def _send_full_image_to_master(self, image_pil, image_idx, multi...
    method _send_worker_complete_signal (line 230) | async def _send_worker_complete_signal(self, multi_job_id, master_url,...
    method _check_job_status (line 246) | async def _check_job_status(self, multi_job_id, master_url):
    method _async_yield (line 260) | async def _async_yield(self):

FILE: utils/async_helpers.py
  function run_async_in_server_loop (line 13) | def run_async_in_server_loop(coro: Coroutine, timeout: Optional[float] =...
  function _summarize_node_errors (line 60) | def _summarize_node_errors(node_errors: dict) -> str:
  class PromptValidationError (line 82) | class PromptValidationError(RuntimeError):
    method __init__ (line 85) | def __init__(self, error_payload, node_errors=None):
  function queue_prompt_payload (line 108) | async def queue_prompt_payload(
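
  run_async_in_server_loop is the bridge that lets synchronous node code drive
  coroutines on ComfyUI's already-running server event loop. The standard pattern
  for this is asyncio.run_coroutine_threadsafe; a minimal sketch, assuming the
  loop is resolved much like get_server_loop() in utils/network.py does:

    import asyncio
    from typing import Coroutine, Optional

    def run_async_in_server_loop_sketch(coro: Coroutine,
                                        timeout: Optional[float] = None):
        import server                                    # ComfyUI's PromptServer
        loop = server.PromptServer.instance.loop         # the running server loop
        future = asyncio.run_coroutine_threadsafe(coro, loop)
        return future.result(timeout=timeout)            # block the calling thread

  Blocking on future.result() is safe here precisely because the caller is a
  worker thread, never the event-loop thread itself.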

FILE: utils/audio_payload.py
  function encode_audio_payload (line 16) | def encode_audio_payload(audio_payload):
  function decode_audio_payload (line 46) | def decode_audio_payload(audio_payload):
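
  The wire format is not visible from the listing, but one plausible shape,
  assuming ComfyUI's AUDIO dict of a [batch, channels, samples] waveform tensor
  plus a sample rate, is base64-encoded raw samples with enough metadata to
  rebuild the tensor:

    import base64
    import numpy as np
    import torch

    def encode_audio_payload_sketch(audio):
        wave = audio["waveform"].cpu().numpy().astype(np.float32)
        return {
            "sample_rate": audio["sample_rate"],
            "shape": list(wave.shape),                   # e.g. [1, 2, 44100]
            "data": base64.b64encode(wave.tobytes()).decode("ascii"),
        }

    def decode_audio_payload_sketch(payload):
        raw = base64.b64decode(payload["data"])
        wave = np.frombuffer(raw, dtype=np.float32).reshape(payload["shape"]).copy()
        return {"waveform": torch.from_numpy(wave),
                "sample_rate": payload["sample_rate"]}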

FILE: utils/cloudflare/binary.py
  function _get_project_root (line 13) | def _get_project_root():
  function _get_cloudflared_dir (line 17) | def _get_cloudflared_dir():
  function _get_platform_binary_name (line 21) | def _get_platform_binary_name():
  function _get_binary_path (line 41) | def _get_binary_path(bin_dir=None):
  function _download_cloudflared (line 47) | def _download_cloudflared():
  function ensure_binary (line 69) | def ensure_binary() -> str:

FILE: utils/cloudflare/process_reader.py
  class ProcessReader (line 14) | class ProcessReader:
    method __init__ (line 15) | def __init__(self, log_file=None):
    method set_log_file (line 25) | def set_log_file(self, log_file):
    method _append_log (line 28) | def _append_log(self, line):
    method _reader (line 40) | def _reader(self):
    method start (line 66) | def start(self, process, loop):
    method wait_for_url (line 76) | async def wait_for_url(self, timeout):
    method stop (line 82) | def stop(self):
    method get_url (line 90) | def get_url(self):
    method get_last_error (line 93) | def get_last_error(self):
    method get_recent_logs (line 96) | def get_recent_logs(self):

FILE: utils/cloudflare/state.py
  function _get_tunnel_config (line 7) | def _get_tunnel_config(cfg):
  function load_tunnel_state (line 14) | def load_tunnel_state():
  function persist_tunnel_state (line 28) | def persist_tunnel_state(
  function clear_tunnel_state (line 56) | def clear_tunnel_state(log_file=None, previous_host=None, master_host=No...
  function resolve_restore_master_host (line 67) | def resolve_restore_master_host(previous_master_host):

FILE: utils/cloudflare/tunnel.py
  class CloudflareTunnelManager (line 19) | class CloudflareTunnelManager:
    method __init__ (line 20) | def __init__(self):
    method base_dir (line 36) | def base_dir(self):
    method _restore_state (line 39) | def _restore_state(self):
    method start_tunnel (line 56) | async def start_tunnel(self):
    method stop_tunnel (line 156) | async def stop_tunnel(self):
    method get_status (line 190) | def get_status(self):

FILE: utils/config.py
  function _config_path (line 19) | def _config_path():
  function get_default_config (line 22) | def get_default_config():
  function _merge_with_defaults (line 47) | def _merge_with_defaults(data, defaults):
  function invalidate_config_cache (line 68) | def invalidate_config_cache():
  function load_config (line 75) | def load_config():
  function save_config (line 99) | def save_config(config):
  function config_transaction (line 120) | async def config_transaction():
  function ensure_config_exists (line 131) | def ensure_config_exists():
  function get_worker_timeout_seconds (line 141) | def get_worker_timeout_seconds(default: int = HEARTBEAT_TIMEOUT) -> int:
  function is_master_delegate_only (line 160) | def is_master_delegate_only() -> bool:
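
  Two of these helpers are pinned down precisely by tests/test_config.py: the
  recursive merge keeps unknown keys for forward compatibility and lets loaded
  values (even None) win over defaults, while the timeout getter clamps to a
  floor of 1. A sketch consistent with those tests (the config key name is an
  assumption):

    import copy

    def merge_with_defaults(data, defaults):
        if not isinstance(data, dict):
            return copy.deepcopy(defaults)       # non-dict input -> pure defaults
        merged = copy.deepcopy(data)             # unknown keys survive
        for key, default_value in defaults.items():
            if key not in merged:
                merged[key] = copy.deepcopy(default_value)
            elif isinstance(merged[key], dict) and isinstance(default_value, dict):
                merged[key] = merge_with_defaults(merged[key], default_value)
            # otherwise the loaded value wins, even when it is None or non-dict
        return merged

    def get_worker_timeout_seconds_sketch(config, default=60):
        try:
            value = int(config.get("worker_timeout_seconds", default))
        except (TypeError, ValueError):
            value = default
        return max(1, value)                     # zero/negative clamp to one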

FILE: utils/crop_model_patch.py
  function crop_model_cond (line 10) | def crop_model_cond(model, crop_regions, init_size, canvas_size, tile_si...
  class ModelPatchCropper (line 44) | class ModelPatchCropper:
    method __init__ (line 47) | def __init__(self, patch):
    method __del__ (line 69) | def __del__(self):
    method crop (line 74) | def crop(self, crop_regions, canvas_size, latent_crop=True):

FILE: utils/exceptions.py
  class DistributedError (line 4) | class DistributedError(Exception):
  class WorkerError (line 8) | class WorkerError(DistributedError):
    method __init__ (line 11) | def __init__(self, message, worker_id=None):
  class WorkerTimeoutError (line 16) | class WorkerTimeoutError(WorkerError):
  class WorkerNotAvailableError (line 20) | class WorkerNotAvailableError(WorkerError):
  class JobQueueError (line 24) | class JobQueueError(DistributedError):
  class TileCollectionError (line 28) | class TileCollectionError(DistributedError):
  class ProcessError (line 32) | class ProcessError(DistributedError):
    method __init__ (line 35) | def __init__(self, message, pid=None, worker_id=None):
  class TunnelError (line 41) | class TunnelError(DistributedError):
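
  The listing above gives the full hierarchy, which is worth seeing in one piece:
  everything derives from DistributedError, and the two enriched classes carry
  identifying context (docstrings here are paraphrases, not the originals):

    class DistributedError(Exception):
        """Base class for all errors raised by this extension."""

    class WorkerError(DistributedError):
        def __init__(self, message, worker_id=None):
            super().__init__(message)
            self.worker_id = worker_id           # which worker failed

    class WorkerTimeoutError(WorkerError): ...
    class WorkerNotAvailableError(WorkerError): ...
    class JobQueueError(DistributedError): ...
    class TileCollectionError(DistributedError): ...

    class ProcessError(DistributedError):
        def __init__(self, message, pid=None, worker_id=None):
            super().__init__(message)
            self.pid = pid                       # OS process id, when known
            self.worker_id = worker_id

    class TunnelError(DistributedError): ...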

FILE: utils/image.py
  function tensor_to_pil (line 8) | def tensor_to_pil(img_tensor, batch_index=0):
  function pil_to_tensor (line 12) | def pil_to_tensor(image):
  function ensure_contiguous (line 20) | def ensure_contiguous(tensor):
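
  These are the standard ComfyUI conversions: image tensors are [batch, height,
  width, channels] floats in 0..1, while PIL wants uint8. A sketch of both
  directions (the real helpers match these signatures but may differ in detail):

    import numpy as np
    import torch
    from PIL import Image

    def tensor_to_pil_sketch(img_tensor, batch_index=0):
        arr = img_tensor[batch_index].cpu().numpy()          # H, W, C in 0..1
        return Image.fromarray((arr * 255.0).clip(0, 255).astype(np.uint8))

    def pil_to_tensor_sketch(image):
        arr = np.asarray(image.convert("RGB")).astype(np.float32) / 255.0
        return torch.from_numpy(arr).unsqueeze(0)            # restore batch dim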

FILE: utils/logging.py
  function is_debug_enabled (line 15) | def is_debug_enabled():
  function debug_log (line 36) | def debug_log(message):
  function log (line 41) | def log(message):

FILE: utils/network.py
  function get_client_session (line 14) | async def get_client_session():
  function cleanup_client_session (line 28) | async def cleanup_client_session():
  function handle_api_error (line 35) | async def handle_api_error(request, error, status=500):
  function get_server_port (line 46) | def get_server_port():
  function get_server_loop (line 51) | def get_server_loop():
  function normalize_host (line 57) | def normalize_host(value):
  function _split_host_and_port (line 69) | def _split_host_and_port(host):
  function build_worker_url (line 88) | def build_worker_url(worker, endpoint=""):
  function probe_worker (line 108) | async def probe_worker(worker_url: str, timeout: float = 5.0) -> dict | ...
  function build_master_url (line 139) | def build_master_url(config=None, prompt_server_instance=None):
  function build_master_callback_url (line 185) | def build_master_callback_url(worker, config=None, prompt_server_instanc...
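
  normalize_host is the piece the NetworkHelpersTests exercise first: it reduces
  any user-entered value to bare host[:port]. A sketch matching the tested
  behaviour (None passes through, scheme and path are stripped):

    def normalize_host_sketch(value):
        if value is None:
            return None
        host = str(value).strip()
        if "://" in host:
            host = host.split("://", 1)[1]       # drop "http://" / "https://"
        return host.split("/", 1)[0]             # drop any trailing path

    # normalize_host_sketch("http://10.0.0.5:8188/api") -> "10.0.0.5:8188"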

FILE: utils/process.py
  function is_process_alive (line 9) | def is_process_alive(pid):
  function terminate_process (line 24) | def terminate_process(process, timeout=5):
  function get_python_executable (line 34) | def get_python_executable():

FILE: utils/trace_logger.py
  function trace_prefix (line 4) | def trace_prefix(trace_execution_id: str) -> str:
  function trace_debug (line 8) | def trace_debug(trace_execution_id: str, message: str) -> None:
  function trace_info (line 12) | def trace_info(trace_execution_id: str, message: str) -> None:

FILE: utils/usdu_managment.py
  function _send_heartbeat_to_master (line 28) | async def _send_heartbeat_to_master(multi_job_id, master_url, worker_id):

FILE: utils/usdu_utils.py
  function tensor_to_pil (line 14) | def tensor_to_pil(img_tensor, batch_index=0):
  function pil_to_tensor (line 20) | def pil_to_tensor(image):
  function controlnet_hint_to_pil (line 29) | def controlnet_hint_to_pil(tensor, batch_index=0):
  function pil_to_controlnet_hint (line 33) | def pil_to_controlnet_hint(img):
  function crop_tensor (line 37) | def crop_tensor(tensor, region):
  function resize_tensor (line 43) | def resize_tensor(tensor, size, mode="nearest-exact"):
  function get_crop_region (line 49) | def get_crop_region(mask, pad=0):
  function fix_crop_region (line 65) | def fix_crop_region(region, image_size):
  function expand_crop (line 76) | def expand_crop(region, width, height, target_width, target_height):
  function resize_region (line 115) | def resize_region(region, init_size, resize_size):
  function pad_image (line 127) | def pad_image(image, left_pad, right_pad, top_pad, bottom_pad, fill=Fals...
  function pad_image2 (line 169) | def pad_image2(image, left_pad, right_pad, top_pad, bottom_pad, fill=Fal...
  function pad_tensor (line 206) | def pad_tensor(tensor, left_pad, right_pad, top_pad, bottom_pad, fill=Fa...
  function resize_and_pad_image (line 242) | def resize_and_pad_image(image, width, height, fill=False, blur=False):
  function resize_and_pad_tensor (line 269) | def resize_and_pad_tensor(tensor, width, height, fill=False, blur=False):
  function crop_controlnet (line 297) | def crop_controlnet(cond_dict, region, init_size, canvas_size, tile_size...
  function region_intersection (line 315) | def region_intersection(region1, region2):
  function crop_gligen (line 335) | def crop_gligen(cond_dict, region, init_size, canvas_size, tile_size, w_...
  function crop_area (line 381) | def crop_area(cond_dict, region, init_size, canvas_size, tile_size, w_pa...
  function crop_mask (line 415) | def crop_mask(cond_dict, region, init_size, canvas_size, tile_size, w_pa...
  function crop_reference_latents (line 445) | def crop_reference_latents(cond_dict, region, init_size, canvas_size, ti...
  function crop_cond (line 506) | def crop_cond(cond, region, init_size, canvas_size, tile_size, w_pad=0, ...

FILE: web/apiClient.js
  function createApiClient (line 4) | function createApiClient(baseUrl) {

FILE: web/constants.js
  constant BUTTON_STYLES (line 1) | const BUTTON_STYLES = {
  constant STATUS_COLORS (line 25) | const STATUS_COLORS = {
  constant UI_COLORS (line 32) | const UI_COLORS = {
  constant PULSE_ANIMATION_CSS (line 44) | const PULSE_ANIMATION_CSS = `
  constant UI_STYLES (line 132) | const UI_STYLES = {
  constant TIMEOUTS (line 172) | const TIMEOUTS = {
  constant ENDPOINTS (line 194) | const ENDPOINTS = {
  constant NODE_CLASSES (line 225) | const NODE_CLASSES = {
  function generateUUID (line 233) | function generateUUID() {

FILE: web/distributedValue.js
  constant NODE_CLASS (line 4) | const NODE_CLASS = "DistributedValue";
  constant CONVERTED_WIDGET (line 5) | const CONVERTED_WIDGET = "converted-widget";
  constant DYNAMIC_DEFAULT_WIDGET (line 6) | const DYNAMIC_DEFAULT_WIDGET = "_dv_default";
  constant DYNAMIC_WORKER_WIDGET_PREFIX (line 7) | const DYNAMIC_WORKER_WIDGET_PREFIX = "_dv_worker_";
  constant WORKERS_CHANGED_EVENT (line 8) | const WORKERS_CHANGED_EVENT = "distributed:workers-changed";
  function filterEnabledWorkers (line 13) | function filterEnabledWorkers(workers) {
  function fetchWorkers (line 18) | async function fetchWorkers() {
  function getRawDefaultWidget (line 29) | function getRawDefaultWidget(node) {
  function getRawWorkerValuesWidget (line 33) | function getRawWorkerValuesWidget(node) {
  function getDynamicDefaultWidget (line 37) | function getDynamicDefaultWidget(node) {
  function getDynamicWorkerWidgets (line 41) | function getDynamicWorkerWidgets(node) {
  function hideWidgetForGood (line 45) | function hideWidgetForGood(node, widget, suffix = "") {
  function hideRawWidgets (line 66) | function hideRawWidgets(node) {
  function removeDynamicDefaultWidget (line 71) | function removeDynamicDefaultWidget(node) {
  function removeDynamicWorkerWidgets (line 78) | function removeDynamicWorkerWidgets(node) {
  function readWorkerStore (line 87) | function readWorkerStore(node) {
  function writeWorkerStore (line 98) | function writeWorkerStore(node, store) {
  function normalizeComboOptions (line 104) | function normalizeComboOptions(options) {
  function resolveGraphLink (line 111) | function resolveGraphLink(graph, linkId) {
  function detectTargetType (line 125) | function detectTargetType(node) {
  function normalizeNumber (line 180) | function normalizeNumber(value, fallback) {
  function getDefaultInitialValue (line 185) | function getDefaultInitialValue(node, inputType, comboOptions) {
  function setRawDefaultValue (line 202) | function setRawDefaultValue(node, value) {
  function serializeWorkerStoreFromWidgets (line 208) | function serializeWorkerStoreFromWidgets(node, inputType, comboOptions) {
  function updateWorkerStoreTypeMetadata (line 232) | function updateWorkerStoreTypeMetadata(node, inputType, comboOptions) {
  function createDynamicDefaultWidget (line 243) | function createDynamicDefaultWidget(node, inputType, comboOptions) {
  function getWorkerInitialValue (line 297) | function getWorkerInitialValue(store, key, workerId, inputType, comboOpt...
  function createWorkerWidgets (line 319) | function createWorkerWidgets(node, workers, inputType, comboOptions) {
  function rebuildWidgets (line 384) | function rebuildWidgets(node) {
  function refreshNodeWorkers (line 413) | function refreshNodeWorkers(node, workers) {
  function refreshTrackedNodes (line 419) | async function refreshTrackedNodes(workers = null) {
  function attachWorkersChangedListener (line 426) | function attachWorkersChangedListener() {
  method nodeCreated (line 444) | async nodeCreated(node) {

FILE: web/executionUtils.js
  function setupInterceptor (line 6) | function setupInterceptor(extension) {
  function executeParallelDistributed (line 25) | async function executeParallelDistributed(extension, promptWrapper) {
  function performPreflightCheck (line 108) | async function performPreflightCheck(extension, workers) {

FILE: web/image_batch_divider.js
  constant BATCH_DIVIDER_NODES (line 4) | const BATCH_DIVIDER_NODES = {
  method nodeCreated (line 11) | async nodeCreated(node) {
  method nodeBeforeRemove (line 83) | nodeBeforeRemove(node) {

FILE: web/main.js
  constant WORKERS_CHANGED_EVENT (line 16) | const WORKERS_CHANGED_EVENT = "distributed:workers-changed";
  class DistributedExtension (line 18) | class DistributedExtension {
    method constructor (line 19) | constructor() {
    method log (line 61) | log(message, level = "info") {
    method injectStyles (line 70) | injectStyles() {
    method enabledWorkers (line 97) | get enabledWorkers() {
    method isEnabled (line 101) | get isEnabled() {
    method isMasterParticipationEnabled (line 105) | isMasterParticipationEnabled() {
    method isMasterFallbackActive (line 109) | isMasterFallbackActive() {
    method isMasterParticipating (line 113) | isMasterParticipating() {
    method updateMasterParticipation (line 117) | async updateMasterParticipation(enabled) {
    method loadConfig (line 133) | async loadConfig() {
    method _emitWorkersChanged (line 162) | _emitWorkersChanged() {
    method _applyMasterHost (line 171) | _applyMasterHost(host) {
    method _parseHostInput (line 181) | _parseHostInput(value) {
    method updateTunnelUIElements (line 185) | updateTunnelUIElements(isRunning, isStarting) {
    method refreshTunnelStatus (line 189) | async refreshTunnelStatus() {
    method handleTunnelToggle (line 193) | async handleTunnelToggle(button) {
    method updateWorkerEnabled (line 197) | async updateWorkerEnabled(workerId, enabled) {
    method _updateSetting (line 229) | async _updateSetting(key, value) {
    method registerSidebarTab (line 269) | registerSidebarTab() {
    method onPanelOpen (line 289) | onPanelOpen() {
    method onPanelClose (line 298) | onPanelClose() {
    method _applyNodes2Style (line 317) | _applyNodes2Style() {
    method _parseColorToRgba (line 323) | _parseColorToRgba(colorValue) {
    method _isPanelLightTheme (line 363) | _isPanelLightTheme() {
    method _applyThemeToneClass (line 384) | _applyThemeToneClass() {
    method _startThemeObserver (line 391) | _startThemeObserver() {
    method _stopThemeObserver (line 411) | _stopThemeObserver() {
    method _setupNodes2Listener (line 419) | _setupNodes2Listener() {
    method setupInterceptor (line 431) | setupInterceptor() {
    method updateWorkerCard (line 435) | updateWorkerCard(workerId, newStatus) {
    method cleanup (line 442) | cleanup() {
    method getMasterUrl (line 456) | getMasterUrl() {
    method detectMasterIP (line 460) | async detectMasterIP() {
    method _handleInterruptWorkers (line 464) | _handleInterruptWorkers(button) {
    method _handleClearMemory (line 468) | _handleClearMemory(button) {
  method setup (line 475) | async setup() {

FILE: web/masterDetection.js
  function detectMasterIP (line 3) | async function detectMasterIP(extension) {

FILE: web/sidebar/actionsSection.js
  function renderActionsSection (line 3) | function renderActionsSection(extension) {

FILE: web/sidebar/settingsSection.js
  function renderSettingsSection (line 3) | function renderSettingsSection(extension) {

FILE: web/sidebar/workersSection.js
  function renderWorkersSection (line 3) | function renderWorkersSection(extension) {

FILE: web/sidebarRenderer.js
  function updateWorkerCard (line 7) | function updateWorkerCard(extension, workerId, newStatus = {}) {
  function renderSidebarContent (line 36) | async function renderSidebarContent(extension, el) {

FILE: web/stateManager.js
  function createStateManager (line 1) | function createStateManager() {

FILE: web/tunnelManager.js
  function updateTunnelUIElements (line 1) | function updateTunnelUIElements(extension, isRunning, isStarting) {
  function refreshTunnelStatus (line 58) | async function refreshTunnelStatus(extension) {
  function handleTunnelToggle (line 75) | async function handleTunnelToggle(extension, button) {

FILE: web/ui.js
  class DistributedUI (line 128) | class DistributedUI {
    method constructor (line 129) | constructor() {
    method createStatusDot (line 134) | createStatusDot(id, color = "#666", title = "Status") {
    method createButton (line 142) | createButton(text, onClick, customStyle = "") {
    method createButtonGroup (line 151) | createButtonGroup(buttons, style = "") {
    method createWorkerControls (line 158) | createWorkerControls(workerId, handlers = {}) {
    method createFormGroup (line 190) | createFormGroup(label, value, id, type = "text", placeholder = "") {
    method createInfoBox (line 213) | createInfoBox(text) {
    method addHoverEffect (line 221) | addHoverEffect(element, onHover, onLeave) {
    method createCard (line 226) | createCard(type = 'worker', options = {}) {
    method createCardColumn (line 259) | createCardColumn(type = 'checkbox', options = {}) {
    method createInfoRow (line 279) | createInfoRow(options = {}) {
    method createWorkerContent (line 286) | createWorkerContent() {
    method createSettingsForm (line 292) | createSettingsForm(fields = [], options = {}) {
    method createButtonHelper (line 332) | createButtonHelper(text, onClick, style) {
    method updateMasterDisplay (line 336) | updateMasterDisplay(extension) {
    method showToast (line 358) | showToast(app, severity, summary, detail, life = 3000) {
    method showCloudflareWarning (line 364) | showCloudflareWarning(extension, masterHost) {
    method updateStatusDot (line 368) | updateStatusDot(workerId, color, title, pulsing = false) {
    method showLogModal (line 394) | showLogModal(extension, workerId, logData, fetchLog = null) {
    method createWorkerSettingsForm (line 422) | createWorkerSettingsForm(extension, worker) {
    method createSettingsToggle (line 426) | createSettingsToggle() {
    method createCheckboxOrIconColumn (line 446) | createCheckboxOrIconColumn(config, data, extension) {
    method createStatusDotHelper (line 516) | createStatusDotHelper(config, data, extension) {
    method createSettingsToggleHelper (line 544) | createSettingsToggleHelper(expandedId, extension) {
    method createControlsSection (line 561) | createControlsSection(config, data, extension, isRemote) {
    method createSettingsSection (line 649) | createSettingsSection(config, data, extension) {
    method createMasterSettingsForm (line 681) | createMasterSettingsForm(extension, data) {
    method addPlaceholderHover (line 760) | addPlaceholderHover(card, leftColumn, entityType) {
    method renderEntityCard (line 777) | renderEntityCard(entityType, data, extension) {

FILE: web/ui/buttonHelpers.js
  function createButtonHelper (line 1) | function createButtonHelper(ui, text, onClick, style) {
  function createCheckboxSetting (line 5) | function createCheckboxSetting(id, label, tooltip, checked, onChange) {
  function createNumberSetting (line 28) | function createNumberSetting(id, label, tooltip, value, min, step, onCha...

FILE: web/ui/cloudflareWarning.js
  function showCloudflareWarning (line 1) | function showCloudflareWarning(extension, masterHost) {

FILE: web/ui/entityCard.js
  function renderEntityCard (line 4) | function renderEntityCard(ui, cardConfigs, entityType, data, extension) {

FILE: web/ui/logModal.js
  function formatFileSize (line 3) | function formatFileSize(bytes) {
  function createLogModal (line 9) | function createLogModal() {

FILE: web/ui/settingsForm.js
  function createWorkerSettingsForm (line 4) | function createWorkerSettingsForm(ui, extension, worker) {

FILE: web/urlUtils.js
  function normalizeWorkerUrl (line 1) | function normalizeWorkerUrl(rawUrl) {
  function parseHostInput (line 24) | function parseHostInput(value) {
  function buildWorkerUrl (line 43) | function buildWorkerUrl(worker, endpoint = "", windowLocation = window.l...
  function buildWorkerWebSocketUrl (line 70) | function buildWorkerWebSocketUrl(workerUrl) {
  function getMasterUrl (line 76) | function getMasterUrl(config, windowLocation = window.location, log = nu...

FILE: web/workerLifecycle.js
  function setStatusDotClass (line 10) | function setStatusDotClass(dot, statusClass) {
  function setButtonClass (line 26) | function setButtonClass(button, className) {
  function setButtonVisibility (line 36) | function setButtonVisibility(button, visible) {
  function checkAllWorkerStatuses (line 44) | async function checkAllWorkerStatuses(extension) {
  function checkMasterStatus (line 84) | async function checkMasterStatus(extension) {
  function getWorkerUrl (line 138) | function getWorkerUrl(extension, worker, endpoint = "") {
  function checkWorkerStatus (line 142) | async function checkWorkerStatus(extension, worker) {
  function launchWorker (line 207) | async function launchWorker(extension, workerId) {
  function stopWorker (line 274) | async function stopWorker(extension, workerId) {
  function clearLaunchingFlag (line 339) | async function clearLaunchingFlag(extension, workerId) {
  function loadManagedWorkers (line 348) | async function loadManagedWorkers(extension) {
  function updateWorkerControls (line 372) | function updateWorkerControls(extension, workerId) {
  function viewWorkerLog (line 439) | async function viewWorkerLog(extension, workerId, isRemote = false) {
  function refreshLog (line 493) | async function refreshLog(extension, workerId, silent = false) {
  function startLogAutoRefresh (line 537) | function startLogAutoRefresh(extension, workerId) {
  function stopLogAutoRefresh (line 547) | function stopLogAutoRefresh(extension) {
  function toggleWorkerExpanded (line 554) | function toggleWorkerExpanded(extension, workerId) {

FILE: web/workerSettings.js
  constant WORKERS_CHANGED_EVENT (line 6) | const WORKERS_CHANGED_EVENT = "distributed:workers-changed";
  function emitWorkersChanged (line 8) | function emitWorkersChanged(extension) {
  function isRemoteWorker (line 17) | function isRemoteWorker(extension, worker) {
  function isCloudWorker (line 36) | function isCloudWorker(extension, worker) {
  function saveWorkerSettings (line 40) | async function saveWorkerSettings(extension, workerId) {
  function cancelWorkerSettings (line 200) | function cancelWorkerSettings(extension, workerId) {
  function deleteWorker (line 221) | async function deleteWorker(extension, workerId) {
  function addNewWorker (line 263) | async function addNewWorker(extension) {

FILE: web/workerUtils.js
  function handleWorkerOperation (line 4) | async function handleWorkerOperation(extension, button, operation, succe...
  function handleInterruptWorkers (line 73) | async function handleInterruptWorkers(extension, button) {
  function handleClearMemory (line 82) | async function handleClearMemory(extension, button) {
  function findNodesByClass (line 90) | function findNodesByClass(apiPrompt, className) {
  function applyProbeResultToWorkerDot (line 97) | function applyProbeResultToWorkerDot(workerId, probeResult) {

FILE: workers/__init__.py
  function get_worker_manager (line 6) | def get_worker_manager() -> WorkerProcessManager:
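
The condensed preview of workers/__init__.py later in this page shows a module-level `_worker_manager: WorkerProcessManager | None = None`, so get_worker_manager is evidently a lazy singleton accessor. A minimal sketch of that pattern (the body past the visible prefix is an assumption):

    from .process_manager import WorkerProcessManager

    _worker_manager: WorkerProcessManager | None = None

    def get_worker_manager() -> WorkerProcessManager:
        # Create the shared manager on first use, then hand back the
        # same instance so every route and node talks to one registry.
        global _worker_manager
        if _worker_manager is None:
            _worker_manager = WorkerProcessManager()
        return _worker_manager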

FILE: workers/detection.py
  function is_local_worker (line 11) | async def is_local_worker(worker_config):
  function is_same_physical_host (line 23) | async def is_same_physical_host(worker_config):
  function get_machine_id (line 49) | def get_machine_id():
  function is_docker_environment (line 64) | def is_docker_environment():
  function is_runpod_environment (line 70) | def is_runpod_environment():
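
get_machine_id, is_docker_environment, and is_runpod_environment have conventional implementations on Linux hosts; a sketch under the assumption that the standard /.dockerenv and RUNPOD_POD_ID checks are used (the actual code may differ):

    import os
    import platform

    def is_docker_environment():
        # Docker creates /.dockerenv inside containers; Podman uses
        # /run/.containerenv. Either marker means "containerised".
        return os.path.exists("/.dockerenv") or os.path.exists("/run/.containerenv")

    def is_runpod_environment():
        # RunPod pods expose RUNPOD_POD_ID in the environment.
        return "RUNPOD_POD_ID" in os.environ

    def get_machine_id():
        # /etc/machine-id is the systemd convention; fall back to the
        # hostname when it is missing (macOS, some containers).
        try:
            with open("/etc/machine-id") as f:
                return f.read().strip()
        except OSError:
            return platform.node()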

FILE: workers/process/launch_builder.py
  class LaunchCommandBuilder (line 10) | class LaunchCommandBuilder:
    method _extend_arg (line 13) | def _extend_arg(self, cmd, flag, value):
    method _extend_grouped_args (line 18) | def _extend_grouped_args(self, cmd, flag, values):
    method _get_runtime_args (line 25) | def _get_runtime_args(self):
    method _build_runtime_launch_args (line 33) | def _build_runtime_launch_args(self):
    method _find_windows_terminal (line 69) | def _find_windows_terminal(self):
    method build_launch_command (line 90) | def build_launch_command(self, worker_config, comfy_root):
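
The two _extend_* helpers read like list-building utilities for optional CLI flags on the worker launch command; a plausible sketch (signatures from the index, bodies assumed):

    def _extend_arg(self, cmd, flag, value):
        # Append "--flag value" only when the setting is present.
        if value is not None and value != "":
            cmd.extend([flag, str(value)])

    def _extend_grouped_args(self, cmd, flag, values):
        # Append "--flag v1 v2 ..." for options that take several values.
        if values:
            cmd.append(flag)
            cmd.extend(str(v) for v in values)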

FILE: workers/process/lifecycle.py
  class ProcessLifecycle (line 21) | class ProcessLifecycle:
    method __init__ (line 24) | def __init__(self, manager):
    method launch_worker (line 27) | def launch_worker(self, worker_config, show_window=False):
    method stop_worker (line 118) | def stop_worker(self, worker_id):
    method get_managed_workers (line 165) | def get_managed_workers(self):
    method cleanup_all (line 182) | def cleanup_all(self):
    method _is_process_running (line 194) | def _is_process_running(self, pid):
    method _check_worker_process (line 198) | def _check_worker_process(self, worker_id, proc_info):
    method _kill_process_tree (line 210) | def _kill_process_tree(self, pid):
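
_is_process_running is commonly the POSIX zero-signal probe; a sketch of that approach (an assumption here, and the real code may branch per platform, e.g. for Windows):

    import os

    def _is_process_running(self, pid):
        # Signal 0 checks existence and permissions without delivering
        # anything to the target process.
        try:
            os.kill(pid, 0)
        except ProcessLookupError:
            return False
        except PermissionError:
            return True  # exists, but owned by another user
        return True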

FILE: workers/process/persistence.py
  class ProcessPersistence (line 5) | class ProcessPersistence:
    method __init__ (line 8) | def __init__(self, manager):
    method load_processes (line 11) | def load_processes(self):
    method save_processes (line 30) | def save_processes(self):
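
load_processes/save_processes appear to round-trip the managed-worker PID table through the shared config helpers (the preview confirms the load_config/save_config imports); a sketch assuming a "managed_processes" config key and a manager.processes dict, both hypothetical names:

    from ...utils.config import load_config, save_config

    class ProcessPersistence:
        def __init__(self, manager):
            self.manager = manager

        def load_processes(self):
            # Restore PIDs recorded by a previous session so already
            # running workers can be re-adopted instead of relaunched.
            config = load_config()
            self.manager.processes = config.get("managed_processes", {})

        def save_processes(self):
            # Persist the current PID table for the next startup.
            config = load_config()
            config["managed_processes"] = self.manager.processes
            save_config(config)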

FILE: workers/process/root_discovery.py
  class ComfyRootDiscovery (line 7) | class ComfyRootDiscovery:
    method _find_root_from_loaded_modules (line 10) | def _find_root_from_loaded_modules(self):
    method find_comfy_root (line 25) | def find_comfy_root(self):
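
_find_root_from_loaded_modules suggests deriving the ComfyUI root from a module that is already imported; one plausible (unconfirmed) anchor is ComfyUI's top-level server module:

    import os
    import sys

    def _find_root_from_loaded_modules(self):
        # ComfyUI's server.py sits at the repository root, so the file
        # path of the imported module points at the install directory.
        mod = sys.modules.get("server")
        if mod is not None and getattr(mod, "__file__", None):
            return os.path.dirname(os.path.abspath(mod.__file__))
        return None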

FILE: workers/process_manager.py
  class WorkerProcessManager (line 4) | class WorkerProcessManager:
    method __init__ (line 7) | def __init__(self):
    method find_comfy_root (line 15) | def find_comfy_root(self):
    method _find_windows_terminal (line 18) | def _find_windows_terminal(self):
    method build_launch_command (line 21) | def build_launch_command(self, worker_config, comfy_root):
    method launch_worker (line 24) | def launch_worker(self, worker_config, show_window=False):
    method stop_worker (line 27) | def stop_worker(self, worker_id):
    method get_managed_workers (line 30) | def get_managed_workers(self):
    method cleanup_all (line 33) | def cleanup_all(self):
    method load_processes (line 36) | def load_processes(self):
    method save_processes (line 39) | def save_processes(self):
    method _is_process_running (line 42) | def _is_process_running(self, pid):
    method _check_worker_process (line 45) | def _check_worker_process(self, worker_id, proc_info):
    method _kill_process_tree (line 48) | def _kill_process_tree(self, pid):
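
The preview of workers/process_manager.py confirms this class composes the four process/ helpers, so each method here is a thin delegation. A sketch of the shape (attribute names are assumptions):

    from .process import (
        ComfyRootDiscovery,
        LaunchCommandBuilder,
        ProcessLifecycle,
        ProcessPersistence,
    )

    class WorkerProcessManager:
        def __init__(self):
            # Compose the specialised helpers; the facade only forwards.
            self.discovery = ComfyRootDiscovery()
            self.builder = LaunchCommandBuilder()
            self.lifecycle = ProcessLifecycle(self)
            self.persistence = ProcessPersistence(self)

        def find_comfy_root(self):
            return self.discovery.find_comfy_root()

        def launch_worker(self, worker_config, show_window=False):
            return self.lifecycle.launch_worker(worker_config, show_window)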

FILE: workers/startup.py
  function auto_launch_workers (line 19) | def auto_launch_workers():
  function delayed_auto_launch (line 79) | def delayed_auto_launch():
  function async_cleanup_and_exit (line 87) | async def async_cleanup_and_exit(signum=None):
  function register_async_signals (line 114) | def register_async_signals():
  function sync_cleanup (line 150) | def sync_cleanup():
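
register_async_signals pairs with async_cleanup_and_exit in the index; a sketch of wiring POSIX signals into the event loop so worker shutdown can be awaited (loop acquisition and the Windows fallback are assumptions):

    import asyncio
    import signal

    async def async_cleanup_and_exit(signum=None):
        ...  # indexed above: stop managed workers, then exit

    def register_async_signals():
        # Route SIGINT/SIGTERM through the loop so cleanup can await
        # stopping managed workers before the master exits.
        loop = asyncio.get_event_loop()
        for sig in (signal.SIGINT, signal.SIGTERM):
            try:
                loop.add_signal_handler(
                    sig,
                    lambda s=sig: asyncio.ensure_future(async_cleanup_and_exit(s)),
                )
            except NotImplementedError:
                pass  # Windows event loops lack add_signal_handler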

FILE: workers/worker_monitor.py
  function is_process_alive (line 23) | def is_process_alive(pid):
  function monitor_and_run (line 41) | def monitor_and_run(master_pid, command):
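
The file's docstring (visible in the preview below) says the monitor watches the master process and terminates the worker when the master dies; a minimal sketch of that watchdog loop (poll interval and exit handling assumed):

    import os
    import subprocess
    import sys
    import time

    def is_process_alive(pid):
        try:
            os.kill(pid, 0)
            return True
        except OSError:
            return False

    def monitor_and_run(master_pid, command):
        # Launch the worker, then poll the master PID; when the master
        # is gone, stop the worker so it cannot outlive the session.
        worker = subprocess.Popen(command)
        while worker.poll() is None:
            if not is_process_alive(master_pid):
                worker.terminate()
                break
            time.sleep(2.0)
        sys.exit(worker.wait())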

Condensed preview — 130 files, each showing path, character count, and a content snippet (972K characters of structured content in full).
[
  {
    "path": ".github/FUNDING.yml",
    "chars": 65,
    "preview": "# These are supported funding model platforms\n\ngithub: robertvoy\n"
  },
  {
    "path": ".github/workflows/publish_action.yml",
    "chars": 545,
    "preview": "name: Publish to Comfy registry\non:\n  workflow_dispatch:\n  push:\n    branches:\n      - main\n    paths:\n      - \"pyprojec"
  },
  {
    "path": ".gitignore",
    "chars": 105,
    "preview": "bin/\nlogs/\ngpu_config.json\n__pycache__/\n**/__pycache__/\n*.py[cod]\nnode_modules/\nnpm-debug.log*\nAGENTS.md\n"
  },
  {
    "path": ".nvmrc",
    "chars": 3,
    "preview": "20\n"
  },
  {
    "path": "LICENSE",
    "chars": 11357,
    "preview": "                                 Apache License\n                           Version 2.0, January 2004\n                   "
  },
  {
    "path": "README.md",
    "chars": 10798,
    "preview": "<div align=\"center\">\r\n<img width=\"250\" src=\"https://github.com/user-attachments/assets/533bb98d-0c4a-499f-9bca-5c937e361"
  },
  {
    "path": "__init__.py",
    "chars": 975,
    "preview": "# Import everything needed from the main module\nfrom .distributed import (\n    NODE_CLASS_MAPPINGS as DISTRIBUTED_CLASS_"
  },
  {
    "path": "api/__init__.py",
    "chars": 205,
    "preview": "from . import config_routes  # noqa: F401\nfrom . import tunnel_routes  # noqa: F401\nfrom . import worker_routes  # noqa:"
  },
  {
    "path": "api/config_routes.py",
    "chars": 9586,
    "preview": "import json\nfrom contextlib import asynccontextmanager\n\nfrom aiohttp import web\nimport server\n\ntry:\n    from ..utils.con"
  },
  {
    "path": "api/job_routes.py",
    "chars": 12951,
    "preview": "import json\nimport asyncio\nimport io\nimport os\nimport base64\nimport binascii\nimport time\n\nfrom aiohttp import web\nimport"
  },
  {
    "path": "api/orchestration/__init__.py",
    "chars": 226,
    "preview": "# Orchestration helpers split out from queue_orchestration.py:\n# - prompt_transform.py: graph pruning + hidden input ove"
  },
  {
    "path": "api/orchestration/dispatch.py",
    "chars": 9203,
    "preview": "import asyncio\nimport json\nimport uuid\n\nimport aiohttp\n\nfrom ...utils.logging import debug_log, log\nfrom ...utils.networ"
  },
  {
    "path": "api/orchestration/media_sync.py",
    "chars": 9739,
    "preview": "import asyncio\nimport hashlib\nimport mimetypes\nimport os\nimport re\n\nimport aiohttp\n\nfrom ...utils.logging import debug_l"
  },
  {
    "path": "api/orchestration/prompt_transform.py",
    "chars": 12291,
    "preview": "import json\nfrom collections import deque\n\nfrom ...utils.logging import debug_log\n\n\nclass PromptIndex:\n    \"\"\"Cache prom"
  },
  {
    "path": "api/queue_orchestration.py",
    "chars": 14766,
    "preview": "import asyncio\nimport time\nimport uuid\n\nimport server\n\nfrom ..utils.async_helpers import queue_prompt_payload\nfrom ..uti"
  },
  {
    "path": "api/queue_request.py",
    "chars": 3106,
    "preview": "from dataclasses import dataclass\nfrom typing import Any, Dict, List, Optional\n\n\n@dataclass(frozen=True)\nclass QueueRequ"
  },
  {
    "path": "api/schemas.py",
    "chars": 1752,
    "preview": "def require_fields(data: dict, *fields) -> list[str]:\n    \"\"\"Return field names that are missing or empty in a JSON obje"
  },
  {
    "path": "api/tunnel_routes.py",
    "chars": 1878,
    "preview": "from aiohttp import web\nimport server\n\nfrom ..utils.cloudflare import cloudflare_tunnel_manager\nfrom ..utils.config impo"
  },
  {
    "path": "api/usdu_routes.py",
    "chars": 10864,
    "preview": "import asyncio\nimport io\nimport time\n\nfrom aiohttp import web\nfrom PIL import Image\nimport server\n\nfrom ..upscale.job_mo"
  },
  {
    "path": "api/worker_routes.py",
    "chars": 25809,
    "preview": "import json\nimport asyncio\nimport os\nimport time\nimport platform\nimport subprocess\nimport socket\n\nimport torch\nimport ai"
  },
  {
    "path": "conftest.py",
    "chars": 1007,
    "preview": "# conftest.py — project-level pytest configuration.\n#\n# Problem: custom_nodes/ComfyUI-Distributed/__init__.py uses relat"
  },
  {
    "path": "distributed.py",
    "chars": 1406,
    "preview": "\"\"\"\nComfyUI-Distributed: thin entry point.\nAll implementation lives in workers/, nodes/, api/.\n\"\"\"\nimport atexit\nimport "
  },
  {
    "path": "docs/comfyui-distributed-api.md",
    "chars": 7872,
    "preview": "# ComfyUI-Distributed API (Experimental)\n\nThis document describes the **public HTTP API** added to ComfyUI-Distributed t"
  },
  {
    "path": "docs/model-download-script.md",
    "chars": 2864,
    "preview": "## Automating ComfyUI Model Downloads\n> This guide will walk you through creating a shell script to automatically downlo"
  },
  {
    "path": "docs/video-upscaler-runpod-preset.md",
    "chars": 1265,
    "preview": "![Clipboard Image](https://github.com/user-attachments/assets/5dc5224f-3f47-442c-b94a-116afeb28132)\n\n**Accelerated Creat"
  },
  {
    "path": "docs/worker-setup-guides.md",
    "chars": 8624,
    "preview": "## Worker Setup Guide\n\n**Master**: The main ComfyUI instance that coordinates and distributes work. This is where you lo"
  },
  {
    "path": "nodes/__init__.py",
    "chars": 998,
    "preview": "from .utilities import (\n    DistributedSeed,\n    DistributedModelName,\n    DistributedValue,\n    ImageBatchDivider,\n   "
  },
  {
    "path": "nodes/collector.py",
    "chars": 22742,
    "preview": "import torch\nimport io\nimport json\nimport asyncio\nimport time\nimport base64\n\nimport aiohttp\nimport server as _server\nimp"
  },
  {
    "path": "nodes/distributed_upscale.py",
    "chars": 14124,
    "preview": "import json\nimport math\nfrom functools import wraps\n\nimport comfy.samplers\n\nfrom ..utils.logging import debug_log, log\nf"
  },
  {
    "path": "nodes/utilities.py",
    "chars": 11692,
    "preview": "import torch\nimport json\n\nfrom ..utils.logging import debug_log, log\n\n\ndef _chunk_bounds(total_items: int, n_splits: int"
  },
  {
    "path": "package.json",
    "chars": 261,
    "preview": "{\n  \"name\": \"comfyui-distributed-web-tests\",\n  \"private\": true,\n  \"type\": \"module\",\n  \"scripts\": {\n    \"test:web\": \"bash"
  },
  {
    "path": "pyproject.toml",
    "chars": 603,
    "preview": "[project]\nname = \"ComfyUI-Distributed\"\ndescription = \"ComfyUI extension that enables multi-GPU processing locally, remot"
  },
  {
    "path": "scripts/test-web.sh",
    "chars": 743,
    "preview": "#!/usr/bin/env bash\nset -euo pipefail\n\nSCRIPT_DIR=\"$(cd -- \"$(dirname -- \"${BASH_SOURCE[0]}\")\" && pwd)\"\nREPO_ROOT=\"$(cd "
  },
  {
    "path": "tests/api/test_config_routes.py",
    "chars": 5487,
    "preview": "import copy\nimport importlib.util\nimport sys\nimport types\nimport unittest\nfrom pathlib import Path\nfrom unittest.mock im"
  },
  {
    "path": "tests/api/test_distributed_queue.py",
    "chars": 12491,
    "preview": "import importlib.util\nimport sys\nimport types\nimport unittest\nimport asyncio\nimport base64\nfrom dataclasses import datac"
  },
  {
    "path": "tests/api/test_media_sync.py",
    "chars": 10922,
    "preview": "import importlib.util\nimport sys\nimport types\nimport unittest\nfrom pathlib import Path\n\n\ndef _load_media_sync_module():\n"
  },
  {
    "path": "tests/api/test_usdu_routes.py",
    "chars": 11163,
    "preview": "import asyncio\nimport importlib.util\nimport io\nimport sys\nimport types\nimport unittest\nfrom dataclasses import dataclass"
  },
  {
    "path": "tests/api/test_worker_routes.py",
    "chars": 14589,
    "preview": "import importlib.util\nimport os\nimport sys\nimport tempfile\nimport types\nimport unittest\nfrom collections import deque\nfr"
  },
  {
    "path": "tests/conftest.py",
    "chars": 152,
    "preview": "# This conftest.py marks the tests/ directory as the pytest collection root,\n# preventing pytest from traversing into th"
  },
  {
    "path": "tests/test_async_helpers.py",
    "chars": 3196,
    "preview": "import importlib.util\nimport sys\nimport types\nimport unittest\nfrom pathlib import Path\n\n\nclass _PromptQueue:\n    def __i"
  },
  {
    "path": "tests/test_batch_dividers.py",
    "chars": 3477,
    "preview": "import importlib.util\nimport sys\nimport types\nimport unittest\nfrom pathlib import Path\n\nimport torch\n\n\ndef _load_utiliti"
  },
  {
    "path": "tests/test_config.py",
    "chars": 12092,
    "preview": "import importlib.util\nimport json\nimport os\nimport sys\nimport tempfile\nimport types\nimport unittest\nfrom pathlib import "
  },
  {
    "path": "tests/test_detection.py",
    "chars": 7952,
    "preview": "import importlib.util\nimport sys\nimport types\nimport unittest\nfrom pathlib import Path\nfrom unittest.mock import patch\n\n"
  },
  {
    "path": "tests/test_dispatch_selection.py",
    "chars": 9227,
    "preview": "import asyncio\nimport importlib.util\nimport sys\nimport types\nimport unittest\nfrom pathlib import Path\nfrom unittest.mock"
  },
  {
    "path": "tests/test_distributed_value.py",
    "chars": 6817,
    "preview": "import json\nimport unittest\n\n\nclass DistributedValueTests(unittest.TestCase):\n    \"\"\"Unit tests for the DistributedValue"
  },
  {
    "path": "tests/test_job_timeout.py",
    "chars": 7698,
    "preview": "import asyncio\nimport importlib.util\nimport sys\nimport time\nimport types\nimport unittest\nfrom dataclasses import datacla"
  },
  {
    "path": "tests/test_network_helpers.py",
    "chars": 5691,
    "preview": "import importlib.util\nimport sys\nimport types\nimport unittest\nfrom pathlib import Path\n\n\ndef _load_network_module():\n   "
  },
  {
    "path": "tests/test_payload_parsers.py",
    "chars": 7218,
    "preview": "import importlib.util\nimport io\nimport json\nimport sys\nimport types\nimport unittest\nfrom pathlib import Path\n\ntry:\n    f"
  },
  {
    "path": "tests/test_prompt_transform.py",
    "chars": 21142,
    "preview": "import importlib.util\nimport json\nimport sys\nimport types\nimport unittest\nfrom pathlib import Path\n\n\ndef _load_prompt_tr"
  },
  {
    "path": "tests/test_queue_request.py",
    "chars": 5953,
    "preview": "import importlib.util\nimport unittest\nfrom pathlib import Path\n\n\ndef _load_queue_request_module():\n    module_path = Pat"
  },
  {
    "path": "tests/test_static_mode.py",
    "chars": 8776,
    "preview": "import asyncio\nimport importlib.util\nimport sys\nimport types\nimport unittest\nfrom pathlib import Path\n\nimport torch\n\n\nde"
  },
  {
    "path": "tests/test_worker_process_runtime.py",
    "chars": 5351,
    "preview": "import importlib.util\nimport sys\nimport types\nimport unittest\nfrom argparse import Namespace\nfrom pathlib import Path\nfr"
  },
  {
    "path": "upscale/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "upscale/conditioning.py",
    "chars": 1434,
    "preview": "import copy\n\n\ndef clone_control_chain(control, clone_hint=True):\n    \"\"\"Shallow copy the ControlNet chain, optionally cl"
  },
  {
    "path": "upscale/job_models.py",
    "chars": 1609,
    "preview": "from dataclasses import dataclass, field\nimport asyncio\nimport time\n\n\nclass BaseJobState:\n    \"\"\"Marker base class for t"
  },
  {
    "path": "upscale/job_state.py",
    "chars": 5792,
    "preview": "import asyncio\n\nfrom ..utils.logging import debug_log\nfrom .job_store import ensure_tile_jobs_initialized\nfrom .job_time"
  },
  {
    "path": "upscale/job_store.py",
    "chars": 6883,
    "preview": "import asyncio\nimport os\nimport time\nfrom typing import List, Optional\n\nimport server\n\nfrom ..utils.logging import debug"
  },
  {
    "path": "upscale/job_timeout.py",
    "chars": 6650,
    "preview": "import time\n\nfrom ..utils.config import load_config\nfrom ..utils.constants import HEARTBEAT_TIMEOUT\nfrom ..utils.logging"
  },
  {
    "path": "upscale/modes/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "upscale/modes/dynamic.py",
    "chars": 15123,
    "preview": "import asyncio, torch\nfrom PIL import Image\nimport comfy.model_management\nfrom ...utils.logging import debug_log, log\nfr"
  },
  {
    "path": "upscale/modes/single_gpu.py",
    "chars": 3395,
    "preview": "import math, torch\nfrom PIL import Image\nfrom ...utils.logging import debug_log, log\nfrom ...utils.image import tensor_t"
  },
  {
    "path": "upscale/modes/static.py",
    "chars": 23116,
    "preview": "import asyncio, time, torch\nfrom PIL import Image\nimport comfy.model_management\nfrom ...utils.logging import debug_log, "
  },
  {
    "path": "upscale/payload_parsers.py",
    "chars": 2025,
    "preview": "import io\nimport json\n\nfrom PIL import Image\n\n\ndef _parse_tiles_from_form(data):\n    \"\"\"Parse tiles submitted via multip"
  },
  {
    "path": "upscale/result_collector.py",
    "chars": 10523,
    "preview": "import asyncio, time\nimport comfy.model_management\nimport server\nfrom ..utils.constants import DYNAMIC_MODE_MAX_POLL_TIM"
  },
  {
    "path": "upscale/tile_ops.py",
    "chars": 21248,
    "preview": "import math, torch\nfrom contextlib import nullcontext\nfrom PIL import Image, ImageFilter, ImageDraw\nfrom typing import L"
  },
  {
    "path": "upscale/worker_comms.py",
    "chars": 11636,
    "preview": "import asyncio, io, json, time\nimport aiohttp\nfrom PIL import Image\nfrom ..utils.logging import debug_log, log\nfrom ..ut"
  },
  {
    "path": "utils/__init__.py",
    "chars": 96,
    "preview": "\"\"\"\nUtility modules for ComfyUI-Distributed extension.\n\"\"\"\n\n# Make utils importable as a package"
  },
  {
    "path": "utils/async_helpers.py",
    "chars": 4747,
    "preview": "\"\"\"\nAsync helper utilities for ComfyUI-Distributed.\n\"\"\"\nimport asyncio\nimport threading\nimport time\nimport uuid\nimport e"
  },
  {
    "path": "utils/audio_payload.py",
    "chars": 3595,
    "preview": "import base64\nimport binascii\nimport os\n\nimport numpy as np\nimport torch\n\nfrom .image import ensure_contiguous\n\n\nMAX_AUD"
  },
  {
    "path": "utils/cloudflare/__init__.py",
    "chars": 167,
    "preview": "from .tunnel import CloudflareTunnelManager\n\ncloudflare_tunnel_manager = CloudflareTunnelManager()\n\n__all__ = [\"Cloudfla"
  },
  {
    "path": "utils/cloudflare/binary.py",
    "chars": 2527,
    "preview": "\"\"\"Cloudflared binary discovery and download helpers.\"\"\"\n\nimport os\nimport platform\nimport shutil\nimport stat\nfrom urlli"
  },
  {
    "path": "utils/cloudflare/process_reader.py",
    "chars": 3081,
    "preview": "\"\"\"Background cloudflared process output reader.\"\"\"\n\nimport asyncio\nimport re\nimport threading\n\nfrom ..constants import "
  },
  {
    "path": "utils/cloudflare/state.py",
    "chars": 2347,
    "preview": "\"\"\"Cloudflare tunnel state persistence helpers.\"\"\"\n\nfrom ..config import load_config, save_config\nfrom ..network import "
  },
  {
    "path": "utils/cloudflare/tunnel.py",
    "chars": 7448,
    "preview": "\"\"\"Cloudflare tunnel lifecycle manager.\"\"\"\n\nimport asyncio\nimport os\nimport shutil\nimport signal\nimport subprocess\nimpor"
  },
  {
    "path": "utils/config.py",
    "chars": 5342,
    "preview": "\"\"\"\nConfiguration management for ComfyUI-Distributed.\n\"\"\"\nimport asyncio\nimport os\nimport json\nfrom contextlib import as"
  },
  {
    "path": "utils/constants.py",
    "chars": 1873,
    "preview": "\"\"\"\nShared constants for ComfyUI-Distributed.\n\"\"\"\nimport os\n\n# Timeouts (in seconds)\nWORKER_JOB_TIMEOUT = 30.0\nTILE_COLL"
  },
  {
    "path": "utils/crop_model_patch.py",
    "chars": 4179,
    "preview": "from contextlib import contextmanager\n\nimport torch\n\nfrom .logging import debug_log\nfrom .usdu_utils import resize_regio"
  },
  {
    "path": "utils/exceptions.py",
    "chars": 1083,
    "preview": "\"\"\"Custom exceptions for ComfyUI-Distributed.\"\"\"\n\n\nclass DistributedError(Exception):\n    \"\"\"Base exception for all Comf"
  },
  {
    "path": "utils/image.py",
    "chars": 850,
    "preview": "\"\"\"\nImage and tensor conversion utilities for ComfyUI-Distributed.\n\"\"\"\nimport torch\nimport numpy as np\nfrom PIL import I"
  },
  {
    "path": "utils/logging.py",
    "chars": 1182,
    "preview": "\"\"\"\nShared logging utilities for ComfyUI-Distributed.\n\"\"\"\nimport os\nimport json\nimport time\n\n# Config file is in parent "
  },
  {
    "path": "utils/network.py",
    "chars": 7585,
    "preview": "\"\"\"\nNetwork and API utilities for ComfyUI-Distributed.\n\"\"\"\nimport asyncio\nimport aiohttp\nimport re\nimport server\nfrom ai"
  },
  {
    "path": "utils/process.py",
    "chars": 1093,
    "preview": "\"\"\"\nProcess management utilities for ComfyUI-Distributed.\n\"\"\"\nimport os\nimport subprocess\nimport platform\nimport signal\n"
  },
  {
    "path": "utils/trace_logger.py",
    "chars": 394,
    "preview": "from .logging import debug_log, log\n\n\ndef trace_prefix(trace_execution_id: str) -> str:\n    return f\"[Distributed][exec:"
  },
  {
    "path": "utils/usdu_managment.py",
    "chars": 1638,
    "preview": "\"\"\"Backward-compatibility shim for USDU helpers.\n\nRoute handlers and job logic now live in:\n- upscale.job_store\n- upscal"
  },
  {
    "path": "utils/usdu_utils.py",
    "chars": 21498,
    "preview": "import numpy as np\nfrom PIL import Image, ImageFilter\nimport torch\nimport torch.nn.functional as F\nfrom torchvision.tran"
  },
  {
    "path": "vitest.config.js",
    "chars": 162,
    "preview": "import { defineConfig } from \"vitest/config\";\n\nexport default defineConfig({\n  test: {\n    include: [\"web/tests/**/*.tes"
  },
  {
    "path": "web/apiClient.js",
    "chars": 10063,
    "preview": "import { TIMEOUTS } from './constants.js';\nimport { normalizeWorkerUrl } from './urlUtils.js';\n\nexport function createAp"
  },
  {
    "path": "web/constants.js",
    "chars": 8658,
    "preview": "export const BUTTON_STYLES = {\n    // Base styles with unified padding\n    base: \"width: 100%; padding: 4px 14px; color:"
  },
  {
    "path": "web/distributed.css",
    "chars": 11523,
    "preview": ":root {\n    --btn-stop: #7c4a4a;\n    --btn-launch: #4a7c4a;\n    --btn-log: #685434;\n    --btn-working: #666;\n    --btn-s"
  },
  {
    "path": "web/distributedValue.js",
    "chars": 16384,
    "preview": "import { app } from \"/scripts/app.js\";\nimport { ENDPOINTS } from \"./constants.js\";\n\nconst NODE_CLASS = \"DistributedValue"
  },
  {
    "path": "web/executionUtils.js",
    "chars": 7149,
    "preview": "import { api } from \"../../scripts/api.js\";\nimport { applyProbeResultToWorkerDot, findNodesByClass } from './workerUtils"
  },
  {
    "path": "web/image_batch_divider.js",
    "chars": 3332,
    "preview": "import { app } from \"/scripts/app.js\";\n\n// Configuration for each batch divider node type\nconst BATCH_DIVIDER_NODES = {\n"
  },
  {
    "path": "web/main.js",
    "chars": 16003,
    "preview": "import { app } from \"../../scripts/app.js\";\nimport { api } from \"../../scripts/api.js\";\nimport { DistributedUI } from '."
  },
  {
    "path": "web/masterDetection.js",
    "chars": 6321,
    "preview": "import { generateUUID } from './constants.js';\n\nexport async function detectMasterIP(extension) {\n    try {\n        cons"
  },
  {
    "path": "web/sidebar/actionsSection.js",
    "chars": 1316,
    "preview": "import { BUTTON_STYLES } from \"../constants.js\";\n\nexport function renderActionsSection(extension) {\n    const actionsSec"
  },
  {
    "path": "web/sidebar/settingsSection.js",
    "chars": 5353,
    "preview": "import { createCheckboxSetting, createNumberSetting } from \"../ui/buttonHelpers.js\";\n\nexport function renderSettingsSect"
  },
  {
    "path": "web/sidebar/workersSection.js",
    "chars": 1123,
    "preview": "import { addNewWorker } from \"../workerSettings.js\";\n\nexport function renderWorkersSection(extension) {\n    const worker"
  },
  {
    "path": "web/sidebarRenderer.js",
    "chars": 5843,
    "preview": "import { STATUS_COLORS } from './constants.js';\nimport { checkAllWorkerStatuses, loadManagedWorkers, updateWorkerControl"
  },
  {
    "path": "web/stateManager.js",
    "chars": 1815,
    "preview": "export function createStateManager() {\n    const state = {\n        workers: new Map(), // Unified worker state: { status"
  },
  {
    "path": "web/tests/apiClient.test.js",
    "chars": 3303,
    "preview": "import { afterEach, beforeEach, describe, expect, it, vi } from \"vitest\";\n\nimport { createApiClient } from \"../apiClient"
  },
  {
    "path": "web/tests/executionUtils.test.js",
    "chars": 531,
    "preview": "import { describe, expect, it } from \"vitest\";\n\nimport { buildWorkerWebSocketUrl } from \"../urlUtils.js\";\n\n\ndescribe(\"ex"
  },
  {
    "path": "web/tests/urlUtils.test.js",
    "chars": 8622,
    "preview": "import { afterEach, beforeEach, describe, expect, it } from \"vitest\";\n\nimport {\n    buildWorkerUrl,\n    buildWorkerWebSo"
  },
  {
    "path": "web/tests/workerLifecycle.test.js",
    "chars": 1829,
    "preview": "import { afterEach, beforeEach, describe, expect, it } from \"vitest\";\n\nimport { getWorkerUrl } from \"../workerLifecycle."
  },
  {
    "path": "web/tests/workerSettings.test.js",
    "chars": 4111,
    "preview": "import { afterEach, beforeEach, describe, expect, it, vi } from \"vitest\";\n\nimport { addNewWorker, isRemoteWorker } from "
  },
  {
    "path": "web/tunnelManager.js",
    "chars": 5681,
    "preview": "export function updateTunnelUIElements(extension, isRunning, isStarting) {\n    void isRunning;\n    void isStarting;\n\n   "
  },
  {
    "path": "web/ui/buttonHelpers.js",
    "chars": 1834,
    "preview": "export function createButtonHelper(ui, text, onClick, style) {\n    return ui.createButton(text, onClick, style);\n}\n\nexpo"
  },
  {
    "path": "web/ui/cloudflareWarning.js",
    "chars": 4074,
    "preview": "export function showCloudflareWarning(extension, masterHost) {\n    const existingBanner = document.getElementById('cloud"
  },
  {
    "path": "web/ui/entityCard.js",
    "chars": 4886,
    "preview": "import { updateWorkerControls, toggleWorkerExpanded } from \"../workerLifecycle.js\";\nimport { isRemoteWorker } from \"../w"
  },
  {
    "path": "web/ui/logModal.js",
    "chars": 6191,
    "preview": "import { TIMEOUTS } from '../constants.js';\n\nfunction formatFileSize(bytes) {\n    if (bytes < 1024) return bytes + ' B';"
  },
  {
    "path": "web/ui/settingsForm.js",
    "chars": 6900,
    "preview": "import { BUTTON_STYLES } from '../constants.js';\nimport { cancelWorkerSettings, deleteWorker, isRemoteWorker, saveWorker"
  },
  {
    "path": "web/ui.js",
    "chars": 32965,
    "preview": "import { BUTTON_STYLES, UI_STYLES, STATUS_COLORS, UI_COLORS, TIMEOUTS } from './constants.js';\nimport { createButtonHelp"
  },
  {
    "path": "web/urlUtils.js",
    "chars": 4005,
    "preview": "export function normalizeWorkerUrl(rawUrl) {\n    if (!rawUrl || typeof rawUrl !== \"string\") {\n        return \"\";\n    }\n\n"
  },
  {
    "path": "web/workerLifecycle.js",
    "chars": 20565,
    "preview": "import { TIMEOUTS, STATUS_COLORS } from './constants.js';\nimport { buildWorkerUrl, normalizeWorkerUrl } from './urlUtils"
  },
  {
    "path": "web/workerSettings.js",
    "chars": 13184,
    "preview": "import { renderSidebarContent } from './sidebarRenderer.js';\nimport { generateUUID } from './constants.js';\nimport { par"
  },
  {
    "path": "web/workerUtils.js",
    "chars": 4371,
    "preview": "import { TIMEOUTS, ENDPOINTS } from './constants.js';\nimport { checkAllWorkerStatuses, getWorkerUrl } from './workerLife"
  },
  {
    "path": "workers/__init__.py",
    "chars": 326,
    "preview": "from .process_manager import WorkerProcessManager\n\n_worker_manager: WorkerProcessManager | None = None\n\n\ndef get_worker_"
  },
  {
    "path": "workers/detection.py",
    "chars": 2621,
    "preview": "import os\nimport platform\nimport uuid\n\nimport aiohttp\n\nfrom ..utils.network import normalize_host, get_client_session\nfr"
  },
  {
    "path": "workers/process/__init__.py",
    "chars": 299,
    "preview": "from .launch_builder import LaunchCommandBuilder\nfrom .lifecycle import ProcessLifecycle\nfrom .persistence import Proces"
  },
  {
    "path": "workers/process/launch_builder.py",
    "chars": 5817,
    "preview": "import glob\nimport os\nimport shlex\nimport shutil\n\nfrom ...utils.logging import debug_log\nfrom ...utils.process import ge"
  },
  {
    "path": "workers/process/lifecycle.py",
    "chars": 12311,
    "preview": "import os\nimport platform\nimport signal\nimport subprocess\nimport time\n\nfrom ...utils.config import load_config, save_con"
  },
  {
    "path": "workers/process/persistence.py",
    "chars": 1893,
    "preview": "from ...utils.config import load_config, save_config\nfrom ...utils.logging import debug_log\n\n\nclass ProcessPersistence:\n"
  },
  {
    "path": "workers/process/root_discovery.py",
    "chars": 3400,
    "preview": "import os\nimport sys\n\nfrom ...utils.logging import debug_log, log\n\n\nclass ComfyRootDiscovery:\n    \"\"\"Resolve the ComfyUI"
  },
  {
    "path": "workers/process_manager.py",
    "chars": 1717,
    "preview": "from .process import ComfyRootDiscovery, LaunchCommandBuilder, ProcessLifecycle, ProcessPersistence\n\n\nclass WorkerProces"
  },
  {
    "path": "workers/startup.py",
    "chars": 7543,
    "preview": "import asyncio\nimport threading\nimport time\nimport atexit\nimport signal\nimport sys\nimport platform\n\nimport server\n\nfrom "
  },
  {
    "path": "workers/worker_monitor.py",
    "chars": 4859,
    "preview": "#!/usr/bin/env python3\n\"\"\"\nWorker process monitor - monitors if the master process is still alive\nand terminates the wor"
  },
  {
    "path": "workflows/distributed-txt2img.json",
    "chars": 10329,
    "preview": "{\n  \"id\": \"c9a4d248-9b83-408f-b45e-3ef61dd56ef5\",\n  \"revision\": 0,\n  \"last_node_id\": 13,\n  \"last_link_id\": 19,\n  \"nodes\""
  },
  {
    "path": "workflows/distributed-upscale-video.json",
    "chars": 21325,
    "preview": "{\n  \"id\": \"707da2be-c7d6-481f-b3b0-3ec8207924a1\",\n  \"revision\": 0,\n  \"last_node_id\": 76,\n  \"last_link_id\": 122,\n  \"nodes"
  },
  {
    "path": "workflows/distributed-upscale.json",
    "chars": 21553,
    "preview": "{\n  \"id\": \"817bbfe2-06b8-44c8-8c14-b82b63b335d5\",\n  \"revision\": 0,\n  \"last_node_id\": 137,\n  \"last_link_id\": 211,\n  \"node"
  },
  {
    "path": "workflows/distributed-wan-2.2_14b_t2v.json",
    "chars": 22718,
    "preview": "{\n  \"id\": \"8968d33f-abd1-4e8a-8e55-5d87a104afb8\",\n  \"revision\": 0,\n  \"last_node_id\": 92,\n  \"last_link_id\": 187,\n  \"nodes"
  },
  {
    "path": "workflows/distributed-wan.json",
    "chars": 32114,
    "preview": "{\n  \"id\": \"00000000-0000-0000-0000-000000000000\",\n  \"revision\": 0,\n  \"last_node_id\": 234,\n  \"last_link_id\": 79,\n  \"nodes"
  }
]
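
The array above is plain JSON with path, chars, and preview keys, so downstream tooling can filter it directly; a minimal sketch, assuming the array has been saved to a file named preview.json (hypothetical):

    import json

    with open("preview.json") as f:
        entries = json.load(f)

    # List the ten largest files: a quick way to decide which sources
    # to fetch in full from the repository.
    for entry in sorted(entries, key=lambda e: e["chars"], reverse=True)[:10]:
        print(f"{entry['chars']:>7}  {entry['path']}")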

About this extraction

This page contains the full source code of the robertvoy/ComfyUI-Distributed GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 130 files (895.4 KB), approximately 211.4k tokens, and a symbol index with 919 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.