Full Code of Comfy-Org/ComfyUI for AI

Note: this is a truncated preview (20,923K characters in the full export).
Repository: Comfy-Org/ComfyUI
Branch: master
Commit: 589228e671e8
Files: 718
Total size: 23.3 MB

Directory structure:
gitextract_r6aji11h/

├── .ci/
│   ├── update_windows/
│   │   ├── update.py
│   │   ├── update_comfyui.bat
│   │   └── update_comfyui_stable.bat
│   ├── windows_amd_base_files/
│   │   ├── README_VERY_IMPORTANT.txt
│   │   ├── run_amd_gpu.bat
│   │   └── run_amd_gpu_disable_smart_memory.bat
│   ├── windows_nightly_base_files/
│   │   └── run_nvidia_gpu_fast.bat
│   └── windows_nvidia_base_files/
│       ├── README_VERY_IMPORTANT.txt
│       ├── advanced/
│       │   └── run_nvidia_gpu_disable_api_nodes.bat
│       ├── run_cpu.bat
│       ├── run_nvidia_gpu.bat
│       └── run_nvidia_gpu_fast_fp16_accumulation.bat
├── .coderabbit.yaml
├── .gitattributes
├── .github/
│   ├── ISSUE_TEMPLATE/
│   │   ├── bug-report.yml
│   │   ├── config.yml
│   │   ├── feature-request.yml
│   │   └── user-support.yml
│   ├── PULL_REQUEST_TEMPLATE/
│   │   └── api-node.md
│   ├── scripts/
│   │   └── check-ai-co-authors.sh
│   └── workflows/
│       ├── api-node-template.yml
│       ├── check-ai-co-authors.yml
│       ├── check-line-endings.yml
│       ├── pullrequest-ci-run.yml
│       ├── release-stable-all.yml
│       ├── release-webhook.yml
│       ├── ruff.yml
│       ├── stable-release.yml
│       ├── stale-issues.yml
│       ├── test-build.yml
│       ├── test-ci.yml
│       ├── test-execution.yml
│       ├── test-launch.yml
│       ├── test-unit.yml
│       ├── update-api-stubs.yml
│       ├── update-ci-container.yml
│       ├── update-version.yml
│       ├── windows_release_dependencies.yml
│       ├── windows_release_dependencies_manual.yml
│       ├── windows_release_nightly_pytorch.yml
│       └── windows_release_package.yml
├── .gitignore
├── CODEOWNERS
├── CONTRIBUTING.md
├── LICENSE
├── QUANTIZATION.md
├── README.md
├── alembic.ini
├── alembic_db/
│   ├── README.md
│   ├── env.py
│   ├── script.py.mako
│   └── versions/
│       ├── 0001_assets.py
│       ├── 0002_merge_to_asset_references.py
│       └── 0003_add_metadata_job_id.py
├── api_server/
│   ├── __init__.py
│   ├── routes/
│   │   ├── __init__.py
│   │   └── internal/
│   │       ├── README.md
│   │       ├── __init__.py
│   │       └── internal_routes.py
│   ├── services/
│   │   ├── __init__.py
│   │   └── terminal_service.py
│   └── utils/
│       └── file_operations.py
├── app/
│   ├── __init__.py
│   ├── app_settings.py
│   ├── assets/
│   │   ├── api/
│   │   │   ├── routes.py
│   │   │   ├── schemas_in.py
│   │   │   ├── schemas_out.py
│   │   │   └── upload.py
│   │   ├── database/
│   │   │   ├── models.py
│   │   │   └── queries/
│   │   │       ├── __init__.py
│   │   │       ├── asset.py
│   │   │       ├── asset_reference.py
│   │   │       ├── common.py
│   │   │       └── tags.py
│   │   ├── helpers.py
│   │   ├── scanner.py
│   │   ├── seeder.py
│   │   └── services/
│   │       ├── __init__.py
│   │       ├── asset_management.py
│   │       ├── bulk_ingest.py
│   │       ├── file_utils.py
│   │       ├── hashing.py
│   │       ├── ingest.py
│   │       ├── metadata_extract.py
│   │       ├── path_utils.py
│   │       ├── schemas.py
│   │       └── tagging.py
│   ├── custom_node_manager.py
│   ├── database/
│   │   ├── db.py
│   │   └── models.py
│   ├── frontend_management.py
│   ├── logger.py
│   ├── model_manager.py
│   ├── node_replace_manager.py
│   ├── subgraph_manager.py
│   └── user_manager.py
├── blueprints/
│   ├── .glsl/
│   │   ├── Brightness_and_Contrast_1.frag
│   │   ├── Chromatic_Aberration_16.frag
│   │   ├── Color_Adjustment_15.frag
│   │   ├── Edge-Preserving_Blur_128.frag
│   │   ├── Film_Grain_15.frag
│   │   ├── Glow_30.frag
│   │   ├── Hue_and_Saturation_1.frag
│   │   ├── Image_Blur_1.frag
│   │   ├── Image_Channels_23.frag
│   │   ├── Image_Levels_1.frag
│   │   ├── README.md
│   │   ├── Sharpen_23.frag
│   │   ├── Unsharp_Mask_26.frag
│   │   └── update_blueprints.py
│   ├── Brightness and Contrast.json
│   ├── Canny to Image (Z-Image-Turbo).json
│   ├── Canny to Video (LTX 2.0).json
│   ├── Chromatic Aberration.json
│   ├── Color Adjustment.json
│   ├── Depth to Image (Z-Image-Turbo).json
│   ├── Depth to Video (ltx 2.0).json
│   ├── Edge-Preserving Blur.json
│   ├── Film Grain.json
│   ├── Glow.json
│   ├── Hue and Saturation.json
│   ├── Image Blur.json
│   ├── Image Captioning (gemini).json
│   ├── Image Channels.json
│   ├── Image Edit (Flux.2 Klein 4B).json
│   ├── Image Edit (Qwen 2511).json
│   ├── Image Inpainting (Qwen-image).json
│   ├── Image Levels.json
│   ├── Image Outpainting (Qwen-Image).json
│   ├── Image Upscale(Z-image-Turbo).json
│   ├── Image to Depth Map (Lotus).json
│   ├── Image to Layers(Qwen-Image Layered).json
│   ├── Image to Model (Hunyuan3d 2.1).json
│   ├── Image to Video (Wan 2.2).json
│   ├── Pose to Image (Z-Image-Turbo).json
│   ├── Pose to Video (LTX 2.0).json
│   ├── Prompt Enhance.json
│   ├── Sharpen.json
│   ├── Text to Audio (ACE-Step 1.5).json
│   ├── Text to Image (Z-Image-Turbo).json
│   ├── Text to Video (Wan 2.2).json
│   ├── Unsharp Mask.json
│   ├── Video Captioning (Gemini).json
│   ├── Video Inpaint(Wan2.1 VACE).json
│   ├── Video Stitch.json
│   ├── Video Upscale(GAN x4).json
│   └── put_blueprints_here
├── comfy/
│   ├── audio_encoders/
│   │   ├── audio_encoders.py
│   │   ├── wav2vec2.py
│   │   └── whisper.py
│   ├── cldm/
│   │   ├── cldm.py
│   │   ├── control_types.py
│   │   ├── dit_embedder.py
│   │   └── mmdit.py
│   ├── cli_args.py
│   ├── clip_config_bigg.json
│   ├── clip_model.py
│   ├── clip_vision.py
│   ├── clip_vision_config_g.json
│   ├── clip_vision_config_h.json
│   ├── clip_vision_config_vitl.json
│   ├── clip_vision_config_vitl_336.json
│   ├── clip_vision_config_vitl_336_llava.json
│   ├── clip_vision_siglip2_base_naflex.json
│   ├── clip_vision_siglip_384.json
│   ├── clip_vision_siglip_512.json
│   ├── comfy_types/
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── examples/
│   │   │   └── example_nodes.py
│   │   └── node_typing.py
│   ├── conds.py
│   ├── context_windows.py
│   ├── controlnet.py
│   ├── diffusers_convert.py
│   ├── diffusers_load.py
│   ├── extra_samplers/
│   │   └── uni_pc.py
│   ├── float.py
│   ├── gligen.py
│   ├── hooks.py
│   ├── image_encoders/
│   │   ├── dino2.py
│   │   ├── dino2_giant.json
│   │   └── dino2_large.json
│   ├── k_diffusion/
│   │   ├── deis.py
│   │   ├── sa_solver.py
│   │   ├── sampling.py
│   │   └── utils.py
│   ├── latent_formats.py
│   ├── ldm/
│   │   ├── ace/
│   │   │   ├── ace_step15.py
│   │   │   ├── attention.py
│   │   │   ├── lyric_encoder.py
│   │   │   ├── model.py
│   │   │   └── vae/
│   │   │       ├── autoencoder_dc.py
│   │   │       ├── music_dcae_pipeline.py
│   │   │       ├── music_log_mel.py
│   │   │       └── music_vocoder.py
│   │   ├── anima/
│   │   │   └── model.py
│   │   ├── audio/
│   │   │   ├── autoencoder.py
│   │   │   ├── dit.py
│   │   │   └── embedders.py
│   │   ├── aura/
│   │   │   └── mmdit.py
│   │   ├── cascade/
│   │   │   ├── common.py
│   │   │   ├── controlnet.py
│   │   │   ├── stage_a.py
│   │   │   ├── stage_b.py
│   │   │   ├── stage_c.py
│   │   │   └── stage_c_coder.py
│   │   ├── chroma/
│   │   │   ├── layers.py
│   │   │   └── model.py
│   │   ├── chroma_radiance/
│   │   │   ├── layers.py
│   │   │   └── model.py
│   │   ├── common_dit.py
│   │   ├── cosmos/
│   │   │   ├── blocks.py
│   │   │   ├── cosmos_tokenizer/
│   │   │   │   ├── layers3d.py
│   │   │   │   ├── patching.py
│   │   │   │   └── utils.py
│   │   │   ├── model.py
│   │   │   ├── position_embedding.py
│   │   │   ├── predict2.py
│   │   │   └── vae.py
│   │   ├── flux/
│   │   │   ├── controlnet.py
│   │   │   ├── layers.py
│   │   │   ├── math.py
│   │   │   ├── model.py
│   │   │   └── redux.py
│   │   ├── genmo/
│   │   │   ├── joint_model/
│   │   │   │   ├── asymm_models_joint.py
│   │   │   │   ├── layers.py
│   │   │   │   ├── rope_mixed.py
│   │   │   │   ├── temporal_rope.py
│   │   │   │   └── utils.py
│   │   │   └── vae/
│   │   │       └── model.py
│   │   ├── hidream/
│   │   │   └── model.py
│   │   ├── hunyuan3d/
│   │   │   ├── model.py
│   │   │   └── vae.py
│   │   ├── hunyuan3dv2_1/
│   │   │   └── hunyuandit.py
│   │   ├── hunyuan_video/
│   │   │   ├── model.py
│   │   │   ├── upsampler.py
│   │   │   ├── vae.py
│   │   │   └── vae_refiner.py
│   │   ├── hydit/
│   │   │   ├── attn_layers.py
│   │   │   ├── controlnet.py
│   │   │   ├── models.py
│   │   │   ├── poolers.py
│   │   │   └── posemb_layers.py
│   │   ├── kandinsky5/
│   │   │   └── model.py
│   │   ├── lightricks/
│   │   │   ├── av_model.py
│   │   │   ├── embeddings_connector.py
│   │   │   ├── latent_upsampler.py
│   │   │   ├── model.py
│   │   │   ├── symmetric_patchifier.py
│   │   │   ├── vae/
│   │   │   │   ├── audio_vae.py
│   │   │   │   ├── causal_audio_autoencoder.py
│   │   │   │   ├── causal_conv3d.py
│   │   │   │   ├── causal_video_autoencoder.py
│   │   │   │   ├── conv_nd_factory.py
│   │   │   │   ├── dual_conv3d.py
│   │   │   │   └── pixel_norm.py
│   │   │   └── vocoders/
│   │   │       └── vocoder.py
│   │   ├── lumina/
│   │   │   ├── controlnet.py
│   │   │   └── model.py
│   │   ├── mmaudio/
│   │   │   └── vae/
│   │   │       ├── __init__.py
│   │   │       ├── activations.py
│   │   │       ├── alias_free_torch.py
│   │   │       ├── autoencoder.py
│   │   │       ├── bigvgan.py
│   │   │       ├── distributions.py
│   │   │       ├── vae.py
│   │   │       └── vae_modules.py
│   │   ├── models/
│   │   │   └── autoencoder.py
│   │   ├── modules/
│   │   │   ├── attention.py
│   │   │   ├── diffusionmodules/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── mmdit.py
│   │   │   │   ├── model.py
│   │   │   │   ├── openaimodel.py
│   │   │   │   ├── upscaling.py
│   │   │   │   └── util.py
│   │   │   ├── distributions/
│   │   │   │   ├── __init__.py
│   │   │   │   └── distributions.py
│   │   │   ├── ema.py
│   │   │   ├── encoders/
│   │   │   │   ├── __init__.py
│   │   │   │   └── noise_aug_modules.py
│   │   │   ├── sdpose.py
│   │   │   ├── sub_quadratic_attention.py
│   │   │   └── temporal_ae.py
│   │   ├── omnigen/
│   │   │   └── omnigen2.py
│   │   ├── pixart/
│   │   │   ├── blocks.py
│   │   │   └── pixartms.py
│   │   ├── qwen_image/
│   │   │   ├── controlnet.py
│   │   │   └── model.py
│   │   ├── util.py
│   │   └── wan/
│   │       ├── model.py
│   │       ├── model_animate.py
│   │       ├── model_multitalk.py
│   │       ├── vae.py
│   │       └── vae2_2.py
│   ├── lora.py
│   ├── lora_convert.py
│   ├── memory_management.py
│   ├── model_base.py
│   ├── model_detection.py
│   ├── model_management.py
│   ├── model_patcher.py
│   ├── model_sampling.py
│   ├── nested_tensor.py
│   ├── ops.py
│   ├── options.py
│   ├── patcher_extension.py
│   ├── pinned_memory.py
│   ├── pixel_space_convert.py
│   ├── quant_ops.py
│   ├── rmsnorm.py
│   ├── sample.py
│   ├── sampler_helpers.py
│   ├── samplers.py
│   ├── sd.py
│   ├── sd1_clip.py
│   ├── sd1_clip_config.json
│   ├── sd1_tokenizer/
│   │   ├── merges.txt
│   │   ├── special_tokens_map.json
│   │   ├── tokenizer_config.json
│   │   └── vocab.json
│   ├── sdxl_clip.py
│   ├── supported_models.py
│   ├── supported_models_base.py
│   ├── t2i_adapter/
│   │   └── adapter.py
│   ├── taesd/
│   │   ├── taehv.py
│   │   └── taesd.py
│   ├── text_encoders/
│   │   ├── ace.py
│   │   ├── ace15.py
│   │   ├── ace_lyrics_tokenizer/
│   │   │   └── vocab.json
│   │   ├── ace_text_cleaners.py
│   │   ├── anima.py
│   │   ├── aura_t5.py
│   │   ├── bert.py
│   │   ├── byt5_config_small_glyph.json
│   │   ├── byt5_tokenizer/
│   │   │   ├── added_tokens.json
│   │   │   ├── special_tokens_map.json
│   │   │   └── tokenizer_config.json
│   │   ├── cosmos.py
│   │   ├── flux.py
│   │   ├── genmo.py
│   │   ├── hidream.py
│   │   ├── hunyuan_image.py
│   │   ├── hunyuan_video.py
│   │   ├── hydit.py
│   │   ├── hydit_clip.json
│   │   ├── hydit_clip_tokenizer/
│   │   │   ├── special_tokens_map.json
│   │   │   ├── tokenizer_config.json
│   │   │   └── vocab.txt
│   │   ├── jina_clip_2.py
│   │   ├── kandinsky5.py
│   │   ├── llama.py
│   │   ├── llama_tokenizer/
│   │   │   ├── tokenizer.json
│   │   │   └── tokenizer_config.json
│   │   ├── long_clipl.py
│   │   ├── longcat_image.py
│   │   ├── lt.py
│   │   ├── lumina2.py
│   │   ├── mt5_config_xl.json
│   │   ├── newbie.py
│   │   ├── omnigen2.py
│   │   ├── ovis.py
│   │   ├── pixart_t5.py
│   │   ├── qwen25_tokenizer/
│   │   │   ├── merges.txt
│   │   │   ├── tokenizer_config.json
│   │   │   └── vocab.json
│   │   ├── qwen_image.py
│   │   ├── qwen_vl.py
│   │   ├── sa_t5.py
│   │   ├── sd2_clip.py
│   │   ├── sd2_clip_config.json
│   │   ├── sd3_clip.py
│   │   ├── spiece_tokenizer.py
│   │   ├── t5.py
│   │   ├── t5_config_base.json
│   │   ├── t5_config_xxl.json
│   │   ├── t5_old_config_xxl.json
│   │   ├── t5_pile_config_xl.json
│   │   ├── t5_pile_tokenizer/
│   │   │   └── tokenizer.model
│   │   ├── t5_tokenizer/
│   │   │   ├── special_tokens_map.json
│   │   │   ├── tokenizer.json
│   │   │   └── tokenizer_config.json
│   │   ├── umt5_config_base.json
│   │   ├── umt5_config_xxl.json
│   │   ├── wan.py
│   │   └── z_image.py
│   ├── utils.py
│   ├── weight_adapter/
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── boft.py
│   │   ├── bypass.py
│   │   ├── glora.py
│   │   ├── loha.py
│   │   ├── lokr.py
│   │   ├── lora.py
│   │   └── oft.py
│   └── windows.py
├── comfy_api/
│   ├── feature_flags.py
│   ├── generate_api_stubs.py
│   ├── input/
│   │   ├── __init__.py
│   │   ├── basic_types.py
│   │   └── video_types.py
│   ├── input_impl/
│   │   ├── __init__.py
│   │   └── video_types.py
│   ├── internal/
│   │   ├── __init__.py
│   │   ├── api_registry.py
│   │   ├── async_to_sync.py
│   │   └── singleton.py
│   ├── latest/
│   │   ├── __init__.py
│   │   ├── _caching.py
│   │   ├── _input/
│   │   │   ├── __init__.py
│   │   │   ├── basic_types.py
│   │   │   └── video_types.py
│   │   ├── _input_impl/
│   │   │   ├── __init__.py
│   │   │   └── video_types.py
│   │   ├── _io.py
│   │   ├── _io_public.py
│   │   ├── _ui.py
│   │   ├── _ui_public.py
│   │   ├── _util/
│   │   │   ├── __init__.py
│   │   │   ├── geometry_types.py
│   │   │   ├── image_types.py
│   │   │   └── video_types.py
│   │   └── generated/
│   │       └── ComfyAPISyncStub.pyi
│   ├── torch_helpers/
│   │   ├── __init__.py
│   │   └── torch_compile.py
│   ├── util/
│   │   ├── __init__.py
│   │   └── video_types.py
│   ├── util.py
│   ├── v0_0_1/
│   │   ├── __init__.py
│   │   └── generated/
│   │       └── ComfyAPISyncStub.pyi
│   ├── v0_0_2/
│   │   ├── __init__.py
│   │   └── generated/
│   │       └── ComfyAPISyncStub.pyi
│   └── version_list.py
├── comfy_api_nodes/
│   ├── __init__.py
│   ├── apis/
│   │   ├── __init__.py
│   │   ├── bfl.py
│   │   ├── bria.py
│   │   ├── bytedance.py
│   │   ├── elevenlabs.py
│   │   ├── gemini.py
│   │   ├── grok.py
│   │   ├── hitpaw.py
│   │   ├── hunyuan3d.py
│   │   ├── ideogram.py
│   │   ├── kling.py
│   │   ├── luma.py
│   │   ├── magnific.py
│   │   ├── meshy.py
│   │   ├── minimax.py
│   │   ├── moonvalley.py
│   │   ├── openai.py
│   │   ├── pixverse.py
│   │   ├── recraft.py
│   │   ├── reve.py
│   │   ├── rodin.py
│   │   ├── runway.py
│   │   ├── stability.py
│   │   ├── topaz.py
│   │   ├── tripo.py
│   │   ├── veo.py
│   │   ├── vidu.py
│   │   └── wavespeed.py
│   ├── nodes_bfl.py
│   ├── nodes_bria.py
│   ├── nodes_bytedance.py
│   ├── nodes_elevenlabs.py
│   ├── nodes_gemini.py
│   ├── nodes_grok.py
│   ├── nodes_hitpaw.py
│   ├── nodes_hunyuan3d.py
│   ├── nodes_ideogram.py
│   ├── nodes_kling.py
│   ├── nodes_ltxv.py
│   ├── nodes_luma.py
│   ├── nodes_magnific.py
│   ├── nodes_meshy.py
│   ├── nodes_minimax.py
│   ├── nodes_moonvalley.py
│   ├── nodes_openai.py
│   ├── nodes_pixverse.py
│   ├── nodes_recraft.py
│   ├── nodes_reve.py
│   ├── nodes_rodin.py
│   ├── nodes_runway.py
│   ├── nodes_sora.py
│   ├── nodes_stability.py
│   ├── nodes_topaz.py
│   ├── nodes_tripo.py
│   ├── nodes_veo2.py
│   ├── nodes_vidu.py
│   ├── nodes_wan.py
│   ├── nodes_wavespeed.py
│   └── util/
│       ├── __init__.py
│       ├── _helpers.py
│       ├── client.py
│       ├── common_exceptions.py
│       ├── conversions.py
│       ├── download_helpers.py
│       ├── request_logger.py
│       ├── upload_helpers.py
│       └── validation_utils.py
├── comfy_config/
│   ├── config_parser.py
│   └── types.py
├── comfy_execution/
│   ├── cache_provider.py
│   ├── caching.py
│   ├── graph.py
│   ├── graph_utils.py
│   ├── jobs.py
│   ├── progress.py
│   ├── utils.py
│   └── validation.py
├── comfy_extras/
│   ├── chainner_models/
│   │   └── model_loading.py
│   ├── nodes_ace.py
│   ├── nodes_advanced_samplers.py
│   ├── nodes_align_your_steps.py
│   ├── nodes_apg.py
│   ├── nodes_attention_multiply.py
│   ├── nodes_audio.py
│   ├── nodes_audio_encoder.py
│   ├── nodes_camera_trajectory.py
│   ├── nodes_canny.py
│   ├── nodes_cfg.py
│   ├── nodes_chroma_radiance.py
│   ├── nodes_clip_sdxl.py
│   ├── nodes_color.py
│   ├── nodes_compositing.py
│   ├── nodes_cond.py
│   ├── nodes_context_windows.py
│   ├── nodes_controlnet.py
│   ├── nodes_cosmos.py
│   ├── nodes_custom_sampler.py
│   ├── nodes_dataset.py
│   ├── nodes_differential_diffusion.py
│   ├── nodes_easycache.py
│   ├── nodes_edit_model.py
│   ├── nodes_eps.py
│   ├── nodes_flux.py
│   ├── nodes_freelunch.py
│   ├── nodes_fresca.py
│   ├── nodes_gits.py
│   ├── nodes_glsl.py
│   ├── nodes_hidream.py
│   ├── nodes_hooks.py
│   ├── nodes_hunyuan.py
│   ├── nodes_hunyuan3d.py
│   ├── nodes_hypernetwork.py
│   ├── nodes_hypertile.py
│   ├── nodes_image_compare.py
│   ├── nodes_images.py
│   ├── nodes_ip2p.py
│   ├── nodes_kandinsky5.py
│   ├── nodes_latent.py
│   ├── nodes_load_3d.py
│   ├── nodes_logic.py
│   ├── nodes_lora_debug.py
│   ├── nodes_lora_extract.py
│   ├── nodes_lotus.py
│   ├── nodes_lt.py
│   ├── nodes_lt_audio.py
│   ├── nodes_lt_upsampler.py
│   ├── nodes_lumina2.py
│   ├── nodes_mahiro.py
│   ├── nodes_mask.py
│   ├── nodes_math.py
│   ├── nodes_mochi.py
│   ├── nodes_model_advanced.py
│   ├── nodes_model_downscale.py
│   ├── nodes_model_merging.py
│   ├── nodes_model_merging_model_specific.py
│   ├── nodes_model_patch.py
│   ├── nodes_morphology.py
│   ├── nodes_nag.py
│   ├── nodes_nop.py
│   ├── nodes_optimalsteps.py
│   ├── nodes_pag.py
│   ├── nodes_painter.py
│   ├── nodes_perpneg.py
│   ├── nodes_photomaker.py
│   ├── nodes_pixart.py
│   ├── nodes_post_processing.py
│   ├── nodes_preview_any.py
│   ├── nodes_primitive.py
│   ├── nodes_qwen.py
│   ├── nodes_rebatch.py
│   ├── nodes_replacements.py
│   ├── nodes_resolution.py
│   ├── nodes_rope.py
│   ├── nodes_sag.py
│   ├── nodes_sd3.py
│   ├── nodes_sdpose.py
│   ├── nodes_sdupscale.py
│   ├── nodes_slg.py
│   ├── nodes_stable3d.py
│   ├── nodes_stable_cascade.py
│   ├── nodes_string.py
│   ├── nodes_tcfg.py
│   ├── nodes_textgen.py
│   ├── nodes_tomesd.py
│   ├── nodes_toolkit.py
│   ├── nodes_torch_compile.py
│   ├── nodes_train.py
│   ├── nodes_upscale_model.py
│   ├── nodes_video.py
│   ├── nodes_video_model.py
│   ├── nodes_wan.py
│   ├── nodes_wanmove.py
│   ├── nodes_webcam.py
│   └── nodes_zimage.py
├── comfyui_version.py
├── cuda_malloc.py
├── custom_nodes/
│   └── example_node.py.example
├── execution.py
├── extra_model_paths.yaml.example
├── folder_paths.py
├── hook_breaker_ac10a0.py
├── latent_preview.py
├── main.py
├── manager_requirements.txt
├── middleware/
│   ├── __init__.py
│   └── cache_middleware.py
├── new_updater.py
├── node_helpers.py
├── nodes.py
├── protocol.py
├── pyproject.toml
├── pytest.ini
├── requirements.txt
├── script_examples/
│   ├── basic_api_example.py
│   ├── websockets_api_example.py
│   └── websockets_api_example_ws_images.py
├── server.py
├── tests/
│   ├── README.md
│   ├── __init__.py
│   ├── compare/
│   │   ├── conftest.py
│   │   └── test_quality.py
│   ├── conftest.py
│   ├── execution/
│   │   ├── test_async_nodes.py
│   │   ├── test_execution.py
│   │   ├── test_jobs.py
│   │   ├── test_preview_method.py
│   │   ├── test_progress_isolation.py
│   │   ├── test_public_api.py
│   │   └── testing_nodes/
│   │       └── testing-pack/
│   │           ├── __init__.py
│   │           ├── api_test_nodes.py
│   │           ├── async_test_nodes.py
│   │           ├── conditions.py
│   │           ├── flow_control.py
│   │           ├── specific_tests.py
│   │           ├── stubs.py
│   │           ├── tools.py
│   │           └── util.py
│   └── inference/
│       ├── __init__.py
│       ├── graphs/
│       │   └── default_graph_sdxl1_0.json
│       └── test_inference.py
├── tests-unit/
│   ├── README.md
│   ├── app_test/
│   │   ├── __init__.py
│   │   ├── custom_node_manager_test.py
│   │   ├── frontend_manager_test.py
│   │   ├── model_manager_test.py
│   │   ├── test_migrations.py
│   │   └── user_manager_system_user_test.py
│   ├── assets_test/
│   │   ├── conftest.py
│   │   ├── helpers.py
│   │   ├── queries/
│   │   │   ├── conftest.py
│   │   │   ├── test_asset.py
│   │   │   ├── test_asset_info.py
│   │   │   ├── test_cache_state.py
│   │   │   ├── test_metadata.py
│   │   │   └── test_tags.py
│   │   ├── services/
│   │   │   ├── __init__.py
│   │   │   ├── conftest.py
│   │   │   ├── test_asset_management.py
│   │   │   ├── test_bulk_ingest.py
│   │   │   ├── test_enrich.py
│   │   │   ├── test_ingest.py
│   │   │   ├── test_tag_histogram.py
│   │   │   └── test_tagging.py
│   │   ├── test_assets_missing_sync.py
│   │   ├── test_crud.py
│   │   ├── test_downloads.py
│   │   ├── test_file_utils.py
│   │   ├── test_list_filter.py
│   │   ├── test_metadata_filters.py
│   │   ├── test_prune_orphaned_assets.py
│   │   ├── test_sync_references.py
│   │   ├── test_tags_api.py
│   │   └── test_uploads.py
│   ├── comfy_api_test/
│   │   ├── input_impl_test.py
│   │   └── video_types_test.py
│   ├── comfy_extras_test/
│   │   ├── __init__.py
│   │   ├── image_stitch_test.py
│   │   └── nodes_math_test.py
│   ├── comfy_quant/
│   │   └── test_mixed_precision.py
│   ├── comfy_test/
│   │   ├── folder_path_test.py
│   │   └── model_detection_test.py
│   ├── execution_test/
│   │   ├── preview_method_override_test.py
│   │   ├── test_cache_provider.py
│   │   └── validate_node_input_test.py
│   ├── feature_flags_test.py
│   ├── folder_paths_test/
│   │   ├── __init__.py
│   │   ├── filter_by_content_types_test.py
│   │   ├── misc_test.py
│   │   └── system_user_test.py
│   ├── prompt_server_test/
│   │   ├── __init__.py
│   │   ├── system_user_endpoint_test.py
│   │   └── user_manager_test.py
│   ├── requirements.txt
│   ├── seeder_test/
│   │   └── test_seeder.py
│   ├── server/
│   │   └── utils/
│   │       └── file_operations_test.py
│   ├── server_test/
│   │   └── test_cache_control.py
│   ├── utils/
│   │   ├── extra_config_test.py
│   │   └── json_util_test.py
│   └── websocket_feature_flags_test.py
└── utils/
    ├── __init__.py
    ├── extra_config.py
    ├── install_util.py
    ├── json_util.py
    └── mime_types.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .ci/update_windows/update.py
================================================
import pygit2
from datetime import datetime
import sys
import os
import shutil
import filecmp

def pull(repo, remote_name='origin', branch='master'):
    for remote in repo.remotes:
        if remote.name == remote_name:
            remote.fetch()
            remote_master_id = repo.lookup_reference('refs/remotes/origin/%s' % (branch)).target
            merge_result, _ = repo.merge_analysis(remote_master_id)
            # Up to date, do nothing
            if merge_result & pygit2.GIT_MERGE_ANALYSIS_UP_TO_DATE:
                return
            # We can just fastforward
            elif merge_result & pygit2.GIT_MERGE_ANALYSIS_FASTFORWARD:
                repo.checkout_tree(repo.get(remote_master_id))
                try:
                    master_ref = repo.lookup_reference('refs/heads/%s' % (branch))
                    master_ref.set_target(remote_master_id)
                except KeyError:
                    repo.create_branch(branch, repo.get(remote_master_id))
                repo.head.set_target(remote_master_id)
            elif merge_result & pygit2.GIT_MERGE_ANALYSIS_NORMAL:
                repo.merge(remote_master_id)

                if repo.index.conflicts is not None:
                    for conflict in repo.index.conflicts:
                        print('Conflicts found in:', conflict[0].path)  # noqa: T201
                    raise AssertionError('Conflicts, ahhhhh!!')

                user = repo.default_signature
                tree = repo.index.write_tree()
                repo.create_commit('HEAD',
                                    user,
                                    user,
                                    'Merge!',
                                    tree,
                                    [repo.head.target, remote_master_id])
                # We need to do this or git CLI will think we are still merging.
                repo.state_cleanup()
            else:
                raise AssertionError('Unknown merge analysis result')

pygit2.option(pygit2.GIT_OPT_SET_OWNER_VALIDATION, 0)
repo_path = str(sys.argv[1])
repo = pygit2.Repository(repo_path)
ident = pygit2.Signature('comfyui', 'comfy@ui')
try:
    print("stashing current changes")  # noqa: T201
    repo.stash(ident)
except KeyError:
    print("nothing to stash")  # noqa: T201
except:
    print("Could not stash, cleaning index and trying again.")  # noqa: T201
    repo.state_cleanup()
    repo.index.read_tree(repo.head.peel().tree)
    repo.index.write()
    try:
        repo.stash(ident)
    except KeyError:
        print("nothing to stash.")  # noqa: T201

backup_branch_name = 'backup_branch_{}'.format(datetime.today().strftime('%Y-%m-%d_%H_%M_%S'))
print("creating backup branch: {}".format(backup_branch_name))  # noqa: T201
try:
    repo.branches.local.create(backup_branch_name, repo.head.peel())
except:
    pass

print("checking out master branch")  # noqa: T201
branch = repo.lookup_branch('master')
if branch is None:
    try:
        ref = repo.lookup_reference('refs/remotes/origin/master')
    except:
        print("fetching.")  # noqa: T201
        for remote in repo.remotes:
            if remote.name == "origin":
                remote.fetch()
        ref = repo.lookup_reference('refs/remotes/origin/master')
    repo.checkout(ref)
    branch = repo.lookup_branch('master')
    if branch is None:
        repo.create_branch('master', repo.get(ref.target))
else:
    ref = repo.lookup_reference(branch.name)
    repo.checkout(ref)

print("pulling latest changes")  # noqa: T201
pull(repo)

if "--stable" in sys.argv:
    def latest_tag(repo):
        versions = []
        for k in repo.references:
            try:
                prefix = "refs/tags/v"
                if k.startswith(prefix):
                    version = list(map(int, k[len(prefix):].split(".")))
                    versions.append((version[0] * 10000000000 + version[1] * 100000 + version[2], k))
            except:
                pass
        versions.sort()
        if len(versions) > 0:
            return versions[-1][1]
        return None
    latest_tag = latest_tag(repo)
    if latest_tag is not None:
        repo.checkout(latest_tag)

print("Done!")  # noqa: T201

self_update = True
if len(sys.argv) > 2:
    self_update = '--skip_self_update' not in sys.argv

update_py_path = os.path.realpath(__file__)
repo_update_py_path = os.path.join(repo_path, ".ci/update_windows/update.py")

cur_path = os.path.dirname(update_py_path)


req_path = os.path.join(cur_path, "current_requirements.txt")
repo_req_path = os.path.join(repo_path, "requirements.txt")


def files_equal(file1, file2):
    try:
        return filecmp.cmp(file1, file2, shallow=False)
    except:
        return False

def file_size(f):
    try:
        return os.path.getsize(f)
    except:
        return 0


if self_update and not files_equal(update_py_path, repo_update_py_path) and file_size(repo_update_py_path) > 10:
    shutil.copy(repo_update_py_path, os.path.join(cur_path, "update_new.py"))
    exit()

if not os.path.exists(req_path) or not files_equal(repo_req_path, req_path):
    import subprocess
    try:
        subprocess.check_call([sys.executable, '-s', '-m', 'pip', 'install', '-r', repo_req_path])
        shutil.copy(repo_req_path, req_path)
    except:
        pass


stable_update_script = os.path.join(repo_path, ".ci/update_windows/update_comfyui_stable.bat")
stable_update_script_to = os.path.join(cur_path, "update_comfyui_stable.bat")

try:
    if not file_size(stable_update_script_to) > 10:
        shutil.copy(stable_update_script, stable_update_script_to)
except:
    pass
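The `latest_tag` helper above ranks release tags by packing the semantic version into a single integer (major * 10^10 + minor * 10^5 + patch), which sidesteps the pitfalls of plain string comparison. A minimal, self-contained sketch of that encoding (the tag names here are hypothetical):

```python
def version_key(tag):
    # Mirror update.py's scheme: pack "refs/tags/vMAJOR.MINOR.PATCH"
    # into one integer so tags compare numerically, not lexically.
    prefix = "refs/tags/v"
    major, minor, patch = map(int, tag[len(prefix):].split("."))
    return major * 10000000000 + minor * 100000 + patch

tags = ["refs/tags/v0.2.10", "refs/tags/v0.10.1", "refs/tags/v0.2.9"]
print(max(tags, key=version_key))  # numeric order picks refs/tags/v0.10.1
print(max(tags))                   # lexical order would pick refs/tags/v0.2.9
```

Note that the real helper also silently skips any tag that does not parse as three dotted integers, which is why it wraps the parse in a try/except.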



================================================
FILE: .ci/update_windows/update_comfyui.bat
================================================
@echo off
..\python_embeded\python.exe .\update.py ..\ComfyUI\
if exist update_new.py (
  move /y update_new.py update.py
  echo Running updater again since it got updated.
  ..\python_embeded\python.exe .\update.py ..\ComfyUI\ --skip_self_update
)
if "%~1"=="" pause


================================================
FILE: .ci/update_windows/update_comfyui_stable.bat
================================================
@echo off
..\python_embeded\python.exe .\update.py ..\ComfyUI\ --stable
if exist update_new.py (
  move /y update_new.py update.py
  echo Running updater again since it got updated.
  ..\python_embeded\python.exe .\update.py ..\ComfyUI\ --skip_self_update --stable
)
if "%~1"=="" pause


================================================
FILE: .ci/windows_amd_base_files/README_VERY_IMPORTANT.txt
================================================
As of the time of writing, you need this driver for best results:
https://www.amd.com/en/resources/support-articles/release-notes/RN-AMDGPU-WINDOWS-PYTORCH-7-1-1.html

HOW TO RUN:

If you have an AMD GPU:

run_amd_gpu.bat

If you have memory issues, you can try disabling smart memory management by running ComfyUI with:

run_amd_gpu_disable_smart_memory.bat

IF YOU GET A RED ERROR IN THE UI MAKE SURE YOU HAVE A MODEL/CHECKPOINT IN: ComfyUI\models\checkpoints

You can download the stable diffusion XL one from: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0_0.9vae.safetensors


RECOMMENDED WAY TO UPDATE:
To update the ComfyUI code: update\update_comfyui.bat


TO SHARE MODELS BETWEEN COMFYUI AND ANOTHER UI:
In the ComfyUI directory you will find a file: extra_model_paths.yaml.example
Rename this file to: extra_model_paths.yaml and edit it with your favorite text editor.
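For illustration, an edited extra_model_paths.yaml might look roughly like the following. This is a hedged sketch with hypothetical paths; the bundled extra_model_paths.yaml.example file is the authoritative template.

```yaml
# Hypothetical paths; point base_path at your other UI's install directory.
a111:
    base_path: D:/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
```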





================================================
FILE: .ci/windows_amd_base_files/run_amd_gpu.bat
================================================
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
pause


================================================
FILE: .ci/windows_amd_base_files/run_amd_gpu_disable_smart_memory.bat
================================================
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --disable-smart-memory
pause


================================================
FILE: .ci/windows_nightly_base_files/run_nvidia_gpu_fast.bat
================================================
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --fast
pause


================================================
FILE: .ci/windows_nvidia_base_files/README_VERY_IMPORTANT.txt
================================================
HOW TO RUN:

If you have an NVIDIA GPU:

run_nvidia_gpu.bat

If you want to enable fast fp16 accumulation (faster for fp16 models, with slightly lower quality):

run_nvidia_gpu_fast_fp16_accumulation.bat


To run it in slow CPU mode:

run_cpu.bat



IF YOU GET A RED ERROR IN THE UI MAKE SURE YOU HAVE A MODEL/CHECKPOINT IN: ComfyUI\models\checkpoints

You can download the Stable Diffusion 1.5 checkpoint from: https://huggingface.co/Comfy-Org/stable-diffusion-v1-5-archive/blob/main/v1-5-pruned-emaonly-fp16.safetensors


RECOMMENDED WAY TO UPDATE:
To update the ComfyUI code: update\update_comfyui.bat



To update ComfyUI along with its Python dependencies (ONLY run this if you have issues with the Python dependencies):
update\update_comfyui_and_python_dependencies.bat


TO SHARE MODELS BETWEEN COMFYUI AND ANOTHER UI:
In the ComfyUI directory you will find a file: extra_model_paths.yaml.example
Rename this file to: extra_model_paths.yaml and edit it with your favorite text editor.


================================================
FILE: .ci/windows_nvidia_base_files/advanced/run_nvidia_gpu_disable_api_nodes.bat
================================================
..\python_embeded\python.exe -s ..\ComfyUI\main.py --windows-standalone-build --disable-api-nodes
echo If you see this and ComfyUI did not start, try updating your Nvidia drivers to the latest version. If you get a c10.dll error, install the VC redist from: https://aka.ms/vc14/vc_redist.x64.exe
pause


================================================
FILE: .ci/windows_nvidia_base_files/run_cpu.bat
================================================
.\python_embeded\python.exe -s ComfyUI\main.py --cpu --windows-standalone-build
pause


================================================
FILE: .ci/windows_nvidia_base_files/run_nvidia_gpu.bat
================================================
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
echo If you see this and ComfyUI did not start, try updating your Nvidia drivers to the latest version. If you get a c10.dll error, install the VC redist from: https://aka.ms/vc14/vc_redist.x64.exe
pause


================================================
FILE: .ci/windows_nvidia_base_files/run_nvidia_gpu_fast_fp16_accumulation.bat
================================================
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --fast fp16_accumulation
echo If you see this and ComfyUI did not start, try updating your Nvidia drivers to the latest version. If you get a c10.dll error, install the VC redist from: https://aka.ms/vc14/vc_redist.x64.exe
pause


================================================
FILE: .coderabbit.yaml
================================================
# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
language: "en-US"
early_access: false
tone_instructions: "Only comment on issues introduced by this PR's changes. Do not flag pre-existing problems in moved, re-indented, or reformatted code."

reviews:
  profile: "chill"
  request_changes_workflow: false
  high_level_summary: false
  poem: false
  review_status: false
  review_details: false
  commit_status: true
  collapse_walkthrough: true
  changed_files_summary: false
  sequence_diagrams: false
  estimate_code_review_effort: false
  assess_linked_issues: false
  related_issues: false
  related_prs: false
  suggested_labels: false
  auto_apply_labels: false
  suggested_reviewers: false
  auto_assign_reviewers: false
  in_progress_fortune: false
  enable_prompt_for_ai_agents: true

  path_filters:
    - "!comfy_api_nodes/apis/**"
    - "!**/generated/*.pyi"
    - "!.ci/**"
    - "!script_examples/**"
    - "!**/__pycache__/**"
    - "!**/*.ipynb"
    - "!**/*.png"
    - "!**/*.bat"

  path_instructions:
    - path: "**"
      instructions: |
        IMPORTANT: Only comment on issues directly introduced by this PR's code changes.
        Do NOT flag pre-existing issues in code that was merely moved, re-indented,
        de-indented, or reformatted without logic changes. If code appears in the diff
        only due to whitespace or structural reformatting (e.g., removing a `with:` block),
        treat it as unchanged. Contributors should not feel obligated to address
        pre-existing issues outside the scope of their contribution.
    - path: "comfy/**"
      instructions: |
        Core ML/diffusion engine. Focus on:
        - Backward compatibility (breaking changes affect all custom nodes)
        - Memory management and GPU resource handling
        - Performance implications in hot paths
        - Thread safety for concurrent execution
    - path: "comfy_api_nodes/**"
      instructions: |
        Third-party API integration nodes. Focus on:
        - No hardcoded API keys or secrets
        - Proper error handling for API failures (timeouts, rate limits, auth errors)
        - Correct Pydantic model usage
        - Security of user data passed to external APIs
    - path: "comfy_extras/**"
      instructions: |
        Community-contributed extra nodes. Focus on:
        - Consistency with node patterns (INPUT_TYPES, RETURN_TYPES, FUNCTION, CATEGORY)
        - No breaking changes to existing node interfaces
    - path: "comfy_execution/**"
      instructions: |
        Execution engine (graph execution, caching, jobs). Focus on:
        - Caching correctness
        - Concurrent execution safety
        - Graph validation edge cases
    - path: "nodes.py"
      instructions: |
        Core node definitions (2500+ lines). Focus on:
        - Backward compatibility of NODE_CLASS_MAPPINGS
        - Consistency of INPUT_TYPES return format
    - path: "alembic_db/**"
      instructions: |
        Database migrations. Focus on:
        - Migration safety and rollback support
        - Data preservation during schema changes

  auto_review:
    enabled: true
    auto_incremental_review: true
    drafts: false
    ignore_title_keywords:
      - "WIP"
      - "DO NOT REVIEW"
      - "DO NOT MERGE"

  finishing_touches:
    docstrings:
      enabled: false
    unit_tests:
      enabled: false

  tools:
    ruff:
      enabled: false
    pylint:
      enabled: false
    flake8:
      enabled: false
    gitleaks:
      enabled: true
    shellcheck:
      enabled: false
    markdownlint:
      enabled: false
    yamllint:
      enabled: false
    languagetool:
      enabled: false
    github-checks:
      enabled: true
      timeout_ms: 90000
    ast-grep:
      essential_rules: true

chat:
  auto_reply: true

knowledge_base:
  opt_out: false
  learnings:
    scope: "auto"


================================================
FILE: .gitattributes
================================================
/web/assets/** linguist-generated
/web/** linguist-vendored
comfy_api_nodes/apis/__init__.py linguist-generated


================================================
FILE: .github/ISSUE_TEMPLATE/bug-report.yml
================================================
name: Bug Report
description: "Something is broken inside of ComfyUI. (Do not use this if you're just having issues and need help, or if the issue relates to a custom node)"
labels: ["Potential Bug"]
body:
  - type: markdown
    attributes:
      value: |
        Before submitting a **Bug Report**, please ensure the following:

        - **1:** You are running the latest version of ComfyUI.
        - **2:** You have your ComfyUI logs and relevant workflow on hand and will post them in this bug report.
        - **3:** You confirmed that the bug is not caused by a custom node. You can disable all custom nodes by passing
        the `--disable-all-custom-nodes` command line argument. If you have custom nodes, try updating them to the latest version.
        - **4:** This is an actual bug in ComfyUI, not just a support question. A bug is when you can specify exact
        steps to replicate what went wrong and others will be able to repeat your steps and see the same issue happen.

        ## Very Important

        Please make sure that you post ALL your ComfyUI logs in the bug report **even if there is no crash**. Just paste everything. The startup log (everything before "To see the GUI go to: ...") contains critical information for the developers trying to help. For a performance issue or crash, paste everything from "got prompt" to the end, including the crash. More is better - always. A bug report without logs will likely be ignored.
  - type: checkboxes
    id: custom-nodes-test
    attributes:
      label: Custom Node Testing
      description: Please confirm you have tried to reproduce the issue with all custom nodes disabled.
      options:
        - label: I have tried disabling custom nodes and the issue persists (see [how to disable custom nodes](https://docs.comfy.org/troubleshooting/custom-node-issues#step-1%3A-test-with-all-custom-nodes-disabled) if you need help)
          required: false
  - type: textarea
    attributes:
      label: Expected Behavior
      description: "What you expected to happen."
    validations:
      required: true
  - type: textarea
    attributes:
      label: Actual Behavior
      description: "What actually happened. Please include a screenshot of the issue if possible."
    validations:
      required: true
  - type: textarea
    attributes:
      label: Steps to Reproduce
      description: "Describe how to reproduce the issue. Please be sure to attach a workflow JSON or PNG, ideally one that doesn't require custom nodes to test. If the bug only happens when certain custom nodes are used, that custom node most likely has the bug rather than ComfyUI, in which case it should be reported to the node's author."
    validations:
      required: true
  - type: textarea
    attributes:
      label: Debug Logs
      description: "Please copy the output from your terminal logs here."
      render: powershell
    validations:
      required: true
  - type: textarea
    attributes:
      label: Other
      description: "Any other additional information you think might be helpful."
    validations:
      required: false


================================================
FILE: .github/ISSUE_TEMPLATE/config.yml
================================================
blank_issues_enabled: true
contact_links:
  - name: ComfyUI Frontend Issues
    url: https://github.com/Comfy-Org/ComfyUI_frontend/issues
    about: For issues related to the ComfyUI frontend (display issues, user interaction bugs), please file them in the frontend repo
  - name: ComfyUI Matrix Space
    url: https://app.element.io/#/room/%23comfyui_space%3Amatrix.org
    about: The ComfyUI Matrix Space is available for support and general discussion related to ComfyUI (Matrix is like Discord but open source).
  - name: Comfy Org Discord
    url: https://discord.gg/comfyorg
    about: The Comfy Org Discord is available for support and general discussion related to ComfyUI.


================================================
FILE: .github/ISSUE_TEMPLATE/feature-request.yml
================================================
name: Feature Request
description: "You have an idea for something new you would like to see added to ComfyUI's core."
labels: [ "Feature" ]
body:
    - type: markdown
      attributes:
        value: |
                Before submitting a **Feature Request**, please ensure the following:

                **1:** You are running the latest version of ComfyUI.
                **2:** You have checked that there is not already a feature that does what you need, and that a Feature Request for the same idea has not already been filed.
                **3:** This is something that makes sense to add to ComfyUI Core, and wouldn't make more sense as a custom node.

                If unsure, ask on the [ComfyUI Matrix Space](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) or the [Comfy Org Discord](https://discord.gg/comfyorg) first.
    - type: textarea
      attributes:
            label: Feature Idea
            description: "Describe the feature you want to see."
      validations:
            required: true
    - type: textarea
      attributes:
                label: Existing Solutions
                description: "Please search through available custom nodes / extensions to see if there are existing custom solutions for this. If so, please link the options you found here as a reference."
      validations:
                required: false
    - type: textarea
      attributes:
                label: Other
                description: "Any other additional information you think might be helpful."
      validations:
                required: false


================================================
FILE: .github/ISSUE_TEMPLATE/user-support.yml
================================================
name: User Support
description: "Use this if you need help with something, or you're experiencing an issue."
labels: [ "User Support" ]
body:
    - type: markdown
      attributes:
        value: |
            Before submitting a **User Support** issue, please ensure the following:

            **1:** You are running the latest version of ComfyUI.
            **2:** You have made an effort to find public answers to your question before asking here. In other words, you googled it first, and scrolled through recent help topics.

            If unsure, ask on the [ComfyUI Matrix Space](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) or the [Comfy Org Discord](https://discord.gg/comfyorg) first.
    - type: checkboxes
      id: custom-nodes-test
      attributes:
        label: Custom Node Testing
        description: Please confirm you have tried to reproduce the issue with all custom nodes disabled.
        options:
          - label: I have tried disabling custom nodes and the issue persists (see [how to disable custom nodes](https://docs.comfy.org/troubleshooting/custom-node-issues#step-1%3A-test-with-all-custom-nodes-disabled) if you need help)
            required: false
    - type: textarea
      attributes:
            label: Your question
            description: "Post your question here. Please be as detailed as possible."
      validations:
            required: true
    - type: textarea
      attributes:
                label: Logs
                description: "If your question relates to an issue you're experiencing, please go to `Server` -> `Logs` -> potentially set `View Type` to `Debug` as well, then copy and paste all the text here."
                render: powershell
      validations:
                required: false
    - type: textarea
      attributes:
                label: Other
                description: "Any other additional information you think might be helpful."
      validations:
                required: false


================================================
FILE: .github/PULL_REQUEST_TEMPLATE/api-node.md
================================================
<!-- API_NODE_PR_CHECKLIST: do not remove -->

## API Node PR Checklist

### Scope
- [ ] **Is API Node Change**

### Pricing & Billing
- [ ] **Need pricing update**
- [ ] **No pricing update**

If **Need pricing update**:
- [ ] Metronome rate cards updated
- [ ] Auto‑billing tests updated and passing

### QA
- [ ] **QA done**
- [ ] **QA not required**

### Comms
- [ ] Informed **Kosinkadink**


================================================
FILE: .github/scripts/check-ai-co-authors.sh
================================================
#!/usr/bin/env bash
# Checks pull request commits for AI agent Co-authored-by trailers.
# Exits non-zero when any are found and prints fix instructions.
set -euo pipefail

base_sha="${1:?usage: check-ai-co-authors.sh <base_sha> <head_sha>}"
head_sha="${2:?usage: check-ai-co-authors.sh <base_sha> <head_sha>}"

# Known AI coding-agent trailer patterns (case-insensitive).
# Each entry is an extended-regex fragment matched against Co-authored-by lines.
AGENT_PATTERNS=(
    # Anthropic — Claude Code / Amp
    'noreply@anthropic\.com'
    # Cursor
    'cursoragent@cursor\.com'
    # GitHub Copilot
    'copilot-swe-agent\[bot\]'
    'copilot@github\.com'
    # OpenAI Codex
    'noreply@openai\.com'
    'codex@openai\.com'
    # Aider
    'aider@aider\.chat'
    # Google — Gemini / Jules
    'gemini@google\.com'
    'jules@google\.com'
    # Windsurf / Codeium
    '@codeium\.com'
    # Devin
    'devin-ai-integration\[bot\]'
    'devin@cognition\.ai'
    'devin@cognition-labs\.com'
    # Amazon Q Developer
    'amazon-q-developer'
    '@amazon\.com.*[Qq].[Dd]eveloper'
    # Cline
    'cline-bot'
    'cline@cline\.ai'
    # Continue
    'continue-agent'
    'continue@continue\.dev'
    # Sourcegraph
    'noreply@sourcegraph\.com'
    # Generic catch-alls for common agent name patterns
    'Co-authored-by:.*\b[Cc]laude\b'
    'Co-authored-by:.*\b[Cc]opilot\b'
    'Co-authored-by:.*\b[Cc]ursor\b'
    'Co-authored-by:.*\b[Cc]odex\b'
    'Co-authored-by:.*\b[Gg]emini\b'
    'Co-authored-by:.*\b[Aa]ider\b'
    'Co-authored-by:.*\b[Dd]evin\b'
    'Co-authored-by:.*\b[Ww]indsurf\b'
    'Co-authored-by:.*\b[Cc]line\b'
    'Co-authored-by:.*\b[Aa]mazon Q\b'
    'Co-authored-by:.*\b[Jj]ules\b'
    'Co-authored-by:.*\bOpenCode\b'
)

# Build a single alternation regex from all patterns.
regex=""
for pattern in "${AGENT_PATTERNS[@]}"; do
    if [[ -n "$regex" ]]; then
        regex="${regex}|${pattern}"
    else
        regex="$pattern"
    fi
done

# Collect Co-authored-by lines from every commit in the PR range.
violations=""
while IFS= read -r sha; do
    message="$(git log -1 --format='%B' "$sha")"
    matched_lines="$(echo "$message" | grep -iE "^Co-authored-by:" || true)"
    if [[ -z "$matched_lines" ]]; then
        continue
    fi

    while IFS= read -r line; do
        if echo "$line" | grep -iqE "$regex"; then
            short="$(git log -1 --format='%h' "$sha")"
            violations="${violations}  ${short}: ${line}"$'\n'
        fi
    done <<< "$matched_lines"
done < <(git rev-list "${base_sha}..${head_sha}")

if [[ -n "$violations" ]]; then
    echo "::error::AI agent Co-authored-by trailers detected in PR commits."
    echo ""
    echo "The following commits contain Co-authored-by trailers from AI coding agents:"
    echo ""
    echo "$violations"
    echo "These trailers should be removed before merging."
    echo ""
    echo "To fix, rewrite the commit messages with:"
    echo "  git rebase -i ${base_sha}"
    echo ""
    echo "and remove the Co-authored-by lines, then force-push your branch."
    echo ""
    echo "If you believe this is a false positive, please open an issue."
    exit 1
fi

echo "No AI agent Co-authored-by trailers found."
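
The pattern-joining and matching logic above can be exercised in isolation; a minimal sketch using a two-entry pattern list and a sample trailer line (both made up for illustration, not real commit data):

```shell
#!/usr/bin/env bash
# Minimal sketch: join patterns into one alternation regex and
# test a sample Co-authored-by line against it (sample data only).
set -euo pipefail

AGENT_PATTERNS=(
    'noreply@anthropic\.com'
    'Co-authored-by:.*\b[Cc]laude\b'
)

# Build a single alternation regex, as the script above does.
regex=""
for pattern in "${AGENT_PATTERNS[@]}"; do
    if [[ -n "$regex" ]]; then
        regex="${regex}|${pattern}"
    else
        regex="$pattern"
    fi
done

line='Co-authored-by: Claude <noreply@anthropic.com>'
if echo "$line" | grep -iqE "$regex"; then
    echo "flagged"   # this sample line matches both patterns
else
    echo "clean"
fi
```

Run on the sample line, this prints "flagged"; a trailer naming an ordinary human co-author matches neither pattern and prints "clean".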


================================================
FILE: .github/workflows/api-node-template.yml
================================================
name: Append API Node PR template

on:
  pull_request_target:
    types: [opened, reopened, synchronize, ready_for_review]
    paths:
      - 'comfy_api_nodes/**'   # only run if these files changed

permissions:
  contents: read
  pull-requests: write

jobs:
  inject:
    runs-on: ubuntu-latest
    steps:
      - name: Ensure template exists and append to PR body
        uses: actions/github-script@v7
        with:
          script: |
            const { owner, repo } = context.repo;
            const number = context.payload.pull_request.number;
            const templatePath = '.github/PULL_REQUEST_TEMPLATE/api-node.md';
            const marker = '<!-- API_NODE_PR_CHECKLIST: do not remove -->';

            const { data: pr } = await github.rest.pulls.get({ owner, repo, pull_number: number });

            let templateText;
            try {
              const res = await github.rest.repos.getContent({
                owner,
                repo,
                path: templatePath,
                ref: pr.base.ref
              });
              const buf = Buffer.from(res.data.content, res.data.encoding || 'base64');
              templateText = buf.toString('utf8');
            } catch (e) {
              core.setFailed(`Required PR template not found at "${templatePath}" on ${pr.base.ref}. Please add it to the repo.`);
              return;
            }

            // Enforce the presence of the marker inside the template (for idempotence)
            if (!templateText.includes(marker)) {
              core.setFailed(`Template at "${templatePath}" does not contain the required marker:\n${marker}\nAdd it so we can detect duplicates safely.`);
              return;
            }

            // If the PR already contains the marker, do not append again.
            const body = pr.body || '';
            if (body.includes(marker)) {
              core.info('Template already present in PR body; nothing to inject.');
              return;
            }

            const newBody = (body ? body + '\n\n' : '') + templateText + '\n';
            await github.rest.pulls.update({ owner, repo, pull_number: number, body: newBody });
            core.notice('API Node template appended to PR description.');


================================================
FILE: .github/workflows/check-ai-co-authors.yml
================================================
name: Check AI Co-Authors

on:
  pull_request:
    branches: ['*']

jobs:
  check-ai-co-authors:
    name: Check for AI agent co-author trailers
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Check commits for AI co-author trailers
        run: bash .github/scripts/check-ai-co-authors.sh "${{ github.event.pull_request.base.sha }}" "${{ github.event.pull_request.head.sha }}"


================================================
FILE: .github/workflows/check-line-endings.yml
================================================
name: Check for Windows Line Endings

on:
  pull_request:
    branches: ['*'] # Trigger on all pull requests to any branch

jobs:
  check-line-endings:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Fetch all history to compare changes

      - name: Check for Windows line endings (CRLF)
        run: |
          # Get the list of changed files in the PR
          CHANGED_FILES=$(git diff --name-only ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }})

          # Flag to track if CRLF is found
          CRLF_FOUND=false

          # Loop through each changed file
          for FILE in $CHANGED_FILES; do
            # Check if the file exists and is a text file
            if [ -f "$FILE" ] && file "$FILE" | grep -q "text"; then
              # Check for CRLF line endings
              if grep -UP '\r$' "$FILE"; then
                echo "Error: Windows line endings (CRLF) detected in $FILE"
                CRLF_FOUND=true
              fi
            fi
          done

          # Exit with error if CRLF was found
          if [ "$CRLF_FOUND" = true ]; then
            exit 1
          fi
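
The `grep -UP '\r$'` check at the heart of this workflow can be tried locally on a throwaway file; a minimal sketch (a temp file stands in for a changed PR file):

```shell
# Minimal sketch of the CRLF detection above, run against a temp
# file instead of the PR's changed files. -U keeps grep from
# stripping carriage returns itself; -P enables Perl-style regex
# so \r is interpreted as a carriage return.
tmp=$(mktemp)
printf 'unix line\nwindows line\r\n' > "$tmp"
if grep -UP '\r$' "$tmp" > /dev/null; then
    echo "Error: Windows line endings (CRLF) detected in $tmp"
fi
rm -f "$tmp"
```

Only the second line of the temp file ends in CRLF, so the check fires once; a file written with plain `\n` endings passes silently.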


================================================
FILE: .github/workflows/pullrequest-ci-run.yml
================================================
# This is the GitHub Workflow that drives full-GPU-enabled tests of pull requests to ComfyUI, when the 'Run-CI-Test' label is added
# Results are reported as checkmarks on the commits, as well as onto https://ci.comfy.org/
name: Pull Request CI Workflow Runs
on:
    pull_request_target:
        types: [labeled]

jobs:
  pr-test-stable:
    if: ${{ github.event.label.name == 'Run-CI-Test' }}
    strategy:
      fail-fast: false
      matrix:
        os: [macos, linux, windows]
        python_version: ["3.9", "3.10", "3.11", "3.12"]
        cuda_version: ["12.1"]
        torch_version: ["stable"]
        include:
          - os: macos
            runner_label: [self-hosted, macOS]
            flags: "--use-pytorch-cross-attention"
          - os: linux
            runner_label: [self-hosted, Linux]
            flags: ""
          - os: windows
            runner_label: [self-hosted, Windows]
            flags: ""
    runs-on: ${{ matrix.runner_label }}
    steps:
      - name: Test Workflows
        uses: comfy-org/comfy-action@main
        with:
          os: ${{ matrix.os }}
          python_version: ${{ matrix.python_version }}
          torch_version: ${{ matrix.torch_version }}
          google_credentials: ${{ secrets.GCS_SERVICE_ACCOUNT_JSON }}
          comfyui_flags: ${{ matrix.flags }}
          use_prior_commit: 'true'
  comment:
    if: ${{ github.event.label.name == 'Run-CI-Test' }}
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - uses: actions/github-script@v6
        with:
          script: |
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: '(Automated Bot Message) CI Tests are running, you can view the results at https://ci.comfy.org/?branch=${{ github.event.pull_request.number }}%2Fmerge'
            })


================================================
FILE: .github/workflows/release-stable-all.yml
================================================
name: "Release Stable All Portable Versions"

on:
  workflow_dispatch:
    inputs:
      git_tag:
        description: 'Git tag'
        required: true
        type: string

jobs:
  release_nvidia_default:
    permissions:
      contents: "write"
      packages: "write"
      pull-requests: "read"
    name: "Release NVIDIA Default (cu130)"
    uses: ./.github/workflows/stable-release.yml
    with:
      git_tag: ${{ inputs.git_tag }}
      cache_tag: "cu130"
      python_minor: "13"
      python_patch: "11"
      rel_name: "nvidia"
      rel_extra_name: ""
      test_release: true
    secrets: inherit

  release_nvidia_cu128:
    permissions:
      contents: "write"
      packages: "write"
      pull-requests: "read"
    name: "Release NVIDIA cu128"
    uses: ./.github/workflows/stable-release.yml
    with:
      git_tag: ${{ inputs.git_tag }}
      cache_tag: "cu128"
      python_minor: "12"
      python_patch: "10"
      rel_name: "nvidia"
      rel_extra_name: "_cu128"
      test_release: true
    secrets: inherit

  release_nvidia_cu126:
    permissions:
      contents: "write"
      packages: "write"
      pull-requests: "read"
    name: "Release NVIDIA cu126"
    uses: ./.github/workflows/stable-release.yml
    with:
      git_tag: ${{ inputs.git_tag }}
      cache_tag: "cu126"
      python_minor: "12"
      python_patch: "10"
      rel_name: "nvidia"
      rel_extra_name: "_cu126"
      test_release: true
    secrets: inherit

  release_amd_rocm:
    permissions:
      contents: "write"
      packages: "write"
      pull-requests: "read"
    name: "Release AMD ROCm 7.2"
    uses: ./.github/workflows/stable-release.yml
    with:
      git_tag: ${{ inputs.git_tag }}
      cache_tag: "rocm72"
      python_minor: "12"
      python_patch: "10"
      rel_name: "amd"
      rel_extra_name: ""
      test_release: false
    secrets: inherit


================================================
FILE: .github/workflows/release-webhook.yml
================================================
name: Release Webhook

on:
  release:
    types: [published]

jobs:
  send-webhook:
    runs-on: ubuntu-latest
    env:
      DESKTOP_REPO_DISPATCH_TOKEN: ${{ secrets.DESKTOP_REPO_DISPATCH_TOKEN }}
    steps:
      - name: Send release webhook
        env:
          WEBHOOK_URL: ${{ secrets.RELEASE_GITHUB_WEBHOOK_URL }}
          WEBHOOK_SECRET: ${{ secrets.RELEASE_GITHUB_WEBHOOK_SECRET }}
        run: |
          # Generate UUID for delivery ID
          DELIVERY_ID=$(uuidgen)
          HOOK_ID="release-webhook-$(date +%s)"
          
          # Create webhook payload matching GitHub release webhook format
          PAYLOAD=$(cat <<EOF
          {
            "action": "published",
            "release": {
              "id": ${{ github.event.release.id }},
              "node_id": "${{ github.event.release.node_id }}",
              "url": "${{ github.event.release.url }}",
              "html_url": "${{ github.event.release.html_url }}",
              "assets_url": "${{ github.event.release.assets_url }}",
              "upload_url": "${{ github.event.release.upload_url }}",
              "tag_name": "${{ github.event.release.tag_name }}",
              "target_commitish": "${{ github.event.release.target_commitish }}",
              "name": ${{ toJSON(github.event.release.name) }},
              "body": ${{ toJSON(github.event.release.body) }},
              "draft": ${{ github.event.release.draft }},
              "prerelease": ${{ github.event.release.prerelease }},
              "created_at": "${{ github.event.release.created_at }}",
              "published_at": "${{ github.event.release.published_at }}",
              "author": {
                "login": "${{ github.event.release.author.login }}",
                "id": ${{ github.event.release.author.id }},
                "node_id": "${{ github.event.release.author.node_id }}",
                "avatar_url": "${{ github.event.release.author.avatar_url }}",
                "url": "${{ github.event.release.author.url }}",
                "html_url": "${{ github.event.release.author.html_url }}",
                "type": "${{ github.event.release.author.type }}",
                "site_admin": ${{ github.event.release.author.site_admin }}
              },
              "tarball_url": "${{ github.event.release.tarball_url }}",
              "zipball_url": "${{ github.event.release.zipball_url }}",
              "assets": ${{ toJSON(github.event.release.assets) }}
            },
            "repository": {
              "id": ${{ github.event.repository.id }},
              "node_id": "${{ github.event.repository.node_id }}",
              "name": "${{ github.event.repository.name }}",
              "full_name": "${{ github.event.repository.full_name }}",
              "private": ${{ github.event.repository.private }},
              "owner": {
                "login": "${{ github.event.repository.owner.login }}",
                "id": ${{ github.event.repository.owner.id }},
                "node_id": "${{ github.event.repository.owner.node_id }}",
                "avatar_url": "${{ github.event.repository.owner.avatar_url }}",
                "url": "${{ github.event.repository.owner.url }}",
                "html_url": "${{ github.event.repository.owner.html_url }}",
                "type": "${{ github.event.repository.owner.type }}",
                "site_admin": ${{ github.event.repository.owner.site_admin }}
              },
              "html_url": "${{ github.event.repository.html_url }}",
              "clone_url": "${{ github.event.repository.clone_url }}",
              "git_url": "${{ github.event.repository.git_url }}",
              "ssh_url": "${{ github.event.repository.ssh_url }}",
              "url": "${{ github.event.repository.url }}",
              "created_at": "${{ github.event.repository.created_at }}",
              "updated_at": "${{ github.event.repository.updated_at }}",
              "pushed_at": "${{ github.event.repository.pushed_at }}",
              "default_branch": "${{ github.event.repository.default_branch }}",
              "fork": ${{ github.event.repository.fork }}
            },
            "sender": {
              "login": "${{ github.event.sender.login }}",
              "id": ${{ github.event.sender.id }},
              "node_id": "${{ github.event.sender.node_id }}",
              "avatar_url": "${{ github.event.sender.avatar_url }}",
              "url": "${{ github.event.sender.url }}",
              "html_url": "${{ github.event.sender.html_url }}",
              "type": "${{ github.event.sender.type }}",
              "site_admin": ${{ github.event.sender.site_admin }}
            }
          }
          EOF
          )
          
          # Generate HMAC-SHA256 signature
          SIGNATURE=$(echo -n "$PAYLOAD" | openssl dgst -sha256 -hmac "$WEBHOOK_SECRET" -hex | cut -d' ' -f2)
          
          # Send webhook with required headers
          curl -X POST "$WEBHOOK_URL" \
            -H "Content-Type: application/json" \
            -H "X-GitHub-Event: release" \
            -H "X-GitHub-Delivery: $DELIVERY_ID" \
            -H "X-GitHub-Hook-ID: $HOOK_ID" \
            -H "X-Hub-Signature-256: sha256=$SIGNATURE" \
            -H "User-Agent: GitHub-Actions-Webhook/1.0" \
            -d "$PAYLOAD" \
            --fail --silent --show-error
          
          echo "✅ Release webhook sent successfully"

      - name: Send repository dispatch to desktop
        env:
          DISPATCH_TOKEN: ${{ env.DESKTOP_REPO_DISPATCH_TOKEN }}
          RELEASE_TAG: ${{ github.event.release.tag_name }}
          RELEASE_URL: ${{ github.event.release.html_url }}
        run: |
          set -euo pipefail

          if [ -z "${DISPATCH_TOKEN:-}" ]; then
            echo "::error::DESKTOP_REPO_DISPATCH_TOKEN is required but not set."
            exit 1
          fi

          PAYLOAD="$(jq -n \
            --arg release_tag "$RELEASE_TAG" \
            --arg release_url "$RELEASE_URL" \
            '{
              event_type: "comfyui_release_published",
              client_payload: {
                release_tag: $release_tag,
                release_url: $release_url
              }
            }')"

          curl -fsSL \
            -X POST \
            -H "Accept: application/vnd.github+json" \
            -H "Content-Type: application/json" \
            -H "Authorization: Bearer ${DISPATCH_TOKEN}" \
            https://api.github.com/repos/Comfy-Org/desktop/dispatches \
            -d "$PAYLOAD"

          echo "✅ Dispatched ComfyUI release ${RELEASE_TAG} to Comfy-Org/desktop"
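The signing step in this workflow can be exercised locally. A minimal sketch, assuming a made-up payload and secret (the real workflow signs the GitHub event JSON with the `WEBHOOK_SECRET` repository secret):

```shell
# Hypothetical payload and secret, for illustration only.
PAYLOAD='{"hello":"world"}'
WEBHOOK_SECRET='s3cret'

# Same construction as the workflow step: hex-encoded HMAC-SHA256 of the raw
# body, sent to the receiver as "X-Hub-Signature-256: sha256=<hex>".
SIGNATURE=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$WEBHOOK_SECRET" -hex | cut -d' ' -f2)
echo "sha256=$SIGNATURE"
```

The receiver recomputes the same HMAC over the raw request body and compares it to the header value, which is why the payload must be sent byte-for-byte as signed.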


================================================
FILE: .github/workflows/ruff.yml
================================================
name: Python Linting

on: [push, pull_request]

jobs:
  ruff:
    name: Run Ruff
    runs-on: ubuntu-latest

    steps:
    - name: Checkout repository
      uses: actions/checkout@v4

    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version: '3.x'

    - name: Install Ruff
      run: pip install ruff

    - name: Run Ruff
      run: ruff check .

  pylint:
    name: Run Pylint
    runs-on: ubuntu-latest

    steps:
    - name: Checkout repository
      uses: actions/checkout@v4

    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version: '3.12'

    - name: Install requirements
      run: |
        python -m pip install --upgrade pip
        pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
        pip install -r requirements.txt

    - name: Install Pylint
      run: pip install pylint

    - name: Run Pylint
      run: pylint comfy_api_nodes


================================================
FILE: .github/workflows/stable-release.yml
================================================

name: "Release Stable Version"

on:
  workflow_call:
    inputs:
      git_tag:
        description: 'Git tag'
        required: true
        type: string
      cache_tag:
        description: 'Cached dependencies tag'
        required: true
        type: string
        default: "cu129"
      python_minor:
        description: 'Python minor version'
        required: true
        type: string
        default: "13"
      python_patch:
        description: 'Python patch version'
        required: true
        type: string
        default: "6"
      rel_name:
        description: 'Release name'
        required: true
        type: string
        default: "nvidia"
      rel_extra_name:
        description: 'Release extra name'
        required: false
        type: string
        default: ""
      test_release:
        description: 'Test Release'
        required: true
        type: boolean
        default: true
  workflow_dispatch:
    inputs:
      git_tag:
        description: 'Git tag'
        required: true
        type: string
      cache_tag:
        description: 'Cached dependencies tag'
        required: true
        type: string
        default: "cu129"
      python_minor:
        description: 'Python minor version'
        required: true
        type: string
        default: "13"
      python_patch:
        description: 'Python patch version'
        required: true
        type: string
        default: "6"
      rel_name:
        description: 'Release name'
        required: true
        type: string
        default: "nvidia"
      rel_extra_name:
        description: 'Release extra name'
        required: false
        type: string
        default: ""
      test_release:
        description: 'Test Release'
        required: true
        type: boolean
        default: true

jobs:
  package_comfy_windows:
    permissions:
      contents: "write"
      packages: "write"
      pull-requests: "read"
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ inputs.git_tag }}
          fetch-depth: 150
          persist-credentials: false
      - uses: actions/cache/restore@v4
        id: cache
        with:
          path: |
            ${{ inputs.cache_tag }}_python_deps.tar
            update_comfyui_and_python_dependencies.bat
          key: ${{ runner.os }}-build-${{ inputs.cache_tag }}-${{ inputs.python_minor }}
      - shell: bash
        run: |
          mv ${{ inputs.cache_tag }}_python_deps.tar ../
          mv update_comfyui_and_python_dependencies.bat ../
          cd ..
          tar xf ${{ inputs.cache_tag }}_python_deps.tar
          pwd
          ls

      - shell: bash
        run: |
          cd ..
          cp -r ComfyUI ComfyUI_copy
          curl https://www.python.org/ftp/python/3.${{ inputs.python_minor }}.${{ inputs.python_patch }}/python-3.${{ inputs.python_minor }}.${{ inputs.python_patch }}-embed-amd64.zip -o python_embeded.zip
          unzip python_embeded.zip -d python_embeded
          cd python_embeded
          echo "Python minor version: ${{ inputs.python_minor }}"
          echo 'import site' >> ./python3${{ inputs.python_minor }}._pth
          curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
          ./python.exe get-pip.py
          ./python.exe -s -m pip install ../${{ inputs.cache_tag }}_python_deps/*

          grep comfy ../ComfyUI/requirements.txt > ./requirements_comfyui.txt
          ./python.exe -s -m pip install -r requirements_comfyui.txt
          rm requirements_comfyui.txt

          sed -i '1i../ComfyUI' ./python3${{ inputs.python_minor }}._pth

          if test -f ./Lib/site-packages/torch/lib/dnnl.lib; then
            rm ./Lib/site-packages/torch/lib/dnnl.lib #I don't think this is actually used and I need the space
            rm ./Lib/site-packages/torch/lib/libprotoc.lib
            rm ./Lib/site-packages/torch/lib/libprotobuf.lib
          fi

          cd ..

          git clone --depth 1 https://github.com/comfyanonymous/taesd
          cp taesd/*.safetensors ./ComfyUI_copy/models/vae_approx/

          mkdir ComfyUI_windows_portable
          mv python_embeded ComfyUI_windows_portable
          mv ComfyUI_copy ComfyUI_windows_portable/ComfyUI

          cd ComfyUI_windows_portable

          mkdir update
          cp -r ComfyUI/.ci/update_windows/* ./update/
          cp -r ComfyUI/.ci/windows_${{ inputs.rel_name }}_base_files/* ./
          cp ../update_comfyui_and_python_dependencies.bat ./update/

          cd ..

          "C:\Program Files\7-Zip\7z.exe" a -t7z -m0=lzma2 -mx=9 -mfb=128 -md=768m -ms=on -mf=BCJ2 ComfyUI_windows_portable.7z ComfyUI_windows_portable
          mv ComfyUI_windows_portable.7z ComfyUI/ComfyUI_windows_portable_${{ inputs.rel_name }}${{ inputs.rel_extra_name }}.7z

      - shell: bash
        if: ${{ inputs.test_release }}
        run: |
          cd ..
          cd ComfyUI_windows_portable
          python_embeded/python.exe -s ComfyUI/main.py --quick-test-for-ci --cpu

          python_embeded/python.exe -s ./update/update.py ComfyUI/

          ls

      - name: Upload binaries to release
        uses: softprops/action-gh-release@v2
        with:
          files: ComfyUI_windows_portable_${{ inputs.rel_name }}${{ inputs.rel_extra_name }}.7z
          tag_name: ${{ inputs.git_tag }}
          draft: true
          overwrite_files: true
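The embedded-Python path surgery above (appending `import site`, then prepending the ComfyUI directory with `sed -i '1i...'`) can be sketched against a throwaway `._pth` file; the file name and initial contents below are illustrative stand-ins for the stock embedded distribution:

```shell
cd "$(mktemp -d)"

# Stand-in for the ._pth file shipped with the embeddable Python zip.
printf 'python313.zip\n.\n' > python313._pth

# Enable site-packages, as the packaging step does before running get-pip.py.
echo 'import site' >> ./python313._pth

# Prepend ../ComfyUI so the embedded interpreter can import ComfyUI modules.
sed -i '1i../ComfyUI' ./python313._pth
cat python313._pth
```

The `._pth` file fully replaces `sys.path` for the embedded interpreter, so both lines are required: the prepended path for ComfyUI itself and `import site` for the pip-installed dependencies.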


================================================
FILE: .github/workflows/stale-issues.yml
================================================
name: 'Close stale issues'
on:
  schedule:
    # Run daily at 4:30 am PT
    - cron: '30 11 * * *'
permissions:
  issues: write

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v9
        with:
          stale-issue-message: "This issue is being marked stale because it has not had any activity for 30 days. Reply below within 7 days if your issue still isn't solved, and it will be left open. Otherwise, the issue will be closed automatically."
          days-before-stale: 30
          days-before-close: 7
          stale-issue-label: 'Stale'
          only-labels: 'User Support'
          exempt-all-assignees: true
          exempt-all-milestones: true


================================================
FILE: .github/workflows/test-build.yml
================================================
name: Build package

#
# This workflow is a test of the Python package build:
# it installs the Python dependencies across several Python versions.
#

on:
  push:
    paths:
      - "requirements.txt"
      - ".github/workflows/test-build.yml"

jobs:
  build:
    name: Build Test
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.10", "3.11", "3.12", "3.13", "3.14"]
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt


================================================
FILE: .github/workflows/test-ci.yml
================================================
# This is the GitHub Workflow that drives automatic full-GPU-enabled tests of all new commits to the master branch of ComfyUI
# Results are reported as checkmarks on the commits, as well as to https://ci.comfy.org/
name: Full Comfy CI Workflow Runs
on:
  push:
    branches:
      - master
      - release/**
    paths-ignore:
      - 'app/**'
      - 'input/**'
      - 'output/**'
      - 'notebooks/**'
      - 'script_examples/**'
      - '.github/**'
      - 'web/**'
  workflow_dispatch:

jobs:
  test-stable:
    strategy:
      fail-fast: false
      matrix:
        # os: [macos, linux, windows]
        # os: [macos, linux]
        os: [linux]
        python_version: ["3.10", "3.11", "3.12"]
        cuda_version: ["12.1"]
        torch_version: ["stable"]
        include:
          # - os: macos
          #   runner_label: [self-hosted, macOS]
          #   flags: "--use-pytorch-cross-attention"
          - os: linux
            runner_label: [self-hosted, Linux]
            flags: ""
          # - os: windows
          #   runner_label: [self-hosted, Windows]
          #   flags: ""
    runs-on: ${{ matrix.runner_label }}
    steps:
      - name: Test Workflows
        uses: comfy-org/comfy-action@main
        with:
          os: ${{ matrix.os }}
          python_version: ${{ matrix.python_version }}
          torch_version: ${{ matrix.torch_version }}
          google_credentials: ${{ secrets.GCS_SERVICE_ACCOUNT_JSON }}
          comfyui_flags: ${{ matrix.flags }}

  # test-win-nightly:
  #   strategy:
  #     fail-fast: true
  #     matrix:
  #       os: [windows]
  #       python_version: ["3.9", "3.10", "3.11", "3.12"]
  #       cuda_version: ["12.1"]
  #       torch_version: ["nightly"]
  #       include:
  #         - os: windows
  #           runner_label: [self-hosted, Windows]
  #           flags: ""
  #   runs-on: ${{ matrix.runner_label }}
  #   steps:
  #     - name: Test Workflows
  #       uses: comfy-org/comfy-action@main
  #       with:
  #         os: ${{ matrix.os }}
  #         python_version: ${{ matrix.python_version }}
  #         torch_version: ${{ matrix.torch_version }}
  #         google_credentials: ${{ secrets.GCS_SERVICE_ACCOUNT_JSON }}
  #         comfyui_flags: ${{ matrix.flags }}

  test-unix-nightly:
    strategy:
      fail-fast: false
      matrix:
        # os: [macos, linux]
        os: [linux]
        python_version: ["3.11"]
        cuda_version: ["12.1"]
        torch_version: ["nightly"]
        include:
          # - os: macos
          #   runner_label: [self-hosted, macOS]
          #   flags: "--use-pytorch-cross-attention"
          - os: linux
            runner_label: [self-hosted, Linux]
            flags: ""
    runs-on: ${{ matrix.runner_label }}
    steps:
      - name: Test Workflows
        uses: comfy-org/comfy-action@main
        with:
          os: ${{ matrix.os }}
          python_version: ${{ matrix.python_version }}
          torch_version: ${{ matrix.torch_version }}
          google_credentials: ${{ secrets.GCS_SERVICE_ACCOUNT_JSON }}
          comfyui_flags: ${{ matrix.flags }}


================================================
FILE: .github/workflows/test-execution.yml
================================================
name: Execution Tests

on:
  push:
    branches: [ main, master, release/** ]
  pull_request:
    branches: [ main, master, release/** ]

jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    continue-on-error: true
    steps:
    - uses: actions/checkout@v4
    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version: '3.12'
    - name: Install requirements
      run: |
        python -m pip install --upgrade pip
        pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
        pip install -r requirements.txt
        pip install -r tests-unit/requirements.txt
    - name: Run Execution Tests
      run: |
        python -m pytest tests/execution -v --skip-timing-checks


================================================
FILE: .github/workflows/test-launch.yml
================================================
name: Test server launches without errors

on:
  push:
    branches: [ main, master, release/** ]
  pull_request:
    branches: [ main, master, release/** ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout ComfyUI
      uses: actions/checkout@v4
      with:
        repository: "Comfy-Org/ComfyUI"
        path: "ComfyUI"
    - uses: actions/setup-python@v4
      with:
        python-version: '3.10'
    - name: Install requirements
      run: |
        python -m pip install --upgrade pip
        pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
        pip install -r requirements.txt
        pip install wait-for-it
      working-directory: ComfyUI
    - name: Start ComfyUI server
      run: |
        python main.py --cpu 2>&1 | tee console_output.log &
        wait-for-it --service 127.0.0.1:8188 -t 30
      working-directory: ComfyUI
    - name: Check for unhandled exceptions in server log
      run: |
        grep -v "Found comfy_kitchen backend triton: {'available': False, 'disabled': True, 'unavailable_reason': \"ImportError: No module named 'triton'\", 'capabilities': \[\]}" console_output.log | grep -v "Found comfy_kitchen backend triton: {'available': False, 'disabled': False, 'unavailable_reason': \"ImportError: No module named 'triton'\", 'capabilities': \[\]}" > console_output_filtered.log
        cat console_output_filtered.log
        if grep -qE "Exception|Error" console_output_filtered.log; then
          echo "Unhandled exception/error found in server log."
          exit 1
        fi
      working-directory: ComfyUI
    - uses: actions/upload-artifact@v4
      if: always()
      with:
        name: console-output
        path: ComfyUI/console_output.log
        retention-days: 30


================================================
FILE: .github/workflows/test-unit.yml
================================================
name: Unit Tests

on:
  push:
    branches: [ main, master, release/** ]
  pull_request:
    branches: [ main, master, release/** ]

jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-2022, macos-latest]
    runs-on: ${{ matrix.os }}
    continue-on-error: true
    steps:
    - uses: actions/checkout@v4
    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version: '3.12'
    - name: Install requirements
      run: |
        python -m pip install --upgrade pip
        pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
        pip install -r requirements.txt
    - name: Run Unit Tests
      run: |
        pip install -r tests-unit/requirements.txt
        python -m pytest tests-unit


================================================
FILE: .github/workflows/update-api-stubs.yml
================================================
name: Generate Pydantic Stubs from api.comfy.org

on:
  schedule:
    - cron: '0 0 * * 1'
  workflow_dispatch:

jobs:
  generate-models:
    runs-on: ubuntu-latest
    
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install 'datamodel-code-generator[http]'
          npm install @redocly/cli
      
      - name: Download OpenAPI spec
        run: |
          curl -o openapi.yaml https://api.comfy.org/openapi
      
      - name: Filter OpenAPI spec with Redocly
        run: |
          npx @redocly/cli bundle openapi.yaml --output filtered-openapi.yaml --config comfy_api_nodes/redocly.yaml --remove-unused-components
      
      - name: Generate API models
        run: |
          datamodel-codegen --use-subclass-enum --input filtered-openapi.yaml --output comfy_api_nodes/apis --output-model-type pydantic_v2.BaseModel
      
      - name: Check for changes
        id: git-check
        run: |
          git diff --exit-code comfy_api_nodes/apis || echo "changes=true" >> $GITHUB_OUTPUT
      
      - name: Create Pull Request
        if: steps.git-check.outputs.changes == 'true'
        uses: peter-evans/create-pull-request@v5
        with:
          commit-message: 'chore: update API models from OpenAPI spec'
          title: 'Update API models from api.comfy.org'
          body: |
            This PR updates the API models based on the latest api.comfy.org OpenAPI specification.
            
            Generated automatically by a GitHub workflow.
          branch: update-api-stubs
          delete-branch: true
          base: master
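The change-detection idiom in the `git-check` step can be reproduced outside Actions. A sketch with a throwaway repository and a simulated `$GITHUB_OUTPUT` file (paths and file contents here are illustrative):

```shell
# Simulate the Actions environment with a temp output file and a scratch repo.
GITHUB_OUTPUT=$(mktemp)
REPO=$(mktemp -d)
cd "$REPO"
git init -q .
mkdir -p comfy_api_nodes/apis
echo "v1" > comfy_api_nodes/apis/models.py
git add -A
git -c user.email=ci@example.com -c user.name=ci commit -qm "init"

# Simulate datamodel-codegen regenerating the models.
echo "v2" > comfy_api_nodes/apis/models.py

# Same idiom as the workflow: a non-zero diff exit code records changes=true.
git diff --exit-code comfy_api_nodes/apis || echo "changes=true" >> "$GITHUB_OUTPUT"
cat "$GITHUB_OUTPUT"
```

When the tree is unchanged, `git diff --exit-code` exits 0 and nothing is written, so the `Create Pull Request` step's `if:` condition is skipped.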


================================================
FILE: .github/workflows/update-ci-container.yml
================================================
name: "CI: Update CI Container"

on:
  release:
    types: [published]
  workflow_dispatch:
    inputs:
      version:
        description: 'ComfyUI version (e.g., v0.7.0)'
        required: true
        type: string

jobs:
  update-ci-container:
    runs-on: ubuntu-latest
    # Skip pre-releases unless manually triggered
    if: github.event_name == 'workflow_dispatch' || !github.event.release.prerelease
    steps:
      - name: Get version
        id: version
        run: |
          if [ "${{ github.event_name }}" = "release" ]; then
            VERSION="${{ github.event.release.tag_name }}"
          else
            VERSION="${{ inputs.version }}"
          fi
          echo "version=$VERSION" >> $GITHUB_OUTPUT

      - name: Checkout comfyui-ci-container
        uses: actions/checkout@v4
        with:
          repository: comfy-org/comfyui-ci-container
          token: ${{ secrets.CI_CONTAINER_PAT }}

      - name: Check current version
        id: current
        run: |
          CURRENT=$(grep -oP 'ARG COMFYUI_VERSION=\K.*' Dockerfile || echo "unknown")
          echo "current_version=$CURRENT" >> $GITHUB_OUTPUT

      - name: Update Dockerfile
        run: |
          VERSION="${{ steps.version.outputs.version }}"
          sed -i "s/^ARG COMFYUI_VERSION=.*/ARG COMFYUI_VERSION=${VERSION}/" Dockerfile

      - name: Create Pull Request
        id: create-pr
        uses: peter-evans/create-pull-request@v7
        with:
          token: ${{ secrets.CI_CONTAINER_PAT }}
          branch: automation/comfyui-${{ steps.version.outputs.version }}
          title: "chore: bump ComfyUI to ${{ steps.version.outputs.version }}"
          body: |
            Updates ComfyUI version from `${{ steps.current.outputs.current_version }}` to `${{ steps.version.outputs.version }}`

            **Triggered by:** ${{ github.event_name == 'release' && format('[Release {0}]({1})', github.event.release.tag_name, github.event.release.html_url) || 'Manual workflow dispatch' }}

          labels: automation
          commit-message: "chore: bump ComfyUI to ${{ steps.version.outputs.version }}"
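The grep/sed pair in the `Check current version` and `Update Dockerfile` steps can be dry-run against a throwaway Dockerfile; the version strings below are placeholders:

```shell
cd "$(mktemp -d)"
VERSION="v0.7.0"   # placeholder for steps.version.outputs.version

printf 'FROM ubuntu:24.04\nARG COMFYUI_VERSION=v0.6.0\n' > Dockerfile

# Read the currently pinned version (\K keeps only the text after the match).
CURRENT=$(grep -oP 'ARG COMFYUI_VERSION=\K.*' Dockerfile || echo "unknown")
echo "current: $CURRENT"

# Rewrite the pin in place, exactly as the workflow's sed invocation does.
sed -i "s/^ARG COMFYUI_VERSION=.*/ARG COMFYUI_VERSION=${VERSION}/" Dockerfile
grep COMFYUI_VERSION Dockerfile
```

Note `grep -oP` relies on GNU grep's PCRE support, which is available on the `ubuntu-latest` runners this job targets.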


================================================
FILE: .github/workflows/update-version.yml
================================================
name: Update Version File

on:
  pull_request:
    paths:
      - "pyproject.toml"
    branches:
      - master
      - release/**

jobs:
  update-version:
    runs-on: ubuntu-latest
    # Don't run on fork PRs
    if: github.event.pull_request.head.repo.full_name == github.repository
    permissions:
      pull-requests: write
      contents: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.11"

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip

      - name: Update comfyui_version.py
        run: |
          # Read version from pyproject.toml and update comfyui_version.py
          python -c '
          import tomllib

          # Read version from pyproject.toml
          with open("pyproject.toml", "rb") as f:
              config = tomllib.load(f)
              version = config["project"]["version"]

          # Write version to comfyui_version.py
          with open("comfyui_version.py", "w") as f:
              f.write("# This file is automatically generated by the build process when version is\n")
              f.write("# updated in pyproject.toml.\n")
              f.write(f"__version__ = \"{version}\"\n")
          '

      - name: Commit changes
        run: |
          git config --local user.name "github-actions"
          git config --local user.email "github-actions@github.com"
          git fetch origin ${{ github.head_ref }}
          git checkout -B ${{ github.head_ref }} origin/${{ github.head_ref }}
          git add comfyui_version.py
          git diff --quiet && git diff --staged --quiet || git commit -m "chore: Update comfyui_version.py to match pyproject.toml"
          git push origin HEAD:${{ github.head_ref }}


================================================
FILE: .github/workflows/windows_release_dependencies.yml
================================================
name: "Windows Release dependencies"

on:
  workflow_dispatch:
    inputs:
      xformers:
        description: 'xformers version'
        required: false
        type: string
        default: ""
      extra_dependencies:
        description: 'extra dependencies'
        required: false
        type: string
        default: ""
      cu:
        description: 'cuda version'
        required: true
        type: string
        default: "130"

      python_minor:
        description: 'python minor version'
        required: true
        type: string
        default: "13"

      python_patch:
        description: 'python patch version'
        required: true
        type: string
        default: "11"
#  push:
#    branches:
#      - master

jobs:
  build_dependencies:
    runs-on: windows-latest
    steps:
        - uses: actions/checkout@v4
        - uses: actions/setup-python@v5
          with:
            python-version: 3.${{ inputs.python_minor }}.${{ inputs.python_patch }}

        - shell: bash
          run: |
            echo "@echo off
            call update_comfyui.bat nopause
            echo -
            echo This will try to update pytorch and all python dependencies.
            echo -
            echo If you just want to update normally, close this and run update_comfyui.bat instead.
            echo -
            pause
            ..\python_embeded\python.exe -s -m pip install --upgrade torch torchvision torchaudio ${{ inputs.xformers }} --extra-index-url https://download.pytorch.org/whl/cu${{ inputs.cu }} -r ../ComfyUI/requirements.txt pygit2
            pause" > update_comfyui_and_python_dependencies.bat

            grep -v comfyui requirements.txt > requirements_nocomfyui.txt
            python -m pip wheel --no-cache-dir torch torchvision torchaudio ${{ inputs.xformers }} ${{ inputs.extra_dependencies }} --extra-index-url https://download.pytorch.org/whl/cu${{ inputs.cu }} -r requirements_nocomfyui.txt pygit2 -w ./temp_wheel_dir
            python -m pip install --no-cache-dir ./temp_wheel_dir/*
            echo installed basic
            ls -lah temp_wheel_dir
            mv temp_wheel_dir cu${{ inputs.cu }}_python_deps
            tar cf cu${{ inputs.cu }}_python_deps.tar cu${{ inputs.cu }}_python_deps

        - uses: actions/cache/save@v4
          with:
            path: |
              cu${{ inputs.cu }}_python_deps.tar
              update_comfyui_and_python_dependencies.bat
            key: ${{ runner.os }}-build-cu${{ inputs.cu }}-${{ inputs.python_minor }}


================================================
FILE: .github/workflows/windows_release_dependencies_manual.yml
================================================
name: "Windows Release dependencies Manual"

on:
  workflow_dispatch:
    inputs:
      torch_dependencies:
        description: 'torch dependencies'
        required: false
        type: string
        default: "torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu128"
      cache_tag:
        description: 'Cached dependencies tag'
        required: true
        type: string
        default: "cu128"

      python_minor:
        description: 'python minor version'
        required: true
        type: string
        default: "12"

      python_patch:
        description: 'python patch version'
        required: true
        type: string
        default: "10"

jobs:
  build_dependencies:
    runs-on: windows-latest
    steps:
        - uses: actions/checkout@v4
        - uses: actions/setup-python@v5
          with:
            python-version: 3.${{ inputs.python_minor }}.${{ inputs.python_patch }}

        - shell: bash
          run: |
            echo "@echo off
            call update_comfyui.bat nopause
            echo -
            echo This will try to update pytorch and all python dependencies.
            echo -
            echo If you just want to update normally, close this and run update_comfyui.bat instead.
            echo -
            pause
            ..\python_embeded\python.exe -s -m pip install --upgrade ${{ inputs.torch_dependencies }} -r ../ComfyUI/requirements.txt pygit2
            pause" > update_comfyui_and_python_dependencies.bat

            grep -v comfyui requirements.txt > requirements_nocomfyui.txt
            python -m pip wheel --no-cache-dir ${{ inputs.torch_dependencies }} -r requirements_nocomfyui.txt pygit2 -w ./temp_wheel_dir
            python -m pip install --no-cache-dir ./temp_wheel_dir/*
            echo installed basic
            ls -lah temp_wheel_dir
            mv temp_wheel_dir ${{ inputs.cache_tag }}_python_deps
            tar cf ${{ inputs.cache_tag }}_python_deps.tar ${{ inputs.cache_tag }}_python_deps

        - uses: actions/cache/save@v4
          with:
            path: |
              ${{ inputs.cache_tag }}_python_deps.tar
              update_comfyui_and_python_dependencies.bat
            key: ${{ runner.os }}-build-${{ inputs.cache_tag }}-${{ inputs.python_minor }}


================================================
FILE: .github/workflows/windows_release_nightly_pytorch.yml
================================================
name: "Windows Release Nightly pytorch"

on:
  workflow_dispatch:
    inputs:
      cu:
        description: 'cuda version'
        required: true
        type: string
        default: "129"

      python_minor:
        description: 'python minor version'
        required: true
        type: string
        default: "13"

      python_patch:
        description: 'python patch version'
        required: true
        type: string
        default: "5"
#  push:
#    branches:
#      - master

jobs:
  build:
    permissions:
        contents: "write"
        packages: "write"
        pull-requests: "read"
    runs-on: windows-latest
    steps:
        - uses: actions/checkout@v4
          with:
            fetch-depth: 30
            persist-credentials: false
        - uses: actions/setup-python@v5
          with:
            python-version: 3.${{ inputs.python_minor }}.${{ inputs.python_patch }}
        - shell: bash
          run: |
            cd ..
            cp -r ComfyUI ComfyUI_copy
            curl https://www.python.org/ftp/python/3.${{ inputs.python_minor }}.${{ inputs.python_patch }}/python-3.${{ inputs.python_minor }}.${{ inputs.python_patch }}-embed-amd64.zip -o python_embeded.zip
            unzip python_embeded.zip -d python_embeded
            cd python_embeded
            echo 'import site' >> ./python3${{ inputs.python_minor }}._pth
            curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
            ./python.exe get-pip.py
            python -m pip wheel torch torchvision torchaudio --pre --extra-index-url https://download.pytorch.org/whl/nightly/cu${{ inputs.cu }} -r ../ComfyUI/requirements.txt pygit2 -w ../temp_wheel_dir
            ls ../temp_wheel_dir
            ./python.exe -s -m pip install --pre ../temp_wheel_dir/*
            sed -i '1i../ComfyUI' ./python3${{ inputs.python_minor }}._pth

            rm ./Lib/site-packages/torch/lib/dnnl.lib #I don't think this is actually used and I need the space
            cd ..

            git clone --depth 1 https://github.com/comfyanonymous/taesd
            cp taesd/*.safetensors ./ComfyUI_copy/models/vae_approx/

            mkdir ComfyUI_windows_portable_nightly_pytorch
            mv python_embeded ComfyUI_windows_portable_nightly_pytorch
            mv ComfyUI_copy ComfyUI_windows_portable_nightly_pytorch/ComfyUI

            cd ComfyUI_windows_portable_nightly_pytorch

            mkdir update
            cp -r ComfyUI/.ci/update_windows/* ./update/
            cp -r ComfyUI/.ci/windows_nvidia_base_files/* ./
            cp -r ComfyUI/.ci/windows_nightly_base_files/* ./

            echo "call update_comfyui.bat nopause
            ..\python_embeded\python.exe -s -m pip install --upgrade --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cu${{ inputs.cu }} -r ../ComfyUI/requirements.txt pygit2
            pause" > ./update/update_comfyui_and_python_dependencies.bat
            cd ..

            "C:\Program Files\7-Zip\7z.exe" a -t7z -m0=lzma2 -mx=9 -mfb=128 -md=512m -ms=on -mf=BCJ2 ComfyUI_windows_portable_nightly_pytorch.7z ComfyUI_windows_portable_nightly_pytorch
            mv ComfyUI_windows_portable_nightly_pytorch.7z ComfyUI/ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch.7z

            cd ComfyUI_windows_portable_nightly_pytorch
            python_embeded/python.exe -s ComfyUI/main.py --quick-test-for-ci --cpu

            ls

        - name: Upload binaries to release
          uses: svenstaro/upload-release-action@v2
          with:
                repo_token: ${{ secrets.GITHUB_TOKEN }}
                file: ComfyUI_windows_portable_nvidia_or_cpu_nightly_pytorch.7z
                tag: "latest"
                overwrite: true


================================================
FILE: .github/workflows/windows_release_package.yml
================================================
name: "Windows Release packaging"

on:
  workflow_dispatch:
    inputs:
      cu:
        description: 'cuda version'
        required: true
        type: string
        default: "129"

      python_minor:
        description: 'python minor version'
        required: true
        type: string
        default: "13"

      python_patch:
        description: 'python patch version'
        required: true
        type: string
        default: "6"
#  push:
#    branches:
#      - master

jobs:
  package_comfyui:
    permissions:
        contents: "write"
        packages: "write"
        pull-requests: "read"
    runs-on: windows-latest
    steps:
        - uses: actions/cache/restore@v4
          id: cache
          with:
            path: |
              cu${{ inputs.cu }}_python_deps.tar
              update_comfyui_and_python_dependencies.bat
            key: ${{ runner.os }}-build-cu${{ inputs.cu }}-${{ inputs.python_minor }}
        - shell: bash
          run: |
            mv cu${{ inputs.cu }}_python_deps.tar ../
            mv update_comfyui_and_python_dependencies.bat ../
            cd ..
            tar xf cu${{ inputs.cu }}_python_deps.tar
            pwd
            ls

        - uses: actions/checkout@v4
          with:
            fetch-depth: 150
            persist-credentials: false
        - shell: bash
          run: |
            cd ..
            cp -r ComfyUI ComfyUI_copy
            curl https://www.python.org/ftp/python/3.${{ inputs.python_minor }}.${{ inputs.python_patch }}/python-3.${{ inputs.python_minor }}.${{ inputs.python_patch }}-embed-amd64.zip -o python_embeded.zip
            unzip python_embeded.zip -d python_embeded
            cd python_embeded
            echo 'import site' >> ./python3${{ inputs.python_minor }}._pth
            curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
            ./python.exe get-pip.py
            ./python.exe -s -m pip install ../cu${{ inputs.cu }}_python_deps/*
            sed -i '1i../ComfyUI' ./python3${{ inputs.python_minor }}._pth

            rm ./Lib/site-packages/torch/lib/dnnl.lib # I don't think this is actually used and I need the space
            rm ./Lib/site-packages/torch/lib/libprotoc.lib
            rm ./Lib/site-packages/torch/lib/libprotobuf.lib
            cd ..

            git clone --depth 1 https://github.com/comfyanonymous/taesd
            cp taesd/*.safetensors ./ComfyUI_copy/models/vae_approx/

            mkdir ComfyUI_windows_portable
            mv python_embeded ComfyUI_windows_portable
            mv ComfyUI_copy ComfyUI_windows_portable/ComfyUI

            cd ComfyUI_windows_portable

            mkdir update
            cp -r ComfyUI/.ci/update_windows/* ./update/
            cp -r ComfyUI/.ci/windows_nvidia_base_files/* ./
            cp ../update_comfyui_and_python_dependencies.bat ./update/

            cd ..

            "C:\Program Files\7-Zip\7z.exe" a -t7z -m0=lzma2 -mx=9 -mfb=128 -md=768m -ms=on -mf=BCJ2 ComfyUI_windows_portable.7z ComfyUI_windows_portable
            mv ComfyUI_windows_portable.7z ComfyUI/new_ComfyUI_windows_portable_nvidia_cu${{ inputs.cu }}_or_cpu.7z

            cd ComfyUI_windows_portable
            python_embeded/python.exe -s ComfyUI/main.py --quick-test-for-ci --cpu

            python_embeded/python.exe -s ./update/update.py ComfyUI/

            ls

        - name: Upload binaries to release
          uses: svenstaro/upload-release-action@v2
          with:
                repo_token: ${{ secrets.GITHUB_TOKEN }}
                file: new_ComfyUI_windows_portable_nvidia_cu${{ inputs.cu }}_or_cpu.7z
                tag: "latest"
                overwrite: true



================================================
FILE: .gitignore
================================================
__pycache__/
*.py[cod]
/output/
/input/
!/input/example.png
/models/
/temp/
/custom_nodes/
!custom_nodes/example_node.py.example
extra_model_paths.yaml
/.vs
.vscode/
.idea/
venv*/
.venv/
/web/extensions/*
!/web/extensions/logging.js.example
!/web/extensions/core/
/tests-ui/data/object_info.json
/user/
*.log
web_custom_versions/
.DS_Store
openapi.yaml
filtered-openapi.yaml
uv.lock


================================================
FILE: CODEOWNERS
================================================
# Admins
* @comfyanonymous @kosinkadink @guill


================================================
FILE: CONTRIBUTING.md
================================================
# Contributing to ComfyUI

Welcome, and thank you for your interest in contributing to ComfyUI!

There are several ways in which you can contribute, beyond writing code. The goal of this document is to provide a high-level overview of how you can get involved.

## Asking Questions

Have a question? Instead of opening an issue, please ask in the [Discord](https://comfy.org/discord) or [Matrix](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) channels. Our team and the community will help you.

## Providing Feedback

Your comments and feedback are welcome, and the development team is available through several channels.

See the `#bug-report`, `#feature-request`, and `#feedback` channels on Discord.

## Reporting Issues

Have you identified a reproducible problem in ComfyUI? Do you have a feature request? We want to hear about it! Here's how to report your issue as effectively as possible.


### Look For an Existing Issue

Before you create a new issue, please do a search in [open issues](https://github.com/comfyanonymous/ComfyUI/issues) to see if the issue or feature request has already been filed.

If you find your issue already exists, make relevant comments and add your [reaction](https://github.com/blog/2119-add-reactions-to-pull-requests-issues-and-comments). Use a reaction in place of a "+1" comment:

* 👍 - upvote
* 👎 - downvote

If you cannot find an existing issue that describes your bug or feature, create a new issue. We have an issue template in place to organize new issues.


### Creating Pull Requests

* Please refer to the article on [creating pull requests](https://github.com/comfyanonymous/ComfyUI/wiki/How-to-Contribute-Code) and contributing to this project.


## Thank You

Your contributions to open source, large or small, make great projects like this possible. Thank you for taking the time to contribute.


================================================
FILE: LICENSE
================================================
                    GNU GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU General Public License is a free, copyleft license for
software and other kinds of works.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.  By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.  We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors.  You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights.  Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received.  You must make sure that they, too, receive
or can get the source code.  And you must show them these terms so they
know their rights.

  Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

  For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software.  For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

  Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so.  This is fundamentally incompatible with the aim of
protecting users' freedom to change the software.  The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable.  Therefore, we
have designed this version of the GPL to prohibit the practice for those
products.  If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

  Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary.  To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License.  Each licensee is addressed as "you".  "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy.  The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy.  Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies.  Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7.  This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy.  This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged.  This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit.  Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source.  This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge.  You need not require recipients to copy the
    Corresponding Source along with the object code.  If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source.  Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling.  In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage.  For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product.  A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source.  The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information.  But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed.  Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law.  If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it.  (Additional permissions may be written to require their own
removal in certain cases when you modify the work.)  You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10.  If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term.  If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License.  Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License.  If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program.  Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance.  However,
nothing other than this License grants you permission to propagate or
modify any covered work.  These actions infringe copyright if you do
not accept this License.  Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License.  You are not responsible
for enforcing compliance by third parties with this License.

  An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations.  If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

  You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License.  For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

  11. Patents.

  A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based.  The
work thus licensed is called the contributor's "contributor version".

  A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version.  For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

  Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

  In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement).  To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

  If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients.  "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

  If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

  A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License.  You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

  Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

  12. No Surrender of Others' Freedom.

  If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all.  For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

  13. Use with the GNU Affero General Public License.

  Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work.  The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.

  14. Revised Versions of this License.

  The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time.  Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

  Each version is given a distinguishing version number.  If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation.  If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.

  If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

  Later license versions may give you additional or different
permissions.  However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

  15. Disclaimer of Warranty.

  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  16. Limitation of Liability.

  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

  17. Interpretation of Sections 15 and 16.

  If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.

                     END OF TERMS AND CONDITIONS

            How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <https://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

  If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:

    <program>  Copyright (C) <year>  <name of author>
    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License.  Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".

  You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.

  The GNU General Public License does not permit incorporating your program
into proprietary programs.  If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library.  If this is what you want to do, use the GNU Lesser General
Public License instead of this License.  But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.


================================================
FILE: QUANTIZATION.md
================================================
# The Comfy guide to Quantization


## How does quantization work?

Quantization aims to map a high-precision value x_f to a lower-precision format with minimal loss in accuracy. These smaller formats reduce the model's memory footprint and increase throughput by using specialized hardware.

When simply converting a value from FP16 to FP8 using round-to-nearest, we can hit two issues:
- The dynamic range of FP16 (-65,504, 65,504) far exceeds that of FP8 formats like E4M3 (-448, 448) or E5M2 (-57,344, 57,344), potentially resulting in clipped values
- The original values are often concentrated in a small range (e.g. -1, 1), leaving many FP8 bits "unused"

By applying a scaling factor, we map these values into the range of the quantized dtype, making use of its full spectrum. One of the simplest and most common approaches is per-tensor absolute-maximum scaling.

```
absmax = max(abs(tensor))
scale = absmax / max_dynamic_range_low_precision

# Quantization
tensor_q = (tensor / scale).to(low_precision_dtype)

# De-Quantization
tensor_dq = tensor_q.to(fp16) * scale

tensor_dq ~ tensor
```

Given that additional information (scaling factor) is needed to "interpret" the quantized values, we describe those as derived datatypes.
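The per-tensor absolute-maximum scheme above can be sketched as runnable plain Python, using a symmetric int8-style range (±127) as a stand-in for an FP8 format; the function names here are illustrative, not ComfyUI APIs:

```python
def quantize_absmax(values, qmax=127):
    """Per-tensor absolute-maximum quantization to a symmetric integer range."""
    absmax = max(abs(v) for v in values)
    scale = absmax / qmax if absmax > 0 else 1.0
    qdata = [round(v / scale) for v in values]  # values now span roughly [-qmax, qmax]
    return qdata, scale

def dequantize_absmax(qdata, scale):
    """Map the stored integers back to approximate original values."""
    return [q * scale for q in qdata]

tensor = [0.5, -0.25, 0.9, -1.0]
qdata, scale = quantize_absmax(tensor)
tensor_dq = dequantize_absmax(qdata, scale)
# each element of tensor_dq is within about scale / 2 of the original
```

Note that the largest-magnitude value maps exactly to ±qmax, so the full quantized range is used regardless of how the original values are distributed.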


## Quantization in Comfy

```
QuantizedTensor (torch.Tensor subclass)
  ↓ __torch_dispatch__
Two-Level Registry (generic + layout handlers)
  ↓
MixedPrecisionOps + Metadata Detection
```

### Representation

To represent these derived datatypes, ComfyUI implements the `QuantizedTensor` class, a subclass of `torch.Tensor`, found in `comfy/quant_ops.py`.

A `Layout` class defines how a specific quantization format behaves:
- Required parameters
- Quantize method
- De-Quantize method

```python
from comfy.quant_ops import QuantizedLayout

class MyLayout(QuantizedLayout):
    @classmethod
    def quantize(cls, tensor, **kwargs):
        # Convert to quantized format
        qdata = ...
        params = {'scale': ..., 'orig_dtype': tensor.dtype}
        return qdata, params
    
    @staticmethod
    def dequantize(qdata, scale, orig_dtype, **kwargs):
        return qdata.to(orig_dtype) * scale
```

To run operations on these QuantizedTensors, two registry systems define the supported operations.
The first is a **generic registry** that handles operations common to all quantized formats (e.g., `.to()`, `.clone()`, `.reshape()`).

The second registry is layout-specific and allows implementing fast paths such as `nn.Linear`.
```python
from comfy.quant_ops import register_layout_op

@register_layout_op(torch.ops.aten.linear.default, MyLayout)
def my_linear(func, args, kwargs):
    # Extract tensors, call optimized kernel
    ...
```
When `torch.nn.functional.linear()` is called with QuantizedTensor arguments, `__torch_dispatch__` automatically routes to the registered implementation.
For any unsupported operation, QuantizedTensor falls back to calling `dequantize` and dispatching the high-precision implementation.
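The two-level lookup with a dequantize fallback can be sketched in plain Python (illustrative classes and names; not the actual `comfy/quant_ops.py` implementation):

```python
# Minimal sketch of the two-level dispatch idea: try a layout-specific
# fast path first, otherwise dequantize and run the high-precision op.

class FakeQuantized:
    """Stand-in for QuantizedTensor: an integer payload plus a scale."""
    def __init__(self, qdata, scale, layout):
        self.qdata, self.scale, self.layout = qdata, scale, layout

    def dequantize(self):
        return [q * self.scale for q in self.qdata]

LAYOUT_OPS = {}  # (op_name, layout) -> fast-path handler

def register_layout_op(op_name, layout):
    def decorator(fn):
        LAYOUT_OPS[(op_name, layout)] = fn
        return fn
    return decorator

HIGH_PRECISION = {"sum": sum, "max": max}  # stand-in for the aten ops

def dispatch(op_name, qt):
    handler = LAYOUT_OPS.get((op_name, qt.layout))
    if handler is not None:
        return handler(qt)                               # registered fast path
    return HIGH_PRECISION[op_name](qt.dequantize())      # dequantize fallback

@register_layout_op("sum", "my_layout")
def fast_sum(qt):
    # Fast path: sum in the integer domain, apply the scale once at the end.
    return sum(qt.qdata) * qt.scale

qt = FakeQuantized([2, 4, 6], scale=0.5, layout="my_layout")
dispatch("sum", qt)  # -> 6.0 via the registered fast path
dispatch("max", qt)  # -> 3.0 via the dequantize fallback
```

In ComfyUI the same role is played by `__torch_dispatch__`, which intercepts every aten op called on a `QuantizedTensor`.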


### Mixed Precision

The `MixedPrecisionOps` class (lines 542-648 in `comfy/ops.py`) enables per-layer quantization decisions, allowing different layers in a model to use different precisions. This is activated when a model config contains a `layer_quant_config` dictionary that specifies which layers should be quantized and how.

**Architecture:**

```python
class MixedPrecisionOps(disable_weight_init):
    _layer_quant_config = {}  # Maps layer names to quantization configs
    _compute_dtype = torch.bfloat16  # Default compute / dequantize precision
```

**Key mechanism:**

The custom `Linear._load_from_state_dict()` method inspects each layer during model loading:
- If the layer name is **not** in `_layer_quant_config`: load weight as regular tensor in `_compute_dtype`
- If the layer name **is** in `_layer_quant_config`: 
  - Load weight as `QuantizedTensor` with the specified layout (e.g., `TensorCoreFP8Layout`)
  - Load associated quantization parameters (scales, block_size, etc.)

**Why it's needed:**

Not all layers tolerate quantization equally. Sensitive operations like final projections can be kept in higher precision, while compute-heavy matmuls are quantized. This provides most of the performance benefits while maintaining quality.

The system is selected in `pick_operations()` when `model_config.layer_quant_config` is present, making it the highest-priority operation mode.
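The per-layer decision described above can be sketched as follows (illustrative names and return values; the real logic lives in `MixedPrecisionOps._load_from_state_dict` in `comfy/ops.py`):

```python
# Hypothetical loader: layers listed in layer_quant_config are wrapped as
# QuantizedTensor, everything else stays a regular tensor in _compute_dtype.
layer_quant_config = {
    "model.layers.0.mlp.up_proj": {"format": "float8_e4m3fn"},
}

def load_layer(name):
    cfg = layer_quant_config.get(name)
    if cfg is None:
        return ("regular", "bfloat16")       # kept in _compute_dtype
    return ("quantized", cfg["format"])      # wrapped as QuantizedTensor

load_layer("model.layers.0.mlp.up_proj")  # -> ("quantized", "float8_e4m3fn")
load_layer("model.final_layer.proj")      # -> ("regular", "bfloat16")
```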


## Checkpoint Format

Quantized checkpoints are stored as standard safetensors files with quantized weight tensors and associated scaling parameters, plus a `_quantization_metadata` JSON entry describing the quantization scheme.

The quantized checkpoint contains the same layers as the original checkpoint, but:
- The weights are stored as quantized values, sometimes using a different storage datatype (e.g. a uint8 container for FP8).
- For each quantized weight, a number of additional scaling parameters are stored alongside it, depending on the recipe.
- The safetensors metadata contains a `_quantization_metadata` entry describing which layers are quantized and which layout was used.

### Scaling Parameters details
We define four possible scaling parameters that should cover most recipes in the near future:
- **weight_scale**: quantization scalers for the weights
- **weight_scale_2**: global scalers in the context of double scaling
- **pre_quant_scale**: scalers used for smoothing salient weights
- **input_scale**: quantization scalers for the activations

| Format | Storage dtype | weight_scale | weight_scale_2 | pre_quant_scale | input_scale |
|--------|---------------|--------------|----------------|-----------------|-------------|
| float8_e4m3fn | float32 | float32 (scalar) | - | - | float32 (scalar) |

You can find the defined formats in `comfy/quant_ops.py` (QUANT_ALGOS).

### Quantization Metadata

The metadata stored alongside the checkpoint contains:
- **format_version**: String to define a version of the standard
- **layers**: A dictionary mapping layer names to their quantization format. The format string maps to the definitions found in `QUANT_ALGOS`. 

Example:
```json
{
  "_quantization_metadata": {
    "format_version": "1.0",
    "layers": {
      "model.layers.0.mlp.up_proj": "float8_e4m3fn",
      "model.layers.0.mlp.down_proj": "float8_e4m3fn",
      "model.layers.1.mlp.up_proj": "float8_e4m3fn"
    }
  }
}
```
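Reading this metadata only needs the standard `json` module; the hypothetical helper below groups quantized layers by format, mirroring the entry shown above:

```python
import json

# Metadata shaped like the `_quantization_metadata` example above.
metadata_json = json.dumps({
    "_quantization_metadata": {
        "format_version": "1.0",
        "layers": {
            "model.layers.0.mlp.up_proj": "float8_e4m3fn",
            "model.layers.0.mlp.down_proj": "float8_e4m3fn",
        },
    }
})

def layers_by_format(raw):
    """Return (format_version, {format: [layer names]})."""
    meta = json.loads(raw)["_quantization_metadata"]
    grouped = {}
    for layer, fmt in meta["layers"].items():
        grouped.setdefault(fmt, []).append(layer)
    return meta["format_version"], grouped

version, grouped = layers_by_format(metadata_json)
# version == "1.0"; grouped["float8_e4m3fn"] lists both layers
```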


## Creating Quantized Checkpoints

To create compatible checkpoints, you can use any quantization tool, provided the output follows the checkpoint format described above and uses a layout defined in `QUANT_ALGOS`.

### Weight Quantization

Weight quantization is straightforward: compute the scaling factor directly from the weight tensor using the absolute-maximum method described earlier. Each layer's weights are quantized independently and stored with their corresponding `weight_scale` parameter.

### Calibration (for Activation Quantization)

Activation quantization (e.g., for FP8 Tensor Core operations) requires `input_scale` parameters that cannot be determined from static weights alone. Since activation values depend on actual inputs, we use **post-training calibration (PTQ)**:

1. **Collect statistics**: Run inference on N representative samples
2. **Track activations**: Record the absolute maximum (`amax`) of inputs to each quantized layer
3. **Compute scales**: Derive `input_scale` from collected statistics
4. **Store in checkpoint**: Save `input_scale` parameters alongside weights

The calibration dataset should be representative of your target use case. For diffusion models, this typically means a diverse set of prompts and generation parameters.
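The calibration steps above can be sketched as a small amax-tracking loop (illustrative; actual PTQ tooling varies, and the E4M3 maximum of 448 comes from the dynamic-range discussion earlier):

```python
# Track the running absolute maximum of each layer's inputs across
# calibration samples, then derive input_scale from the FP8 range.
FP8_E4M3_MAX = 448.0  # max representable magnitude of float8_e4m3fn

def calibrate(batches):
    amax = {}
    for batch in batches:  # steps 1-2: collect statistics per layer
        for layer, activations in batch.items():
            peak = max(abs(a) for a in activations)
            amax[layer] = max(peak, amax.get(layer, 0.0))
    # step 3: compute input_scale from the collected amax values
    return {layer: peak / FP8_E4M3_MAX for layer, peak in amax.items()}

scales = calibrate([
    {"mlp.up_proj": [0.5, -2.0]},
    {"mlp.up_proj": [4.0, -1.0]},
])
# scales["mlp.up_proj"] == 4.0 / 448.0
```

Step 4 then amounts to writing each resulting `input_scale` into the checkpoint next to the corresponding quantized weight.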

================================================
FILE: README.md
================================================
<div align="center">

# ComfyUI
**The most powerful and modular visual AI engine and application.**


[![Website][website-shield]][website-url]
[![Dynamic JSON Badge][discord-shield]][discord-url]
[![Twitter][twitter-shield]][twitter-url]
[![Matrix][matrix-shield]][matrix-url]
<br>
[![][github-release-shield]][github-release-link]
[![][github-release-date-shield]][github-release-link]
[![][github-downloads-shield]][github-downloads-link]
[![][github-downloads-latest-shield]][github-downloads-link]

[matrix-shield]: https://img.shields.io/badge/Matrix-000000?style=flat&logo=matrix&logoColor=white
[matrix-url]: https://app.element.io/#/room/%23comfyui_space%3Amatrix.org
[website-shield]: https://img.shields.io/badge/ComfyOrg-4285F4?style=flat
[website-url]: https://www.comfy.org/
<!-- Workaround to display total user from https://github.com/badges/shields/issues/4500#issuecomment-2060079995 -->
[discord-shield]: https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fdiscord.com%2Fapi%2Finvites%2Fcomfyorg%3Fwith_counts%3Dtrue&query=%24.approximate_member_count&logo=discord&logoColor=white&label=Discord&color=green&suffix=%20total
[discord-url]: https://www.comfy.org/discord
[twitter-shield]: https://img.shields.io/twitter/follow/ComfyUI
[twitter-url]: https://x.com/ComfyUI

[github-release-shield]: https://img.shields.io/github/v/release/comfyanonymous/ComfyUI?style=flat&sort=semver
[github-release-link]: https://github.com/comfyanonymous/ComfyUI/releases
[github-release-date-shield]: https://img.shields.io/github/release-date/comfyanonymous/ComfyUI?style=flat
[github-downloads-shield]: https://img.shields.io/github/downloads/comfyanonymous/ComfyUI/total?style=flat
[github-downloads-latest-shield]: https://img.shields.io/github/downloads/comfyanonymous/ComfyUI/latest/total?style=flat&label=downloads%40latest
[github-downloads-link]: https://github.com/comfyanonymous/ComfyUI/releases

![ComfyUI Screenshot](https://github.com/user-attachments/assets/7ccaf2c1-9b72-41ae-9a89-5688c94b7abe)
</div>

ComfyUI lets you design and execute advanced stable diffusion pipelines using a graph/nodes/flowchart based interface. Available on Windows, Linux, and macOS.

## Get Started

### Local

#### [Desktop Application](https://www.comfy.org/download)
- The easiest way to get started.
- Available on Windows & macOS.

#### [Windows Portable Package](#installing)
- Gets you the latest commits and is completely portable.
- Available on Windows.

#### [Manual Install](#manual-install-windows-linux)
Supports all operating systems and GPU types (NVIDIA, AMD, Intel, Apple Silicon, Ascend).

### Cloud

#### [Comfy Cloud](https://www.comfy.org/cloud)
- Our official paid cloud version, for those without capable local hardware.

## Examples
See what ComfyUI can do with the [newer template workflows](https://comfy.org/workflows) or old [example workflows](https://comfyanonymous.github.io/ComfyUI_examples/).

## Features
- Nodes/graph/flowchart interface to experiment and create complex Stable Diffusion workflows without needing to code anything.
- Image Models
   - SD1.x, SD2.x ([unCLIP](https://comfyanonymous.github.io/ComfyUI_examples/unclip/))
   - [SDXL](https://comfyanonymous.github.io/ComfyUI_examples/sdxl/), [SDXL Turbo](https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/)
   - [Stable Cascade](https://comfyanonymous.github.io/ComfyUI_examples/stable_cascade/)
   - [SD3 and SD3.5](https://comfyanonymous.github.io/ComfyUI_examples/sd3/)
   - Pixart Alpha and Sigma
   - [AuraFlow](https://comfyanonymous.github.io/ComfyUI_examples/aura_flow/)
   - [HunyuanDiT](https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_dit/)
   - [Flux](https://comfyanonymous.github.io/ComfyUI_examples/flux/)
   - [Lumina Image 2.0](https://comfyanonymous.github.io/ComfyUI_examples/lumina2/)
   - [HiDream](https://comfyanonymous.github.io/ComfyUI_examples/hidream/)
   - [Qwen Image](https://comfyanonymous.github.io/ComfyUI_examples/qwen_image/)
   - [Hunyuan Image 2.1](https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_image/)
   - [Flux 2](https://comfyanonymous.github.io/ComfyUI_examples/flux2/)
   - [Z Image](https://comfyanonymous.github.io/ComfyUI_examples/z_image/)
- Image Editing Models
   - [Omnigen 2](https://comfyanonymous.github.io/ComfyUI_examples/omnigen/)
   - [Flux Kontext](https://comfyanonymous.github.io/ComfyUI_examples/flux/#flux-kontext-image-editing-model)
   - [HiDream E1.1](https://comfyanonymous.github.io/ComfyUI_examples/hidream/#hidream-e11)
   - [Qwen Image Edit](https://comfyanonymous.github.io/ComfyUI_examples/qwen_image/#edit-model)
- Video Models
   - [Stable Video Diffusion](https://comfyanonymous.github.io/ComfyUI_examples/video/)
   - [Mochi](https://comfyanonymous.github.io/ComfyUI_examples/mochi/)
   - [LTX-Video](https://comfyanonymous.github.io/ComfyUI_examples/ltxv/)
   - [Hunyuan Video](https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_video/)
   - [Wan 2.1](https://comfyanonymous.github.io/ComfyUI_examples/wan/)
   - [Wan 2.2](https://comfyanonymous.github.io/ComfyUI_examples/wan22/)
   - [Hunyuan Video 1.5](https://docs.comfy.org/tutorials/video/hunyuan/hunyuan-video-1-5)
- Audio Models
   - [Stable Audio](https://comfyanonymous.github.io/ComfyUI_examples/audio/)
   - [ACE Step](https://comfyanonymous.github.io/ComfyUI_examples/audio/)
- 3D Models
   - [Hunyuan3D 2.0](https://docs.comfy.org/tutorials/3d/hunyuan3D-2)
- Asynchronous Queue system
- Many optimizations: Only re-executes the parts of the workflow that change between executions.
- Smart memory management: can automatically run large models on GPUs with as little as 1GB of VRAM using smart offloading.
- Works even if you don't have a GPU with: ```--cpu``` (slow)
- Can load ckpt and safetensors: All in one checkpoints or standalone diffusion models, VAEs and CLIP models.
- Safe loading of ckpt, pt, pth, etc.. files.
- Embeddings/Textual inversion
- [Loras (regular, locon and loha)](https://comfyanonymous.github.io/ComfyUI_examples/lora/)
- [Hypernetworks](https://comfyanonymous.github.io/ComfyUI_examples/hypernetworks/)
- Loading full workflows (with seeds) from generated PNG, WebP and FLAC files.
- Saving/Loading workflows as Json files.
- Nodes interface can be used to create complex workflows like one for [Hires fix](https://comfyanonymous.github.io/ComfyUI_examples/2_pass_txt2img/) or much more advanced ones.
- [Area Composition](https://comfyanonymous.github.io/ComfyUI_examples/area_composition/)
- [Inpainting](https://comfyanonymous.github.io/ComfyUI_examples/inpaint/) with both regular and inpainting models.
- [ControlNet and T2I-Adapter](https://comfyanonymous.github.io/ComfyUI_examples/controlnet/)
- [Upscale Models (ESRGAN, ESRGAN variants, SwinIR, Swin2SR, etc...)](https://comfyanonymous.github.io/ComfyUI_examples/upscale_models/)
- [GLIGEN](https://comfyanonymous.github.io/ComfyUI_examples/gligen/)
- [Model Merging](https://comfyanonymous.github.io/ComfyUI_examples/model_merging/)
- [LCM models and Loras](https://comfyanonymous.github.io/ComfyUI_examples/lcm/)
- Latent previews with [TAESD](#how-to-show-high-quality-previews)
- Works fully offline: core will never download anything unless you want to.
- Optional API nodes to use paid models from external providers through the online [Comfy API](https://docs.comfy.org/tutorials/api-nodes/overview); disable with `--disable-api-nodes`.
- [Config file](extra_model_paths.yaml.example) to set the search paths for models.

Workflow examples can be found on the [Examples page](https://comfyanonymous.github.io/ComfyUI_examples/)

## Release Process

ComfyUI follows a weekly release cycle targeting Monday, but this regularly shifts because of model releases or large changes to the codebase. There are three interconnected repositories:

1. **[ComfyUI Core](https://github.com/comfyanonymous/ComfyUI)**
   - Releases a new stable version (e.g., v0.7.0) roughly every week.
   - Starting from v0.4.0 patch versions will be used for fixes backported onto the current stable release.
   - Minor versions will be used for releases off the master branch.
   - Patch versions may still be used for releases on the master branch in cases where a backport would not make sense.
   - Commits outside of the stable release tags may be very unstable and break many custom nodes.
   - Serves as the foundation for the desktop release

2. **[ComfyUI Desktop](https://github.com/Comfy-Org/desktop)**
   - Builds a new release using the latest stable core version

3. **[ComfyUI Frontend](https://github.com/Comfy-Org/ComfyUI_frontend)**
   - Weekly frontend updates are merged into the core repository
   - Features are frozen for the upcoming core release
   - Development continues for the next release cycle

## Shortcuts

| Keybind                            | Explanation                                                                                                        |
|------------------------------------|--------------------------------------------------------------------------------------------------------------------|
| `Ctrl` + `Enter`                      | Queue up current graph for generation                                                                              |
| `Ctrl` + `Shift` + `Enter`              | Queue up current graph as first for generation                                                                     |
| `Ctrl` + `Alt` + `Enter`                | Cancel current generation                                                                                          |
| `Ctrl` + `Z`/`Ctrl` + `Y`                 | Undo/Redo                                                                                                          |
| `Ctrl` + `S`                          | Save workflow                                                                                                      |
| `Ctrl` + `O`                          | Load workflow                                                                                                      |
| `Ctrl` + `A`                          | Select all nodes                                                                                                   |
| `Alt` + `C`                           | Collapse/uncollapse selected nodes                                                                                 |
| `Ctrl` + `M`                          | Mute/unmute selected nodes                                                                                         |
| `Ctrl` + `B`                           | Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through)            |
| `Delete`/`Backspace`                   | Delete selected nodes                                                                                              |
| `Ctrl` + `Backspace`                   | Delete the current graph                                                                                           |
| `Space`                              | Move the canvas around when held and moving the cursor                                                             |
| `Ctrl`/`Shift` + `Click`                 | Add clicked node to selection                                                                                      |
| `Ctrl` + `C`/`Ctrl` + `V`                  | Copy and paste selected nodes (without maintaining connections to outputs of unselected nodes)                     |
| `Ctrl` + `C`/`Ctrl` + `Shift` + `V`          | Copy and paste selected nodes (maintaining connections from outputs of unselected nodes to inputs of pasted nodes) |
| `Shift` + `Drag`                       | Move multiple selected nodes at the same time                                                                      |
| `Ctrl` + `D`                           | Load default graph                                                                                                 |
| `Alt` + `+`                          | Canvas Zoom in                                                                                                     |
| `Alt` + `-`                          | Canvas Zoom out                                                                                                    |
| `Ctrl` + `Shift` + LMB + Vertical drag | Canvas Zoom in/out                                                                                                 |
| `P`                                  | Pin/Unpin selected nodes                                                                                           |
| `Ctrl` + `G`                           | Group selected nodes                                                                                               |
| `Q`                                 | Toggle visibility of the queue                                                                                     |
| `H`                                  | Toggle visibility of history                                                                                       |
| `R`                                  | Refresh graph                                                                                                      |
| `F`                                  | Show/Hide menu                                                                                                      |
| `.`                                  | Fit view to selection (Whole graph when nothing is selected)                                                        |
| Double-Click LMB                   | Open node quick search palette                                                                                     |
| `Shift` + Drag                       | Move multiple wires at once                                                                                        |
| `Ctrl` + `Alt` + LMB                   | Disconnect all wires from clicked slot                                                                             |

For macOS users, `Cmd` can be used in place of `Ctrl`

# Installing

## Windows Portable

There is a portable standalone build for Windows on the [releases page](https://github.com/comfyanonymous/ComfyUI/releases) that should work for running on Nvidia GPUs or on CPU only.

### [Direct link to download](https://github.com/comfyanonymous/ComfyUI/releases/latest/download/ComfyUI_windows_portable_nvidia.7z)

Simply download, extract with [7-Zip](https://7-zip.org) (or with Windows Explorer on recent Windows versions) and run. For smaller models you normally only need to put the checkpoints (the large ckpt/safetensors files) in `ComfyUI\models\checkpoints`, but many of the larger models have multiple files. Make sure to follow the model's instructions to know which subfolder of `ComfyUI\models\` to put them in.

If you have trouble extracting it, right-click the file -> Properties -> Unblock

The portable build above currently comes with Python 3.13 and PyTorch with CUDA 13.0. Update your Nvidia drivers if it doesn't start.

#### Alternative Downloads:

[Experimental portable for AMD GPUs](https://github.com/comfyanonymous/ComfyUI/releases/latest/download/ComfyUI_windows_portable_amd.7z)

[Portable with pytorch cuda 12.6 and python 3.12](https://github.com/comfyanonymous/ComfyUI/releases/latest/download/ComfyUI_windows_portable_nvidia_cu126.7z) (Supports Nvidia 10 series and older GPUs).

#### How do I share models between another UI and ComfyUI?

See the [Config file](extra_model_paths.yaml.example) to set the search paths for models. In the standalone windows build you can find this file in the ComfyUI directory. Rename this file to extra_model_paths.yaml and edit it with your favorite text editor.


## [comfy-cli](https://docs.comfy.org/comfy-cli/getting-started)

You can install and start ComfyUI using comfy-cli:
```bash
pip install comfy-cli
comfy install
```

## Manual Install (Windows, Linux)

Python 3.14 works, but some custom nodes may have issues. The free-threaded variant works, but some dependencies will re-enable the GIL, so it's not fully supported.

Python 3.13 is very well supported. If you have trouble with some custom node dependencies on 3.13, you can try 3.12.

torch 2.4 and above is supported, but some features and optimizations might only work on newer versions. We generally recommend using the latest major version of PyTorch with the latest CUDA version, unless that release is less than two weeks old.

### Instructions:

Git clone this repo.

Put your SD checkpoints (the huge ckpt/safetensors files) in: models/checkpoints

Put your VAE in: models/vae


### AMD GPUs (Linux)

AMD users can install ROCm and PyTorch with pip if not already installed. This is the command to install the stable version:

```pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm7.1```

This is the command to install the nightly with ROCm 7.2, which might have some performance improvements:

```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm7.2```


### AMD GPUs (Experimental: Windows and Linux, RDNA 3, 3.5 and 4 only)

These builds have less hardware support than the builds above, but they work on Windows. You also need to install the PyTorch version specific to your hardware.

RDNA 3 (RX 7000 series):

```pip install --pre torch torchvision torchaudio --index-url https://rocm.nightlies.amd.com/v2/gfx110X-all/```

RDNA 3.5 (Strix Halo/Ryzen AI Max+ 395):

```pip install --pre torch torchvision torchaudio --index-url https://rocm.nightlies.amd.com/v2/gfx1151/```

RDNA 4 (RX 9000 series):

```pip install --pre torch torchvision torchaudio --index-url https://rocm.nightlies.amd.com/v2/gfx120X-all/```

### Intel GPUs (Windows and Linux)

Intel Arc GPU users can install native PyTorch with torch.xpu support using pip. More information can be found [here](https://pytorch.org/docs/main/notes/get_start_xpu.html).

To install stable PyTorch xpu, use the following command:

```pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu```

This is the command to install the PyTorch xpu nightly, which might have some performance improvements:

```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu```

### NVIDIA

Nvidia users should install stable PyTorch using this command:

```pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130```

This is the command to install PyTorch nightly instead, which might have performance improvements:

```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu130```

#### Troubleshooting

If you get the "Torch not compiled with CUDA enabled" error, uninstall torch with:

```pip uninstall torch```

And install it again with the command above.

### Dependencies

Install the dependencies by opening your terminal inside the ComfyUI folder and:

```pip install -r requirements.txt```

After this you should have everything installed and can proceed to running ComfyUI.

### Others:

#### Apple Mac silicon

You can install ComfyUI on Apple silicon Macs (M1 or newer) with any recent macOS version.

1. Install pytorch nightly. For instructions, read the [Accelerated PyTorch training on Mac](https://developer.apple.com/metal/pytorch/) Apple Developer guide (make sure to install the latest pytorch nightly).
1. Follow the [ComfyUI manual installation](#manual-install-windows-linux) instructions for Windows and Linux.
1. Install the ComfyUI [dependencies](#dependencies). If you have another Stable Diffusion UI [you might be able to reuse the dependencies](#i-already-have-another-ui-for-stable-diffusion-installed-do-i-really-have-to-install-all-of-these-dependencies).
1. Launch ComfyUI by running `python main.py`

> **Note**: Remember to add your models, VAE, LoRAs etc. to the corresponding Comfy folders, as discussed in [ComfyUI manual installation](#manual-install-windows-linux).

#### Ascend NPUs

For models compatible with Ascend Extension for PyTorch (torch_npu). To get started, ensure your environment meets the prerequisites outlined on the [installation](https://ascend.github.io/docs/sources/ascend/quick_install.html) page. Here's a step-by-step guide tailored to your platform and installation method:

1. Begin by installing the recommended or newer kernel version for Linux as specified in the Installation page of torch-npu, if necessary.
2. Proceed with the installation of Ascend Basekit, which includes the driver, firmware, and CANN, following the instructions provided for your specific platform.
3. Next, install the necessary packages for torch-npu by adhering to the platform-specific instructions on the [Installation](https://ascend.github.io/docs/sources/pytorch/install.html#pytorch) page.
4. Finally, adhere to the [ComfyUI manual installation](#manual-install-windows-linux) guide for Linux. Once all components are installed, you can run ComfyUI as described earlier.

#### Cambricon MLUs

For models compatible with Cambricon Extension for PyTorch (torch_mlu). Here's a step-by-step guide tailored to your platform and installation method:

1. Install the Cambricon CNToolkit by adhering to the platform-specific instructions on the [Installation](https://www.cambricon.com/docs/sdk_1.15.0/cntoolkit_3.7.2/cntoolkit_install_3.7.2/index.html) page.
2. Next, install PyTorch (torch_mlu) following the instructions on the [Installation](https://www.cambricon.com/docs/sdk_1.15.0/cambricon_pytorch_1.17.0/user_guide_1.9/index.html) page.
3. Launch ComfyUI by running `python main.py`

#### Iluvatar Corex

For models compatible with Iluvatar Extension for PyTorch. Here's a step-by-step guide tailored to your platform and installation method:

1. Install the Iluvatar Corex Toolkit by adhering to the platform-specific instructions on the [Installation](https://support.iluvatar.com/#/DocumentCentre?id=1&nameCenter=2&productId=520117912052801536) page.
2. Launch ComfyUI by running `python main.py`


## [ComfyUI-Manager](https://github.com/Comfy-Org/ComfyUI-Manager/tree/manager-v4)

**ComfyUI-Manager** is an extension that allows you to easily install, update, and manage custom nodes for ComfyUI.

### Setup

1. Install the manager dependencies:
   ```bash
   pip install -r manager_requirements.txt
   ```

2. Enable the manager with the `--enable-manager` flag when running ComfyUI:
   ```bash
   python main.py --enable-manager
   ```

### Command Line Options

| Flag | Description |
|------|-------------|
| `--enable-manager` | Enable ComfyUI-Manager |
| `--enable-manager-legacy-ui` | Use the legacy manager UI instead of the new UI (requires `--enable-manager`) |
| `--disable-manager-ui` | Disable the manager UI and endpoints while keeping background features like security checks and scheduled installation completion (requires `--enable-manager`) |


# Running

```python main.py```

### For AMD cards not officially supported by ROCm

Try running it with this command if you have issues:

For 6700, 6600 and maybe other RDNA2 or older: ```HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py```

For AMD 7600 and maybe other RDNA3 cards: ```HSA_OVERRIDE_GFX_VERSION=11.0.0 python main.py```

### AMD ROCm Tips

You can enable experimental memory-efficient attention on recent PyTorch for some AMD GPUs using the command below; it should already be enabled by default on RDNA3. If this improves speed for you on the latest PyTorch on your GPU, please report it so that it can be enabled by default.

```TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention```

You can also try setting the env variable `PYTORCH_TUNABLEOP_ENABLED=1`, which might speed things up at the cost of a very slow initial run.

# Notes

Only parts of the graph that have an output with all the correct inputs will be executed.

Only parts of the graph that change from one execution to the next will be executed: if you submit the same graph twice, only the first submission runs. If you change the last part of the graph, only the part you changed and the parts that depend on it will be executed.
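This partial re-execution can be pictured as memoizing each node on its inputs, where a node's cache key folds in the keys of everything upstream (a minimal illustrative sketch with hypothetical structures, not ComfyUI's actual executor):

```python
# Illustrative sketch of input-based graph caching: a node re-executes
# only when its own settings or anything upstream of it change.
# This is NOT ComfyUI's real execution engine; the structures are hypothetical.

def execute_graph(nodes, cache):
    """nodes: topologically ordered (node_id, func, upstream_ids, literals) tuples."""
    keys = {}
    executed = []
    for node_id, func, upstream_ids, literals in nodes:
        # A node's cache key combines its own widget values with its
        # upstream keys, so changing any node invalidates it and every
        # node downstream of it.
        key = (node_id, literals, tuple(keys[u] for u in upstream_ids))
        keys[node_id] = key
        if key not in cache:
            args = [cache[keys[u]] for u in upstream_ids]
            cache[key] = func(*args, *literals)
            executed.append(node_id)
    return {nid: cache[k] for nid, k in keys.items()}, executed
```

Submitting the same graph twice executes nothing the second time; changing only the last node re-executes only that node.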

Dragging a generated PNG onto the webpage or loading one will give you the full workflow, including the seeds that were used to create it.

You can use () to change the emphasis of a word or phrase, like: (good code:1.2) or (bad code:0.8). The default emphasis for () is 1.1. To use () characters in your actual prompt, escape them like \\( or \\).
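For illustration, weighted segments like `(good code:1.2)` can be pulled out with a small parser (a hypothetical sketch, not ComfyUI's actual prompt parser; it ignores nesting and escaped parentheses):

```python
import re

# Illustrative parser for "(text:weight)" emphasis syntax; unweighted
# "(text)" defaults to 1.1. NOT ComfyUI's real parser: no nesting,
# no escaped \( \) handling.
def parse_emphasis(prompt):
    """Return a list of (text, weight) segments for a prompt string."""
    segments = []
    pos = 0
    for m in re.finditer(r"\(([^()]*?)(?::([\d.]+))?\)", prompt):
        if m.start() > pos:
            # Text outside parentheses gets the neutral weight 1.0.
            segments.append((prompt[pos:m.start()], 1.0))
        weight = float(m.group(2)) if m.group(2) else 1.1
        segments.append((m.group(1), weight))
        pos = m.end()
    if pos < len(prompt):
        segments.append((prompt[pos:], 1.0))
    return segments
```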

You can use {day|night} for wildcard/dynamic prompts. With this syntax, "{wild|card|test}" will be randomly replaced by either "wild", "card" or "test" by the frontend every time you queue the prompt. To use {} characters in your actual prompt, escape them like: \\{ or \\}.

Dynamic prompts also support C-style comments, like `// comment` or `/* comment */`.
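The wildcard syntax above can be sketched as a one-pass substitution (an illustrative, non-nested version with a hypothetical `expand_wildcards` helper; the frontend's real implementation may differ):

```python
import random
import re

# Illustrative {a|b|c} wildcard expansion: picks one option per group.
# Does NOT handle nested braces, escaped \{ \}, or comments like the
# real frontend.
def expand_wildcards(prompt, rng=random):
    return re.sub(
        r"\{([^{}]*)\}",
        lambda m: rng.choice(m.group(1).split("|")),
        prompt,
    )
```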

To use textual inversion concepts/embeddings in a text prompt, put them in the models/embeddings directory and use them in the CLIPTextEncode node like this (you can omit the .pt extension):

```embedding:embedding_filename.pt```


## How to show high-quality previews?

Use ```--preview-method auto``` to enable previews.

The default installation includes a fast latent preview method that's low-resolution. To enable higher-quality previews with [TAESD](https://github.com/madebyollin/taesd), download the [taesd_decoder.pth, taesdxl_decoder.pth, taesd3_decoder.pth and taef1_decoder.pth](https://github.com/madebyollin/taesd/) and place them in the `models/vae_approx` folder. Once they're installed, restart ComfyUI and launch it with `--preview-method taesd` to enable high-quality previews.

## How to use TLS/SSL?
Generate a self-signed certificate (not appropriate for shared/production use) and key by running the command: `openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 3650 -nodes -subj "/C=XX/ST=StateName/L=CityName/O=CompanyName/OU=CompanySectionName/CN=CommonNameOrHostname"`

Use `--tls-keyfile key.pem --tls-certfile cert.pem` to enable TLS/SSL; the app will then be accessible with `https://...` instead of `http://...`.

> Note: Windows users can use [alexisrolland/docker-openssl](https://github.com/alexisrolland/docker-openssl) or one of the [3rd party binary distributions](https://wiki.openssl.org/index.php/Binaries) to run the command example above.
<br/><br/>If you use a container, note that the volume mount `-v` can be a relative path so `... -v ".\:/openssl-certs" ...` would create the key & cert files in the current directory of your command prompt or powershell terminal.

## Support and dev channel

[Discord](https://comfy.org/discord): Try the #help or #feedback channels.

[Matrix space: #comfyui_space:matrix.org](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) (it's like Discord but open source).

See also: [https://www.comfy.org/](https://www.comfy.org/)

## Frontend Development

As of August 15, 2024, we have transitioned to a new frontend, which is now hosted in a separate repository: [ComfyUI Frontend](https://github.com/Comfy-Org/ComfyUI_frontend). This repository now hosts the compiled JS (from TS/Vue) under the `web/` directory.

### Reporting Issues and Requesting Features

For any bugs, issues, or feature requests related to the frontend, please use the [ComfyUI Frontend repository](https://github.com/Comfy-Org/ComfyUI_frontend). This will help us manage and address frontend-specific concerns more efficiently.

### Using the Latest Frontend

The new frontend is now the default for ComfyUI. However, please note:

1. The frontend in the main ComfyUI repository is updated fortnightly.
2. Daily releases are available in the separate frontend repository.

To use the most up-to-date frontend version:

1. For the latest daily release, launch ComfyUI with this command line argument:

   ```
   --front-end-version Comfy-Org/ComfyUI_frontend@latest
   ```

2. For a specific version, replace `latest` with the desired version number:

   ```
   --front-end-version Comfy-Org/ComfyUI_frontend@1.2.2
   ```

This approach allows you to easily switch between the stable fortnightly release and the cutting-edge daily updates, or even specific versions for testing purposes.

### Accessing the Legacy Frontend

If you need to use the legacy frontend for any reason, you can access it using the following command line argument:

```
--front-end-version Comfy-Org/ComfyUI_legacy_frontend@latest
```

This will use a snapshot of the legacy frontend preserved in the [ComfyUI Legacy Frontend repository](https://github.com/Comfy-Org/ComfyUI_legacy_frontend).

# QA

### Which GPU should I buy for this?

[See this page for some recommendations](https://github.com/comfyanonymous/ComfyUI/wiki/Which-GPU-should-I-buy-for-ComfyUI)


================================================
FILE: alembic.ini
================================================
# A generic, single database configuration.

[alembic]
# path to migration scripts
# Use forward slashes (/) also on windows to provide an os agnostic path
script_location = alembic_db

# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s
# Uncomment the line below if you want the files to be prepended with date and time
# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file
# for all available tokens
# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s

# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .

# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python>=3.9 or backports.zoneinfo library and tzdata library.
# Any required deps can be installed by adding `alembic[tz]` to the pip requirements
# string value is passed to ZoneInfo()
# leave blank for localtime
# timezone =

# max length of characters to apply to the "slug" field
# truncate_slug_length = 40

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false

# version location specification; This defaults
# to alembic_db/versions.  When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator" below.
# version_locations = %(here)s/bar:%(here)s/bat:alembic_db/versions

# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep.
# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
# version_path_separator = newline
#
# Use os.pathsep. Default configuration used for new projects.
version_path_separator = os

# set to 'true' to search source files recursively
# in each "version_locations" directory
# new in Alembic version 1.10
# recursive_version_locations = false

# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8

sqlalchemy.url = sqlite:///user/comfyui.db


[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts.  See the documentation for further
# detail and examples

# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME

# lint with attempts to fix using "ruff" - use the exec runner, execute a binary
# hooks = ruff
# ruff.type = exec
# ruff.executable = %(here)s/.venv/bin/ruff
# ruff.options = check --fix REVISION_SCRIPT_FILENAME


================================================
FILE: alembic_db/README.md
================================================
## Generate new revision

1. Update models in `/app/database/models.py`
2. Run `alembic revision --autogenerate -m "{your message}"`


================================================
FILE: alembic_db/env.py
================================================
from sqlalchemy import engine_from_config
from sqlalchemy import pool

from alembic import context

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config


from app.database.models import Base, NAMING_CONVENTION
target_metadata = Base.metadata

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.


def run_migrations_offline() -> None:
    """Run migrations in 'offline' mode.
    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well.  By skipping the Engine creation
    we don't even need a DBAPI to be available.
    Calls to context.execute() here emit the given string to the
    script output.
    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online() -> None:
    """Run migrations in 'online' mode.
    In this scenario we need to create an Engine
    and associate a connection with the context.
    """
    connectable = engine_from_config(
        config.get_section(config.config_ini_section, {}),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            render_as_batch=True,
            naming_convention=NAMING_CONVENTION,
        )

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()


================================================
FILE: alembic_db/script.py.mako
================================================
"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}

"""
from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa
${imports if imports else ""}

# revision identifiers, used by Alembic.
revision: str = ${repr(up_revision)}
down_revision: Union[str, None] = ${repr(down_revision)}
branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)}
depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)}


def upgrade() -> None:
    """Upgrade schema."""
    ${upgrades if upgrades else "pass"}


def downgrade() -> None:
    """Downgrade schema."""
    ${downgrades if downgrades else "pass"}


================================================
FILE: alembic_db/versions/0001_assets.py
================================================
"""
Initial assets schema
Revision ID: 0001_assets
Revises: None
Create Date: 2025-12-10 00:00:00
"""

from alembic import op
import sqlalchemy as sa

revision = "0001_assets"
down_revision = None
branch_labels = None
depends_on = None


def upgrade() -> None:
    # ASSETS: content identity
    op.create_table(
        "assets",
        sa.Column("id", sa.String(length=36), primary_key=True),
        sa.Column("hash", sa.String(length=256), nullable=True),
        sa.Column("size_bytes", sa.BigInteger(), nullable=False, server_default="0"),
        sa.Column("mime_type", sa.String(length=255), nullable=True),
        sa.Column("created_at", sa.DateTime(timezone=False), nullable=False),
        sa.CheckConstraint("size_bytes >= 0", name="ck_assets_size_nonneg"),
    )
    op.create_index("uq_assets_hash", "assets", ["hash"], unique=True)
    op.create_index("ix_assets_mime_type", "assets", ["mime_type"])

    # ASSETS_INFO: user-visible references
    op.create_table(
        "assets_info",
        sa.Column("id", sa.String(length=36), primary_key=True),
        sa.Column("owner_id", sa.String(length=128), nullable=False, server_default=""),
        sa.Column("name", sa.String(length=512), nullable=False),
        sa.Column("asset_id", sa.String(length=36), sa.ForeignKey("assets.id", ondelete="RESTRICT"), nullable=False),
        sa.Column("preview_id", sa.String(length=36), sa.ForeignKey("assets.id", ondelete="SET NULL"), nullable=True),
        sa.Column("user_metadata", sa.JSON(), nullable=True),
        sa.Column("created_at", sa.DateTime(timezone=False), nullable=False),
        sa.Column("updated_at", sa.DateTime(timezone=False), nullable=False),
        sa.Column("last_access_time", sa.DateTime(timezone=False), nullable=False),
        sa.UniqueConstraint("asset_id", "owner_id", "name", name="uq_assets_info_asset_owner_name"),
    )
    op.create_index("ix_assets_info_owner_id", "assets_info", ["owner_id"])
    op.create_index("ix_assets_info_asset_id", "assets_info", ["asset_id"])
    op.create_index("ix_assets_info_name", "assets_info", ["name"])
    op.create_index("ix_assets_info_created_at", "assets_info", ["created_at"])
    op.create_index("ix_assets_info_last_access_time", "assets_info", ["last_access_time"])
    op.create_index("ix_assets_info_owner_name", "assets_info", ["owner_id", "name"])

    # TAGS: normalized tag vocabulary
    op.create_table(
        "tags",
        sa.Column("name", sa.String(length=512), primary_key=True),
        sa.Column("tag_type", sa.String(length=32), nullable=False, server_default="user"),
        sa.CheckConstraint("name = lower(name)", name="ck_tags_lowercase"),
    )
    op.create_index("ix_tags_tag_type", "tags", ["tag_type"])

    # ASSET_INFO_TAGS: many-to-many for tags on AssetInfo
    op.create_table(
        "asset_info_tags",
        sa.Column("asset_info_id", sa.String(length=36), sa.ForeignKey("assets_info.id", ondelete="CASCADE"), nullable=False),
        sa.Column("tag_name", sa.String(length=512), sa.ForeignKey("tags.name", ondelete="RESTRICT"), nullable=False),
        sa.Column("origin", sa.String(length=32), nullable=False, server_default="manual"),
        sa.Column("added_at", sa.DateTime(timezone=False), nullable=False),
        sa.PrimaryKeyConstraint("asset_info_id", "tag_name", name="pk_asset_info_tags"),
    )
    op.create_index("ix_asset_info_tags_tag_name", "asset_info_tags", ["tag_name"])
    op.create_index("ix_asset_info_tags_asset_info_id", "asset_info_tags", ["asset_info_id"])

    # ASSET_CACHE_STATE: N:1 local cache rows per Asset
    op.create_table(
        "asset_cache_state",
        sa.Column("id", sa.Integer(), primary_key=True, autoincrement=True),
        sa.Column("asset_id", sa.String(length=36), sa.ForeignKey("assets.id", ondelete="CASCADE"), nullable=False),
        sa.Column("file_path", sa.Text(), nullable=False),  # absolute local path to cached file
        sa.Column("mtime_ns", sa.BigInteger(), nullable=True),
        sa.Column("needs_verify", sa.Boolean(), nullable=False, server_default=sa.text("false")),
        sa.CheckConstraint("(mtime_ns IS NULL) OR (mtime_ns >= 0)", name="ck_acs_mtime_nonneg"),
        sa.UniqueConstraint("file_path", name="uq_asset_cache_state_file_path"),
    )
    op.create_index("ix_asset_cache_state_file_path", "asset_cache_state", ["file_path"])
    op.create_index("ix_asset_cache_state_asset_id", "asset_cache_state", ["asset_id"])

    # ASSET_INFO_META: typed KV projection of user_metadata for filtering/sorting
    op.create_table(
        "asset_info_meta",
        sa.Column("asset_info_id", sa.String(length=36), sa.ForeignKey("assets_info.id", ondelete="CASCADE"), nullable=False),
        sa.Column("key", sa.String(length=256), nullable=False),
        sa.Column("ordinal", sa.Integer(), nullable=False, server_default="0"),
        sa.Column("val_str", sa.String(length=2048), nullable=True),
        sa.Column("val_num", sa.Numeric(38, 10), nullable=True),
        sa.Column("val_bool", sa.Boolean(), nullable=True),
        sa.Column("val_json", sa.JSON(), nullable=True),
        sa.PrimaryKeyConstraint("asset_info_id", "key", "ordinal", name="pk_asset_info_meta"),
    )
    op.create_index("ix_asset_info_meta_key", "asset_info_meta", ["key"])
    op.create_index("ix_asset_info_meta_key_val_str", "asset_info_meta", ["key", "val_str"])
    op.create_index("ix_asset_info_meta_key_val_num", "asset_info_meta", ["key", "val_num"])
    op.create_index("ix_asset_info_meta_key_val_bool", "asset_info_meta", ["key", "val_bool"])

    # Tags vocabulary
    tags_table = sa.table(
        "tags",
        sa.column("name", sa.String(length=512)),
        sa.column("tag_type", sa.String()),
    )
    op.bulk_insert(
        tags_table,
        [
            {"name": "models", "tag_type": "system"},
            {"name": "input", "tag_type": "system"},
            {"name": "output", "tag_type": "system"},

            {"name": "configs", "tag_type": "system"},
            {"name": "checkpoints", "tag_type": "system"},
            {"name": "loras", "tag_type": "system"},
            {"name": "vae", "tag_type": "system"},
            {"name": "text_encoders", "tag_type": "system"},
            {"name": "diffusion_models", "tag_type": "system"},
            {"name": "clip_vision", "tag_type": "system"},
            {"name": "style_models", "tag_type": "system"},
            {"name": "embeddings", "tag_type": "system"},
            {"name": "diffusers", "tag_type": "system"},
            {"name": "vae_approx", "tag_type": "system"},
            {"name": "controlnet", "tag_type": "system"},
            {"name": "gligen", "tag_type": "system"},
            {"name": "upscale_models", "tag_type": "system"},
            {"name": "hypernetworks", "tag_type": "system"},
            {"name": "photomaker", "tag_type": "system"},
            {"name": "classifiers", "tag_type": "system"},

            {"name": "encoder", "tag_type": "system"},
            {"name": "decoder", "tag_type": "system"},

            {"name": "missing", "tag_type": "system"},
            {"name": "rescan", "tag_type": "system"},
        ],
    )


def downgrade() -> None:
    op.drop_index("ix_asset_info_meta_key_val_bool", table_name="asset_info_meta")
    op.drop_index("ix_asset_info_meta_key_val_num", table_name="asset_info_meta")
    op.drop_index("ix_asset_info_meta_key_val_str", table_name="asset_info_meta")
    op.drop_index("ix_asset_info_meta_key", table_name="asset_info_meta")
    op.drop_table("asset_info_meta")

    op.drop_index("ix_asset_cache_state_asset_id", table_name="asset_cache_state")
    op.drop_index("ix_asset_cache_state_file_path", table_name="asset_cache_state")
    op.drop_constraint("uq_asset_cache_state_file_path", table_name="asset_cache_state")
    op.drop_table("asset_cache_state")

    op.drop_index("ix_asset_info_tags_asset_info_id", table_name="asset_info_tags")
    op.drop_index("ix_asset_info_tags_tag_name", table_name="asset_info_tags")
    op.drop_table("asset_info_tags")

    op.drop_index("ix_tags_tag_type", table_name="tags")
    op.drop_table("tags")

    op.drop_constraint("uq_assets_info_asset_owner_name", table_name="assets_info")
    op.drop_index("ix_assets_info_owner_name", table_name="assets_info")
    op.drop_index("ix_assets_info_last_access_time", table_name="assets_info")
    op.drop_index("ix_assets_info_created_at", table_name="assets_info")
    op.drop_index("ix_assets_info_name", table_name="assets_info")
    op.drop_index("ix_assets_info_asset_id", table_name="assets_info")
    op.drop_index("ix_assets_info_owner_id", table_name="assets_info")
    op.drop_table("assets_info")

    op.drop_index("uq_assets_hash", table_name="assets")
    op.drop_index("ix_assets_mime_type", table_name="assets")
    op.drop_table("assets")


================================================
FILE: alembic_db/versions/0002_merge_to_asset_references.py
================================================
"""
Merge AssetInfo and AssetCacheState into unified asset_references table.

This migration drops old tables and creates the new unified schema.
All existing data is discarded.

Revision ID: 0002_merge_to_asset_references
Revises: 0001_assets
Create Date: 2025-02-11
"""

from alembic import op
import sqlalchemy as sa

revision = "0002_merge_to_asset_references"
down_revision = "0001_assets"
branch_labels = None
depends_on = None


def upgrade() -> None:
    # Drop old tables (order matters due to FK constraints)
    op.drop_index("ix_asset_info_meta_key_val_bool", table_name="asset_info_meta")
    op.drop_index("ix_asset_info_meta_key_val_num", table_name="asset_info_meta")
    op.drop_index("ix_asset_info_meta_key_val_str", table_name="asset_info_meta")
    op.drop_index("ix_asset_info_meta_key", table_name="asset_info_meta")
    op.drop_table("asset_info_meta")

    op.drop_index("ix_asset_info_tags_asset_info_id", table_name="asset_info_tags")
    op.drop_index("ix_asset_info_tags_tag_name", table_name="asset_info_tags")
    op.drop_table("asset_info_tags")

    op.drop_index("ix_asset_cache_state_asset_id", table_name="asset_cache_state")
    op.drop_index("ix_asset_cache_state_file_path", table_name="asset_cache_state")
    op.drop_table("asset_cache_state")

    op.drop_index("ix_assets_info_owner_name", table_name="assets_info")
    op.drop_index("ix_assets_info_last_access_time", table_name="assets_info")
    op.drop_index("ix_assets_info_created_at", table_name="assets_info")
    op.drop_index("ix_assets_info_name", table_name="assets_info")
    op.drop_index("ix_assets_info_asset_id", table_name="assets_info")
    op.drop_index("ix_assets_info_owner_id", table_name="assets_info")
    op.drop_table("assets_info")

    # Truncate assets table (cascades handled by dropping dependent tables first)
    op.execute("DELETE FROM assets")

    # Create asset_references table
    op.create_table(
        "asset_references",
        sa.Column("id", sa.String(length=36), primary_key=True),
        sa.Column(
            "asset_id",
            sa.String(length=36),
            sa.ForeignKey("assets.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column("file_path", sa.Text(), nullable=True),
        sa.Column("mtime_ns", sa.BigInteger(), nullable=True),
        sa.Column(
            "needs_verify",
            sa.Boolean(),
            nullable=False,
            server_default=sa.text("false"),
        ),
        sa.Column(
            "is_missing", sa.Boolean(), nullable=False, server_default=sa.text("false")
        ),
        sa.Column("enrichment_level", sa.Integer(), nullable=False, server_default="0"),
        sa.Column("owner_id", sa.String(length=128), nullable=False, server_default=""),
        sa.Column("name", sa.String(length=512), nullable=False),
        sa.Column(
            "preview_id",
            sa.String(length=36),
            sa.ForeignKey("assets.id", ondelete="SET NULL"),
            nullable=True,
        ),
        sa.Column("user_metadata", sa.JSON(), nullable=True),
        sa.Column("created_at", sa.DateTime(timezone=False), nullable=False),
        sa.Column("updated_at", sa.DateTime(timezone=False), nullable=False),
        sa.Column("last_access_time", sa.DateTime(timezone=False), nullable=False),
        sa.Column("deleted_at", sa.DateTime(timezone=False), nullable=True),
        sa.CheckConstraint(
            "(mtime_ns IS NULL) OR (mtime_ns >= 0)", name="ck_ar_mtime_nonneg"
        ),
        sa.CheckConstraint(
            "enrichment_level >= 0 AND enrichment_level <= 2",
            name="ck_ar_enrichment_level_range",
        ),
    )
    op.create_index(
        "uq_asset_references_file_path", "asset_references", ["file_path"], unique=True
    )
    op.create_index("ix_asset_references_asset_id", "asset_references", ["asset_id"])
    op.create_index("ix_asset_references_owner_id", "asset_references", ["owner_id"])
    op.create_index("ix_asset_references_name", "asset_references", ["name"])
    op.create_index("ix_asset_references_is_missing", "asset_references", ["is_missing"])
    op.create_index(
        "ix_asset_references_enrichment_level", "asset_references", ["enrichment_level"]
    )
    op.create_index("ix_asset_references_created_at", "asset_references", ["created_at"])
    op.create_index(
        "ix_asset_references_last_access_time", "asset_references", ["last_access_time"]
    )
    op.create_index(
        "ix_asset_references_owner_name", "asset_references", ["owner_id", "name"]
    )
    op.create_index("ix_asset_references_deleted_at", "asset_references", ["deleted_at"])

    # Create asset_reference_tags table
    op.create_table(
        "asset_reference_tags",
        sa.Column(
            "asset_reference_id",
            sa.String(length=36),
            sa.ForeignKey("asset_references.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column(
            "tag_name",
            sa.String(length=512),
            sa.ForeignKey("tags.name", ondelete="RESTRICT"),
            nullable=False,
        ),
        sa.Column(
            "origin", sa.String(length=32), nullable=False, server_default="manual"
        ),
        sa.Column("added_at", sa.DateTime(timezone=False), nullable=False),
        sa.PrimaryKeyConstraint(
            "asset_reference_id", "tag_name", name="pk_asset_reference_tags"
        ),
    )
    op.create_index(
        "ix_asset_reference_tags_tag_name", "asset_reference_tags", ["tag_name"]
    )
    op.create_index(
        "ix_asset_reference_tags_asset_reference_id",
        "asset_reference_tags",
        ["asset_reference_id"],
    )

    # Create asset_reference_meta table
    op.create_table(
        "asset_reference_meta",
        sa.Column(
            "asset_reference_id",
            sa.String(length=36),
            sa.ForeignKey("asset_references.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column("key", sa.String(length=256), nullable=False),
        sa.Column("ordinal", sa.Integer(), nullable=False, server_default="0"),
        sa.Column("val_str", sa.String(length=2048), nullable=True),
        sa.Column("val_num", sa.Numeric(38, 10), nullable=True),
        sa.Column("val_bool", sa.Boolean(), nullable=True),
        sa.Column("val_json", sa.JSON(), nullable=True),
        sa.PrimaryKeyConstraint(
            "asset_reference_id", "key", "ordinal", name="pk_asset_reference_meta"
        ),
    )
    op.create_index("ix_asset_reference_meta_key", "asset_reference_meta", ["key"])
    op.create_index(
        "ix_asset_reference_meta_key_val_str", "asset_reference_meta", ["key", "val_str"]
    )
    op.create_index(
        "ix_asset_reference_meta_key_val_num", "asset_reference_meta", ["key", "val_num"]
    )
    op.create_index(
        "ix_asset_reference_meta_key_val_bool",
        "asset_reference_meta",
        ["key", "val_bool"],
    )


def downgrade() -> None:
    """Reverse 0002_merge_to_asset_references: drop new tables, recreate old schema.

    NOTE: Data is not recoverable. The upgrade discards all rows from the old
    tables and truncates assets. After downgrade the old schema will be empty.
    A filesystem rescan will repopulate data once the older code is running.
    """
    # Drop new tables (order matters due to FK constraints)
    op.drop_index("ix_asset_reference_meta_key_val_bool", table_name="asset_reference_meta")
    op.drop_index("ix_asset_reference_meta_key_val_num", table_name="asset_reference_meta")
    op.drop_index("ix_asset_reference_meta_key_val_str", table_name="asset_reference_meta")
    op.drop_index("ix_asset_reference_meta_key", table_name="asset_reference_meta")
    op.drop_table("asset_reference_meta")

    op.drop_index("ix_asset_reference_tags_asset_reference_id", table_name="asset_reference_tags")
    op.drop_index("ix_asset_reference_tags_tag_name", table_name="asset_reference_tags")
    op.drop_table("asset_reference_tags")

    op.drop_index("ix_asset_references_deleted_at", table_name="asset_references")
    op.drop_index("ix_asset_references_owner_name", table_name="asset_references")
    op.drop_index("ix_asset_references_last_access_time", table_name="asset_references")
    op.drop_index("ix_asset_references_created_at", table_name="asset_references")
    op.drop_index("ix_asset_references_enrichment_level", table_name="asset_references")
    op.drop_index("ix_asset_references_is_missing", table_name="asset_references")
    op.drop_index("ix_asset_references_name", table_name="asset_references")
    op.drop_index("ix_asset_references_owner_id", table_name="asset_references")
    op.drop_index("ix_asset_references_asset_id", table_name="asset_references")
    op.drop_index("uq_asset_references_file_path", table_name="asset_references")
    op.drop_table("asset_references")

    # Truncate assets (upgrade deleted all rows; downgrade starts fresh too)
    op.execute("DELETE FROM assets")

    # Recreate old tables from 0001_assets schema
    op.create_table(
        "assets_info",
        sa.Column("id", sa.String(length=36), primary_key=True),
        sa.Column("owner_id", sa.String(length=128), nullable=False, server_default=""),
        sa.Column("name", sa.String(length=512), nullable=False),
        sa.Column("asset_id", sa.String(length=36), sa.ForeignKey("assets.id", ondelete="RESTRICT"), nullable=False),
        sa.Column("preview_id", sa.String(length=36), sa.ForeignKey("assets.id", ondelete="SET NULL"), nullable=True),
        sa.Column("user_metadata", sa.JSON(), nullable=True),
        sa.Column("created_at", sa.DateTime(timezone=False), nullable=False),
        sa.Column("updated_at", sa.DateTime(timezone=False), nullable=False),
        sa.Column("last_access_time", sa.DateTime(timezone=False), nullable=False),
        sa.UniqueConstraint("asset_id", "owner_id", "name", name="uq_assets_info_asset_owner_name"),
    )
    op.create_index("ix_assets_info_owner_id", "assets_info", ["owner_id"])
    op.create_index("ix_assets_info_asset_id", "assets_info", ["asset_id"])
    op.create_index("ix_assets_info_name", "assets_info", ["name"])
    op.create_index("ix_assets_info_created_at", "assets_info", ["created_at"])
    op.create_index("ix_assets_info_last_access_time", "assets_info", ["last_access_time"])
    op.create_index("ix_assets_info_owner_name", "assets_info", ["owner_id", "name"])

    op.create_table(
        "asset_cache_state",
        sa.Column("id", sa.Integer(), primary_key=True, autoincrement=True),
        sa.Column("asset_id", sa.String(length=36), sa.ForeignKey("assets.id", ondelete="CASCADE"), nullable=False),
        sa.Column("file_path", sa.Text(), nullable=False),
        sa.Column("mtime_ns", sa.BigInteger(), nullable=True),
        sa.Column("needs_verify", sa.Boolean(), nullable=False, server_default=sa.text("false")),
        sa.CheckConstraint("(mtime_ns IS NULL) OR (mtime_ns >= 0)", name="ck_acs_mtime_nonneg"),
        sa.UniqueConstraint("file_path", name="uq_asset_cache_state_file_path"),
    )
    op.create_index("ix_asset_cache_state_file_path", "asset_cache_state", ["file_path"])
    op.create_index("ix_asset_cache_state_asset_id", "asset_cache_state", ["asset_id"])

    op.create_table(
        "asset_info_tags",
        sa.Column("asset_info_id", sa.String(length=36), sa.ForeignKey("assets_info.id", ondelete="CASCADE"), nullable=False),
        sa.Column("tag_name", sa.String(length=512), sa.ForeignKey("tags.name", ondelete="RESTRICT"), nullable=False),
        sa.Column("origin", sa.String(length=32), nullable=False, server_default="manual"),
        sa.Column("added_at", sa.DateTime(timezone=False), nullable=False),
        sa.PrimaryKeyConstraint("asset_info_id", "tag_name", name="pk_asset_info_tags"),
    )
    op.create_index("ix_asset_info_tags_tag_name", "asset_info_tags", ["tag_name"])
    op.create_index("ix_asset_info_tags_asset_info_id", "asset_info_tags", ["asset_info_id"])

    op.create_table(
        "asset_info_meta",
        sa.Column("asset_info_id", sa.String(length=36), sa.ForeignKey("assets_info.id", ondelete="CASCADE"), nullable=False),
        sa.Column("key", sa.String(length=256), nullable=False),
        sa.Column("ordinal", sa.Integer(), nullable=False, server_default="0"),
        sa.Column("val_str", sa.String(length=2048), nullable=True),
        sa.Column("val_num", sa.Numeric(38, 10), nullable=True),
        sa.Column("val_bool", sa.Boolean(), nullable=True),
        sa.Column("val_json", sa.JSON(), nullable=True),
        sa.PrimaryKeyConstraint("asset_info_id", "key", "ordinal", name="pk_asset_info_meta"),
    )
    op.create_index("ix_asset_info_meta_key", "asset_info_meta", ["key"])
    op.create_index("ix_asset_info_meta_key_val_str", "asset_info_meta", ["key", "val_str"])
    op.create_index("ix_asset_info_meta_key_val_num", "asset_info_meta", ["key", "val_num"])
    op.create_index("ix_asset_info_meta_key_val_bool", "asset_info_meta", ["key", "val_bool"])
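
The `asset_references` table created in the upgrade above encodes two data invariants directly in the schema (`ck_ar_mtime_nonneg` and `ck_ar_enrichment_level_range`). As an illustration only, not part of the migration, here is a minimal stdlib `sqlite3` sketch (hypothetical `demo` table) of how such CHECK constraints reject out-of-range rows:

```python
import sqlite3

# Stand-in table carrying the same two CHECK constraints as asset_references.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE demo ("
    " id TEXT PRIMARY KEY,"
    " mtime_ns INTEGER CHECK ((mtime_ns IS NULL) OR (mtime_ns >= 0)),"
    " enrichment_level INTEGER NOT NULL DEFAULT 0"
    "  CHECK (enrichment_level >= 0 AND enrichment_level <= 2))"
)
conn.execute("INSERT INTO demo (id, enrichment_level) VALUES ('a', 2)")  # in range
try:
    conn.execute("INSERT INTO demo (id, enrichment_level) VALUES ('b', 3)")
    rejected = False
except sqlite3.IntegrityError:  # a CHECK violation surfaces as IntegrityError
    rejected = True
row_count = conn.execute("SELECT COUNT(*) FROM demo").fetchone()[0]
```

Only the in-range row is stored; the out-of-range insert is rejected at the database level, independent of application code.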


================================================
FILE: alembic_db/versions/0003_add_metadata_job_id.py
================================================
"""
Add system_metadata and job_id columns to asset_references.
Change preview_id FK from assets.id to asset_references.id.

Revision ID: 0003_add_metadata_job_id
Revises: 0002_merge_to_asset_references
Create Date: 2026-03-09
"""

from alembic import op
import sqlalchemy as sa

from app.database.models import NAMING_CONVENTION

revision = "0003_add_metadata_job_id"
down_revision = "0002_merge_to_asset_references"
branch_labels = None
depends_on = None


def upgrade() -> None:
    with op.batch_alter_table("asset_references") as batch_op:
        batch_op.add_column(
            sa.Column("system_metadata", sa.JSON(), nullable=True)
        )
        batch_op.add_column(
            sa.Column("job_id", sa.String(length=36), nullable=True)
        )

    # Change preview_id FK from assets.id to asset_references.id (self-ref).
    # Existing values are asset-content IDs that won't match reference IDs,
    # so null them out first.
    op.execute("UPDATE asset_references SET preview_id = NULL WHERE preview_id IS NOT NULL")
    with op.batch_alter_table(
        "asset_references", naming_convention=NAMING_CONVENTION
    ) as batch_op:
        batch_op.drop_constraint(
            "fk_asset_references_preview_id_assets", type_="foreignkey"
        )
        batch_op.create_foreign_key(
            "fk_asset_references_preview_id_asset_references",
            "asset_references",
            ["preview_id"],
            ["id"],
            ondelete="SET NULL",
        )
        batch_op.create_index(
            "ix_asset_references_preview_id", ["preview_id"]
        )

    # Purge any all-null meta rows before adding the constraint
    op.execute(
        "DELETE FROM asset_reference_meta"
        " WHERE val_str IS NULL AND val_num IS NULL AND val_bool IS NULL AND val_json IS NULL"
    )
    with op.batch_alter_table("asset_reference_meta") as batch_op:
        batch_op.create_check_constraint(
            "ck_asset_reference_meta_has_value",
            "val_str IS NOT NULL OR val_num IS NOT NULL OR val_bool IS NOT NULL OR val_json IS NOT NULL",
        )


def downgrade() -> None:
    # SQLite doesn't reflect CHECK constraints, so we must declare it
    # explicitly via table_args for the batch recreate to find it.
    # Use the fully-rendered constraint name to avoid the naming convention
    # doubling the prefix.
    with op.batch_alter_table(
        "asset_reference_meta",
        table_args=[
            sa.CheckConstraint(
                "val_str IS NOT NULL OR val_num IS NOT NULL OR val_bool IS NOT NULL OR val_json IS NOT NULL",
                name="ck_asset_reference_meta_has_value",
            ),
        ],
    ) as batch_op:
        batch_op.drop_constraint(
            "ck_asset_reference_meta_has_value", type_="check"
        )

    with op.batch_alter_table(
        "asset_references", naming_convention=NAMING_CONVENTION
    ) as batch_op:
        batch_op.drop_index("ix_asset_references_preview_id")
        batch_op.drop_constraint(
            "fk_asset_references_preview_id_asset_references", type_="foreignkey"
        )
        batch_op.create_foreign_key(
            "fk_asset_references_preview_id_assets",
            "assets",
            ["preview_id"],
            ["id"],
            ondelete="SET NULL",
        )

    with op.batch_alter_table("asset_references") as batch_op:
        batch_op.drop_column("job_id")
        batch_op.drop_column("system_metadata")
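
Migration 0003's upgrade purges all-null meta rows before creating `ck_asset_reference_meta_has_value`: any row violating a CHECK would make the batch table-recreate fail. A standalone sketch of that purge-then-constrain ordering, using an illustrative table rather than the real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE meta (key TEXT, val_str TEXT, val_num REAL,"
    " val_bool INTEGER, val_json TEXT)"
)
conn.executemany(
    "INSERT INTO meta VALUES (?, ?, ?, ?, ?)",
    [
        ("width", None, 512.0, None, None),  # has a value: survives the purge
        ("orphan", None, None, None, None),  # all-null: would violate the CHECK
    ],
)
# Same predicate the migration runs before adding the constraint.
conn.execute(
    "DELETE FROM meta WHERE val_str IS NULL AND val_num IS NULL"
    " AND val_bool IS NULL AND val_json IS NULL"
)
remaining = [row[0] for row in conn.execute("SELECT key FROM meta")]
```

SQLite cannot add a constraint to an existing table in place, which is why the migration uses `batch_alter_table` (Alembic's copy-to-new-table workflow) rather than a plain `ALTER TABLE`.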


================================================
FILE: api_server/__init__.py
================================================


================================================
FILE: api_server/routes/__init__.py
================================================


================================================
FILE: api_server/routes/internal/README.md
================================================
# ComfyUI Internal Routes

All routes under the `/internal` path are designated for **internal use by ComfyUI only**. These routes are not intended for use by external applications and may change at any time without notice.


================================================
FILE: api_server/routes/internal/__init__.py
================================================


================================================
FILE: api_server/routes/internal/internal_routes.py
================================================
from aiohttp import web
from typing import Optional
from folder_paths import folder_names_and_paths, get_directory_by_type
from api_server.services.terminal_service import TerminalService
import app.logger
import os

class InternalRoutes:
    '''
    The top level web router for internal routes: /internal/*
    The endpoints here should NOT be depended upon. They are for ComfyUI frontend use only.
    Check README.md for more information.
    '''

    def __init__(self, prompt_server):
        self.routes: web.RouteTableDef = web.RouteTableDef()
        self._app: Optional[web.Application] = None
        self.prompt_server = prompt_server
        self.terminal_service = TerminalService(prompt_server)

    def setup_routes(self):
        @self.routes.get('/logs')
        async def get_logs(request):
            return web.json_response("".join([(l["t"] + " - " + l["m"]) for l in app.logger.get_logs()]))

        @self.routes.get('/logs/raw')
        async def get_raw_logs(request):
            self.terminal_service.update_size()
            return web.json_response({
                "entries": list(app.logger.get_logs()),
                "size": {"cols": self.terminal_service.cols, "rows": self.terminal_service.rows}
            })

        @self.routes.patch('/logs/subscribe')
        async def subscribe_logs(request):
            json_data = await request.json()
            client_id = json_data["clientId"]
            enabled = json_data["enabled"]
            if enabled:
                self.terminal_service.subscribe(client_id)
            else:
                self.terminal_service.unsubscribe(client_id)

            return web.Response(status=200)


        @self.routes.get('/folder_paths')
        async def get_folder_paths(request):
            response = {}
            for key in folder_names_and_paths:
                response[key] = folder_names_and_paths[key][0]
            return web.json_response(response)

        @self.routes.get('/files/{directory_type}')
        async def get_files(request: web.Request) -> web.Response:
            directory_type = request.match_info['directory_type']
            if directory_type not in ("output", "input", "temp"):
                return web.json_response({"error": "Invalid directory type"}, status=400)

            directory = get_directory_by_type(directory_type)

            def is_visible_file(entry: os.DirEntry) -> bool:
                """Filter out hidden files (e.g., .DS_Store on macOS)."""
                return entry.is_file() and not entry.name.startswith('.')

            sorted_files = sorted(
                (entry for entry in os.scandir(directory) if is_visible_file(entry)),
                key=lambda entry: -entry.stat().st_mtime
            )
            return web.json_response([entry.name for entry in sorted_files], status=200)


    def get_app(self):
        if self._app is None:
            self._app = web.Application()
            self.setup_routes()
            self._app.add_routes(self.routes)
        return self._app
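
The `/files/{directory_type}` handler above filters out hidden files and sorts the rest by modification time, newest first. A self-contained sketch of the same filter-and-sort logic, using hypothetical file names with mtimes forced via `os.utime` for determinism:

```python
import os
import tempfile

def list_visible_newest_first(directory: str) -> list[str]:
    # Mirror the handler: keep regular, non-hidden files; newest mtime first.
    entries = [
        e for e in os.scandir(directory)
        if e.is_file() and not e.name.startswith(".")
    ]
    entries.sort(key=lambda e: -e.stat().st_mtime)
    return [e.name for e in entries]

with tempfile.TemporaryDirectory() as d:
    for name, mtime in [("old.png", 100), ("new.png", 200), (".DS_Store", 300)]:
        path = os.path.join(d, name)
        open(path, "w").close()
        os.utime(path, (mtime, mtime))  # set a deterministic mtime
    names = list_visible_newest_first(d)
```

The dotfile is excluded even though it is the most recently modified, and the remaining files come back newest first.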


================================================
FILE: api_server/services/__init__.py
================================================


================================================
FILE: api_server/services/terminal_service.py
================================================
from app.logger import on_flush
import os
import shutil


class TerminalService:
    def __init__(self, server):
        self.server = server
        self.cols = None
        self.rows = None
        self.subscriptions = set()
        on_flush(self.send_messages)

    def get_terminal_size(self):
        try:
            size = os.get_terminal_size()
            return (size.columns, size.lines)
        except OSError:
            try:
                size = shutil.get_terminal_size()
                return (size.columns, size.lines)
            except OSError:
                return (80, 24)  # fallback to 80x24

    def update_size(self):
        columns, lines = self.get_terminal_size()
        changed = False

        if columns != self.cols:
            self.cols = columns
            changed = True

        if lines != self.rows:
            self.rows = lines
            changed = True

        if changed:
            return {"cols": self.cols, "rows": self.rows}

        return None

    def subscribe(self, client_id):
        self.subscriptions.add(client_id)

    def unsubscribe(self, client_id):
        self.subscriptions.discard(client_id)

    def send_messages(self, entries):
        if not entries or not self.subscriptions:
            return

        new_size = self.update_size()

        for client_id in self.subscriptions.copy(): # prevent: Set changed size during iteration
            if client_id not in self.server.sockets:
                # Automatically unsub if the socket has disconnected
                self.unsubscribe(client_id)
                continue

            self.server.send_sync("logs", {"entries": entries, "size": new_size}, client_id)
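
A note on the nested try/except in `get_terminal_size` above: `shutil.get_terminal_size` already consults the `COLUMNS`/`LINES` environment variables, then queries the stdout terminal, and returns a caller-supplied fallback instead of raising, so equivalent behavior is available in a single call:

```python
import shutil

# Never raises: env vars first, then the terminal, then the fallback tuple.
size = shutil.get_terminal_size(fallback=(80, 24))
cols, rows = size.columns, size.lines
```

The explicit `os.get_terminal_size` path in the service differs only in that it ignores the environment variables and asks the controlling terminal directly.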


================================================
FILE: api_server/utils/file_operations.py
================================================
import os
from typing import List, Union, TypedDict, Literal
from typing_extensions import TypeGuard
class FileInfo(TypedDict):
    name: str
    path: str
    type: Literal["file"]
    size: int

class DirectoryInfo(TypedDict):
    name: str
    path: str
    type: Literal["directory"]

FileSystemItem = Union[FileInfo, DirectoryInfo]

def is_file_info(item: FileSystemItem) -> TypeGuard[FileInfo]:
    return item["type"] == "file"

class FileSystemOperations:
    @staticmethod
    def walk_directory(directory: str) -> List[FileSystemItem]:
        file_list: List[FileSystemItem] = []
        for root, dirs, files in os.walk(directory):
            for name in files:
                file_path = os.path.join(root, name)
                relative_path = os.path.relpath(file_path, directory)
                file_list.append({
                    "name": name,
                    "path": relative_path,
                    "type": "file",
                    "size": os.path.getsize(file_path)
                })
            for name in dirs:
                dir_path = os.path.join(root, name)
                relative_path = os.path.relpath(dir_path, directory)
                file_list.append({
                    "name": name,
                    "path": relative_path,
                    "type": "directory"
                })
        return file_list


================================================
FILE: app/__init__.py
================================================


================================================
FILE: app/app_settings.py
================================================
import os
import json
from aiohttp import web
import logging


class AppSettings():
    def __init__(self, user_manager):
        self.user_manager = user_manager

    def get_settings(self, request):
        try:
            file = self.user_manager.get_request_user_filepath(
                request,
                "comfy.settings.json"
            )
        except KeyError as e:
            logging.error("User settings not found.")
            raise web.HTTPUnauthorized() from e
        if os.path.isfile(file):
            try:
                with open(file) as f:
                    return json.load(f)
            except Exception:
                logging.error(f"The user settings file is corrupted: {file}")
                return {}
        else:
            return {}

    def save_settings(self, request, settings):
        file = self.user_manager.get_request_user_filepath(
            request, "comfy.settings.json")
        with open(file, "w") as f:
            f.write(json.dumps(settings, indent=4))

    def add_routes(self, routes):
        @routes.get("/settings")
        async def get_settings(request):
            return web.json_response(self.get_settings(request))

        @routes.get("/settings/{id}")
        async def get_setting(request):
            value = None
            settings = self.get_settings(request)
            setting_id = request.match_info.get("id", None)
            if setting_id and setting_id in settings:
                value = settings[setting_id]
            return web.json_response(value)

        @routes.post("/settings")
        async def post_settings(request):
            settings = self.get_settings(request)
            new_settings = await request.json()
            self.save_settings(request, {**settings, **new_settings})
            return web.Response(status=200)

        @routes.post("/settings/{id}")
        async def post_setting(request):
            setting_id = request.match_info.get("id", None)
            if not setting_id:
                return web.Response(status=400)
            settings = self.get_settings(request)
            settings[setting_id] = await request.json()
            self.save_settings(request, settings)
            return web.Response(status=200)
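
Note that `post_settings` merges the incoming payload over the stored settings with `{**settings, **new_settings}`, which is a shallow merge: a nested object in the payload replaces the stored object wholesale rather than merging into it. A sketch with hypothetical setting keys:

```python
stored = {"theme": "dark", "grid": {"size": 10, "snap": True}}
incoming = {"grid": {"size": 20}}

# Top-level keys from `incoming` win; nested dicts are replaced, not deep-merged.
merged = {**stored, **incoming}
```

Clients that want to change one field inside a nested setting therefore need to send the whole nested object, or use the per-key `POST /settings/{id}` route.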


================================================
FILE: app/assets/api/routes.py
================================================
import asyncio
import functools
import json
import logging
import os
import urllib.parse
import uuid
from typing import Any

from aiohttp import web
from pydantic import ValidationError

import folder_paths
from app import user_manager
from app.assets.api import schemas_in, schemas_out
from app.assets.services import schemas
from app.assets.api.schemas_in import (
    AssetValidationError,
    UploadError,
)
from app.assets.helpers import validate_blake3_hash
from app.assets.api.upload import (
    delete_temp_file_if_exists,
    parse_multipart_upload,
)
from app.assets.seeder import ScanInProgressError, asset_seeder
from app.assets.services import (
    DependencyMissingError,
    HashMismatchError,
    apply_tags,
    asset_exists,
    create_from_hash,
    delete_asset_reference,
    get_asset_detail,
    list_assets_page,
    list_tags,
    remove_tags,
    resolve_asset_for_download,
    update_asset_metadata,
    upload_from_temp_path,
)
from app.assets.services.tagging import list_tag_histogram

ROUTES = web.RouteTableDef()
USER_MANAGER: user_manager.UserManager | None = None
_ASSETS_ENABLED = False


def _require_assets_feature_enabled(handler):
    @functools.wraps(handler)
    async def wrapper(request: web.Request) -> web.Response:
        if not _ASSETS_ENABLED:
            return _build_error_response(
                503,
                "SERVICE_DISABLED",
                "Assets system is disabled. Start the server with --enable-assets to use this feature.",
            )
        return await handler(request)

    return wrapper


# UUID regex (canonical hyphenated form, case-insensitive)
UUID_RE = r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"


def get_query_dict(request: web.Request) -> dict[str, Any]:
    """Gets a dictionary of query parameters from the request.

    request.query is a MultiMapping[str], needs to be converted to a dict
    to be validated by Pydantic.
    """
    query_dict = {
        key: request.query.getall(key)
        if len(request.query.getall(key)) > 1
        else request.query.get(key)
        for key in request.query.keys()
    }
    return query_dict
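
The same flattening can be demonstrated with the stdlib: `urllib.parse.parse_qs` keeps repeated keys (like aiohttp's `MultiMapping`) but always yields lists, so single-valued keys are collapsed to a scalar the way `get_query_dict` does. The query string below is a hypothetical example:

```python
from urllib.parse import parse_qs

def to_query_dict(query_string: str) -> dict:
    # parse_qs returns every value as a list; collapse singletons to a scalar.
    parsed = parse_qs(query_string)
    return {k: v if len(v) > 1 else v[0] for k, v in parsed.items()}

result = to_query_dict("tags=input&tags=image&limit=20")
```

Repeated keys stay as lists for Pydantic to validate as sequences; everything else arrives as a plain string.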


# Note to any custom node developers reading this code:
# The assets system is not yet fully implemented,
# do not rely on the code in /app/assets remaining the same.


def register_assets_routes(
    app: web.Application,
    user_manager_instance: user_manager.UserManager | None = None,
) -> None:
    global USER_MANAGER, _ASSETS_ENABLED
    if user_manager_instance is not None:
        USER_MANAGER = user_manager_instance
        _ASSETS_ENABLED = True
    app.add_routes(ROUTES)


def disable_assets_routes() -> None:
    """Disable asset routes at runtime (e.g. after DB init failure)."""
    global _ASSETS_ENABLED
    _ASSETS_ENABLED = False


def _build_error_response(
    status: int, code: str, message: str, details: dict | None = None
) -> web.Response:
    return web.json_response(
        {"error": {"code": code, "message": message, "details": details or {}}},
        status=status,
    )


def _build_validation_error_response(code: str, ve: ValidationError) -> web.Response:
    errors = json.loads(ve.json())
    return _build_error_response(400, code, "Validation failed.", {"errors": errors})


def _validate_sort_field(requested: str | None) -> str:
    if not requested:
        return "created_at"
    v = requested.lower()
    if v in {"name", "created_at", "updated_at", "size", "last_access_time"}:
        return v
    return "created_at"


def _build_preview_url_from_view(tags: list[str], user_metadata: dict[str, Any] | None) -> str | None:
    """Build a /api/view preview URL from asset tags and user_metadata filename."""
    if not user_metadata:
        return None
    filename = user_metadata.get("filename")
    if not filename:
        return None

    if "input" in tags:
        view_type = "input"
    elif "output" in tags:
        view_type = "output"
    else:
        return None

    subfolder = ""
    if "/" in filename:
        subfolder, filename = filename.rsplit("/", 1)

    encoded_filename = urllib.parse.quote(filename, safe="")
    url = f"/api/view?type={view_type}&filename={encoded_filename}"
    if subfolder:
        url += f"&subfolder={urllib.parse.quote(subfolder, safe='')}"
    return url
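
The URL construction above splits off the subfolder and percent-encodes each component separately with `safe=""`, so slashes and spaces inside a component cannot leak into the URL structure. A standalone sketch with a hypothetical output-relative path:

```python
from urllib.parse import quote

filename = "sub dir/final image.png"  # hypothetical output-relative path
subfolder = ""
if "/" in filename:
    subfolder, filename = filename.rsplit("/", 1)

# safe="" encodes spaces and any remaining "/" inside each component.
url = f"/api/view?type=output&filename={quote(filename, safe='')}"
if subfolder:
    url += f"&subfolder={quote(subfolder, safe='')}"
```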


def _build_asset_response(result: schemas.AssetDetailResult | schemas.UploadResult) -> schemas_out.Asset:
    """Build an Asset response from a service result."""
    if result.ref.preview_id:
        preview_detail = get_asset_detail(result.ref.preview_id)
        if preview_detail:
            preview_url = _build_preview_url_from_view(preview_detail.tags, preview_detail.ref.user_metadata)
        else:
            preview_url = None
    else:
        preview_url = _build_preview_url_from_view(result.tags, result.ref.user_metadata)
    return schemas_out.Asset(
        id=result.ref.id,
        name=result.ref.name,
        asset_hash=result.asset.hash if result.asset else None,
        size=int(result.asset.size_bytes) if result.asset else None,
        mime_type=result.asset.mime_type if result.asset else None,
        tags=result.tags,
        preview_url=preview_url,
        preview_id=result.ref.preview_id,
        user_metadata=result.ref.user_metadata or {},
        metadata=result.ref.system_metadata,
        job_id=result.ref.job_id,
        prompt_id=result.ref.job_id,  # deprecated: mirrors job_id for cloud compat
        created_at=result.ref.created_at,
        updated_at=result.ref.updated_at,
        last_access_time=result.ref.last_access
Download .txt
SYMBOL INDEX (5205 symbols across 233 files)

FILE: .ci/update_windows/update.py
  function pull (line 8) | def pull(repo, remote_name='origin', branch='master'):
  function latest_tag (line 96) | def latest_tag(repo):
  function files_equal (line 130) | def files_equal(file1, file2):
  function file_size (line 136) | def file_size(f):

FILE: alembic_db/env.py
  function run_migrations_offline (line 20) | def run_migrations_offline() -> None:
  function run_migrations_online (line 41) | def run_migrations_online() -> None:

FILE: alembic_db/versions/0001_assets.py
  function upgrade (line 17) | def upgrade() -> None:
  function downgrade (line 144) | def downgrade() -> None:

FILE: alembic_db/versions/0002_merge_to_asset_references.py
  function upgrade (line 21) | def upgrade() -> None:
  function downgrade (line 175) | def downgrade() -> None:

FILE: alembic_db/versions/0003_add_metadata_job_id.py
  function upgrade (line 21) | def upgrade() -> None:
  function downgrade (line 63) | def downgrade() -> None:

FILE: api_server/routes/internal/internal_routes.py
  class InternalRoutes (line 8) | class InternalRoutes:
    method __init__ (line 15) | def __init__(self, prompt_server):
    method setup_routes (line 21) | def setup_routes(self):
    method get_app (line 73) | def get_app(self):

FILE: api_server/services/terminal_service.py
  class TerminalService (line 6) | class TerminalService:
    method __init__ (line 7) | def __init__(self, server):
    method get_terminal_size (line 14) | def get_terminal_size(self):
    method update_size (line 25) | def update_size(self):
    method subscribe (line 42) | def subscribe(self, client_id):
    method unsubscribe (line 45) | def unsubscribe(self, client_id):
    method send_messages (line 48) | def send_messages(self, entries):

FILE: api_server/utils/file_operations.py
  class FileInfo (line 4) | class FileInfo(TypedDict):
  class DirectoryInfo (line 10) | class DirectoryInfo(TypedDict):
  function is_file_info (line 17) | def is_file_info(item: FileSystemItem) -> TypeGuard[FileInfo]:
  class FileSystemOperations (line 20) | class FileSystemOperations:
    method walk_directory (line 22) | def walk_directory(directory: str) -> List[FileSystemItem]:

FILE: app/app_settings.py
  class AppSettings (line 7) | class AppSettings():
    method __init__ (line 8) | def __init__(self, user_manager):
    method get_settings (line 11) | def get_settings(self, request):
    method save_settings (line 30) | def save_settings(self, request, settings):
    method add_routes (line 36) | def add_routes(self, routes):

FILE: app/assets/api/routes.py
  function _require_assets_feature_enabled (line 49) | def _require_assets_feature_enabled(handler):
  function get_query_dict (line 67) | def get_query_dict(request: web.Request) -> dict[str, Any]:
  function register_assets_routes (line 87) | def register_assets_routes(
  function disable_assets_routes (line 98) | def disable_assets_routes() -> None:
  function _build_error_response (line 104) | def _build_error_response(
  function _build_validation_error_response (line 113) | def _build_validation_error_response(code: str, ve: ValidationError) -> ...
  function _validate_sort_field (line 118) | def _validate_sort_field(requested: str | None) -> str:
  function _build_preview_url_from_view (line 127) | def _build_preview_url_from_view(tags: list[str], user_metadata: dict[st...
  function _build_asset_response (line 153) | def _build_asset_response(result: schemas.AssetDetailResult | schemas.Up...
  function head_asset_by_hash (line 184) | async def head_asset_by_hash(request: web.Request) -> web.Response:
  function list_assets_route (line 198) | async def list_assets_route(request: web.Request) -> web.Response:
  function get_asset_route (line 236) | async def get_asset_route(request: web.Request) -> web.Response:
  function download_asset_content (line 271) | async def download_asset_content(request: web.Request) -> web.Response:
  function create_asset_from_hash_route (line 337) | async def create_asset_from_hash_route(request: web.Request) -> web.Resp...
  function upload_asset (line 377) | async def upload_asset(request: web.Request) -> web.Response:
  function update_asset_route (line 480) | async def update_asset_route(request: web.Request) -> web.Response:
  function delete_asset_route (line 518) | async def delete_asset_route(request: web.Request) -> web.Response:
  function get_tags (line 550) | async def get_tags(request: web.Request) -> web.Response:
  function add_asset_tags (line 587) | async def add_asset_tags(request: web.Request) -> web.Response:
  function delete_asset_tags (line 635) | async def delete_asset_tags(request: web.Request) -> web.Response:
  function get_tags_refine (line 682) | async def get_tags_refine(request: web.Request) -> web.Response:
  function seed_assets (line 704) | async def seed_assets(request: web.Request) -> web.Response:
  function get_seed_status (line 754) | async def get_seed_status(request: web.Request) -> web.Response:
  function cancel_seed (line 776) | async def cancel_seed(request: web.Request) -> web.Response:
  function mark_missing_assets (line 786) | async def mark_missing_assets(request: web.Request) -> web.Response:

FILE: app/assets/api/schemas_in.py
  class UploadError (line 16) | class UploadError(Exception):
    method __init__ (line 19) | def __init__(self, status: int, code: str, message: str):
  class AssetValidationError (line 26) | class AssetValidationError(Exception):
    method __init__ (line 29) | def __init__(self, code: str, message: str):
  class ParsedUpload (line 36) | class ParsedUpload:
  class ListAssetsQuery (line 52) | class ListAssetsQuery(BaseModel):
    method _split_csv_tags (line 70) | def _split_csv_tags(cls, v):
    method _parse_metadata_json (line 86) | def _parse_metadata_json(cls, v):
  class UpdateAssetBody (line 100) | class UpdateAssetBody(BaseModel):
    method _validate_at_least_one_field (line 106) | def _validate_at_least_one_field(self):
  class CreateFromHashBody (line 117) | class CreateFromHashBody(BaseModel):
    method _require_blake3 (line 129) | def _require_blake3(cls, v):
    method _normalize_tags_field (line 134) | def _normalize_tags_field(cls, v):
  class TagsRefineQuery (line 151) | class TagsRefineQuery(BaseModel):
    method _split_csv_tags (line 160) | def _split_csv_tags(cls, v):
    method _parse_metadata_json (line 175) | def _parse_metadata_json(cls, v):
  class TagsListQuery (line 189) | class TagsListQuery(BaseModel):
    method normalize_prefix (line 200) | def normalize_prefix(cls, v: str | None) -> str | None:
  class TagsAdd (line 207) | class TagsAdd(BaseModel):
    method normalize_tags (line 213) | def normalize_tags(cls, v: list[str]) -> list[str]:
  class TagsRemove (line 230) | class TagsRemove(TagsAdd):
  class UploadAssetSpec (line 234) | class UploadAssetSpec(BaseModel):
    method _parse_hash (line 259) | def _parse_hash(cls, v):
    method _parse_tags (line 269) | def _parse_tags(cls, v):
    method _parse_metadata_json (line 315) | def _parse_metadata_json(cls, v):
    method _validate_order (line 332) | def _validate_order(self):
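Several of the schemas above accept tags either as a list or as a single comma-separated string (`_split_csv_tags`, `normalize_tags`). A standalone sketch of that normalization, assuming strip/lowercase/dedupe semantics — the real validators may differ in details:

```python
def split_csv_tags(v):
    # Accept either a list of tags or one comma-separated string.
    if isinstance(v, str):
        v = v.split(",")
    seen, out = set(), []
    for tag in v:
        tag = tag.strip().lower()
        if tag and tag not in seen:  # drop empties and duplicates
            seen.add(tag)
            out.append(tag)
    return out


tags = split_csv_tags("Models, checkpoints,,models ")
```

Normalizing at the schema boundary means every query, histogram, and join downstream can assume canonical lowercase tags.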

FILE: app/assets/api/schemas_out.py
  class Asset (line 7) | class Asset(BaseModel):
    method _serialize_datetime (line 31) | def _serialize_datetime(self, v: datetime | None, _info):
  class AssetCreated (line 35) | class AssetCreated(Asset):
  class AssetsList (line 39) | class AssetsList(BaseModel):
  class TagUsage (line 45) | class TagUsage(BaseModel):
  class TagsList (line 51) | class TagsList(BaseModel):
  class TagsAdd (line 57) | class TagsAdd(BaseModel):
  class TagsRemove (line 64) | class TagsRemove(BaseModel):
  class TagHistogram (line 71) | class TagHistogram(BaseModel):

FILE: app/assets/api/upload.py
  function normalize_and_validate_hash (line 13) | def normalize_and_validate_hash(s: str) -> str:
  function parse_multipart_upload (line 24) | async def parse_multipart_upload(
  function delete_temp_file_if_exists (line 172) | def delete_temp_file_if_exists(tmp_path: str | None) -> None:
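`normalize_and_validate_hash` suggests uploads identify content by a canonicalized digest string (the schemas above require blake3). A hypothetical sketch of such a normalizer, assuming a `blake3:` prefix and a 32-byte (64 hex char) digest — the project's exact accepted forms are not shown here:

```python
import re

_HASH_RE = re.compile(r"^[0-9a-f]{64}$")


def normalize_hash(s: str) -> str:
    # Hypothetical normalizer: accept "blake3:<hex>" or bare hex,
    # lowercase it, and validate the digest length before use.
    s = s.strip().lower()
    if s.startswith("blake3:"):
        s = s[len("blake3:"):]
    if not _HASH_RE.match(s):
        raise ValueError(f"not a valid blake3 hex digest: {s!r}")
    return s


digest = normalize_hash("blake3:" + "AB" * 32)
```

Canonicalizing before any database lookup means the same content can never be registered twice under case- or prefix-variant spellings of one hash.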

FILE: app/assets/database/models.py
  class Asset (line 26) | class Asset(Base):
    method __repr__ (line 56) | def __repr__(self) -> str:
  class AssetReference (line 60) | class AssetReference(Base):
    method __repr__ (line 164) | def __repr__(self) -> str:
  class AssetReferenceMeta (line 169) | class AssetReferenceMeta(Base):
  class AssetReferenceTag (line 201) | class AssetReferenceTag(Base):
  class Tag (line 226) | class Tag(Base):
    method __repr__ (line 245) | def __repr__(self) -> str:

FILE: app/assets/database/queries/asset.py
  function asset_exists_by_hash (line 10) | def asset_exists_by_hash(
  function get_asset_by_hash (line 28) | def get_asset_by_hash(
  function upsert_asset (line 39) | def upsert_asset(
  function bulk_insert_assets (line 81) | def bulk_insert_assets(
  function get_existing_asset_ids (line 93) | def get_existing_asset_ids(
  function update_asset_hash_and_mime (line 109) | def update_asset_hash_and_mime(
  function reassign_asset_references (line 126) | def reassign_asset_references(

FILE: app/assets/database/queries/asset_reference.py
  function _check_is_scalar (line 37) | def _check_is_scalar(v):
  function _scalar_to_row (line 47) | def _scalar_to_row(key: str, ordinal: int, value) -> dict:
  function convert_metadata_to_rows (line 59) | def convert_metadata_to_rows(key: str, value) -> list[dict]:
  function get_reference_by_id (line 77) | def get_reference_by_id(
  function get_reference_with_owner_check (line 84) | def get_reference_with_owner_check(
  function get_reference_by_file_path (line 103) | def get_reference_by_file_path(
  function reference_exists_for_asset_id (line 117) | def reference_exists_for_asset_id(
  function reference_exists (line 131) | def reference_exists(
  function insert_reference (line 146) | def insert_reference(
  function get_or_create_reference (line 177) | def get_or_create_reference(
  function update_reference_timestamps (line 229) | def update_reference_timestamps(
  function list_references_page (line 241) | def list_references_page(
  function fetch_reference_asset_and_tags (line 321) | def fetch_reference_asset_and_tags(
  function fetch_reference_and_asset (line 358) | def fetch_reference_and_asset(
  function update_reference_access_time (line 380) | def update_reference_access_time(
  function update_reference_name (line 398) | def update_reference_name(
  function update_reference_updated_at (line 412) | def update_reference_updated_at(
  function rebuild_metadata_projection (line 426) | def rebuild_metadata_projection(session: Session, ref: AssetReference) -...
  function set_reference_metadata (line 462) | def set_reference_metadata(
  function set_reference_system_metadata (line 478) | def set_reference_system_metadata(
  function delete_reference_by_id (line 495) | def delete_reference_by_id(
  function soft_delete_reference_by_id (line 507) | def soft_delete_reference_by_id(
  function set_reference_preview (line 529) | def set_reference_preview(
  class CacheStateRow (line 550) | class CacheStateRow(NamedTuple):
  function list_references_by_asset_id (line 562) | def list_references_by_asset_id(
  function list_all_file_paths_by_asset_id (line 579) | def list_all_file_paths_by_asset_id(
  function upsert_reference (line 598) | def upsert_reference(
  function mark_references_missing_outside_prefixes (line 655) | def mark_references_missing_outside_prefixes(
  function restore_references_by_paths (line 679) | def restore_references_by_paths(session: Session, file_paths: list[str])...
  function get_unreferenced_unhashed_asset_ids (line 700) | def get_unreferenced_unhashed_asset_ids(session: Session) -> list[str]:
  function delete_assets_by_ids (line 722) | def delete_assets_by_ids(session: Session, asset_ids: list[str]) -> int:
  function get_references_for_prefixes (line 739) | def get_references_for_prefixes(
  function bulk_update_needs_verify (line 797) | def bulk_update_needs_verify(
  function bulk_update_is_missing (line 817) | def bulk_update_is_missing(
  function update_is_missing_by_asset_id (line 837) | def update_is_missing_by_asset_id(
  function delete_references_by_ids (line 853) | def delete_references_by_ids(session: Session, reference_ids: list[str])...
  function delete_orphaned_seed_asset (line 869) | def delete_orphaned_seed_asset(session: Session, asset_id: str) -> bool:
  class UnenrichedReferenceRow (line 886) | class UnenrichedReferenceRow(NamedTuple):
  function get_unenriched_references (line 895) | def get_unenriched_references(
  function bulk_update_enrichment_level (line 945) | def bulk_update_enrichment_level(
  function bulk_insert_references_ignore_conflicts (line 964) | def bulk_insert_references_ignore_conflicts(
  function get_references_by_paths_and_asset_ids (line 983) | def get_references_by_paths_and_asset_ids(
  function get_reference_ids_by_ids (line 1014) | def get_reference_ids_by_ids(

FILE: app/assets/database/queries/common.py
  function calculate_rows_per_statement (line 16) | def calculate_rows_per_statement(cols: int) -> int:
  function iter_chunks (line 21) | def iter_chunks(seq, n: int):
  function iter_row_chunks (line 27) | def iter_row_chunks(rows: list[dict], cols_per_row: int) -> Iterable[lis...
  function build_visible_owner_clause (line 34) | def build_visible_owner_clause(owner_id: str) -> sa.sql.ClauseElement:
  function build_prefix_like_conditions (line 45) | def build_prefix_like_conditions(
  function apply_tag_filters (line 59) | def apply_tag_filters(
  function apply_metadata_filter (line 87) | def apply_metadata_filter(
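The chunking helpers above suggest batching bulk-insert rows to stay under SQLite's bind-parameter budget. A minimal sketch of how `calculate_rows_per_statement` and `iter_chunks` might fit together — the 999-parameter limit and all function bodies here are assumptions for illustration, not taken from the source:

```python
from typing import Iterable, Iterator, Sequence

SQLITE_MAX_VARS = 999  # assumed bind-parameter limit (SQLite's historical default)

def calculate_rows_per_statement(cols: int) -> int:
    # Each row consumes `cols` bind parameters; keep every statement
    # under the assumed budget while always allowing at least one row.
    return max(1, SQLITE_MAX_VARS // max(1, cols))

def iter_chunks(seq: Sequence, n: int) -> Iterator[Sequence]:
    # Yield consecutive slices of at most n items.
    for i in range(0, len(seq), n):
        yield seq[i:i + n]

def iter_row_chunks(rows: list[dict], cols_per_row: int) -> Iterable[list[dict]]:
    # Combine the two: size each chunk by the per-row column count.
    yield from iter_chunks(rows, calculate_rows_per_statement(cols_per_row))
```

With 10-column rows this yields chunks of 99 rows, so each statement binds at most 990 variables.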

FILE: app/assets/database/queries/tags.py
  class AddTagsResult (line 27) | class AddTagsResult:
  class RemoveTagsResult (line 34) | class RemoveTagsResult:
  class SetTagsResult (line 41) | class SetTagsResult:
  function validate_tags_exist (line 47) | def validate_tags_exist(session: Session, tags: list[str]) -> None:
  function ensure_tags_exist (line 58) | def ensure_tags_exist(
  function get_reference_tags (line 73) | def get_reference_tags(session: Session, reference_id: str) -> list[str]:
  function set_reference_tags (line 86) | def set_reference_tags(
  function add_tags_to_reference (line 126) | def add_tags_to_reference(
  function remove_tags_from_reference (line 178) | def remove_tags_from_reference(
  function add_missing_tag_for_asset_id (line 210) | def add_missing_tag_for_asset_id(
  function remove_missing_tag_for_asset_id (line 247) | def remove_missing_tag_for_asset_id(
  function list_tags_with_usage (line 261) | def list_tags_with_usage(
  function list_tag_counts_for_filtered_assets (line 338) | def list_tag_counts_for_filtered_assets(
  function bulk_insert_tags_and_meta (line 385) | def bulk_insert_tags_and_meta(

FILE: app/assets/helpers.py
  function select_best_live_path (line 6) | def select_best_live_path(states: Sequence) -> str:
  function escape_sql_like_string (line 26) | def escape_sql_like_string(s: str, escape: str = "!") -> tuple[str, str]:
  function get_utc_now (line 36) | def get_utc_now() -> datetime:
  function normalize_tags (line 41) | def normalize_tags(tags: list[str] | None) -> list[str]:
  function validate_blake3_hash (line 50) | def validate_blake3_hash(s: str) -> str:

FILE: app/assets/scanner.py
  class _RefInfo (line 44) | class _RefInfo(TypedDict):
  class _AssetAccumulator (line 52) | class _AssetAccumulator(TypedDict):
  function get_prefixes_for_root (line 61) | def get_prefixes_for_root(root: RootType) -> list[str]:
  function get_all_known_prefixes (line 74) | def get_all_known_prefixes() -> list[str]:
  function collect_models_files (line 80) | def collect_models_files() -> list[str]:
  function sync_references_with_filesystem (line 102) | def sync_references_with_filesystem(
  function sync_root_safely (line 232) | def sync_root_safely(root: RootType) -> set[str]:
  function mark_missing_outside_prefixes_safely (line 252) | def mark_missing_outside_prefixes_safely(prefixes: list[str]) -> int:
  function collect_paths_for_roots (line 267) | def collect_paths_for_roots(roots: tuple[RootType, ...]) -> list[str]:
  function build_asset_specs (line 279) | def build_asset_specs(
  function insert_asset_specs (line 349) | def insert_asset_specs(specs: list[SeedAssetSpec], tag_pool: set[str]) -...
  function get_unenriched_assets_for_roots (line 367) | def get_unenriched_assets_for_roots(
  function enrich_asset (line 395) | def enrich_asset(
  function enrich_assets_batch (line 514) | def enrich_assets_batch(

FILE: app/assets/seeder.py
  class ScanInProgressError (line 28) | class ScanInProgressError(Exception):
  class State (line 32) | class State(Enum):
  class ScanPhase (line 41) | class ScanPhase(Enum):
  class Progress (line 50) | class Progress:
  class ScanStatus (line 60) | class ScanStatus:
  class _AssetSeeder (line 71) | class _AssetSeeder:
    method __init__ (line 79) | def __init__(self) -> None:
    method disable (line 96) | def disable(self) -> None:
    method is_disabled (line 101) | def is_disabled(self) -> bool:
    method start (line 105) | def start(
    method start_fast (line 151) | def start_fast(
    method start_enrich (line 175) | def start_enrich(
    method cancel (line 199) | def cancel(self) -> bool:
    method stop (line 214) | def stop(self) -> bool:
    method pause (line 222) | def pause(self) -> bool:
    method resume (line 238) | def resume(self) -> bool:
    method restart (line 255) | def restart(
    method wait (line 300) | def wait(self, timeout: float | None = None) -> bool:
    method get_status (line 316) | def get_status(self) -> ScanStatus:
    method shutdown (line 333) | def shutdown(self, timeout: float = 5.0) -> None:
    method mark_missing_outside_prefixes (line 344) | def mark_missing_outside_prefixes(self) -> int:
    method _is_cancelled (line 388) | def _is_cancelled(self) -> bool:
    method _is_paused_or_cancelled (line 392) | def _is_paused_or_cancelled(self) -> bool:
    method _check_pause_and_cancel (line 402) | def _check_pause_and_cancel(self) -> bool:
    method _emit_event (line 417) | def _emit_event(self, event_type: str, data: dict) -> None:
    method _update_progress (line 427) | def _update_progress(
    method _add_error (line 466) | def _add_error(self, message: str) -> None:
    method _log_scan_config (line 472) | def _log_scan_config(self, roots: tuple[RootType, ...]) -> None:
    method _run_scan (line 487) | def _run_scan(self) -> None:
    method _run_fast_phase (line 601) | def _run_fast_phase(self, roots: tuple[RootType, ...]) -> tuple[int, i...
    method _run_enrich_phase (line 709) | def _run_enrich_phase(self, roots: tuple[RootType, ...]) -> tuple[bool...
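`_AssetSeeder`'s `pause`/`resume`/`cancel` methods and `_check_pause_and_cancel` imply a worker loop gated on threading primitives between units of work. A generic sketch of that control pattern — the class name and internals are illustrative, not the actual implementation:

```python
import threading

class CancellableWorker:
    # Illustrative pause/resume/cancel skeleton: the run loop polls two
    # events between work items, mirroring a _check_pause_and_cancel step.
    def __init__(self) -> None:
        self._cancel = threading.Event()
        self._resume = threading.Event()
        self._resume.set()  # start unpaused

    def pause(self) -> None:
        self._resume.clear()

    def resume(self) -> None:
        self._resume.set()

    def cancel(self) -> None:
        self._cancel.set()
        self._resume.set()  # wake a paused worker so it can exit

    def run(self, items: list) -> list:
        done = []
        for item in items:
            self._resume.wait()        # block while paused
            if self._cancel.is_set():  # bail out promptly on cancel
                break
            done.append(item)
        return done
```

The key detail is that `cancel` also sets the resume event, so a worker parked in `wait()` wakes up and observes the cancel flag instead of sleeping forever.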

FILE: app/assets/services/asset_management.py
  function get_asset_detail (line 44) | def get_asset_detail(
  function update_asset_metadata (line 65) | def update_asset_metadata(
  function delete_asset_reference (line 147) | def delete_asset_reference(
  function set_asset_preview (line 203) | def set_asset_preview(
  function asset_exists (line 234) | def asset_exists(asset_hash: str) -> bool:
  function get_asset_by_hash (line 239) | def get_asset_by_hash(asset_hash: str) -> AssetData | None:
  function list_assets_page (line 245) | def list_assets_page(
  function resolve_hash_to_path (line 283) | def resolve_hash_to_path(
  function resolve_asset_for_download (line 324) | def resolve_asset_for_download(

FILE: app/assets/services/bulk_ingest.py
  class SeedAssetSpec (line 28) | class SeedAssetSpec(TypedDict):
  class AssetRow (line 42) | class AssetRow(TypedDict):
  class ReferenceRow (line 52) | class ReferenceRow(TypedDict):
  class TagRow (line 68) | class TagRow(TypedDict):
  class MetadataRow (line 77) | class MetadataRow(TypedDict):
  class BulkInsertResult (line 90) | class BulkInsertResult:
  function batch_insert_seed_assets (line 98) | def batch_insert_seed_assets(
  function cleanup_unreferenced_assets (line 270) | def cleanup_unreferenced_assets(session: Session) -> int:

FILE: app/assets/services/file_utils.py
  function get_mtime_ns (line 4) | def get_mtime_ns(stat_result: os.stat_result) -> int:
  function get_size_and_mtime_ns (line 11) | def get_size_and_mtime_ns(path: str, follow_symlinks: bool = True) -> tu...
  function verify_file_unchanged (line 17) | def verify_file_unchanged(
  function is_visible (line 39) | def is_visible(name: str) -> bool:
  function list_files_recursively (line 44) | def list_files_recursively(base_dir: str) -> list[str]:
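The `is_visible` / `list_files_recursively` pair above suggests a walk that skips dotfiles and hidden directories. A plausible sketch — the dot-prefix rule and sorted output are assumptions:

```python
import os

def is_visible(name: str) -> bool:
    # Assumed rule: hidden files and directories start with a dot.
    return not name.startswith(".")

def list_files_recursively(base_dir: str) -> list[str]:
    # Walk base_dir, pruning hidden directories in place (os.walk honors
    # in-place edits of `dirs`) and collecting visible file paths.
    out: list[str] = []
    for root, dirs, files in os.walk(base_dir):
        dirs[:] = [d for d in dirs if is_visible(d)]
        for f in files:
            if is_visible(f):
                out.append(os.path.join(root, f))
    return sorted(out)
```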

FILE: app/assets/services/hashing.py
  class HashCheckpoint (line 19) | class HashCheckpoint:
  function _open_for_hashing (line 29) | def _open_for_hashing(fp: str | IO[bytes]) -> Iterator[tuple[IO[bytes], ...
  function compute_blake3_hash (line 51) | def compute_blake3_hash(
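`compute_blake3_hash` streams file contents through a hasher (with `HashCheckpoint` presumably enabling resumable progress reporting). The chunked-read pattern looks like this sketch; it substitutes stdlib `hashlib.sha256` for the third-party `blake3` library so it runs anywhere, and the chunk size is an assumption:

```python
import hashlib
from typing import IO

CHUNK_SIZE = 1024 * 1024  # assumed 1 MiB read size

def compute_streaming_hash(fp: IO[bytes]) -> str:
    # Read the stream in fixed-size chunks so arbitrarily large model
    # files never need to fit in memory; blake3 hashers expose the same
    # update()/hexdigest() interface as hashlib ones.
    hasher = hashlib.sha256()
    while True:
        chunk = fp.read(CHUNK_SIZE)
        if not chunk:
            break
        hasher.update(chunk)
    return hasher.hexdigest()
```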

FILE: app/assets/services/ingest.py
  function _ingest_file_from_path (line 45) | def _ingest_file_from_path(
  function _register_existing_asset (line 133) | def _register_existing_asset(
  function _update_metadata_with_filename (line 213) | def _update_metadata_with_filename(
  function _sanitize_filename (line 237) | def _sanitize_filename(name: str | None, fallback: str) -> str:
  class HashMismatchError (line 242) | class HashMismatchError(Exception):
  class DependencyMissingError (line 246) | class DependencyMissingError(Exception):
    method __init__ (line 247) | def __init__(self, message: str):
  function upload_from_temp_path (line 252) | def upload_from_temp_path(
  function register_file_in_place (line 364) | def register_file_in_place(
  function create_from_hash (line 430) | def create_from_hash(

FILE: app/assets/services/metadata_extract.py
  class ExtractedMetadata (line 29) | class ExtractedMetadata:
    method to_user_metadata (line 58) | def to_user_metadata(self) -> dict[str, Any]:
    method to_meta_rows (line 104) | def to_meta_rows(self, reference_id: str) -> list[dict]:
  function _read_safetensors_header (line 177) | def _read_safetensors_header(
  function _extract_safetensors_metadata (line 208) | def _extract_safetensors_metadata(
  function extract_file_metadata (line 278) | def extract_file_metadata(

FILE: app/assets/services/path_utils.py
  function get_comfy_models_folders (line 12) | def get_comfy_models_folders() -> list[tuple[str, list[str]]]:
  function resolve_destination_from_tags (line 29) | def resolve_destination_from_tags(tags: list[str]) -> tuple[str, list[st...
  function validate_path_within_base (line 61) | def validate_path_within_base(candidate: str, base: str) -> None:
  function compute_relative_filename (line 68) | def compute_relative_filename(file_path: str) -> str | None:
  function get_asset_category_and_relative_path (line 94) | def get_asset_category_and_relative_path(
  function get_name_and_tags_from_asset_path (line 153) | def get_name_and_tags_from_asset_path(file_path: str) -> tuple[str, list...

FILE: app/assets/services/schemas.py
  class AssetData (line 11) | class AssetData:
  class ReferenceData (line 18) | class ReferenceData:
  class AssetDetailResult (line 34) | class AssetDetailResult:
  class RegisterAssetResult (line 41) | class RegisterAssetResult:
  class IngestResult (line 49) | class IngestResult:
  class TagUsage (line 57) | class TagUsage(NamedTuple):
  class AssetSummaryData (line 64) | class AssetSummaryData:
  class ListAssetsResult (line 71) | class ListAssetsResult:
  class DownloadResolutionResult (line 77) | class DownloadResolutionResult:
  class UploadResult (line 84) | class UploadResult:
  function extract_reference_data (line 91) | def extract_reference_data(ref: AssetReference) -> ReferenceData:
  function extract_asset_data (line 106) | def extract_asset_data(asset: Asset | None) -> AssetData | None:

FILE: app/assets/services/tagging.py
  function apply_tags (line 16) | def apply_tags(
  function remove_tags (line 38) | def remove_tags(
  function list_tags (line 56) | def list_tags(
  function list_tag_histogram (line 81) | def list_tag_histogram(

FILE: app/custom_node_manager.py
  function safe_load_json_file (line 22) | def safe_load_json_file(file_path: str) -> dict:
  class CustomNodeManager (line 34) | class CustomNodeManager:
    method build_translations (line 36) | def build_translations(self):
    method add_routes (line 94) | def add_routes(self, routes, webapp, loadedModules):

FILE: app/database/db.py
  function dependencies_available (line 38) | def dependencies_available():
  function can_create_session (line 45) | def can_create_session():
  function get_alembic_config (line 53) | def get_alembic_config():
  function get_db_path (line 65) | def get_db_path():
  function _acquire_file_lock (line 75) | def _acquire_file_lock(db_path):
  function _is_memory_db (line 94) | def _is_memory_db(db_url):
  function init_db (line 99) | def init_db():
  function _init_memory_db (line 109) | def _init_memory_db(db_url):
  function _init_file_db (line 134) | def _init_file_db(db_url):
  function create_session (line 190) | def create_session():

FILE: app/database/models.py
  class Base (line 14) | class Base(DeclarativeBase):
  function to_dict (line 17) | def to_dict(obj: Any, include_none: bool = False) -> dict[str, Any]:
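The module-level `to_dict(obj, include_none=False)` beside the SQLAlchemy `Base` suggests serializing ORM rows to plain dicts. A hedged sketch of one common shape — iterating `__table__.columns` and the None-dropping behavior are assumptions:

```python
from typing import Any

def to_dict(obj: Any, include_none: bool = False) -> dict[str, Any]:
    # Assumed shape: map an ORM row's column attributes to a plain dict,
    # optionally dropping None values (the default here).
    result: dict[str, Any] = {}
    for column in obj.__table__.columns:
        value = getattr(obj, column.name)
        if value is not None or include_none:
            result[column.name] = value
    return result
```

Anything duck-typed with a `__table__.columns` sequence of named columns works, which keeps the helper independent of a live database session.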

FILE: app/frontend_management.py
  function frontend_install_warning_message (line 26) | def frontend_install_warning_message():
  function parse_version (line 33) | def parse_version(version: str) -> tuple[int, int, int]:
  function is_valid_version (line 36) | def is_valid_version(version: str) -> bool:
  function get_installed_frontend_version (line 41) | def get_installed_frontend_version():
  function get_required_frontend_version (line 47) | def get_required_frontend_version():
  function check_frontend_version (line 51) | def check_frontend_version():
  class Asset (line 80) | class Asset(TypedDict):
  class Release (line 84) | class Release(TypedDict):
  class FrontEndProvider (line 96) | class FrontEndProvider:
    method folder_name (line 101) | def folder_name(self) -> str:
    method release_url (line 105) | def release_url(self) -> str:
    method all_releases (line 109) | def all_releases(self) -> list[Release]:
    method latest_release (line 124) | def latest_release(self) -> Release:
    method latest_prerelease (line 131) | def latest_prerelease(self) -> Release:
    method get_release (line 141) | def get_release(self, version: str) -> Release:
  function download_release_asset_zip (line 153) | def download_release_asset_zip(release: Release, destination_path: str) ...
  class FrontendManager (line 183) | class FrontendManager:
    method get_required_frontend_version (line 187) | def get_required_frontend_version(cls) -> str:
    method get_installed_templates_version (line 192) | def get_installed_templates_version(cls) -> str:
    method get_required_templates_version (line 201) | def get_required_templates_version(cls) -> str:
    method default_frontend_path (line 205) | def default_frontend_path(cls) -> str:
    method template_asset_map (line 225) | def template_asset_map(cls) -> Optional[Dict[str, str]]:
    method legacy_templates_path (line 271) | def legacy_templates_path(cls) -> Optional[str]:
    method embedded_docs_path (line 294) | def embedded_docs_path(cls) -> str:
    method parse_version_string (line 307) | def parse_version_string(cls, value: str) -> tuple[str, str, str]:
    method init_frontend_unsafe (line 326) | def init_frontend_unsafe(
    method init_frontend (line 391) | def init_frontend(cls, version_string: str) -> str:
    method template_asset_handler (line 409) | def template_asset_handler(cls):
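`parse_version(version: str) -> tuple[int, int, int]` and `is_valid_version` above point at simple semver handling for frontend releases. A plausible sketch — the strict `X.Y.Z` format and the error behavior are assumptions:

```python
import re

_SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")

def parse_version(version: str) -> tuple[int, int, int]:
    # Turn "1.2.3" into (1, 2, 3) so versions compare correctly as
    # integer tuples instead of lexicographically as strings.
    m = _SEMVER.match(version)
    if m is None:
        raise ValueError(f"invalid version: {version!r}")
    return (int(m.group(1)), int(m.group(2)), int(m.group(3)))

def is_valid_version(version: str) -> bool:
    return _SEMVER.match(version) is not None
```

Tuple comparison is the point: as strings, `"1.10.0" < "1.9.9"`, but as parsed tuples the ordering is correct.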

FILE: app/logger.py
  class LogInterceptor (line 13) | class LogInterceptor(io.TextIOWrapper):
    method __init__ (line 14) | def __init__(self, stream,  *args, **kwargs):
    method write (line 22) | def write(self, data):
    method flush (line 34) | def flush(self):
    method on_flush (line 40) | def on_flush(self, callback):
  function get_logs (line 44) | def get_logs():
  function on_flush (line 48) | def on_flush(callback):
  function setup_logger (line 54) | def setup_logger(log_level: str = 'INFO', capacity: int = 300, use_stdou...
  function log_startup_warning (line 90) | def log_startup_warning(msg):
  function print_startup_warnings (line 95) | def print_startup_warnings():
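`setup_logger`'s `capacity: int = 300` parameter together with the `get_logs` / `on_flush` pair suggests an in-memory buffer of recent log output with flush callbacks. A simplified sketch of that capture pattern — the deque storage and callback list are assumptions, not the actual `LogInterceptor` internals:

```python
from collections import deque
from typing import Callable

class RingLogBuffer:
    # Hypothetical stand-in illustrating the pattern: keep the last
    # `capacity` writes and notify registered callbacks on flush.
    def __init__(self, capacity: int = 300) -> None:
        self._logs: deque[str] = deque(maxlen=capacity)
        self._callbacks: list[Callable[[str], None]] = []

    def write(self, data: str) -> None:
        # Drop whitespace-only writes; a bounded deque evicts the oldest
        # entry automatically once capacity is reached.
        if data.strip():
            self._logs.append(data)

    def on_flush(self, callback: Callable[[str], None]) -> None:
        self._callbacks.append(callback)

    def flush(self) -> None:
        # Hand each callback the current buffer contents as one string.
        joined = "".join(self._logs)
        for cb in self._callbacks:
            cb(joined)

    def get_logs(self) -> list[str]:
        return list(self._logs)
```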

FILE: app/model_manager.py
  class ModelFileManager (line 17) | class ModelFileManager:
    method __init__ (line 18) | def __init__(self) -> None:
    method get_cache (line 21) | def get_cache(self, key: str, default=None) -> tuple[list[dict], dict[...
    method set_cache (line 24) | def set_cache(self, key: str, value: tuple[list[dict], dict[str, float...
    method clear_cache (line 27) | def clear_cache(self):
    method add_routes (line 30) | def add_routes(self, routes):
    method get_model_file_list (line 79) | def get_model_file_list(self, folder_name: str):
    method cache_model_file_list_ (line 95) | def cache_model_file_list_(self, folder: str):
    method recursive_search_models_ (line 112) | def recursive_search_models_(self, directory: str, pathIndex: int) -> ...
    method get_model_previews (line 160) | def get_model_previews(self, filepath: str) -> list[str | BytesIO]:
    method __exit__ (line 194) | def __exit__(self, exc_type, exc_value, traceback):

FILE: app/node_replace_manager.py
  class NodeStruct (line 12) | class NodeStruct(TypedDict):
  function copy_node_struct (line 17) | def copy_node_struct(node_struct: NodeStruct, empty_inputs: bool = False...
  class NodeReplaceManager (line 27) | class NodeReplaceManager:
    method __init__ (line 30) | def __init__(self):
    method register (line 33) | def register(self, node_replace: NodeReplace):
    method get_replacement (line 37) | def get_replacement(self, old_node_id: str) -> list[NodeReplace] | None:
    method has_replacement (line 41) | def has_replacement(self, old_node_id: str) -> bool:
    method apply_replacements (line 45) | def apply_replacements(self, prompt: dict[str, NodeStruct]):
    method as_dict (line 97) | def as_dict(self):
    method add_routes (line 104) | def add_routes(self, routes):

FILE: app/subgraph_manager.py
  class Source (line 11) | class Source:
  class SubgraphEntry (line 15) | class SubgraphEntry(TypedDict):
  class CustomNodeSubgraphEntryInfo (line 35) | class CustomNodeSubgraphEntryInfo(TypedDict):
  class SubgraphManager (line 39) | class SubgraphManager:
    method __init__ (line 40) | def __init__(self):
    method _create_entry (line 44) | def _create_entry(self, file: str, source: str, node_pack: str) -> tup...
    method load_entry_data (line 55) | async def load_entry_data(self, entry: SubgraphEntry):
    method sanitize_entry (line 60) | async def sanitize_entry(self, entry: SubgraphEntry | None, remove_dat...
    method sanitize_entries (line 69) | async def sanitize_entries(self, entries: dict[str, SubgraphEntry], re...
    method get_custom_node_subgraphs (line 75) | async def get_custom_node_subgraphs(self, loadedModules, force_reload=...
    method get_blueprint_subgraphs (line 92) | async def get_blueprint_subgraphs(self, force_reload=False):
    method get_all_subgraphs (line 109) | async def get_all_subgraphs(self, loadedModules, force_reload=False):
    method get_subgraph (line 115) | async def get_subgraph(self, id: str, loadedModules):
    method add_routes (line 122) | def add_routes(self, routes, loadedModules):

FILE: app/user_manager.py
  class FileInfo (line 20) | class FileInfo(TypedDict):
  function get_file_info (line 27) | def get_file_info(path: str, relative_to: str) -> FileInfo:
  class UserManager (line 36) | class UserManager():
    method __init__ (line 37) | def __init__(self):
    method get_users_file (line 56) | def get_users_file(self):
    method get_request_user_id (line 59) | def get_request_user_id(self, request):
    method get_request_user_filepath (line 72) | def get_request_user_filepath(self, request, file, type="userdata", cr...
    method add_user (line 105) | def add_user(self, name):
    method add_routes (line 123) | def add_routes(self, routes):

FILE: blueprints/.glsl/update_blueprints.py
  function get_blueprint_files (line 29) | def get_blueprint_files():
  function sanitize_filename (line 34) | def sanitize_filename(name):
  function extract_shaders (line 39) | def extract_shaders():
  function patch_shaders (line 76) | def patch_shaders():
  function main (line 141) | def main():

FILE: comfy/audio_encoders/audio_encoders.py
  class AudioEncoderModel (line 10) | class AudioEncoderModel():
    method __init__ (line 11) | def __init__(self, config):
    method load_sd (line 32) | def load_sd(self, sd):
    method get_sd (line 35) | def get_sd(self):
    method encode_audio (line 38) | def encode_audio(self, audio, sample_rate):
  function load_audio_encoder_from_sd (line 49) | def load_audio_encoder_from_sd(sd, prefix=""):

FILE: comfy/audio_encoders/wav2vec2.py
  class LayerNormConv (line 6) | class LayerNormConv(nn.Module):
    method __init__ (line 7) | def __init__(self, in_channels, out_channels, kernel_size, stride, bia...
    method forward (line 12) | def forward(self, x):
  class LayerGroupNormConv (line 16) | class LayerGroupNormConv(nn.Module):
    method __init__ (line 17) | def __init__(self, in_channels, out_channels, kernel_size, stride, bia...
    method forward (line 22) | def forward(self, x):
  class ConvNoNorm (line 26) | class ConvNoNorm(nn.Module):
    method __init__ (line 27) | def __init__(self, in_channels, out_channels, kernel_size, stride, bia...
    method forward (line 31) | def forward(self, x):
  class ConvFeatureEncoder (line 36) | class ConvFeatureEncoder(nn.Module):
    method __init__ (line 37) | def __init__(self, conv_dim, conv_bias=False, conv_norm=True, dtype=No...
    method forward (line 60) | def forward(self, x):
  class FeatureProjection (line 69) | class FeatureProjection(nn.Module):
    method __init__ (line 70) | def __init__(self, conv_dim, embed_dim, dtype=None, device=None, opera...
    method forward (line 75) | def forward(self, x):
  class PositionalConvEmbedding (line 81) | class PositionalConvEmbedding(nn.Module):
    method __init__ (line 82) | def __init__(self, embed_dim=768, kernel_size=128, groups=16):
    method forward (line 94) | def forward(self, x):
  class TransformerEncoder (line 102) | class TransformerEncoder(nn.Module):
    method __init__ (line 103) | def __init__(
    method forward (line 129) | def forward(self, x, mask=None):
  class Attention (line 143) | class Attention(nn.Module):
    method __init__ (line 144) | def __init__(self, embed_dim, num_heads, bias=True, dtype=None, device...
    method forward (line 155) | def forward(self, x, mask=None):
  class FeedForward (line 165) | class FeedForward(nn.Module):
    method __init__ (line 166) | def __init__(self, embed_dim, mlp_ratio, dtype=None, device=None, oper...
    method forward (line 171) | def forward(self, x):
  class TransformerEncoderLayer (line 178) | class TransformerEncoderLayer(nn.Module):
    method __init__ (line 179) | def __init__(
    method forward (line 196) | def forward(self, x, mask=None):
  class Wav2Vec2Model (line 209) | class Wav2Vec2Model(nn.Module):
    method __init__ (line 212) | def __init__(
    method forward (line 241) | def forward(self, x, mask_time_indices=None, return_dict=False):

FILE: comfy/audio_encoders/whisper.py
  class WhisperFeatureExtractor (line 9) | class WhisperFeatureExtractor(nn.Module):
    method __init__ (line 10) | def __init__(self, n_mels=128, device=None):
    method __call__ (line 30) | def __call__(self, audio):
  class MultiHeadAttention (line 54) | class MultiHeadAttention(nn.Module):
    method __init__ (line 55) | def __init__(self, d_model: int, n_heads: int, dtype=None, device=None...
    method forward (line 68) | def forward(
  class EncoderLayer (line 87) | class EncoderLayer(nn.Module):
    method __init__ (line 88) | def __init__(self, d_model: int, n_heads: int, d_ff: int, dtype=None, ...
    method forward (line 98) | def forward(
  class AudioEncoder (line 118) | class AudioEncoder(nn.Module):
    method __init__ (line 119) | def __init__(
    method forward (line 144) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class WhisperLargeV3 (line 162) | class WhisperLargeV3(nn.Module):
    method __init__ (line 163) | def __init__(
    method forward (line 183) | def forward(self, audio):

FILE: comfy/cldm/cldm.py
  class OptimizedAttention (line 19) | class OptimizedAttention(nn.Module):
    method __init__ (line 20) | def __init__(self, c, nhead, dropout=0.0, dtype=None, device=None, ope...
    method forward (line 28) | def forward(self, x):
  class QuickGELU (line 34) | class QuickGELU(nn.Module):
    method forward (line 35) | def forward(self, x: torch.Tensor):
  class ResBlockUnionControlnet (line 38) | class ResBlockUnionControlnet(nn.Module):
    method __init__ (line 39) | def __init__(self, dim, nhead, dtype=None, device=None, operations=None):
    method attention (line 48) | def attention(self, x: torch.Tensor):
    method forward (line 51) | def forward(self, x: torch.Tensor):
  class ControlledUnetModel (line 56) | class ControlledUnetModel(UNetModel):
  class ControlNet (line 60) | class ControlNet(nn.Module):
    method __init__ (line 61) | def __init__(
    method union_controlnet_merge (line 353) | def union_controlnet_merge(self, hint, control_type, emb, context):
    method make_zero_conv (line 380) | def make_zero_conv(self, channels, operations=None, dtype=None, device...
    method forward (line 383) | def forward(self, x, hint, timesteps, context, y=None, **kwargs):

FILE: comfy/cldm/dit_embedder.py
  class ControlNetEmbedder (line 11) | class ControlNetEmbedder(nn.Module):
    method __init__ (line 13) | def __init__(
    method forward (line 88) | def forward(

FILE: comfy/cldm/mmdit.py
  class ControlNet (line 5) | class ControlNet(comfy.ldm.modules.diffusionmodules.mmdit.MMDiT):
    method __init__ (line 6) | def __init__(
    method forward (line 36) | def forward(

FILE: comfy/cli_args.py
  class EnumAction (line 7) | class EnumAction(argparse.Action):
    method __init__ (line 11) | def __init__(self, **kwargs):
    method __call__ (line 30) | def __call__(self, parser, namespace, values, option_string=None):
  class LatentPreviewMethod (line 96) | class LatentPreviewMethod(enum.Enum):
    method from_string (line 103) | def from_string(cls, value: str):
  class PerformanceFeature (line 161) | class PerformanceFeature(enum.Enum):
  function is_valid_directory (line 206) | def is_valid_directory(path: str) -> str:
  function enables_dynamic_vram (line 265) | def enables_dynamic_vram():

FILE: comfy/clip_model.py
  function clip_preprocess (line 6) | def clip_preprocess(image, size=224, mean=[0.48145466, 0.4578275, 0.4082...
  function siglip2_flex_calc_resolution (line 25) | def siglip2_flex_calc_resolution(oh, ow, patch_size, max_num_patches, ep...
  function siglip2_preprocess (line 42) | def siglip2_preprocess(image, size, patch_size, num_patches, mean=[0.5, ...
  class CLIPAttention (line 58) | class CLIPAttention(torch.nn.Module):
    method __init__ (line 59) | def __init__(self, embed_dim, heads, dtype, device, operations):
    method forward (line 69) | def forward(self, x, mask=None, optimized_attention=None):
  class CLIPMLP (line 82) | class CLIPMLP(torch.nn.Module):
    method __init__ (line 83) | def __init__(self, embed_dim, intermediate_size, activation, dtype, de...
    method forward (line 89) | def forward(self, x):
  class CLIPLayer (line 95) | class CLIPLayer(torch.nn.Module):
    method __init__ (line 96) | def __init__(self, embed_dim, heads, intermediate_size, intermediate_a...
    method forward (line 103) | def forward(self, x, mask=None, optimized_attention=None):
  class CLIPEncoder (line 109) | class CLIPEncoder(torch.nn.Module):
    method __init__ (line 110) | def __init__(self, num_layers, embed_dim, heads, intermediate_size, in...
    method forward (line 114) | def forward(self, x, mask=None, intermediate_output=None):
  class CLIPEmbeddings (line 138) | class CLIPEmbeddings(torch.nn.Module):
    method __init__ (line 139) | def __init__(self, embed_dim, vocab_size=49408, num_positions=77, dtyp...
    method forward (line 144) | def forward(self, input_tokens, dtype=torch.float32):
  class CLIPTextModel_ (line 148) | class CLIPTextModel_(torch.nn.Module):
    method __init__ (line 149) | def __init__(self, config_dict, dtype, device, operations):
    method forward (line 163) | def forward(self, input_tokens=None, attention_mask=None, embeds=None,...
  class CLIPTextModel (line 192) | class CLIPTextModel(torch.nn.Module):
    method __init__ (line 193) | def __init__(self, config_dict, dtype, device, operations):
    method get_input_embeddings (line 201) | def get_input_embeddings(self):
    method set_input_embeddings (line 204) | def set_input_embeddings(self, embeddings):
    method forward (line 207) | def forward(self, *args, **kwargs):
  function siglip2_pos_embed (line 212) | def siglip2_pos_embed(embed_weight, embeds, orig_shape):
  class Siglip2Embeddings (line 219) | class Siglip2Embeddings(torch.nn.Module):
    method __init__ (line 220) | def __init__(self, embed_dim, num_channels=3, patch_size=14, image_siz...
    method forward (line 226) | def forward(self, pixel_values):
  class CLIPVisionEmbeddings (line 234) | class CLIPVisionEmbeddings(torch.nn.Module):
    method __init__ (line 235) | def __init__(self, embed_dim, num_channels=3, patch_size=14, image_siz...
    method forward (line 259) | def forward(self, pixel_values):
  class CLIPVision (line 266) | class CLIPVision(torch.nn.Module):
    method __init__ (line 267) | def __init__(self, config_dict, dtype, device, operations):
    method forward (line 289) | def forward(self, pixel_values, attention_mask=None, intermediate_outp...
  class LlavaProjector (line 301) | class LlavaProjector(torch.nn.Module):
    method __init__ (line 302) | def __init__(self, in_dim, out_dim, dtype, device, operations):
    method forward (line 307) | def forward(self, x):
  class CLIPVisionModelProjection (line 310) | class CLIPVisionModelProjection(torch.nn.Module):
    method __init__ (line 311) | def __init__(self, config_dict, dtype, device, operations):
    method forward (line 324) | def forward(self, *args, **kwargs):

FILE: comfy/clip_vision.py
  class Output (line 13) | class Output:
    method __getitem__ (line 14) | def __getitem__(self, key):
    method __setitem__ (line 16) | def __setitem__(self, key, item):
  class ClipVisionModel (line 28) | class ClipVisionModel():
    method __init__ (line 29) | def __init__(self, json_config):
    method load_sd (line 52) | def load_sd(self, sd):
    method get_sd (line 55) | def get_sd(self):
    method encode_image (line 58) | def encode_image(self, image, crop=True):
  function convert_to_transformers (line 80) | def convert_to_transformers(sd, prefix):
  function load_clipvision_from_sd (line 106) | def load_clipvision_from_sd(sd, prefix="", convert_keys=False):
  function load (line 151) | def load(ckpt_path):

FILE: comfy/comfy_types/__init__.py
  class UnetApplyFunction (line 6) | class UnetApplyFunction(Protocol):
    method __call__ (line 9) | def __call__(self, x: torch.Tensor, t: torch.Tensor, **kwargs) -> torc...
  class UnetApplyConds (line 13) | class UnetApplyConds(TypedDict):
  class UnetParams (line 22) | class UnetParams(TypedDict):

FILE: comfy/comfy_types/examples/example_nodes.py
  class ExampleNode (line 5) | class ExampleNode(ComfyNodeABC):
    method INPUT_TYPES (line 16) | def INPUT_TYPES(s) -> InputTypeDict:
    method execute (line 27) | def execute(self, input_int: int):

FILE: comfy/comfy_types/node_typing.py
  class StrEnum (line 10) | class StrEnum(str, Enum):
    method __str__ (line 13) | def __str__(self) -> str:
  class IO (line 17) | class IO(StrEnum):
    method __ne__ (line 65) | def __ne__(self, value: object) -> bool:
  class RemoteInputOptions (line 75) | class RemoteInputOptions(TypedDict):
  class MultiSelectOptions (line 90) | class MultiSelectOptions(TypedDict):
  class InputTypeOptions (line 97) | class InputTypeOptions(TypedDict):
  class HiddenInputTypeDict (line 183) | class HiddenInputTypeDict(TypedDict):
  class InputTypeDict (line 198) | class InputTypeDict(TypedDict):
  class ComfyNodeABC (line 215) | class ComfyNodeABC(ABC):
    method INPUT_TYPES (line 248) | def INPUT_TYPES(s) -> InputTypeDict:
  class CheckLazyMixin (line 325) | class CheckLazyMixin:
    method check_lazy_status (line 328) | def check_lazy_status(self, **kwargs) -> list[str]:
  class FileLocator (line 346) | class FileLocator(TypedDict):

FILE: comfy/conds.py
  function is_equal (line 7) | def is_equal(x, y):
  class CONDRegular (line 26) | class CONDRegular:
    method __init__ (line 27) | def __init__(self, cond):
    method _copy_with (line 30) | def _copy_with(self, cond):
    method process_cond (line 33) | def process_cond(self, batch_size, **kwargs):
    method can_concat (line 36) | def can_concat(self, other):
    method concat (line 44) | def concat(self, others):
    method size (line 50) | def size(self):
  class CONDNoiseShape (line 54) | class CONDNoiseShape(CONDRegular):
    method process_cond (line 55) | def process_cond(self, batch_size, area, **kwargs):
  class CONDCrossAttn (line 65) | class CONDCrossAttn(CONDRegular):
    method can_concat (line 66) | def can_concat(self, other):
    method concat (line 82) | def concat(self, others):
  class CONDConstant (line 98) | class CONDConstant(CONDRegular):
    method __init__ (line 99) | def __init__(self, cond):
    method process_cond (line 102) | def process_cond(self, batch_size, **kwargs):
    method can_concat (line 105) | def can_concat(self, other):
    method concat (line 110) | def concat(self, others):
    method size (line 113) | def size(self):
  class CONDList (line 117) | class CONDList(CONDRegular):
    method __init__ (line 118) | def __init__(self, cond):
    method process_cond (line 121) | def process_cond(self, batch_size, **kwargs):
    method can_concat (line 128) | def can_concat(self, other):
    method concat (line 137) | def concat(self, others):
    method size (line 147) | def size(self):  # hackish implementation to make the mem estimation work

FILE: comfy/context_windows.py
  class ContextWindowABC (line 17) | class ContextWindowABC(ABC):
    method __init__ (line 18) | def __init__(self):
    method get_tensor (line 22) | def get_tensor(self, full: torch.Tensor) -> torch.Tensor:
    method add_window (line 29) | def add_window(self, full: torch.Tensor, to_add: torch.Tensor) -> torc...
  class ContextHandlerABC (line 35) | class ContextHandlerABC(ABC):
    method __init__ (line 36) | def __init__(self):
    method should_use_context (line 40) | def should_use_context(self, model: BaseModel, conds: list[list[dict]]...
    method get_resized_cond (line 44) | def get_resized_cond(self, cond_in: list[dict], x_in: torch.Tensor, wi...
    method execute (line 48) | def execute(self, calc_cond_batch: Callable, model: BaseModel, conds: ...
  class IndexListContextWindow (line 53) | class IndexListContextWindow(ContextWindowABC):
    method __init__ (line 54) | def __init__(self, index_list: list[int], dim: int=0, total_frames: in...
    method get_tensor (line 61) | def get_tensor(self, full: torch.Tensor, device=None, dim=None, retain...
    method add_window (line 73) | def add_window(self, full: torch.Tensor, to_add: torch.Tensor, dim=Non...
    method get_region_index (line 80) | def get_region_index(self, num_regions: int) -> int:
  class IndexListCallbacks (line 85) | class IndexListCallbacks:
    method init_callbacks (line 92) | def init_callbacks(self):
  function slice_cond (line 96) | def slice_cond(cond_value, window: IndexListContextWindow, x_in: torch.T...
  class ContextSchedule (line 141) | class ContextSchedule:
  class ContextFuseMethod (line 146) | class ContextFuseMethod:
  class IndexListContextHandler (line 151) | class IndexListContextHandler(ContextHandlerABC):
    method __init__ (line 152) | def __init__(self, context_schedule: ContextSchedule, fuse_method: Con...
    method should_use_context (line 168) | def should_use_context(self, model: BaseModel, conds: list[list[dict]]...
    method prepare_control_objects (line 177) | def prepare_control_objects(self, control: ControlBase, device=None) -...
    method get_resized_cond (line 182) | def get_resized_cond(self, cond_in: list[dict], x_in: torch.Tensor, wi...
    method set_step (line 264) | def set_step(self, timestep: torch.Tensor, model_options: dict[str]):
    method get_context_windows (line 271) | def get_context_windows(self, model: BaseModel, x_in: torch.Tensor, mo...
    method execute (line 277) | def execute(self, calc_cond_batch: Callable, model: BaseModel, conds: ...
    method evaluate_context_windows (line 314) | def evaluate_context_windows(self, calc_cond_batch: Callable, model: B...
    method combine_context_window_results (line 339) | def combine_context_window_results(self, x_in: torch.Tensor, sub_conds...
  function _prepare_sampling_wrapper (line 369) | def _prepare_sampling_wrapper(executor, model, noise_shape: torch.Tensor...
  function create_prepare_sampling_wrapper (line 381) | def create_prepare_sampling_wrapper(model: ModelPatcher):
  function _sampler_sample_wrapper (line 389) | def _sampler_sample_wrapper(executor, guider, sigmas, extra_args, callba...
  function create_sampler_sample_wrapper (line 403) | def create_sampler_sample_wrapper(model: ModelPatcher):
  function match_weights_to_dim (line 411) | def match_weights_to_dim(weights: list[float], x_in: torch.Tensor, dim: ...
  function get_shape_for_dim (line 420) | def get_shape_for_dim(x_in: torch.Tensor, dim: int) -> list[int]:
  class ContextSchedules (line 430) | class ContextSchedules:
  function create_windows_uniform_looped (line 438) | def create_windows_uniform_looped(num_frames: int, handler: IndexListCon...
  function create_windows_uniform_standard (line 457) | def create_windows_uniform_standard(num_frames: int, handler: IndexListC...
  function create_windows_static_standard (line 505) | def create_windows_static_standard(num_frames: int, handler: IndexListCo...
  function create_windows_batched (line 524) | def create_windows_batched(num_frames: int, handler: IndexListContextHan...
  function create_windows_default (line 537) | def create_windows_default(num_frames: int, handler: IndexListContextHan...
  function get_matching_context_schedule (line 549) | def get_matching_context_schedule(context_schedule: str) -> ContextSched...
  function get_context_weights (line 556) | def get_context_weights(length: int, full_length: int, idxs: list[int], ...
  function create_weights_flat (line 560) | def create_weights_flat(length: int, **kwargs) -> list[float]:
  function create_weights_pyramid (line 564) | def create_weights_pyramid(length: int, **kwargs) -> list[float]:
  function create_weights_overlap_linear (line 575) | def create_weights_overlap_linear(length: int, full_length: int, idxs: l...
  class ContextFuseMethods (line 589) | class ContextFuseMethods:
  function get_matching_fuse_method (line 606) | def get_matching_fuse_method(fuse_method: str) -> ContextFuseMethod:
  function ordered_halving (line 613) | def ordered_halving(val):
  function get_missing_indexes (line 625) | def get_missing_indexes(windows: list[list[int]], num_frames: int) -> li...
  function does_window_roll_over (line 636) | def does_window_roll_over(window: list[int], num_frames: int) -> tuple[b...
  function shift_window_to_start (line 646) | def shift_window_to_start(window: list[int], num_frames: int):
  function shift_window_to_end (line 654) | def shift_window_to_end(window: list[int], num_frames: int):
  function apply_freenoise (line 665) | def apply_freenoise(noise: torch.Tensor, dim: int, context_length: int, ...

FILE: comfy/controlnet.py
  function broadcast_image_to (line 46) | def broadcast_image_to(tensor, target_batch_size, batched_number):
  class StrengthType (line 63) | class StrengthType(Enum):
  class ControlBase (line 67) | class ControlBase:
    method __init__ (line 68) | def __init__(self):
    method set_cond_hint (line 89) | def set_cond_hint(self, cond_hint, strength=1.0, timestep_percent_rang...
    method pre_run (line 102) | def pre_run(self, model, percent_to_timestep_function):
    method set_previous_controlnet (line 107) | def set_previous_controlnet(self, controlnet):
    method cleanup (line 111) | def cleanup(self):
    method get_models (line 119) | def get_models(self):
    method get_extra_hooks (line 125) | def get_extra_hooks(self):
    method copy_to (line 133) | def copy_to(self, c):
    method inference_memory_requirements (line 150) | def inference_memory_requirements(self, dtype):
    method control_merge (line 155) | def control_merge(self, control, control_prev, output_dtype):
    method set_extra_arg (line 196) | def set_extra_arg(self, argument, value=None):
  class ControlNet (line 200) | class ControlNet(ControlBase):
    method __init__ (line 201) | def __init__(self, control_model=None, global_average_pooling=False, c...
    method get_control (line 218) | def get_control(self, x_noisy, t, cond, batched_number, transformer_op...
    method copy (line 280) | def copy(self):
    method get_models (line 287) | def get_models(self):
    method pre_run (line 292) | def pre_run(self, model, percent_to_timestep_function):
    method cleanup (line 296) | def cleanup(self):
  class QwenFunControlNet (line 301) | class QwenFunControlNet(ControlNet):
    method get_control (line 302) | def get_control(self, x_noisy, t, cond, batched_number, transformer_op...
    method pre_run (line 313) | def pre_run(self, model, percent_to_timestep_function):
    method copy (line 317) | def copy(self):
  class ControlLoraOps (line 324) | class ControlLoraOps:
    class Linear (line 325) | class Linear(torch.nn.Module, comfy.ops.CastWeightBiasOp):
      method __init__ (line 326) | def __init__(self, in_features: int, out_features: int, bias: bool =...
      method forward (line 336) | def forward(self, input):
    class Conv2d (line 345) | class Conv2d(torch.nn.Module, comfy.ops.CastWeightBiasOp):
      method __init__ (line 346) | def __init__(
      method forward (line 378) | def forward(self, input):
  class ControlLora (line 387) | class ControlLora(ControlNet):
    method __init__ (line 388) | def __init__(self, control_weights, global_average_pooling=False, mode...
    method pre_run (line 394) | def pre_run(self, model, percent_to_timestep_function):
    method copy (line 428) | def copy(self):
    method cleanup (line 433) | def cleanup(self):
    method get_models (line 438) | def get_models(self):
    method inference_memory_requirements (line 442) | def inference_memory_requirements(self, dtype):
  function controlnet_config (line 445) | def controlnet_config(sd, model_options={}):
  function controlnet_load_state_dict (line 465) | def controlnet_load_state_dict(control_model, sd):
  function load_controlnet_mmdit (line 476) | def load_controlnet_mmdit(sd, model_options={}):
  class ControlNetSD35 (line 497) | class ControlNetSD35(ControlNet):
    method pre_run (line 498) | def pre_run(self, model, percent_to_timestep_function):
    method copy (line 505) | def copy(self):
  function load_controlnet_sd35 (line 512) | def load_controlnet_sd35(sd, model_options={}):
  function load_controlnet_hunyuandit (line 573) | def load_controlnet_hunyuandit(controlnet_data, model_options={}):
  function load_controlnet_flux_xlabs_mistoline (line 584) | def load_controlnet_flux_xlabs_mistoline(sd, mistoline=False, model_opti...
  function load_controlnet_flux_instantx (line 593) | def load_controlnet_flux_instantx(sd, model_options={}):
  function load_controlnet_qwen_instantx (line 617) | def load_controlnet_qwen_instantx(sd, model_options={}):
  function load_controlnet_qwen_fun (line 634) | def load_controlnet_qwen_fun(sd, model_options={}):
  function convert_mistoline (line 680) | def convert_mistoline(sd):
  function load_controlnet_state_dict (line 684) | def load_controlnet_state_dict(state_dict, model=None, model_options={}):
  function load_controlnet (line 839) | def load_controlnet(ckpt_path, model=None, model_options={}):
  class T2IAdapter (line 851) | class T2IAdapter(ControlBase):
    method __init__ (line 852) | def __init__(self, t2i_model, channels_in, compression_ratio, upscale_...
    method scale_image_to (line 863) | def scale_image_to(self, width, height):
    method get_control (line 869) | def get_control(self, x_noisy, t, cond, batched_number, transformer_op...
    method copy (line 904) | def copy(self):
  function load_t2i_adapter (line 909) | def load_t2i_adapter(t2i_data, model_options={}): #TODO: model_options

FILE: comfy/diffusers_convert.py
  function reshape_weight_for_sd (line 61) | def reshape_weight_for_sd(w, conv3d=False):
  function convert_vae_state_dict (line 69) | def convert_vae_state_dict(vae_state_dict):
  function cat_tensors (line 119) | def cat_tensors(tensors):
  function convert_text_enc_state_dict_v20 (line 135) | def convert_text_enc_state_dict_v20(text_enc_dict, prefix=""):
  function convert_text_enc_state_dict (line 188) | def convert_text_enc_state_dict(text_enc_dict):

FILE: comfy/diffusers_load.py
  function first_file (line 5) | def first_file(path, filenames):
  function load_diffusers (line 12) | def load_diffusers(model_path, output_vae=True, output_clip=True, embedd...

FILE: comfy/extra_samplers/uni_pc.py
  class NoiseScheduleVP (line 10) | class NoiseScheduleVP:
    method __init__ (line 11) | def __init__(
    method marginal_log_mean_coeff (line 129) | def marginal_log_mean_coeff(self, t):
    method marginal_alpha (line 142) | def marginal_alpha(self, t):
    method marginal_std (line 148) | def marginal_std(self, t):
    method marginal_lambda (line 154) | def marginal_lambda(self, t):
    method inverse_lambda (line 162) | def inverse_lambda(self, lamb):
  function model_wrapper (line 181) | def model_wrapper(
  class UniPC (line 352) | class UniPC:
    method __init__ (line 353) | def __init__(
    method dynamic_thresholding_fn (line 373) | def dynamic_thresholding_fn(self, x0, t=None):
    method noise_prediction_fn (line 384) | def noise_prediction_fn(self, x, t):
    method data_prediction_fn (line 390) | def data_prediction_fn(self, x, t):
    method model_fn (line 405) | def model_fn(self, x, t):
    method get_time_steps (line 414) | def get_time_steps(self, skip_type, t_T, t_0, N, device):
    method get_orders_and_timesteps_for_singlestep_solver (line 431) | def get_orders_and_timesteps_for_singlestep_solver(self, steps, order,...
    method denoise_to_zero_fn (line 462) | def denoise_to_zero_fn(self, x, s):
    method multistep_uni_pc_update (line 468) | def multistep_uni_pc_update(self, x, model_prev_list, t_prev_list, t, ...
    method multistep_uni_pc_vary_update (line 477) | def multistep_uni_pc_vary_update(self, x, model_prev_list, t_prev_list...
    method multistep_uni_pc_bh_update (line 579) | def multistep_uni_pc_bh_update(self, x, model_prev_list, t_prev_list, ...
    method sample (line 700) | def sample(self, x, timesteps, t_start=None, t_end=None, order=3, skip...
  function interpolate_fn (line 766) | def interpolate_fn(x, xp, yp):
  function expand_dims (line 808) | def expand_dims(v, dims):
  class SigmaConvert (line 821) | class SigmaConvert:
    method marginal_log_mean_coeff (line 823) | def marginal_log_mean_coeff(self, sigma):
    method marginal_alpha (line 826) | def marginal_alpha(self, t):
    method marginal_std (line 829) | def marginal_std(self, t):
    method marginal_lambda (line 832) | def marginal_lambda(self, t):
  function predict_eps_sigma (line 840) | def predict_eps_sigma(model, input, sigma_in, **kwargs):
  function sample_unipc (line 846) | def sample_unipc(model, noise, sigmas, extra_args=None, callback=None, d...
  function sample_unipc_bh2 (line 872) | def sample_unipc_bh2(model, noise, sigmas, extra_args=None, callback=Non...

FILE: comfy/float.py
  function calc_mantissa (line 3) | def calc_mantissa(abs_x, exponent, normal_mask, MANTISSA_BITS, EXPONENT_...
  function manual_stochastic_round_to_float8 (line 14) | def manual_stochastic_round_to_float8(x, dtype, generator=None):
  function stochastic_rounding (line 50) | def stochastic_rounding(value, dtype, seed=0):
  function stochastic_float_to_fp4_e2m1 (line 71) | def stochastic_float_to_fp4_e2m1(x, generator):
  function to_blocked (line 99) | def to_blocked(input_matrix, flatten: bool = True) -> torch.Tensor:
  function stochastic_round_quantize_nvfp4_block (line 140) | def stochastic_round_quantize_nvfp4_block(x, per_tensor_scale, generator):
  function stochastic_round_quantize_nvfp4 (line 157) | def stochastic_round_quantize_nvfp4(x, per_tensor_scale, pad_16x, seed=0):
  function stochastic_round_quantize_nvfp4_by_block (line 177) | def stochastic_round_quantize_nvfp4_by_block(x, per_tensor_scale, pad_16...
  function stochastic_round_quantize_mxfp8_by_block (line 214) | def stochastic_round_quantize_mxfp8_by_block(x, pad_32x, seed=0):

FILE: comfy/gligen.py
  class GatedCrossAttentionDense (line 9) | class GatedCrossAttentionDense(nn.Module):
    method __init__ (line 10) | def __init__(self, query_dim, context_dim, n_heads, d_head):
    method forward (line 32) | def forward(self, x, objs):
  class GatedSelfAttentionDense (line 42) | class GatedSelfAttentionDense(nn.Module):
    method __init__ (line 43) | def __init__(self, query_dim, context_dim, n_heads, d_head):
    method forward (line 69) | def forward(self, x, objs):
  class GatedSelfAttentionDense2 (line 82) | class GatedSelfAttentionDense2(nn.Module):
    method __init__ (line 83) | def __init__(self, query_dim, context_dim, n_heads, d_head):
    method forward (line 105) | def forward(self, x, objs):
  class FourierEmbedder (line 136) | class FourierEmbedder():
    method __init__ (line 137) | def __init__(self, num_freqs=64, temperature=100):
    method __call__ (line 144) | def __call__(self, x, cat_dim=-1):
  class PositionNet (line 153) | class PositionNet(nn.Module):
    method __init__ (line 154) | def __init__(self, in_dim, out_dim, fourier_freqs=8):
    method forward (line 175) | def forward(self, boxes, masks, positive_embeddings):
  class Gligen (line 198) | class Gligen(nn.Module):
    method __init__ (line 199) | def __init__(self, modules, position_net, key_dim):
    method _set_position (line 207) | def _set_position(self, boxes, masks, positive_embeddings):
    method set_position (line 215) | def set_position(self, latent_image_shape, position_params, device):
    method set_empty (line 246) | def set_empty(self, latent_image_shape, device):
  function load_gligen (line 259) | def load_gligen(sd):

FILE: comfy/hooks.py
  class EnumHookMode (line 29) | class EnumHookMode(enum.Enum):
  class EnumHookType (line 39) | class EnumHookType(enum.Enum):
  class EnumWeightTarget (line 49) | class EnumWeightTarget(enum.Enum):
  class EnumHookScope (line 53) | class EnumHookScope(enum.Enum):
  class _HookRef (line 64) | class _HookRef:
  function default_should_register (line 68) | def default_should_register(hook: Hook, model: ModelPatcher, model_optio...
  function create_target_dict (line 73) | def create_target_dict(target: EnumWeightTarget=None, **kwargs) -> dict[...
  class Hook (line 82) | class Hook:
    method __init__ (line 83) | def __init__(self, hook_type: EnumHookType=None, hook_ref: _HookRef=No...
    method strength (line 99) | def strength(self):
    method initialize_timesteps (line 102) | def initialize_timesteps(self, model: BaseModel):
    method reset (line 106) | def reset(self):
    method clone (line 109) | def clone(self):
    method should_register (line 119) | def should_register(self, model: ModelPatcher, model_options: dict, ta...
    method add_hook_patches (line 122) | def add_hook_patches(self, model: ModelPatcher, model_options: dict, t...
    method __eq__ (line 125) | def __eq__(self, other: Hook):
    method __hash__ (line 128) | def __hash__(self):
  class WeightHook (line 131) | class WeightHook(Hook):
    method __init__ (line 137) | def __init__(self, strength_model=1.0, strength_clip=1.0):
    method strength_model (line 147) | def strength_model(self):
    method strength_clip (line 151) | def strength_clip(self):
    method add_hook_patches (line 154) | def add_hook_patches(self, model: ModelPatcher, model_options: dict, t...
    method clone (line 182) | def clone(self):
  class ObjectPatchHook (line 191) | class ObjectPatchHook(Hook):
    method __init__ (line 192) | def __init__(self, object_patches: dict[str]=None,
    method clone (line 198) | def clone(self):
    method add_hook_patches (line 203) | def add_hook_patches(self, model: ModelPatcher, model_options: dict, t...
  class AdditionalModelsHook (line 206) | class AdditionalModelsHook(Hook):
    method __init__ (line 212) | def __init__(self, models: list[ModelPatcher]=None, key: str=None):
    method clone (line 217) | def clone(self):
    method add_hook_patches (line 223) | def add_hook_patches(self, model: ModelPatcher, model_options: dict, t...
  class TransformerOptionsHook (line 229) | class TransformerOptionsHook(Hook):
    method __init__ (line 233) | def __init__(self, transformers_dict: dict[str, dict[str, dict[str, li...
    method clone (line 241) | def clone(self):
    method add_hook_patches (line 247) | def add_hook_patches(self, model: ModelPatcher, model_options: dict, t...
    method on_apply_hooks (line 263) | def on_apply_hooks(self, model: ModelPatcher, transformer_options: dic...
  class InjectionsHook (line 270) | class InjectionsHook(Hook):
    method __init__ (line 271) | def __init__(self, key: str=None, injections: list[PatcherInjection]=N...
    method clone (line 278) | def clone(self):
    method add_hook_patches (line 284) | def add_hook_patches(self, model: ModelPatcher, model_options: dict, t...
  class HookGroup (line 287) | class HookGroup:
    method __init__ (line 294) | def __init__(self):
    method __len__ (line 298) | def __len__(self):
    method add (line 301) | def add(self, hook: Hook):
    method remove (line 306) | def remove(self, hook: Hook):
    method get_type (line 311) | def get_type(self, hook_type: EnumHookType):
    method contains (line 314) | def contains(self, hook: Hook):
    method is_subset_of (line 317) | def is_subset_of(self, other: HookGroup):
    method new_with_common_hooks (line 322) | def new_with_common_hooks(self, other: HookGroup):
    method clone (line 329) | def clone(self):
    method clone_and_combine (line 335) | def clone_and_combine(self, other: HookGroup):
    method set_keyframes_on_hooks (line 342) | def set_keyframes_on_hooks(self, hook_kf: HookKeyframeGroup):
    method get_hooks_for_clip_schedule (line 350) | def get_hooks_for_clip_schedule(self):
    method reset (line 399) | def reset(self):
    method combine_all_hooks (line 404) | def combine_all_hooks(hooks_list: list[HookGroup], require_count=0) ->...
  class HookKeyframe (line 426) | class HookKeyframe:
    method __init__ (line 427) | def __init__(self, strength: float, start_percent=0.0, guarantee_steps...
    method get_effective_guarantee_steps (line 434) | def get_effective_guarantee_steps(self, max_sigma: torch.Tensor):
    method clone (line 440) | def clone(self):
  class HookKeyframeGroup (line 446) | class HookKeyframeGroup:
    method __init__ (line 447) | def __init__(self):
    method strength (line 457) | def strength(self):
    method reset (line 462) | def reset(self):
    method add (line 470) | def add(self, keyframe: HookKeyframe):
    method _set_first_as_current (line 476) | def _set_first_as_current(self):
    method has_guarantee_steps (line 482) | def has_guarantee_steps(self):
    method has_index (line 488) | def has_index(self, index: int):
    method is_empty (line 491) | def is_empty(self):
    method clone (line 494) | def clone(self):
    method initialize_timesteps (line 501) | def initialize_timesteps(self, model: BaseModel):
    method prepare_current_keyframe (line 505) | def prepare_current_keyframe(self, curr_t: float, transformer_options:...
  class InterpolationMethod (line 540) | class InterpolationMethod:
    method get_weights (line 549) | def get_weights(cls, num_from: float, num_to: float, length: int, meth...
  function get_sorted_list_via_attr (line 568) | def get_sorted_list_via_attr(objects: list, attr: str) -> list:
  function create_transformer_options_from_hooks (line 591) | def create_transformer_options_from_hooks(model: ModelPatcher, hooks: Ho...
  function create_hook_lora (line 602) | def create_hook_lora(lora: dict[str, torch.Tensor], strength_model: floa...
  function create_hook_model_as_lora (line 609) | def create_hook_model_as_lora(weights_model, weights_clip, strength_mode...
  function get_patch_weights_from_model (line 628) | def get_patch_weights_from_model(model: ModelPatcher, discard_model_samp...
  function load_hook_lora_for_models (line 640) | def load_hook_lora_for_models(model: ModelPatcher, clip: CLIP, lora: dic...
  function _combine_hooks_from_values (line 672) | def _combine_hooks_from_values(c_dict: dict[str, HookGroup], values: dic...
  function conditioning_set_values_with_hooks (line 692) | def conditioning_set_values_with_hooks(conditioning, values={}, append_h...
  function set_hooks_for_conditioning (line 708) | def set_hooks_for_conditioning(cond, hooks: HookGroup, append_hooks=True...
  function set_timesteps_for_conditioning (line 713) | def set_timesteps_for_conditioning(cond, timestep_range: tuple[float,flo...
  function set_mask_for_conditioning (line 719) | def set_mask_for_conditioning(cond, mask: torch.Tensor, set_cond_area: s...
  function combine_conditioning (line 731) | def combine_conditioning(conds: list):
  function combine_with_new_conds (line 737) | def combine_with_new_conds(conds: list, new_conds: list):
  function set_conds_props (line 743) | def set_conds_props(conds: list, strength: float, set_cond_area: str,
  function set_conds_props_and_combine (line 758) | def set_conds_props_and_combine(conds: list, new_conds: list, strength: ...
  function set_default_conds_and_combine (line 773) | def set_default_conds_and_combine(conds: list, new_conds: list,

FILE: comfy/image_encoders/dino2.py
  class Dino2AttentionOutput (line 7) | class Dino2AttentionOutput(torch.nn.Module):
    method __init__ (line 8) | def __init__(self, input_dim, output_dim, layer_norm_eps, dtype, devic...
    method forward (line 12) | def forward(self, x):
  class Dino2AttentionBlock (line 16) | class Dino2AttentionBlock(torch.nn.Module):
    method __init__ (line 17) | def __init__(self, embed_dim, heads, layer_norm_eps, dtype, device, op...
    method forward (line 22) | def forward(self, x, mask, optimized_attention):
  class LayerScale (line 26) | class LayerScale(torch.nn.Module):
    method __init__ (line 27) | def __init__(self, dim, dtype, device, operations):
    method forward (line 31) | def forward(self, x):
  class Dinov2MLP (line 34) | class Dinov2MLP(torch.nn.Module):
    method __init__ (line 35) | def __init__(self, hidden_size: int, dtype, device, operations):
    method forward (line 43) | def forward(self, hidden_state: torch.Tensor) -> torch.Tensor:
  class SwiGLUFFN (line 49) | class SwiGLUFFN(torch.nn.Module):
    method __init__ (line 50) | def __init__(self, dim, dtype, device, operations):
    method forward (line 59) | def forward(self, x):
  class Dino2Block (line 66) | class Dino2Block(torch.nn.Module):
    method __init__ (line 67) | def __init__(self, dim, num_heads, layer_norm_eps, dtype, device, oper...
    method forward (line 79) | def forward(self, x, optimized_attention):
  class Dino2Encoder (line 85) | class Dino2Encoder(torch.nn.Module):
    method __init__ (line 86) | def __init__(self, dim, num_heads, layer_norm_eps, num_layers, dtype, ...
    method forward (line 91) | def forward(self, x, intermediate_output=None):
  class Dino2PatchEmbeddings (line 106) | class Dino2PatchEmbeddings(torch.nn.Module):
    method __init__ (line 107) | def __init__(self, dim, num_channels=3, patch_size=14, image_size=518,...
    method forward (line 119) | def forward(self, pixel_values):
  class Dino2Embeddings (line 123) | class Dino2Embeddings(torch.nn.Module):
    method __init__ (line 124) | def __init__(self, dim, dtype, device, operations):
    method forward (line 134) | def forward(self, pixel_values):
  class Dinov2Model (line 142) | class Dinov2Model(torch.nn.Module):
    method __init__ (line 143) | def __init__(self, config_dict, dtype, device, operations):
    method forward (line 155) | def forward(self, pixel_values, attention_mask=None, intermediate_outp...

FILE: comfy/k_diffusion/deis.py
  function edm2t (line 13) | def edm2t(edm_steps, epsilon_s=1e-3, sigma_min=0.002, sigma_max=80):
  function cal_poly (line 22) | def cal_poly(prev_t, j, taus):
  function t2alpha_fn (line 33) | def t2alpha_fn(beta_0, beta_1, t):
  function cal_intergrand (line 38) | def cal_intergrand(beta_0, beta_1, taus):
  function get_deis_coeff_list (line 54) | def get_deis_coeff_list(t_steps, max_order, N=10000, deis_mode='tab'):

FILE: comfy/k_diffusion/sa_solver.py
  function compute_exponential_coeffs (line 10) | def compute_exponential_coeffs(s: torch.Tensor, t: torch.Tensor, solver_...
  function compute_simple_stochastic_adams_b_coeffs (line 53) | def compute_simple_stochastic_adams_b_coeffs(sigma_next: torch.Tensor, c...
  function compute_stochastic_adams_b_coeffs (line 69) | def compute_stochastic_adams_b_coeffs(sigma_next: torch.Tensor, curr_lam...
  function get_tau_interval_func (line 103) | def get_tau_interval_func(start_sigma: float, end_sigma: float, eta: flo...

FILE: comfy/k_diffusion/sampling.py
  function append_zero (line 19) | def append_zero(x):
  function get_sigmas_karras (line 23) | def get_sigmas_karras(n, sigma_min, sigma_max, rho=7., device='cpu'):
  function get_sigmas_exponential (line 32) | def get_sigmas_exponential(n, sigma_min, sigma_max, device='cpu'):
  function get_sigmas_polyexponential (line 38) | def get_sigmas_polyexponential(n, sigma_min, sigma_max, rho=1., device='...
  function get_sigmas_vp (line 45) | def get_sigmas_vp(n, beta_d=19.9, beta_min=0.1, eps_s=1e-3, device='cpu'):
  function get_sigmas_laplace (line 52) | def get_sigmas_laplace(n, sigma_min, sigma_max, mu=0., beta=0.5, device=...
  function to_d (line 63) | def to_d(x, sigma, denoised):
  function get_ancestral_step (line 68) | def get_ancestral_step(sigma_from, sigma_to, eta=1.):
  function default_noise_sampler (line 78) | def default_noise_sampler(x, seed=None):
  class BatchedBrownianTree (line 91) | class BatchedBrownianTree:
    method __init__ (line 94) | def __init__(self, x, t0, t1, seed=None, **kwargs):
    method sort (line 115) | def sort(a, b):
    method __call__ (line 118) | def __call__(self, t0, t1):
  class BrownianTreeNoiseSampler (line 127) | class BrownianTreeNoiseSampler:
    method __init__ (line 142) | def __init__(self, x, sigma_min, sigma_max, seed=None, transform=lambd...
    method __call__ (line 147) | def __call__(self, sigma, sigma_next):
  function sigma_to_half_log_snr (line 152) | def sigma_to_half_log_snr(sigma, model_sampling):
  function half_log_snr_to_sigma (line 160) | def half_log_snr_to_sigma(half_log_snr, model_sampling):
  function offset_first_sigma_for_snr (line 168) | def offset_first_sigma_for_snr(sigmas, model_sampling, percent_offset=1e...
  function ei_h_phi_1 (line 179) | def ei_h_phi_1(h: torch.Tensor) -> torch.Tensor:
  function ei_h_phi_2 (line 184) | def ei_h_phi_2(h: torch.Tensor) -> torch.Tensor:
  function sample_euler (line 190) | def sample_euler(model, x, sigmas, extra_args=None, callback=None, disab...
  function sample_euler_ancestral (line 216) | def sample_euler_ancestral(model, x, sigmas, extra_args=None, callback=N...
  function sample_euler_ancestral_RF (line 240) | def sample_euler_ancestral_RF(model, x, sigmas, extra_args=None, callbac...
  function sample_heun (line 268) | def sample_heun(model, x, sigmas, extra_args=None, callback=None, disabl...
  function sample_dpm_2 (line 303) | def sample_dpm_2(model, x, sigmas, extra_args=None, callback=None, disab...
  function sample_dpm_2_ancestral (line 339) | def sample_dpm_2_ancestral(model, x, sigmas, extra_args=None, callback=N...
  function sample_dpm_2_ancestral_RF (line 371) | def sample_dpm_2_ancestral_RF(model, x, sigmas, extra_args=None, callbac...
  function linear_multistep_coeff (line 404) | def linear_multistep_coeff(order, t, i, j):
  function sample_lms (line 418) | def sample_lms(model, x, sigmas, extra_args=None, callback=None, disable...
  class PIDStepSizeController (line 441) | class PIDStepSizeController:
    method __init__ (line 443) | def __init__(self, h, pcoeff, icoeff, dcoeff, order=1, accept_safety=0...
    method limiter (line 452) | def limiter(self, x):
    method propose_step (line 455) | def propose_step(self, error):
  class DPMSolver (line 470) | class DPMSolver(nn.Module):
    method __init__ (line 473) | def __init__(self, model, extra_args=None, eps_callback=None, info_cal...
    method t (line 480) | def t(self, sigma):
    method sigma (line 483) | def sigma(self, t):
    method eps (line 486) | def eps(self, eps_cache, key, x, t, *args, **kwargs):
    method dpm_solver_1_step (line 495) | def dpm_solver_1_step(self, x, t, t_next, eps_cache=None):
    method dpm_solver_2_step (line 502) | def dpm_solver_2_step(self, x, t, t_next, r1=1 / 2, eps_cache=None):
    method dpm_solver_3_step (line 512) | def dpm_solver_3_step(self, x, t, t_next, r1=1 / 3, r2=2 / 3, eps_cach...
    method dpm_solver_fast (line 525) | def dpm_solver_fast(self, x, t_start, t_end, nfe, eta=0., s_noise=1., ...
    method dpm_solver_adaptive (line 564) | def dpm_solver_adaptive(self, x, t_start, t_end, order=3, rtol=0.05, a...
  function sample_dpm_fast (line 619) | def sample_dpm_fast(model, x, sigma_min, sigma_max, n, extra_args=None, ...
  function sample_dpm_adaptive (line 631) | def sample_dpm_adaptive(model, x, sigma_min, sigma_max, extra_args=None,...
  function sample_dpmpp_2s_ancestral (line 646) | def sample_dpmpp_2s_ancestral(model, x, sigmas, extra_args=None, callbac...
  function sample_dpmpp_2s_ancestral_RF (line 684) | def sample_dpmpp_2s_ancestral_RF(model, x, sigmas, extra_args=None, call...
  function sample_dpmpp_sde (line 735) | def sample_dpmpp_sde(model, x, sigmas, extra_args=None, callback=None, d...
  function sample_dpmpp_2m (line 792) | def sample_dpmpp_2m(model, x, sigmas, extra_args=None, callback=None, di...
  function sample_dpmpp_2m_sde (line 818) | def sample_dpmpp_2m_sde(model, x, sigmas, extra_args=None, callback=None...
  function sample_dpmpp_2m_sde_heun (line 872) | def sample_dpmpp_2m_sde_heun(model, x, sigmas, extra_args=None, callback...
  function sample_dpmpp_3m_sde (line 877) | def sample_dpmpp_3m_sde(model, x, sigmas, extra_args=None, callback=None...
  function sample_dpmpp_3m_sde_gpu (line 939) | def sample_dpmpp_3m_sde_gpu(model, x, sigmas, extra_args=None, callback=...
  function sample_dpmpp_2m_sde_heun_gpu (line 949) | def sample_dpmpp_2m_sde_heun_gpu(model, x, sigmas, extra_args=None, call...
  function sample_dpmpp_2m_sde_gpu (line 959) | def sample_dpmpp_2m_sde_gpu(model, x, sigmas, extra_args=None, callback=...
  function sample_dpmpp_sde_gpu (line 969) | def sample_dpmpp_sde_gpu(model, x, sigmas, extra_args=None, callback=Non...
  function DDPMSampler_step (line 978) | def DDPMSampler_step(x, sigma, sigma_prev, noise, noise_sampler):
  function generic_step_sampler (line 988) | def generic_step_sampler(model, x, sigmas, extra_args=None, callback=Non...
  function sample_ddpm (line 1005) | def sample_ddpm(model, x, sigmas, extra_args=None, callback=None, disabl...
  function sample_lcm (line 1009) | def sample_lcm(model, x, sigmas, extra_args=None, callback=None, disable...
  function sample_heunpp2 (line 1027) | def sample_heunpp2(model, x, sigmas, extra_args=None, callback=None, dis...
  function sample_ipndm (line 1085) | def sample_ipndm(model, x, sigmas, extra_args=None, callback=None, disab...
  function sample_ipndm_v (line 1128) | def sample_ipndm_v(model, x, sigmas, extra_args=None, callback=None, dis...
  function sample_deis (line 1195) | def sample_deis(model, x, sigmas, extra_args=None, callback=None, disabl...
  function sample_euler_ancestral_cfg_pp (line 1244) | def sample_euler_ancestral_cfg_pp(model, x, sigmas, extra_args=None, cal...
  function sample_euler_cfg_pp (line 1288) | def sample_euler_cfg_pp(model, x, sigmas, extra_args=None, callback=None...
  function sample_dpmpp_2s_ancestral_cfg_pp (line 1294) | def sample_dpmpp_2s_ancestral_cfg_pp(model, x, sigmas, extra_args=None, ...
  function sample_dpmpp_2m_cfg_pp (line 1337) | def sample_dpmpp_2m_cfg_pp(model, x, sigmas, extra_args=None, callback=N...
  function res_multistep (line 1370) | def res_multistep(model, x, sigmas, extra_args=None, callback=None, disa...
  function sample_res_multistep (line 1434) | def sample_res_multistep(model, x, sigmas, extra_args=None, callback=Non...
  function sample_res_multistep_cfg_pp (line 1438) | def sample_res_multistep_cfg_pp(model, x, sigmas, extra_args=None, callb...
  function sample_res_multistep_ancestral (line 1442) | def sample_res_multistep_ancestral(model, x, sigmas, extra_args=None, ca...
  function sample_res_multistep_ancestral_cfg_pp (line 1446) | def sample_res_multistep_ancestral_cfg_pp(model, x, sigmas, extra_args=N...
  function sample_gradient_estimation (line 1451) | def sample_gradient_estimation(model, x, sigmas, extra_args=None, callba...
  function sample_gradient_estimation_cfg_pp (line 1495) | def sample_gradient_estimation_cfg_pp(model, x, sigmas, extra_args=None,...
  function sample_er_sde (line 1500) | def sample_er_sde(model, x, sigmas, extra_args=None, callback=None, disa...
  function sample_seeds_2 (line 1566) | def sample_seeds_2(model, x, sigmas, extra_args=None, callback=None, dis...
  function sample_exp_heun_2_x0 (line 1628) | def sample_exp_heun_2_x0(model, x, sigmas, extra_args=None, callback=Non...
  function sample_exp_heun_2_x0_sde (line 1634) | def sample_exp_heun_2_x0_sde(model, x, sigmas, extra_args=None, callback...
  function sample_seeds_3 (line 1640) | def sample_seeds_3(model, x, sigmas, extra_args=None, callback=None, dis...
  function sample_sa_solver (line 1706) | def sample_sa_solver(model, x, sigmas, extra_args=None, callback=None, d...
  function sample_sa_solver_pece (line 1810) | def sample_sa_solver_pece(model, x, sigmas, extra_args=None, callback=No...
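
The schedule helpers and the basic Euler loop listed above can be sketched in miniature. Below is a pure-Python, scalar version of `get_sigmas_karras`, `to_d`, and the core of `sample_euler` (the real functions operate on tensors and take `extra_args`/`callback`; the `denoise` callable here is a hypothetical stand-in for the model wrapper):

```python
import math

def get_sigmas_karras(n, sigma_min, sigma_max, rho=7.0):
    """Karras et al. (2022) schedule: linear ramp in sigma**(1/rho) space."""
    ramp = [i / (n - 1) for i in range(n)]
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    sigmas = [(max_inv_rho + t * (min_inv_rho - max_inv_rho)) ** rho for t in ramp]
    return sigmas + [0.0]  # append_zero: the final sigma is exactly 0

def to_d(x, sigma, denoised):
    """Convert a denoiser prediction into an ODE derivative."""
    return (x - denoised) / sigma

def sample_euler_scalar(denoise, x, sigmas):
    """Scalar analogue of the sample_euler loop (no churn/s_noise handling)."""
    for i in range(len(sigmas) - 1):
        denoised = denoise(x, sigmas[i])
        d = to_d(x, sigmas[i], denoised)
        x = x + d * (sigmas[i + 1] - sigmas[i])  # explicit Euler step
    return x
```

With a perfect denoiser that always predicts 0, each Euler step scales `x` by `sigmas[i+1] / sigmas[i]`, so the loop telescopes to 0 at the final sigma.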

FILE: comfy/k_diffusion/utils.py
  function hf_datasets_augs_helper (line 15) | def hf_datasets_augs_helper(examples, transform, image_key, mode='RGB'):
  function append_dims (line 21) | def append_dims(x, target_dims):
  function n_params (line 32) | def n_params(module):
  function download_file (line 37) | def download_file(path, url, digest=None):
  function train_mode (line 52) | def train_mode(model, mode=True):
  function eval_mode (line 63) | def eval_mode(model):
  function ema_update (line 70) | def ema_update(model, averaged_model, decay):
  class EMAWarmup (line 88) | class EMAWarmup:
    method __init__ (line 104) | def __init__(self, inv_gamma=1., power=1., min_value=0., max_value=1.,...
    method state_dict (line 113) | def state_dict(self):
    method load_state_dict (line 117) | def load_state_dict(self, state_dict):
    method get_value (line 125) | def get_value(self):
    method step (line 131) | def step(self):
  class InverseLR (line 136) | class InverseLR(optim.lr_scheduler._LRScheduler):
    method __init__ (line 153) | def __init__(self, optimizer, inv_gamma=1., power=1., warmup=0., min_l...
    method get_lr (line 163) | def get_lr(self):
    method _get_closed_form_lr (line 170) | def _get_closed_form_lr(self):
  class ExponentialLR (line 177) | class ExponentialLR(optim.lr_scheduler._LRScheduler):
    method __init__ (line 194) | def __init__(self, optimizer, num_steps, decay=0.5, warmup=0., min_lr=0.,
    method get_lr (line 204) | def get_lr(self):
    method _get_closed_form_lr (line 211) | def _get_closed_form_lr(self):
  function rand_log_normal (line 218) | def rand_log_normal(shape, loc=0., scale=1., device='cpu', dtype=torch.f...
  function rand_log_logistic (line 223) | def rand_log_logistic(shape, loc=0., scale=1., min_value=0., max_value=f...
  function rand_log_uniform (line 233) | def rand_log_uniform(shape, min_value, max_value, device='cpu', dtype=to...
  function rand_v_diffusion (line 240) | def rand_v_diffusion(shape, sigma_data=1., min_value=0., max_value=float...
  function rand_split_log_normal (line 248) | def rand_split_log_normal(shape, loc, scale_1, scale_2, device='cpu', dt...
  class FolderOfImages (line 258) | class FolderOfImages(data.Dataset):
    method __init__ (line 264) | def __init__(self, root, transform=None):
    method __repr__ (line 270) | def __repr__(self):
    method __len__ (line 273) | def __len__(self):
    method __getitem__ (line 276) | def __getitem__(self, key):
  class CSVLogger (line 284) | class CSVLogger:
    method __init__ (line 285) | def __init__(self, filename, columns):
    method write (line 294) | def write(self, *args):
  function tf32_mode (line 299) | def tf32_mode(cudnn=None, matmul=None):
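
The EMA helpers above follow the usual k-diffusion pattern; a scalar sketch of the `EMAWarmup.get_value` decay curve and `ema_update`, assuming the standard `1 - (1 + step/inv_gamma)**-power` warmup formula (clamping and signature inferred from the index):

```python
def ema_warmup_value(step, inv_gamma=1.0, power=1.0, min_value=0.0, max_value=1.0):
    """EMA decay at a given step: starts at 0, ramps toward 1 as training progresses."""
    value = 1 - (1 + step / inv_gamma) ** -power
    return min(max_value, max(min_value, value))

def ema_update_scalar(param, averaged, decay):
    """Scalar analogue of ema_update: lerp the running average toward the live value."""
    return averaged * decay + param * (1 - decay)
```

A low decay early in training lets the average track the model quickly; as `value` approaches 1 the average becomes increasingly stable.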

FILE: comfy/latent_formats.py
  class LatentFormat (line 3) | class LatentFormat:
    method process_in (line 13) | def process_in(self, latent):
    method process_out (line 16) | def process_out(self, latent):
  class SD15 (line 19) | class SD15(LatentFormat):
    method __init__ (line 20) | def __init__(self, scale_factor=0.18215):
  class SDXL (line 31) | class SDXL(LatentFormat):
    method __init__ (line 34) | def __init__(self):
  class SDXL_Playground_2_5 (line 46) | class SDXL_Playground_2_5(LatentFormat):
    method __init__ (line 47) | def __init__(self):
    method process_in (line 61) | def process_in(self, latent):
    method process_out (line 66) | def process_out(self, latent):
  class SD_X4 (line 72) | class SD_X4(LatentFormat):
    method __init__ (line 73) | def __init__(self):
  class SC_Prior (line 82) | class SC_Prior(LatentFormat):
    method __init__ (line 85) | def __init__(self):
  class SC_B (line 106) | class SC_B(LatentFormat):
    method __init__ (line 108) | def __init__(self):
  class SD3 (line 117) | class SD3(LatentFormat):
    method __init__ (line 119) | def __init__(self):
    method process_in (line 143) | def process_in(self, latent):
    method process_out (line 146) | def process_out(self, latent):
  class StableAudio1 (line 149) | class StableAudio1(LatentFormat):
  class Flux (line 153) | class Flux(SD3):
    method __init__ (line 155) | def __init__(self):
    method process_in (line 179) | def process_in(self, latent):
    method process_out (line 182) | def process_out(self, latent):
  class Flux2 (line 185) | class Flux2(LatentFormat):
    method __init__ (line 189) | def __init__(self):
    method process_in (line 228) | def process_in(self, latent):
    method process_out (line 231) | def process_out(self, latent):
  class Mochi (line 234) | class Mochi(LatentFormat):
    method __init__ (line 238) | def __init__(self):
    method process_in (line 266) | def process_in(self, latent):
    method process_out (line 271) | def process_out(self, latent):
  class LTXV (line 276) | class LTXV(LatentFormat):
    method __init__ (line 281) | def __init__(self):
  class LTXAV (line 415) | class LTXAV(LTXV):
    method __init__ (line 416) | def __init__(self):
  class HunyuanVideo (line 420) | class HunyuanVideo(LatentFormat):
  class Cosmos1CV8x8x8 (line 446) | class Cosmos1CV8x8x8(LatentFormat):
  class Wan21 (line 471) | class Wan21(LatentFormat):
    method __init__ (line 496) | def __init__(self):
    method process_in (line 510) | def process_in(self, latent):
    method process_out (line 515) | def process_out(self, latent):
  class Wan22 (line 520) | class Wan22(Wan21):
    method __init__ (line 578) | def __init__(self):
  class HunyuanImage21 (line 598) | class HunyuanImage21(LatentFormat):
  class HunyuanImage21Refiner (line 672) | class HunyuanImage21Refiner(LatentFormat):
    method process_in (line 677) | def process_in(self, latent):
    method process_out (line 686) | def process_out(self, latent):
  class HunyuanVideo15 (line 696) | class HunyuanVideo15(LatentFormat):
  class Hunyuan3Dv2 (line 739) | class Hunyuan3Dv2(LatentFormat):
  class Hunyuan3Dv2_1 (line 744) | class Hunyuan3Dv2_1(LatentFormat):
  class Hunyuan3Dv2mini (line 749) | class Hunyuan3Dv2mini(LatentFormat):
  class ACEAudio (line 754) | class ACEAudio(LatentFormat):
  class ACEAudio15 (line 758) | class ACEAudio15(LatentFormat):
  class ChromaRadiance (line 762) | class ChromaRadiance(LatentFormat):
    method __init__ (line 766) | def __init__(self):
    method process_in (line 774) | def process_in(self, latent):
    method process_out (line 777) | def process_out(self, latent):
  class ZImagePixelSpace (line 781) | class ZImagePixelSpace(ChromaRadiance):
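
All of the formats above share the `process_in`/`process_out` contract of the base `LatentFormat`: models train on scaled latents while VAEs produce and consume raw ones. A minimal sketch using the SD1.5 scale factor listed above (some formats, e.g. SD3 and Flux, additionally apply a per-channel shift, omitted here):

```python
class LatentFormatSketch:
    """Mimics the base LatentFormat: pure scale, no shift."""
    scale_factor = 0.18215  # SD15's value, per the class above

    def process_in(self, latent):
        # raw VAE latent -> model (training) space
        return latent * self.scale_factor

    def process_out(self, latent):
        # model space -> raw VAE latent, inverse of process_in
        return latent / self.scale_factor
```
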

FILE: comfy/ldm/ace/ace_step15.py
  function get_silence_latent (line 10) | def get_silence_latent(length, device):
  function get_layer_class (line 71) | def get_layer_class(operations, layer_name):
  class RotaryEmbedding (line 76) | class RotaryEmbedding(nn.Module):
    method __init__ (line 77) | def __init__(self, dim, max_position_embeddings=32768, base=1000000.0,...
    method _set_cos_sin_cache (line 87) | def _set_cos_sin_cache(self, seq_len, device, dtype):
    method forward (line 95) | def forward(self, x, seq_len=None):
  function rotate_half (line 103) | def rotate_half(x):
  function apply_rotary_pos_emb (line 108) | def apply_rotary_pos_emb(q, k, cos, sin):
  class MLP (line 115) | class MLP(nn.Module):
    method __init__ (line 116) | def __init__(self, hidden_size, intermediate_size, dtype=None, device=...
    method forward (line 124) | def forward(self, x):
  class TimestepEmbedding (line 127) | class TimestepEmbedding(nn.Module):
    method __init__ (line 128) | def __init__(self, in_channels: int, time_embed_dim: int, scale: float...
    method forward (line 139) | def forward(self, t, dtype=None):
  class AceStepAttention (line 147) | class AceStepAttention(nn.Module):
    method __init__ (line 148) | def __init__(
    method forward (line 179) | def forward(
  class AceStepDiTLayer (line 252) | class AceStepDiTLayer(nn.Module):
    method __init__ (line 253) | def __init__(
    method forward (line 289) | def forward(
  class AceStepEncoderLayer (line 327) | class AceStepEncoderLayer(nn.Module):
    method __init__ (line 328) | def __init__(
    method forward (line 349) | def forward(self, hidden_states, position_embeddings, attention_mask=N...
  class AceStepLyricEncoder (line 365) | class AceStepLyricEncoder(nn.Module):
    method __init__ (line 366) | def __init__(
    method forward (line 401) | def forward(self, inputs_embeds, attention_mask=None):
  class AceStepTimbreEncoder (line 417) | class AceStepTimbreEncoder(nn.Module):
    method __init__ (line 418) | def __init__(
    method unpack_timbre_embeddings (line 454) | def unpack_timbre_embeddings(self, timbre_embs_packed, refer_audio_ord...
    method forward (line 485) | def forward(self, refer_audio_acoustic_hidden_states_packed, refer_aud...
  function pack_sequences (line 506) | def pack_sequences(hidden1, hidden2, mask1, mask2):
  class AceStepConditionEncoder (line 523) | class AceStepConditionEncoder(nn.Module):
    method __init__ (line 524) | def __init__(
    method forward (line 572) | def forward(
  class AceStepDiTModel (line 602) | class AceStepDiTModel(nn.Module):
    method __init__ (line 603) | def __init__(
    method forward (line 668) | def forward(
  class AttentionPooler (line 721) | class AttentionPooler(nn.Module):
    method __init__ (line 722) | def __init__(self, hidden_size, num_layers, head_dim, rms_norm_eps, dt...
    method forward (line 738) | def forward(self, x):
  class FSQ (line 753) | class FSQ(nn.Module):
    method __init__ (line 754) | def __init__(
    method bound (line 791) | def bound(self, z):
    method _indices_to_codes (line 801) | def _indices_to_codes(self, indices):
    method codes_to_indices (line 806) | def codes_to_indices(self, zhat):
    method forward (line 810) | def forward(self, z):
  class ResidualFSQ (line 821) | class ResidualFSQ(nn.Module):
    method __init__ (line 822) | def __init__(
    method get_output_from_indices (line 873) | def get_output_from_indices(self, indices, dtype=torch.float32):
    method forward (line 886) | def forward(self, x):
  class AceStepAudioTokenizer (line 913) | class AceStepAudioTokenizer(nn.Module):
    method __init__ (line 914) | def __init__(
    method forward (line 945) | def forward(self, hidden_states):
    method tokenize (line 951) | def tokenize(self, x):
  class AudioTokenDetokenizer (line 967) | class AudioTokenDetokenizer(nn.Module):
    method __init__ (line 968) | def __init__(
    method forward (line 995) | def forward(self, x):
  class AceStepConditionGenerationModel (line 1011) | class AceStepConditionGenerationModel(nn.Module):
    method __init__ (line 1012) | def __init__(
    method prepare_condition (line 1076) | def prepare_condition(
    method forward (line 1113) | def forward(self, x, timestep, context, lyric_embed=None, refer_audio=...
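
The `rotate_half`/`apply_rotary_pos_emb` pair above implements standard rotary position embeddings (RoPE). A pure-Python sketch over a flat list standing in for one head's feature vector (the real functions broadcast over batch/head/sequence dimensions):

```python
def rotate_half(x):
    """Split the vector in half and swap with a sign flip: (x1, x2) -> (-x2, x1)."""
    half = len(x) // 2
    x1, x2 = x[:half], x[half:]
    return [-v for v in x2] + list(x1)

def apply_rotary_pos_emb(q, cos, sin):
    """RoPE rotation: q * cos + rotate_half(q) * sin, elementwise."""
    rq = rotate_half(q)
    return [qi * c + ri * s for qi, c, ri, s in zip(q, cos, rq, sin)]
```

Each (x1, x2) pair is rotated by the position-dependent angle encoded in `cos`/`sin`, so relative positions become inner-product phase differences while vector norms are preserved.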

FILE: comfy/ldm/ace/attention.py
  class Attention (line 24) | class Attention(nn.Module):
    method __init__ (line 25) | def __init__(
    method forward (line 131) | def forward(
  class CustomLiteLAProcessor2_0 (line 149) | class CustomLiteLAProcessor2_0:
    method __init__ (line 152) | def __init__(self):
    method apply_rotary_emb (line 157) | def apply_rotary_emb(
    method __call__ (line 187) | def __call__(
  class CustomerAttnProcessor2_0 (line 327) | class CustomerAttnProcessor2_0:
    method apply_rotary_emb (line 332) | def apply_rotary_emb(
    method __call__ (line 362) | def __call__(
  function val2list (line 457) | def val2list(x: list or tuple or any, repeat_time=1) -> list:  # type: i...
  function val2tuple (line 464) | def val2tuple(x: list or tuple or any, min_len: int = 1, idx_repeat: int...
  function t2i_modulate (line 476) | def t2i_modulate(x, shift, scale):
  function get_same_padding (line 480) | def get_same_padding(kernel_size: Union[int, Tuple[int, ...]]) -> Union[...
  class ConvLayer (line 487) | class ConvLayer(nn.Module):
    method __init__ (line 488) | def __init__(
    method forward (line 537) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class GLUMBConv (line 546) | class GLUMBConv(nn.Module):
    method __init__ (line 547) | def __init__(
    method forward (line 606) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class LinearTransformerBlock (line 621) | class LinearTransformerBlock(nn.Module):
    method __init__ (line 625) | def __init__(
    method forward (line 694) | def forward(
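
The small helpers in attention.py normalize flexible arguments and apply DiT-style modulation. A sketch with behavior inferred from the signatures above (the exact padding order of `val2tuple` is an assumption):

```python
def val2list(x, repeat_time=1):
    """Normalize a scalar-or-sequence argument into a list."""
    if isinstance(x, (list, tuple)):
        return list(x)
    return [x] * repeat_time

def val2tuple(x, min_len=1, idx_repeat=-1):
    """Like val2list, then pad to min_len by repeating the element at idx_repeat."""
    x = val2list(x)
    if len(x) < min_len:
        x = x + [x[idx_repeat]] * (min_len - len(x))
    return tuple(x)

def get_same_padding(kernel_size):
    """'Same' padding for odd kernels: (k - 1) // 2, applied per dimension."""
    if isinstance(kernel_size, tuple):
        return tuple(get_same_padding(k) for k in kernel_size)
    return kernel_size // 2

def t2i_modulate(x, shift, scale):
    """AdaLN-style modulation as in PixArt: x * (1 + scale) + shift."""
    return x * (1 + scale) + shift
```
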

FILE: comfy/ldm/ace/lyric_encoder.py
  class ConvolutionModule (line 9) | class ConvolutionModule(nn.Module):
    method __init__ (line 12) | def __init__(self,
    method forward (line 79) | def forward(
  class PositionwiseFeedForward (line 136) | class PositionwiseFeedForward(torch.nn.Module):
    method __init__ (line 149) | def __init__(
    method forward (line 164) | def forward(self, xs: torch.Tensor) -> torch.Tensor:
  class Swish (line 174) | class Swish(torch.nn.Module):
    method forward (line 177) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class MultiHeadedAttention (line 181) | class MultiHeadedAttention(nn.Module):
    method __init__ (line 191) | def __init__(self,
    method forward_qkv (line 209) | def forward_qkv(
    method forward_attention (line 237) | def forward_attention(
    method forward (line 279) | def forward(
  class RelPositionMultiHeadedAttention (line 331) | class RelPositionMultiHeadedAttention(MultiHeadedAttention):
    method __init__ (line 340) | def __init__(self,
    method rel_shift (line 357) | def rel_shift(self, x: torch.Tensor) -> torch.Tensor:
    method forward (line 381) | def forward(
  function subsequent_mask (line 450) | def subsequent_mask(
  function subsequent_chunk_mask (line 486) | def subsequent_chunk_mask(
  function add_optional_chunk_mask (line 523) | def add_optional_chunk_mask(xs: torch.Tensor,
  class ConformerEncoderLayer (line 597) | class ConformerEncoderLayer(nn.Module):
    method __init__ (line 617) | def __init__(
    method forward (line 649) | def forward(
  class EspnetRelPositionalEncoding (line 729) | class EspnetRelPositionalEncoding(torch.nn.Module):
    method __init__ (line 743) | def __init__(self, d_model: int, dropout_rate: float, max_len: int = 5...
    method extend_pe (line 752) | def extend_pe(self, x: torch.Tensor):
    method forward (line 784) | def forward(self, x: torch.Tensor, offset: Union[int, torch.Tensor] = ...
    method position_encoding (line 800) | def position_encoding(self,
  class LinearEmbed (line 826) | class LinearEmbed(torch.nn.Module):
    method __init__ (line 836) | def __init__(self, idim: int, odim: int, dropout_rate: float,
    method position_encoding (line 847) | def position_encoding(self, offset: Union[int, torch.Tensor],
    method forward (line 851) | def forward(
  function make_pad_mask (line 889) | def make_pad_mask(lengths: torch.Tensor, max_len: int = 0) -> torch.Tensor:
  class ConformerEncoder (line 918) | class ConformerEncoder(torch.nn.Module):
    method __init__ (line 921) | def __init__(
    method forward_layers (line 1011) | def forward_layers(self, xs: torch.Tensor, chunk_masks: torch.Tensor,
    method forward (line 1018) | def forward(
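
The masking utilities above follow the usual WeNet-style conventions: `subsequent_mask` marks positions a query may attend to (lower-triangular), while `make_pad_mask` marks padded positions. A boolean-list sketch, assuming those conventions:

```python
def subsequent_mask(size):
    """Causal mask: True where position i may attend to position j (j <= i)."""
    return [[j <= i for j in range(size)] for i in range(size)]

def make_pad_mask(lengths, max_len=0):
    """True marks padded positions at or beyond each sequence's length."""
    max_len = max_len or max(lengths)
    return [[t >= n for t in range(max_len)] for n in lengths]
```
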

FILE: comfy/ldm/ace/model.py
  function cross_norm (line 29) | def cross_norm(hidden_states, controlnet_input):
  class Qwen2RotaryEmbedding (line 38) | class Qwen2RotaryEmbedding(nn.Module):
    method __init__ (line 39) | def __init__(self, dim, max_position_embeddings=2048, base=10000, dtyp...
    method _set_cos_sin_cache (line 53) | def _set_cos_sin_cache(self, seq_len, device, dtype):
    method forward (line 63) | def forward(self, x, seq_len=None):
  class T2IFinalLayer (line 74) | class T2IFinalLayer(nn.Module):
    method __init__ (line 79) | def __init__(self, hidden_size, patch_size=[16, 1], out_channels=256, ...
    method unpatchfy (line 87) | def unpatchfy(
    method forward (line 107) | def forward(self, x, t, output_length):
  class PatchEmbed (line 116) | class PatchEmbed(nn.Module):
    method __init__ (line 119) | def __init__(
    method forward (line 140) | def forward(self, latent):
  class ACEStepTransformer2DModel (line 147) | class ACEStepTransformer2DModel(nn.Module):
    method __init__ (line 150) | def __init__(
    method forward_lyric_encoder (line 258) | def forward_lyric_encoder(
    method encode (line 270) | def encode(
    method decode (line 307) | def decode(
    method forward (line 349) | def forward(self,
    method _forward (line 370) | def _forward(

FILE: comfy/ldm/ace/vae/autoencoder_dc.py
  class RMSNorm (line 12) | class RMSNorm(ops.RMSNorm):
    method __init__ (line 13) | def __init__(self, dim, eps=1e-5, elementwise_affine=True, bias=False):
    method forward (line 18) | def forward(self, x):
  function get_normalization (line 26) | def get_normalization(norm_type, num_features, num_groups=32, eps=1e-5):
  function get_activation (line 39) | def get_activation(activation_type):
  class ResBlock (line 52) | class ResBlock(nn.Module):
    method __init__ (line 53) | def __init__(
    method forward (line 68) | def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
  class SanaMultiscaleAttentionProjection (line 82) | class SanaMultiscaleAttentionProjection(nn.Module):
    method __init__ (line 83) | def __init__(
    method forward (line 102) | def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
  class SanaMultiscaleLinearAttention (line 107) | class SanaMultiscaleLinearAttention(nn.Module):
    method __init__ (line 108) | def __init__(
    method apply_linear_attention (line 148) | def apply_linear_attention(self, query, key, value):
    method apply_quadratic_attention (line 157) | def apply_quadratic_attention(self, query, key, value):
    method forward (line 164) | def forward(self, hidden_states):
  class EfficientViTBlock (line 219) | class EfficientViTBlock(nn.Module):
    method __init__ (line 220) | def __init__(
    method forward (line 246) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class GLUMBConv (line 252) | class GLUMBConv(nn.Module):
    method __init__ (line 253) | def __init__(
    method forward (line 276) | def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
  function get_block (line 299) | def get_block(
  class DCDownBlock2d (line 323) | class DCDownBlock2d(nn.Module):
    method __init__ (line 324) | def __init__(self, in_channels: int, out_channels: int, downsample: bo...
    method forward (line 346) | def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
  class DCUpBlock2d (line 362) | class DCUpBlock2d(nn.Module):
    method __init__ (line 363) | def __init__(
    method forward (line 385) | def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
  class Encoder (line 403) | class Encoder(nn.Module):
    method __init__ (line 404) | def __init__(
    method forward (line 474) | def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
  class Decoder (line 489) | class Decoder(nn.Module):
    method __init__ (line 490) | def __init__(
    method forward (line 563) | def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
  class AutoencoderDC (line 581) | class AutoencoderDC(nn.Module):
    method __init__ (line 582) | def __init__(
    method encode (line 630) | def encode(self, x: torch.Tensor) -> torch.Tensor:
    method decode (line 635) | def decode(self, z: torch.Tensor) -> torch.Tensor:
    method forward (line 641) | def forward(self, x: torch.Tensor) -> torch.Tensor:
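
The `RMSNorm` subclass at the top of autoencoder_dc.py relies on the standard RMS normalization: divide by the root-mean-square of the vector (no mean subtraction, unlike LayerNorm), then optionally scale by a learned weight. A pure-Python sketch:

```python
import math

def rms_norm(x, weight=None, eps=1e-5):
    """RMSNorm: rescale so the mean squared activation is ~1."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    out = [v / rms for v in x]
    if weight is not None:  # elementwise_affine case
        out = [o * w for o, w in zip(out, weight)]
    return out
```
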

FILE: comfy/ldm/ace/vae/music_dcae_pipeline.py
  class MusicDCAE (line 14) | class MusicDCAE(torch.nn.Module):
    method __init__ (line 15) | def __init__(self, source_sample_rate=None, dcae_config={}, vocoder_co...
    method forward_mel (line 38) | def forward_mel(self, audios):
    method encode (line 47) | def encode(self, audios, audio_lengths=None, sr=None):
    method decode (line 74) | def decode(self, latents, audio_lengths=None, sr=None):
    method forward (line 95) | def forward(self, audios, audio_lengths=None, sr=None):

FILE: comfy/ldm/ace/vae/music_log_mel.py
  class LinearSpectrogram (line 13) | class LinearSpectrogram(nn.Module):
    method __init__ (line 14) | def __init__(
    method forward (line 32) | def forward(self, y: Tensor) -> Tensor:
  class LogMelSpectrogram (line 65) | class LogMelSpectrogram(nn.Module):
    method __init__ (line 66) | def __init__(
    method compress (line 99) | def compress(self, x: Tensor) -> Tensor:
    method decompress (line 102) | def decompress(self, x: Tensor) -> Tensor:
    method forward (line 105) | def forward(self, x: Tensor, return_linear: bool = False) -> Tensor:
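
The `compress`/`decompress` pair on `LogMelSpectrogram` most likely implements the usual HiFi-GAN-style dynamic-range compression (clamp, then log); that formula is an assumption here, sketched elementwise over a list:

```python
import math

def compress(x, clip_val=1e-5):
    """Dynamic-range compression: clamp to clip_val, then take the log."""
    return [math.log(max(v, clip_val)) for v in x]

def decompress(x):
    """Inverse of compress (exact wherever no clamping occurred)."""
    return [math.exp(v) for v in x]
```

The clamp keeps silent frames from producing -inf; it is the only place the round trip is lossy.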

FILE: comfy/ldm/ace/vae/music_vocoder.py
  function drop_path (line 20) | def drop_path(
  class DropPath (line 45) | class DropPath(nn.Module):
    method __init__ (line 48) | def __init__(self, drop_prob: float = 0.0, scale_by_keep: bool = True):
    method forward (line 53) | def forward(self, x):
    method extra_repr (line 56) | def extra_repr(self):
  class LayerNorm (line 60) | class LayerNorm(nn.Module):
    method __init__ (line 67) | def __init__(self, normalized_shape, eps=1e-6, data_format="channels_l...
    method forward (line 77) | def forward(self, x):
  class ConvNeXtBlock (line 90) | class ConvNeXtBlock(nn.Module):
    method __init__ (line 105) | def __init__(
    method forward (line 137) | def forward(self, x, apply_residual: bool = True):
  class ParallelConvNeXtBlock (line 159) | class ParallelConvNeXtBlock(nn.Module):
    method __init__ (line 160) | def __init__(self, kernel_sizes: List[int], *args, **kwargs):
    method forward (line 169) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class ConvNeXtEncoder (line 176) | class ConvNeXtEncoder(nn.Module):
    method __init__ (line 177) | def __init__(
    method forward (line 237) | def forward(
  function get_padding (line 248) | def get_padding(kernel_size, dilation=1):
  class ResBlock1 (line 252) | class ResBlock1(torch.nn.Module):
    method __init__ (line 253) | def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
    method forward (line 326) | def forward(self, x):
    method remove_weight_norm (line 335) | def remove_weight_norm(self):
  class HiFiGANGenerator (line 342) | class HiFiGANGenerator(nn.Module):
    method __init__ (line 343) | def __init__(
    method forward (line 430) | def forward(self, x, template=None):
    method remove_weight_norm (line 456) | def remove_weight_norm(self):
  class ADaMoSHiFiGANV1 (line 465) | class ADaMoSHiFiGANV1(nn.Module):
    method __init__ (line 466) | def __init__(
    method decode (line 526) | def decode(self, mel):
    method encode (line 532) | def encode(self, x):
    method forward (line 535) | def forward(self, mel):
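
The `drop_path`/`DropPath` pair above is stochastic depth as popularized by timm: during training, a residual branch is zeroed out with probability `drop_prob` and, when kept, rescaled so the expectation is unchanged. A per-sample scalar sketch (the real version draws one mask per batch element):

```python
import random

def drop_path(x, drop_prob=0.0, training=True, scale_by_keep=True):
    """Stochastic depth on one sample's residual branch."""
    if drop_prob == 0.0 or not training:
        return list(x)
    keep_prob = 1.0 - drop_prob
    if random.random() >= keep_prob:
        return [0.0 for _ in x]            # branch dropped this pass
    if scale_by_keep:
        return [v / keep_prob for v in x]  # rescale so E[output] == x
    return list(x)
```
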

FILE: comfy/ldm/anima/model.py
  function rotate_half (line 7) | def rotate_half(x):
  function apply_rotary_pos_emb (line 13) | def apply_rotary_pos_emb(x, cos, sin, unsqueeze_dim=1):
  class RotaryEmbedding (line 20) | class RotaryEmbedding(nn.Module):
    method __init__ (line 21) | def __init__(self, head_dim):
    method forward (line 28) | def forward(self, x, position_ids):
  class Attention (line 42) | class Attention(nn.Module):
    method __init__ (line 43) | def __init__(self, query_dim, context_dim, n_heads, head_dim, device=N...
    method forward (line 62) | def forward(self, x, mask=None, context=None, position_embeddings=None...
    method init_weights (line 86) | def init_weights(self):
  class TransformerBlock (line 90) | class TransformerBlock(nn.Module):
    method __init__ (line 91) | def __init__(self, source_dim, model_dim, num_heads=16, mlp_ratio=4.0,...
    method forward (line 125) | def forward(self, x, context, target_attention_mask=None, source_atten...
    method init_weights (line 138) | def init_weights(self):
  class LLMAdapter (line 143) | class LLMAdapter(nn.Module):
    method __init__ (line 144) | def __init__(
    method forward (line 171) | def forward(self, source_hidden_states, target_input_ids, target_atten...
  class Anima (line 193) | class Anima(MiniTrainDIT):
    method __init__ (line 194) | def __init__(self, *args, **kwargs):
    method preprocess_text_embeds (line 198) | def preprocess_text_embeds(self, text_embeds, text_ids, t5xxl_weights=...
    method forward (line 210) | def forward(self, x, timesteps, context, **kwargs):
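The `rotate_half`/`apply_rotary_pos_emb` pair listed above follows the standard rotary position embedding (RoPE) pattern. A minimal sketch of that pattern, as an illustration only and not the repository's exact implementation:

```python
import torch

def rotate_half(x):
    # Split the last dimension in half and swap the halves with a sign flip:
    # (x1, x2) -> (-x2, x1), i.e. a 90-degree rotation applied per pair.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rotary_pos_emb(x, cos, sin, unsqueeze_dim=1):
    # Broadcast cos/sin over the heads dimension, then apply the complex
    # rotation elementwise: x * cos + rotate_half(x) * sin.
    cos = cos.unsqueeze(unsqueeze_dim)
    sin = sin.unsqueeze(unsqueeze_dim)
    return x * cos + rotate_half(x) * sin
```

With `cos = 1` and `sin = 0` (zero rotation angle) the transform reduces to the identity, which is a handy sanity check.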

FILE: comfy/ldm/audio/autoencoder.py
  function vae_sample (line 10) | def vae_sample(mean, scale):
  class VAEBottleneck (line 20) | class VAEBottleneck(nn.Module):
    method __init__ (line 21) | def __init__(self):
    method encode (line 25) | def encode(self, x, return_info=False, **kwargs):
    method decode (line 39) | def decode(self, x):
  function snake_beta (line 43) | def snake_beta(x, alpha, beta):
  class SnakeBeta (line 47) | class SnakeBeta(nn.Module):
    method __init__ (line 49) | def __init__(self, in_features, alpha=1.0, alpha_trainable=True, alpha...
    method forward (line 67) | def forward(self, x):
  function WNConv1d (line 77) | def WNConv1d(*args, **kwargs):
  function WNConvTranspose1d (line 80) | def WNConvTranspose1d(*args, **kwargs):
  function get_activation (line 83) | def get_activation(activation: Literal["elu", "snake", "none"], antialia...
  class ResidualUnit (line 99) | class ResidualUnit(nn.Module):
    method __init__ (line 100) | def __init__(self, in_channels, out_channels, dilation, use_snake=Fals...
    method forward (line 116) | def forward(self, x):
  class EncoderBlock (line 124) | class EncoderBlock(nn.Module):
    method __init__ (line 125) | def __init__(self, in_channels, out_channels, stride, use_snake=False,...
    method forward (line 140) | def forward(self, x):
  class DecoderBlock (line 143) | class DecoderBlock(nn.Module):
    method __init__ (line 144) | def __init__(self, in_channels, out_channels, stride, use_snake=False,...
    method forward (line 173) | def forward(self, x):
  class OobleckEncoder (line 176) | class OobleckEncoder(nn.Module):
    method __init__ (line 177) | def __init__(self,
    method forward (line 206) | def forward(self, x):
  class OobleckDecoder (line 210) | class OobleckDecoder(nn.Module):
    method __init__ (line 211) | def __init__(self,
    method forward (line 250) | def forward(self, x):
  class AudioOobleckVAE (line 254) | class AudioOobleckVAE(nn.Module):
    method __init__ (line 255) | def __init__(self,
    method encode (line 271) | def encode(self, x):
    method decode (line 274) | def decode(self, x):
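The `vae_sample(mean, scale)` helper at the top of this file is the usual VAE reparameterization trick. A sketch of one common formulation, where the raw `scale` is mapped through softplus to a positive standard deviation; the exact parameterization in the repository may differ:

```python
import torch

def vae_sample(mean, scale):
    # Reparameterization trick: softplus keeps the std strictly positive
    # (the small epsilon avoids a degenerate zero std), and the sample is
    # mean + std * eps with eps ~ N(0, 1), keeping gradients flowing to mean/scale.
    stdev = torch.nn.functional.softplus(scale) + 1e-4
    return torch.randn_like(mean) * stdev + mean
```

Driving `scale` strongly negative collapses the std toward the epsilon floor, so samples concentrate tightly around `mean`.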

FILE: comfy/ldm/audio/dit.py
  class FourierFeatures (line 14) | class FourierFeatures(nn.Module):
    method __init__ (line 15) | def __init__(self, in_features, out_features, std=1., dtype=None, devi...
    method forward (line 21) | def forward(self, input):
  class LayerNorm (line 26) | class LayerNorm(nn.Module):
    method __init__ (line 27) | def __init__(self, dim, bias=False, fix_scale=False, dtype=None, devic...
    method forward (line 40) | def forward(self, x):
  class GLU (line 46) | class GLU(nn.Module):
    method __init__ (line 47) | def __init__(
    method forward (line 63) | def forward(self, x):
  class AbsolutePositionalEmbedding (line 74) | class AbsolutePositionalEmbedding(nn.Module):
    method __init__ (line 75) | def __init__(self, dim, max_seq_len):
    method forward (line 81) | def forward(self, x, pos = None, seq_start_pos = None):
  class ScaledSinusoidalEmbedding (line 95) | class ScaledSinusoidalEmbedding(nn.Module):
    method __init__ (line 96) | def __init__(self, dim, theta = 10000):
    method forward (line 106) | def forward(self, x, pos = None, seq_start_pos = None):
  class RotaryEmbedding (line 119) | class RotaryEmbedding(nn.Module):
    method __init__ (line 120) | def __init__(
    method forward_from_seq_len (line 152) | def forward_from_seq_len(self, seq_len, device, dtype):
    method forward (line 158) | def forward(self, t):
  function rotate_half (line 178) | def rotate_half(x):
  function apply_rotary_pos_emb (line 183) | def apply_rotary_pos_emb(t, freqs, scale = 1):
  class FeedForward (line 203) | class FeedForward(nn.Module):
    method __init__ (line 204) | def __init__(
    method forward (line 253) | def forward(self, x):
  class Attention (line 256) | class Attention(nn.Module):
    method __init__ (line 257) | def __init__(
    method forward (line 294) | def forward(
  class ConformerModule (line 376) | class ConformerModule(nn.Module):
    method __init__ (line 377) | def __init__(
    method forward (line 395) | def forward(self, x):
  class TransformerBlock (line 412) | class TransformerBlock(nn.Module):
    method __init__ (line 413) | def __init__(
    method forward (line 485) | def forward(
  class ContinuousTransformer (line 534) | class ContinuousTransformer(nn.Module):
    method __init__ (line 535) | def __init__(
    method forward (line 601) | def forward(
  class AudioDiffusionTransformer (line 670) | class AudioDiffusionTransformer(nn.Module):
    method __init__ (line 671) | def __init__(self,
    method _forward (line 776) | def _forward(
    method forward (line 872) | def forward(

FILE: comfy/ldm/audio/embedders.py
  class LearnedPositionalEmbedding (line 11) | class LearnedPositionalEmbedding(nn.Module):
    method __init__ (line 14) | def __init__(self, dim: int):
    method forward (line 20) | def forward(self, x: Tensor) -> Tensor:
  function TimePositionalEmbedding (line 27) | def TimePositionalEmbedding(dim: int, out_features: int) -> nn.Module:
  class NumberEmbedder (line 34) | class NumberEmbedder(nn.Module):
    method __init__ (line 35) | def __init__(
    method forward (line 44) | def forward(self, x: Union[List[float], Tensor]) -> Tensor:
  class Conditioner (line 56) | class Conditioner(nn.Module):
    method __init__ (line 57) | def __init__(
    method forward (line 70) | def forward(self, x):
  class NumberConditioner (line 73) | class NumberConditioner(Conditioner):
    method __init__ (line 77) | def __init__(self,
    method forward (line 89) | def forward(self, floats, device=None):

FILE: comfy/ldm/aura/mmdit.py
  function modulate (line 15) | def modulate(x, shift, scale):
  function find_multiple (line 19) | def find_multiple(n: int, k: int) -> int:
  class MLP (line 25) | class MLP(nn.Module):
    method __init__ (line 26) | def __init__(self, dim, hidden_dim=None, dtype=None, device=None, oper...
    method forward (line 38) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class MultiHeadLayerNorm (line 44) | class MultiHeadLayerNorm(nn.Module):
    method __init__ (line 45) | def __init__(self, hidden_size=None, eps=1e-5, dtype=None, device=None):
    method forward (line 52) | def forward(self, hidden_states):
  class SingleAttention (line 63) | class SingleAttention(nn.Module):
    method __init__ (line 64) | def __init__(self, dim, n_heads, mh_qknorm=False, dtype=None, device=N...
    method forward (line 88) | def forward(self, c, transformer_options={}):
  class DoubleAttention (line 104) | class DoubleAttention(nn.Module):
    method __init__ (line 105) | def __init__(self, dim, n_heads, mh_qknorm=False, dtype=None, device=N...
    method forward (line 147) | def forward(self, c, x, transformer_options={}):
  class MMDiTBlock (line 180) | class MMDiTBlock(nn.Module):
    method __init__ (line 181) | def __init__(self, dim, heads=8, global_conddim=1024, is_last=False, d...
    method forward (line 210) | def forward(self, c, x, global_cond, transformer_options={}, **kwargs):
  class DiTBlock (line 241) | class DiTBlock(nn.Module):
    method __init__ (line 243) | def __init__(self, dim, heads=8, global_conddim=1024, dtype=None, devi...
    method forward (line 258) | def forward(self, cx, global_cond, transformer_options={}, **kwargs):
  class TimestepEmbedder (line 275) | class TimestepEmbedder(nn.Module):
    method __init__ (line 276) | def __init__(self, hidden_size, frequency_embedding_size=256, dtype=No...
    method timestep_embedding (line 286) | def timestep_embedding(t, dim, max_period=10000):
    method forward (line 300) | def forward(self, t, dtype):
  class MMDiT (line 306) | class MMDiT(nn.Module):
    method __init__ (line 307) | def __init__(
    method extend_pe (line 371) | def extend_pe(self, init_dim=(16, 16), target_dim=(64, 64)):
    method pe_selection_index_based_on_dim (line 386) | def pe_selection_index_based_on_dim(self, h, w):
    method unpatchify (line 399) | def unpatchify(self, x, h, w):
    method patchify (line 408) | def patchify(self, x):
    method apply_pos_embeds (line 422) | def apply_pos_embeds(self, x, h, w):
    method forward (line 439) | def forward(self, x, timestep, context, transformer_options={}, **kwar...
    method _forward (line 446) | def _forward(self, x, timestep, context, transformer_options={}, **kwa...
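The free-standing `modulate(x, shift, scale)` listed for this file is the AdaLN-style conditioning used throughout DiT-family blocks. A minimal sketch of the standard form (an illustration, not necessarily this file's exact code):

```python
import torch

def modulate(x, shift, scale):
    # AdaLN modulation: scale acts multiplicatively around the identity
    # (scale == 0 leaves x unchanged), shift acts additively. shift/scale
    # are typically regressed from the timestep/conditioning embedding.
    return x * (1 + scale) + shift
```

Parameterizing the multiplier as `1 + scale` means zero-initialized conditioning layers start as a no-op, which stabilizes early training.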

FILE: comfy/ldm/cascade/common.py
  class OptimizedAttention (line 24) | class OptimizedAttention(nn.Module):
    method __init__ (line 25) | def __init__(self, c, nhead, dropout=0.0, dtype=None, device=None, ope...
    method forward (line 35) | def forward(self, q, k, v, transformer_options={}):
  class Attention2D (line 44) | class Attention2D(nn.Module):
    method __init__ (line 45) | def __init__(self, c, nhead, dropout=0.0, dtype=None, device=None, ope...
    method forward (line 50) | def forward(self, x, kv, self_attn=False, transformer_options={}):
  function LayerNorm2d_op (line 61) | def LayerNorm2d_op(operations):
  class GlobalResponseNorm (line 70) | class GlobalResponseNorm(nn.Module):
    method __init__ (line 72) | def __init__(self, dim, dtype=None, device=None):
    method forward (line 77) | def forward(self, x):
  class ResBlock (line 83) | class ResBlock(nn.Module):
    method __init__ (line 84) | def __init__(self, c, c_skip=0, kernel_size=3, dropout=0.0, dtype=None...
    method forward (line 97) | def forward(self, x, x_skip=None):
  class AttnBlock (line 106) | class AttnBlock(nn.Module):
    method __init__ (line 107) | def __init__(self, c, c_cond, nhead, self_attn=True, dropout=0.0, dtyp...
    method forward (line 117) | def forward(self, x, kv, transformer_options={}):
  class FeedForwardBlock (line 123) | class FeedForwardBlock(nn.Module):
    method __init__ (line 124) | def __init__(self, c, dropout=0.0, dtype=None, device=None, operations...
    method forward (line 135) | def forward(self, x):
  class TimestepBlock (line 140) | class TimestepBlock(nn.Module):
    method __init__ (line 141) | def __init__(self, c, c_timestep, conds=['sca'], dtype=None, device=No...
    method forward (line 148) | def forward(self, x, t):

FILE: comfy/ldm/cascade/controlnet.py
  class CNetResBlock (line 24) | class CNetResBlock(nn.Module):
    method __init__ (line 25) | def __init__(self, c, dtype=None, device=None, operations=None):
    method forward (line 36) | def forward(self, x):
  class ControlNet (line 40) | class ControlNet(nn.Module):
    method __init__ (line 41) | def __init__(self, c_in=3, c_proj=2048, proj_blocks=None, bottleneck_m...
    method forward (line 87) | def forward(self, x):

FILE: comfy/ldm/cascade/stage_a.py
  class vector_quantize (line 27) | class vector_quantize(Function):
    method forward (line 29) | def forward(ctx, x, codebook):
    method backward (line 44) | def backward(ctx, grad_output, grad_indices):
  class VectorQuantize (line 59) | class VectorQuantize(nn.Module):
    method __init__ (line 60) | def __init__(self, embedding_size, k, ema_decay=0.99, ema_loss=False):
    method _laplace_smoothing (line 80) | def _laplace_smoothing(self, x, epsilon):
    method _updateEMA (line 84) | def _updateEMA(self, z_e_x, indices):
    method idx2vq (line 95) | def idx2vq(self, idx, dim=-1):
    method forward (line 101) | def forward(self, x, get_losses=True, dim=-1):
  class ResBlock (line 121) | class ResBlock(nn.Module):
    method __init__ (line 122) | def __init__(self, c, c_hidden):
    method _norm (line 141) | def _norm(self, x, norm):
    method forward (line 144) | def forward(self, x):
  class StageA (line 160) | class StageA(nn.Module):
    method __init__ (line 161) | def __init__(self, levels=2, bottleneck_blocks=12, c_hidden=384, c_lat...
    method encode (line 205) | def encode(self, x, quantize=False):
    method decode (line 214) | def decode(self, x):
    method forward (line 219) | def forward(self, x, quantize=False):
  class Discriminator (line 225) | class Discriminator(nn.Module):
    method __init__ (line 226) | def __init__(self, c_in=3, c_cond=0, c_hidden=512, depth=6):
    method forward (line 243) | def forward(self, x, cond=None):

FILE: comfy/ldm/cascade/stage_b.py
  class StageB (line 24) | class StageB(nn.Module):
    method __init__ (line 25) | def __init__(self, c_in=4, c_out=4, c_r=64, patch_size=2, c_cond=1280,...
    method gen_r_embedding (line 158) | def gen_r_embedding(self, r, max_positions=10000):
    method gen_c_embeddings (line 169) | def gen_c_embeddings(self, clip):
    method _down_encode (line 176) | def _down_encode(self, x, r_embed, clip, transformer_options={}):
    method _up_decode (line 202) | def _up_decode(self, level_outputs, r_embed, clip, transformer_options...
    method forward (line 231) | def forward(self, x, r, effnet, clip, pixels=None, transformer_options...
    method update_weights_ema (line 252) | def update_weights_ema(self, src_model, beta=0.999):

FILE: comfy/ldm/cascade/stage_c.py
  class UpDownBlock2d (line 25) | class UpDownBlock2d(nn.Module):
    method __init__ (line 26) | def __init__(self, c_in, c_out, mode, enabled=True, dtype=None, device...
    method forward (line 34) | def forward(self, x):
  class StageC (line 40) | class StageC(nn.Module):
    method __init__ (line 41) | def __init__(self, c_in=16, c_out=16, c_r=64, patch_size=1, c_cond=204...
    method gen_r_embedding (line 162) | def gen_r_embedding(self, r, max_positions=10000):
    method gen_c_embeddings (line 173) | def gen_c_embeddings(self, clip_txt, clip_txt_pooled, clip_img):
    method _down_encode (line 185) | def _down_encode(self, x, r_embed, clip, cnet=None, transformer_option...
    method _up_decode (line 216) | def _up_decode(self, level_outputs, r_embed, clip, cnet=None, transfor...
    method forward (line 250) | def forward(self, x, r, clip_text, clip_text_pooled, clip_img, control...
    method update_weights_ema (line 269) | def update_weights_ema(self, src_model, beta=0.999):

FILE: comfy/ldm/cascade/stage_c_coder.py
  class EfficientNetEncoder (line 27) | class EfficientNetEncoder(nn.Module):
    method __init__ (line 28) | def __init__(self, c_latent=16):
    method forward (line 38) | def forward(self, x):
  class Previewer (line 46) | class Previewer(nn.Module):
    method __init__ (line 47) | def __init__(self, c_in=16, c_hidden=512, c_out=3):
    method forward (line 85) | def forward(self, x):
  class StageC_coder (line 88) | class StageC_coder(nn.Module):
    method __init__ (line 89) | def __init__(self):
    method encode (line 94) | def encode(self, x):
    method decode (line 97) | def decode(self, x):

FILE: comfy/ldm/chroma/layers.py
  class ChromaModulationOut (line 14) | class ChromaModulationOut(ModulationOut):
    method from_offset (line 16) | def from_offset(cls, tensor: torch.Tensor, offset: int = 0) -> Modulat...
  class Approximator (line 26) | class Approximator(nn.Module):
    method __init__ (line 27) | def __init__(self, in_dim: int, out_dim: int, hidden_dim: int, n_layer...
    method device (line 35) | def device(self):
    method forward (line 39) | def forward(self, x: Tensor) -> Tensor:
  class LastLayer (line 50) | class LastLayer(nn.Module):
    method __init__ (line 51) | def __init__(self, hidden_size: int, patch_size: int, out_channels: in...
    method forward (line 56) | def forward(self, x: Tensor, vec: Tensor) -> Tensor:

FILE: comfy/ldm/chroma/model.py
  class ChromaParams (line 26) | class ChromaParams:
  class Chroma (line 48) | class Chroma(nn.Module):
    method __init__ (line 53) | def __init__(self, image_model=None, final_layer=True, dtype=None, dev...
    method get_modulations (line 115) | def get_modulations(self, tensor: torch.Tensor, block_type: str, *, id...
    method forward_orig (line 143) | def forward_orig(
    method forward (line 270) | def forward(self, x, timestep, context, guidance, control=None, transf...
    method _forward (line 277) | def _forward(self, x, timestep, context, guidance, control=None, trans...

FILE: comfy/ldm/chroma_radiance/layers.py
  class NerfEmbedder (line 8) | class NerfEmbedder(nn.Module):
    method __init__ (line 17) | def __init__(
    method fetch_pos (line 48) | def fetch_pos(self, patch_size: int, device: torch.device, dtype: torc...
    method forward (line 102) | def forward(self, inputs: torch.Tensor) -> torch.Tensor:
  class NerfGLUBlock (line 135) | class NerfGLUBlock(nn.Module):
    method __init__ (line 139) | def __init__(self, hidden_size_s: int, hidden_size_x: int, mlp_ratio, ...
    method forward (line 150) | def forward(self, x: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
  class NerfFinalLayer (line 176) | class NerfFinalLayer(nn.Module):
    method __init__ (line 177) | def __init__(self, hidden_size, out_channels, dtype=None, device=None,...
    method forward (line 182) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class NerfFinalLayerConv (line 188) | class NerfFinalLayerConv(nn.Module):
    method __init__ (line 189) | def __init__(self, hidden_size: int, out_channels: int, dtype=None, de...
    method forward (line 201) | def forward(self, x: torch.Tensor) -> torch.Tensor:

FILE: comfy/ldm/chroma_radiance/model.py
  class ChromaRadianceParams (line 28) | class ChromaRadianceParams(ChromaParams):
  class ChromaRadiance (line 42) | class ChromaRadiance(Chroma):
    method __init__ (line 47) | def __init__(self, image_model=None, final_layer=True, dtype=None, dev...
    method _nerf_final_layer (line 166) | def _nerf_final_layer(self) -> nn.Module:
    method img_in (line 174) | def img_in(self, img: Tensor) -> Tensor:
    method forward_nerf (line 179) | def forward_nerf(
    method forward_tiled_nerf (line 223) | def forward_tiled_nerf(
    method radiance_get_override_params (line 260) | def radiance_get_override_params(self, overrides: dict) -> ChromaRadia...
    method _apply_x0_residual (line 282) | def _apply_x0_residual(self, predicted, noisy, timesteps):
    method _forward (line 288) | def _forward(

FILE: comfy/ldm/common_dit.py
  function pad_to_patch_size (line 5) | def pad_to_patch_size(img, patch_size=(2, 2), padding_mode="circular"):

FILE: comfy/ldm/cosmos/blocks.py
  function get_normalization (line 29) | def get_normalization(name: str, channels: int, weight_args={}, operatio...
  class BaseAttentionOp (line 38) | class BaseAttentionOp(nn.Module):
    method __init__ (line 39) | def __init__(self):
  class Attention (line 43) | class Attention(nn.Module):
    method __init__ (line 74) | def __init__(
    method cal_qkv (line 128) | def cal_qkv(
    method forward (line 173) | def forward(
  class FeedForward (line 194) | class FeedForward(nn.Module):
    method __init__ (line 215) | def __init__(
    method forward (line 237) | def forward(self, x: torch.Tensor):
  class GPT2FeedForward (line 247) | class GPT2FeedForward(FeedForward):
    method __init__ (line 248) | def __init__(self, d_model: int, d_ff: int, dropout: float = 0.1, bias...
    method forward (line 260) | def forward(self, x: torch.Tensor):
  function modulate (line 270) | def modulate(x, shift, scale):
  class Timesteps (line 274) | class Timesteps(nn.Module):
    method __init__ (line 275) | def __init__(self, num_channels):
    method forward (line 279) | def forward(self, timesteps):
  class TimestepEmbedding (line 294) | class TimestepEmbedding(nn.Module):
    method __init__ (line 295) | def __init__(self, in_features: int, out_features: int, use_adaln_lora...
    method forward (line 308) | def forward(self, sample: torch.Tensor) -> torch.Tensor:
  class FourierFeatures (line 323) | class FourierFeatures(nn.Module):
    method __init__ (line 343) | def __init__(self, num_channels, bandwidth=1, normalize=False):
    method forward (line 349) | def forward(self, x, gain: float = 1.0):
  class PatchEmbed (line 366) | class PatchEmbed(nn.Module):
    method __init__ (line 381) | def __init__(
    method forward (line 408) | def forward(self, x):
  class FinalLayer (line 431) | class FinalLayer(nn.Module):
    method __init__ (line 436) | def __init__(
    method forward (line 466) | def forward(
  class VideoAttn (line 489) | class VideoAttn(nn.Module):
    method __init__ (line 516) | def __init__(
    method forward (line 544) | def forward(
  function adaln_norm_state (line 582) | def adaln_norm_state(norm_state, x, scale, shift):
  class DITBuildingBlock (line 587) | class DITBuildingBlock(nn.Module):
    method __init__ (line 609) | def __init__(
    method forward (line 663) | def forward(
  class GeneralDITTransformerBlock (line 725) | class GeneralDITTransformerBlock(nn.Module):
    method __init__ (line 753) | def __init__(
    method forward (line 785) | def forward(

FILE: comfy/ldm/cosmos/cosmos_tokenizer/layers3d.py
  class CausalConv3d (line 59) | class CausalConv3d(nn.Module):
    method __init__ (line 60) | def __init__(
    method _replication_pad (line 98) | def _replication_pad(self, x: torch.Tensor) -> torch.Tensor:
    method forward (line 104) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CausalUpsample3d (line 109) | class CausalUpsample3d(nn.Module):
    method __init__ (line 110) | def __init__(self, in_channels: int) -> None:
    method forward (line 116) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CausalDownsample3d (line 129) | class CausalDownsample3d(nn.Module):
    method __init__ (line 130) | def __init__(self, in_channels: int) -> None:
    method forward (line 141) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CausalHybridUpsample3d (line 149) | class CausalHybridUpsample3d(nn.Module):
    method __init__ (line 150) | def __init__(
    method forward (line 188) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CausalHybridDownsample3d (line 211) | class CausalHybridDownsample3d(nn.Module):
    method __init__ (line 212) | def __init__(
    method forward (line 251) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CausalResnetBlock3d (line 275) | class CausalResnetBlock3d(nn.Module):
    method __init__ (line 276) | def __init__(
    method forward (line 303) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CausalResnetBlockFactorized3d (line 318) | class CausalResnetBlockFactorized3d(nn.Module):
    method __init__ (line 319) | def __init__(
    method forward (line 372) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CausalAttnBlock (line 387) | class CausalAttnBlock(nn.Module):
    method __init__ (line 388) | def __init__(self, in_channels: int, num_groups: int) -> None:
    method forward (line 407) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CausalTemporalAttnBlock (line 427) | class CausalTemporalAttnBlock(nn.Module):
    method __init__ (line 428) | def __init__(self, in_channels: int, num_groups: int) -> None:
    method forward (line 445) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class EncoderBase (line 479) | class EncoderBase(nn.Module):
    method __init__ (line 480) | def __init__(
    method patcher3d (line 561) | def patcher3d(self, x: torch.Tensor) -> torch.Tensor:
    method forward (line 567) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class DecoderBase (line 607) | class DecoderBase(nn.Module):
    method __init__ (line 608) | def __init__(
    method unpatcher3d (line 696) | def unpatcher3d(self, x: torch.Tensor) -> torch.Tensor:
    method forward (line 703) | def forward(self, z):
  class EncoderFactorized (line 734) | class EncoderFactorized(nn.Module):
    method __init__ (line 735) | def __init__(
    method forward (line 863) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class DecoderFactorized (line 888) | class DecoderFactorized(nn.Module):
    method __init__ (line 889) | def __init__(
    method forward (line 1020) | def forward(self, z):

FILE: comfy/ldm/cosmos/cosmos_tokenizer/patching.py
  class Patcher (line 39) | class Patcher(torch.nn.Module):
    method __init__ (line 49) | def __init__(self, patch_size=1, patch_method="haar"):
    method forward (line 65) | def forward(self, x):
    method _dwt (line 73) | def _dwt(self, x, mode="reflect", rescale=False):
    method _haar (line 97) | def _haar(self, x):
    method _arrange (line 102) | def _arrange(self, x):
  class Patcher3D (line 112) | class Patcher3D(Patcher):
    method __init__ (line 115) | def __init__(self, patch_size=1, patch_method="haar"):
    method _dwt (line 123) | def _dwt(self, x, wavelet, mode="reflect", rescale=False):
    method _haar (line 161) | def _haar(self, x):
    method _arrange (line 168) | def _arrange(self, x):
  class UnPatcher (line 181) | class UnPatcher(torch.nn.Module):
    method __init__ (line 191) | def __init__(self, patch_size=1, patch_method="haar"):
    method forward (line 207) | def forward(self, x):
    method _idwt (line 215) | def _idwt(self, x, wavelet="haar", mode="reflect", rescale=False):
    method _ihaar (line 252) | def _ihaar(self, x):
    method _iarrange (line 257) | def _iarrange(self, x):
  class UnPatcher3D (line 267) | class UnPatcher3D(UnPatcher):
    method __init__ (line 270) | def __init__(self, patch_size=1, patch_method="haar"):
    method _idwt (line 273) | def _idwt(self, x, wavelet="haar", mode="reflect", rescale=False):
    method _ihaar (line 362) | def _ihaar(self, x):
    method _iarrange (line 368) | def _iarrange(self, x):

FILE: comfy/ldm/cosmos/cosmos_tokenizer/utils.py
  function time2batch (line 26) | def time2batch(x: torch.Tensor) -> tuple[torch.Tensor, int]:
  function batch2time (line 31) | def batch2time(x: torch.Tensor, batch_size: int) -> torch.Tensor:
  function space2batch (line 35) | def space2batch(x: torch.Tensor) -> tuple[torch.Tensor, int]:
  function batch2space (line 40) | def batch2space(x: torch.Tensor, batch_size: int, height: int) -> torch....
  function cast_tuple (line 44) | def cast_tuple(t: Any, length: int = 1) -> Any:
  function replication_pad (line 48) | def replication_pad(x):
  function divisible_by (line 52) | def divisible_by(num: int, den: int) -> bool:
  function is_odd (line 56) | def is_odd(n: int) -> bool:
  function nonlinearity (line 60) | def nonlinearity(x):
  function Normalize (line 65) | def Normalize(in_channels, num_groups=32):
  class CausalNormalize (line 71) | class CausalNormalize(torch.nn.Module):
    method __init__ (line 72) | def __init__(self, in_channels, num_groups=1):
    method forward (line 82) | def forward(self, x):
  function exists (line 91) | def exists(v):
  function default (line 95) | def default(*args):
  function round_ste (line 102) | def round_ste(z: torch.Tensor) -> torch.Tensor:
  function log (line 108) | def log(t, eps=1e-5):
  function entropy (line 112) | def entropy(prob):
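The `time2batch`/`batch2time` pair listed above is the common trick of folding a video tensor's time axis into the batch axis so that per-frame 2D ops can be applied, then unfolding afterwards. A sketch of that pattern under the assumption of `(B, C, T, H, W)` layout (not necessarily the repository's exact code):

```python
import torch

def time2batch(x):
    # Fold time into batch: (B, C, T, H, W) -> (B*T, C, H, W).
    # Returns the original batch size so the inverse can reshape correctly.
    batch_size = x.shape[0]
    folded = x.permute(0, 2, 1, 3, 4).flatten(0, 1)
    return folded, batch_size

def batch2time(x, batch_size):
    # Inverse of time2batch: (B*T, C, H, W) -> (B, C, T, H, W).
    t = x.shape[0] // batch_size
    return x.reshape(batch_size, t, *x.shape[1:]).permute(0, 2, 1, 3, 4)
```

The round trip is lossless, so `batch2time(time2batch(x)[0], B)` recovers the input exactly.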

FILE: comfy/ldm/cosmos/model.py
  class DataType (line 43) | class DataType(Enum):
  class GeneralDIT (line 48) | class GeneralDIT(nn.Module):
    method __init__ (line 91) | def __init__(
    method build_pos_embed (line 213) | def build_pos_embed(self, device=None, dtype=None):
    method prepare_embedded_sequence (line 250) | def prepare_embedded_sequence(
    method decoder_head (line 310) | def decoder_head(
    method forward_before_blocks (line 342) | def forward_before_blocks(
    method forward (line 423) | def forward(
    method _forward (line 459) | def _forward(

FILE: comfy/ldm/cosmos/position_embedding.py
  function normalize (line 24) | def normalize(x: torch.Tensor, dim: Optional[List[int]] = None, eps: flo...
  class VideoPositionEmb (line 43) | class VideoPositionEmb(nn.Module):
    method forward (line 44) | def forward(self, x_B_T_H_W_C: torch.Tensor, fps=Optional[torch.Tensor...
    method generate_embeddings (line 53) | def generate_embeddings(self, B_T_H_W_C: torch.Size, fps=Optional[torc...
  class VideoRopePosition3DEmb (line 57) | class VideoRopePosition3DEmb(VideoPositionEmb):
    method __init__ (line 58) | def __init__(
    method generate_embeddings (line 100) | def generate_embeddings(
  class LearnablePosEmbAxis (line 166) | class LearnablePosEmbAxis(VideoPositionEmb):
    method __init__ (line 167) | def __init__(
    method generate_embeddings (line 192) | def generate_embeddings(self, B_T_H_W_C: torch.Size, fps=Optional[torc...

FILE: comfy/ldm/cosmos/predict2.py
  function apply_rotary_pos_emb (line 18) | def apply_rotary_pos_emb(
  class GPT2FeedForward (line 29) | class GPT2FeedForward(nn.Module):
    method __init__ (line 30) | def __init__(self, d_model: int, d_ff: int, device=None, dtype=None, o...
    method forward (line 40) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  function torch_attention_op (line 48) | def torch_attention_op(q_B_S_H_D: torch.Tensor, k_B_S_H_D: torch.Tensor,...
  class Attention (line 78) | class Attention(nn.Module):
    method __init__ (line 110) | def __init__(
    method compute_qkv (line 154) | def compute_qkv(
    method compute_attention (line 184) | def compute_attention(self, q: torch.Tensor, k: torch.Tensor, v: torch...
    method forward (line 188) | def forward(
  class Timesteps (line 204) | class Timesteps(nn.Module):
    method __init__ (line 205) | def __init__(self, num_channels: int):
    method forward (line 209) | def forward(self, timesteps_B_T: torch.Tensor) -> torch.Tensor:
  class TimestepEmbedding (line 226) | class TimestepEmbedding(nn.Module):
    method __init__ (line 227) | def __init__(self, in_features: int, out_features: int, use_adaln_lora...
    method forward (line 242) | def forward(self, sample: torch.Tensor) -> Tuple[torch.Tensor, Optiona...
  class PatchEmbed (line 257) | class PatchEmbed(nn.Module):
    method __init__ (line 272) | def __init__(
    method forward (line 297) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class FinalLayer (line 322) | class FinalLayer(nn.Module):
    method __init__ (line 327) | def __init__(
    method forward (line 357) | def forward(
  class Block (line 388) | class Block(nn.Module):
    method __init__ (line 409) | def __init__(
    method forward (line 456) | def forward(
  class MiniTrainDIT (line 573) | class MiniTrainDIT(nn.Module):
    method __init__ (line 608) | def __init__(
    method build_pos_embed (line 718) | def build_pos_embed(self, device=None, dtype=None) -> None:
    method prepare_embedded_sequence (line 755) | def prepare_embedded_sequence(
    method unpatchify (line 808) | def unpatchify(self, x_B_T_H_W_M: torch.Tensor) -> torch.Tensor:
    method forward (line 818) | def forward(self,
    method _forward (line 832) | def _forward(

FILE: comfy/ldm/cosmos/vae.py
  class IdentityDistribution (line 30) | class IdentityDistribution(torch.nn.Module):
    method __init__ (line 31) | def __init__(self):
    method forward (line 34) | def forward(self, parameters):
  class GaussianDistribution (line 38) | class GaussianDistribution(torch.nn.Module):
    method __init__ (line 39) | def __init__(self, min_logvar: float = -30.0, max_logvar: float = 20.0):
    method sample (line 44) | def sample(self, mean, logvar):
    method forward (line 48) | def forward(self, parameters):
  class ContinuousFormulation (line 54) | class ContinuousFormulation(Enum):
  class CausalContinuousVideoTokenizer (line 59) | class CausalContinuousVideoTokenizer(nn.Module):
    method __init__ (line 60) | def __init__(
    method encode (line 103) | def encode(self, x):
    method decode (line 117) | def decode(self, z):
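
The `GaussianDistribution` class above (with its `min_logvar`/`max_logvar` bounds and `sample` method) suggests the standard reparameterization trick over a clamped log-variance. A minimal sketch of that idea, assuming the usual formulation (the function name and exact clamping behavior here are illustrative, not copied from the file):

```python
import torch

def gaussian_sample(mean, logvar, min_logvar=-30.0, max_logvar=20.0):
    # Clamp the log-variance for numerical stability, then apply the
    # reparameterization trick: z = mean + sigma * eps, eps ~ N(0, I).
    logvar = logvar.clamp(min_logvar, max_logvar)
    std = torch.exp(0.5 * logvar)
    return mean + std * torch.randn_like(mean)
```

Clamping keeps `exp(0.5 * logvar)` finite in fp16/bf16 even when the encoder emits extreme values.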

FILE: comfy/ldm/flux/controlnet.py
  class MistolineCondDownsamplBlock (line 14) | class MistolineCondDownsamplBlock(nn.Module):
    method __init__ (line 15) | def __init__(self, dtype=None, device=None, operations=None):
    method forward (line 39) | def forward(self, x):
  class MistolineControlnetBlock (line 42) | class MistolineControlnetBlock(nn.Module):
    method __init__ (line 43) | def __init__(self, hidden_size, dtype=None, device=None, operations=No...
    method forward (line 48) | def forward(self, x):
  class ControlNetFlux (line 52) | class ControlNetFlux(Flux):
    method __init__ (line 53) | def __init__(self, latent_input=False, num_union_modes=0, mistoline=Fa...
    method forward_orig (line 109) | def forward_orig(
    method forward (line 182) | def forward(self, x, timesteps, context, y=None, guidance=None, hint=N...

FILE: comfy/ldm/flux/layers.py
  class EmbedND (line 12) | class EmbedND(nn.Module):
    method __init__ (line 13) | def __init__(self, dim: int, theta: int, axes_dim: list):
    method forward (line 19) | def forward(self, ids: Tensor) -> Tensor:
  function timestep_embedding (line 29) | def timestep_embedding(t: Tensor, dim, max_period=10000, time_factor: fl...
  class MLPEmbedder (line 50) | class MLPEmbedder(nn.Module):
    method __init__ (line 51) | def __init__(self, in_dim: int, hidden_dim: int, bias=True, dtype=None...
    method forward (line 57) | def forward(self, x: Tensor) -> Tensor:
  class YakMLP (line 60) | class YakMLP(nn.Module):
    method __init__ (line 61) | def __init__(self, hidden_size: int, intermediate_size: int, dtype=Non...
    method forward (line 70) | def forward(self, x: Tensor) -> Tensor:
  function build_mlp (line 74) | def build_mlp(hidden_size, mlp_hidden_dim, mlp_silu_act=False, yak_mlp=F...
  class QKNorm (line 91) | class QKNorm(torch.nn.Module):
    method __init__ (line 92) | def __init__(self, dim: int, dtype=None, device=None, operations=None):
    method forward (line 97) | def forward(self, q: Tensor, k: Tensor, v: Tensor) -> tuple:
  class SelfAttention (line 103) | class SelfAttention(nn.Module):
    method __init__ (line 104) | def __init__(self, dim: int, num_heads: int = 8, qkv_bias: bool = Fals...
  class ModulationOut (line 115) | class ModulationOut:
  class Modulation (line 121) | class Modulation(nn.Module):
    method __init__ (line 122) | def __init__(self, dim: int, double: bool, bias=True, dtype=None, devi...
    method forward (line 128) | def forward(self, vec: Tensor) -> tuple:
  function apply_mod (line 139) | def apply_mod(tensor, m_mult, m_add=None, modulation_dims=None):
  class SiLUActivation (line 153) | class SiLUActivation(nn.Module):
    method __init__ (line 154) | def __init__(self):
    method forward (line 158) | def forward(self, x: Tensor) -> Tensor:
  class DoubleStreamBlock (line 163) | class DoubleStreamBlock(nn.Module):
    method __init__ (line 164) | def __init__(self, hidden_size: int, num_heads: int, mlp_ratio: float,...
    method forward (line 192) | def forward(self, img: Tensor, txt: Tensor, vec: Tensor, pe: Tensor, a...
  class SingleStreamBlock (line 261) | class SingleStreamBlock(nn.Module):
    method __init__ (line 267) | def __init__(
    method forward (line 316) | def forward(self, x: Tensor, vec: Tensor, pe: Tensor, attn_mask=None, ...
  class LastLayer (line 358) | class LastLayer(nn.Module):
    method __init__ (line 359) | def __init__(self, hidden_size: int, patch_size: int, out_channels: in...
    method forward (line 365) | def forward(self, x: Tensor, vec: Tensor, modulation_dims=None) -> Ten...
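
`timestep_embedding` in this file follows the common sinusoidal scheme (a `time_factor` scales the raw timestep before embedding). A hedged sketch of that construction, assuming the usual geometric frequency spacing (names and exact ordering of the cos/sin halves are illustrative):

```python
import math
import torch

def timestep_embedding_sketch(t, dim, max_period=10000, time_factor=1000.0):
    # Sinusoidal embedding: dim // 2 frequencies spaced geometrically
    # from 1 down to 1 / max_period, each contributing a cos and a sin.
    t = time_factor * t
    half = dim // 2
    freqs = torch.exp(
        -math.log(max_period) * torch.arange(half, dtype=torch.float32) / half
    )
    args = t[:, None].float() * freqs[None]
    return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
```

At `t = 0` the embedding is all ones in the cos half and all zeros in the sin half, which makes it easy to sanity-check.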

FILE: comfy/ldm/flux/math.py
  function attention (line 10) | def attention(q: Tensor, k: Tensor, v: Tensor, pe: Tensor, mask=None, tr...
  function rope (line 17) | def rope(pos: Tensor, dim: int, theta: int) -> Tensor:
  function _apply_rope1 (line 32) | def _apply_rope1(x: Tensor, freqs_cis: Tensor):
  function _apply_rope (line 43) | def _apply_rope(xq: Tensor, xk: Tensor, freqs_cis: Tensor):
  function apply_rope (line 51) | def apply_rope(xq, xk, freqs_cis):
  function apply_rope1 (line 56) | def apply_rope1(x, freqs_cis):
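
The `rope` / `apply_rope` pair in this file implements rotary position embedding. A minimal sketch of the technique under the standard formulation (function names, the interleaved pair layout, and the cos/sin return convention are assumptions for illustration, not the file's exact API):

```python
import torch

def rope_sketch(pos, dim, theta=10000):
    # Per-position rotation angles for dim // 2 frequency bands.
    assert dim % 2 == 0
    scale = torch.arange(0, dim, 2, dtype=torch.float32) / dim
    omega = 1.0 / (theta ** scale)           # (dim/2,)
    angles = pos.float()[..., None] * omega  # (..., dim/2)
    return torch.cos(angles), torch.sin(angles)

def apply_rope_sketch(x, cos, sin):
    # Rotate consecutive channel pairs (x_even, x_odd) by the angles.
    x_even, x_odd = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x_even * cos - x_odd * sin
    out[..., 1::2] = x_even * sin + x_odd * cos
    return out
```

Because each pair is a pure rotation, the per-token norm is preserved, and position 0 maps to the identity.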

FILE: comfy/ldm/flux/model.py
  class FluxParams (line 22) | class FluxParams:
  function invert_slices (line 47) | def invert_slices(slices, length):
  class Flux (line 63) | class Flux(nn.Module):
    method __init__ (line 68) | def __init__(self, image_model=None, final_layer=True, dtype=None, dev...
    method forward_orig (line 147) | def forward_orig(
    method process_img (line 314) | def process_img(self, x, index=0, h_offset=0, w_offset=0, transformer_...
    method forward (line 344) | def forward(self, x, timestep, context, y=None, guidance=None, ref_lat...
    method _forward (line 351) | def _forward(self, x, timestep, context, y=None, guidance=None, ref_la...

FILE: comfy/ldm/flux/redux.py
  class ReduxImageEncoder (line 6) | class ReduxImageEncoder(torch.nn.Module):
    method __init__ (line 7) | def __init__(
    method forward (line 23) | def forward(self, sigclip_embeds) -> torch.Tensor:

FILE: comfy/ldm/genmo/joint_model/asymm_models_joint.py
  function modulated_rmsnorm (line 33) | def modulated_rmsnorm(x, scale, eps=1e-6):
  function residual_tanh_gated_rmsnorm (line 41) | def residual_tanh_gated_rmsnorm(x, x_res, gate, eps=1e-6):
  class AsymmetricAttention (line 53) | class AsymmetricAttention(nn.Module):
    method __init__ (line 54) | def __init__(
    method forward (line 105) | def forward(
  class AsymmetricJointBlock (line 160) | class AsymmetricJointBlock(nn.Module):
    method __init__ (line 161) | def __init__(
    method forward (line 223) | def forward(
    method ff_block_x (line 277) | def ff_block_x(self, x, scale_x, gate_x):
    method ff_block_y (line 283) | def ff_block_y(self, y, scale_y, gate_y):
  class FinalLayer (line 290) | class FinalLayer(nn.Module):
    method __init__ (line 295) | def __init__(
    method forward (line 313) | def forward(self, x, c):
  class AsymmDiTJoint (line 321) | class AsymmDiTJoint(nn.Module):
    method __init__ (line 328) | def __init__(
    method embed_x (line 443) | def embed_x(self, x: torch.Tensor) -> torch.Tensor:
    method prepare (line 453) | def prepare(
    method forward (line 487) | def forward(

FILE: comfy/ldm/genmo/joint_model/layers.py
  function _ntuple (line 17) | def _ntuple(n):
  class TimestepEmbedder (line 29) | class TimestepEmbedder(nn.Module):
    method __init__ (line 30) | def __init__(
    method timestep_embedding (line 51) | def timestep_embedding(t, dim, max_period=10000):
    method forward (line 63) | def forward(self, t, out_dtype):
  class FeedForward (line 71) | class FeedForward(nn.Module):
    method __init__ (line 72) | def __init__(
    method forward (line 94) | def forward(self, x):
  class PatchEmbed (line 100) | class PatchEmbed(nn.Module):
    method __init__ (line 101) | def __init__(
    method forward (line 133) | def forward(self, x):

FILE: comfy/ldm/genmo/joint_model/rope_mixed.py
  function centers (line 9) | def centers(start: float, stop, num, dtype=None, device=None):
  function create_position_matrix (line 27) | def create_position_matrix(
  function compute_mixed_rotation (line 68) | def compute_mixed_rotation(

FILE: comfy/ldm/genmo/joint_model/temporal_rope.py
  function apply_rotary_emb_qk_real (line 7) | def apply_rotary_emb_qk_real(

FILE: comfy/ldm/genmo/joint_model/utils.py
  function modulate (line 11) | def modulate(x, shift, scale):
  function pool_tokens (line 15) | def pool_tokens(x: torch.Tensor, mask: torch.Tensor, *, keepdim=False) -...
  class AttentionPool (line 36) | class AttentionPool(nn.Module):
    method __init__ (line 37) | def __init__(
    method forward (line 59) | def forward(self, x, mask):
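
`pool_tokens(x, mask, keepdim=...)` above points at masked mean pooling over the token dimension. A hedged sketch of that operation, assuming `x` is `(B, T, D)` and `mask` is `(B, T)` (shapes and the `keepdim` behavior are assumptions):

```python
import torch

def pool_tokens_sketch(x, mask, keepdim=False):
    # Masked mean-pool over tokens: zero out padded positions, then
    # divide by the per-sample count of valid tokens.
    mask = mask.to(x.dtype)[..., None]  # (B, T, 1)
    pooled = (x * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
    return pooled.unsqueeze(1) if keepdim else pooled
```

The `clamp(min=1)` guards against an all-masked row producing a division by zero.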

FILE: comfy/ldm/genmo/vae/model.py
  function cast_tuple (line 22) | def cast_tuple(t, length=1):
  class GroupNormSpatial (line 26) | class GroupNormSpatial(ops.GroupNorm):
    method forward (line 31) | def forward(self, x: torch.Tensor, *, chunk_size: int = 8):
  class PConv3d (line 40) | class PConv3d(ops.Conv3d):
    method __init__ (line 41) | def __init__(
    method forward (line 68) | def forward(self, x: torch.Tensor):
  class Conv1x1 (line 85) | class Conv1x1(ops.Linear):
    method __init__ (line 88) | def __init__(self, in_features: int, out_features: int, *args, **kwargs):
    method forward (line 91) | def forward(self, x: torch.Tensor):
  class DepthToSpaceTime (line 106) | class DepthToSpaceTime(nn.Module):
    method __init__ (line 107) | def __init__(
    method extra_repr (line 117) | def extra_repr(self):
    method forward (line 120) | def forward(self, x: torch.Tensor):
  function norm_fn (line 149) | def norm_fn(
  class ResBlock (line 156) | class ResBlock(nn.Module):
    method __init__ (line 159) | def __init__(
    method forward (line 201) | def forward(self, x: torch.Tensor):
  class Attention (line 215) | class Attention(nn.Module):
    method __init__ (line 216) | def __init__(
    method forward (line 232) | def forward(
  class AttentionBlock (line 276) | class AttentionBlock(nn.Module):
    method __init__ (line 277) | def __init__(
    method forward (line 286) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CausalUpsampleBlock (line 290) | class CausalUpsampleBlock(nn.Module):
    method __init__ (line 291) | def __init__(
    method forward (line 321) | def forward(self, x):
  function block_fn (line 328) | def block_fn(channels, *, affine: bool = True, has_attention: bool = Fal...
  class DownsampleBlock (line 333) | class DownsampleBlock(nn.Module):
    method __init__ (line 334) | def __init__(
    method forward (line 378) | def forward(self, x):
  function add_fourier_features (line 382) | def add_fourier_features(inputs: torch.Tensor, start=6, stop=8, step=1):
  class FourierFeatures (line 409) | class FourierFeatures(nn.Module):
    method __init__ (line 410) | def __init__(self, start: int = 6, stop: int = 8, step: int = 1):
    method forward (line 416) | def forward(self, inputs):
  class Decoder (line 428) | class Decoder(nn.Module):
    method __init__ (line 429) | def __init__(
    method forward (line 509) | def forward(self, x):
  class LatentDistribution (line 532) | class LatentDistribution:
    method __init__ (line 533) | def __init__(self, mean: torch.Tensor, logvar: torch.Tensor):
    method sample (line 544) | def sample(self, temperature=1.0, generator: torch.Generator = None, n...
    method mode (line 560) | def mode(self):
  class Encoder (line 563) | class Encoder(nn.Module):
    method __init__ (line 564) | def __init__(
    method temporal_downsample (line 637) | def temporal_downsample(self):
    method spatial_downsample (line 641) | def spatial_downsample(self):
    method forward (line 644) | def forward(self, x) -> LatentDistribution:
  class VideoVAE (line 673) | class VideoVAE(nn.Module):
    method __init__ (line 674) | def __init__(self):
    method encode (line 707) | def encode(self, x):
    method decode (line 710) | def decode(self, x):
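
`add_fourier_features(inputs, start=6, stop=8, step=1)` in this VAE suggests octave-spaced Fourier features appended along the channel dimension. A sketch of one common construction (the `2**f * pi` scaling and channel layout are assumptions, not verified against the file):

```python
import math
import torch

def add_fourier_features_sketch(x, start=6, stop=8, step=1):
    # For each octave f in [start, stop), append sin and cos of
    # (2**f * pi * x) along the channel dim, keeping the raw input first.
    feats = [x]
    for f in range(start, stop, step):
        w = (2.0 ** f) * math.pi * x
        feats.extend([torch.sin(w), torch.cos(w)])
    return torch.cat(feats, dim=1)
```

With the defaults, a C-channel input grows to C + 4C = 5C channels (two octaves, each adding a sin and a cos copy).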

FILE: comfy/ldm/hidream/model.py
  class EmbedND (line 21) | class EmbedND(nn.Module):
    method __init__ (line 22) | def __init__(self, theta: int, axes_dim: List[int]):
    method forward (line 27) | def forward(self, ids: torch.Tensor) -> torch.Tensor:
  class PatchEmbed (line 36) | class PatchEmbed(nn.Module):
    method __init__ (line 37) | def __init__(
    method forward (line 49) | def forward(self, latent):
  class PooledEmbed (line 54) | class PooledEmbed(nn.Module):
    method __init__ (line 55) | def __init__(self, text_emb_dim, hidden_size, dtype=None, device=None,...
    method forward (line 59) | def forward(self, pooled_embed):
  class TimestepEmbed (line 63) | class TimestepEmbed(nn.Module):
    method __init__ (line 64) | def __init__(self, hidden_size, frequency_embedding_size=256, dtype=No...
    method forward (line 69) | def forward(self, timesteps, wdtype):
  function attention (line 75) | def attention(query: torch.Tensor, key: torch.Tensor, value: torch.Tenso...
  class HiDreamAttnProcessor_flashattn (line 79) | class HiDreamAttnProcessor_flashattn:
    method __call__ (line 82) | def __call__(
  class HiDreamAttention (line 148) | class HiDreamAttention(nn.Module):
    method __init__ (line 149) | def __init__(
    method forward (line 197) | def forward(
  class FeedForwardSwiGLU (line 215) | class FeedForwardSwiGLU(nn.Module):
    method __init__ (line 216) | def __init__(
    method forward (line 237) | def forward(self, x):
  class MoEGate (line 242) | class MoEGate(nn.Module):
    method __init__ (line 243) | def __init__(self, embed_dim, num_routed_experts=4, num_activated_expe...
    method reset_parameters (line 258) | def reset_parameters(self) -> None:
    method forward (line 263) | def forward(self, hidden_states):
  class MOEFeedForwardSwiGLU (line 287) | class MOEFeedForwardSwiGLU(nn.Module):
    method __init__ (line 288) | def __init__(
    method forward (line 307) | def forward(self, x):
    method moe_infer (line 328) | def moe_infer(self, x, flat_expert_indices, flat_expert_weights):
  class TextProjection (line 349) | class TextProjection(nn.Module):
    method __init__ (line 350) | def __init__(self, in_features, hidden_size, dtype=None, device=None, ...
    method forward (line 354) | def forward(self, caption):
  class BlockType (line 359) | class BlockType:
  class HiDreamImageSingleTransformerBlock (line 364) | class HiDreamImageSingleTransformerBlock(nn.Module):
    method __init__ (line 365) | def __init__(
    method forward (line 405) | def forward(
  class HiDreamImageTransformerBlock (line 437) | class HiDreamImageTransformerBlock(nn.Module):
    method __init__ (line 438) | def __init__(
    method forward (line 483) | def forward(
  class HiDreamImageBlock (line 527) | class HiDreamImageBlock(nn.Module):
    method __init__ (line 528) | def __init__(
    method forward (line 552) | def forward(
  class HiDreamImageTransformer2DModel (line 571) | class HiDreamImageTransformer2DModel(nn.Module):
    method __init__ (line 572) | def __init__(
    method expand_timesteps (line 654) | def expand_timesteps(self, timesteps, batch_size, device):
    method unpatchify (line 668) | def unpatchify(self, x: torch.Tensor, img_sizes: List[Tuple[int, int]]...
    method patchify (line 679) | def patchify(self, x, max_seq, img_sizes=None):
    method forward (line 704) | def forward(self,
    method _forward (line 720) | def _forward(
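
`MoEGate` (with `num_routed_experts` / `num_activated_experts`) and `moe_infer` above follow the usual top-k softmax routing pattern. A minimal sketch of the gating step only, assuming a plain linear scorer and renormalized top-k weights (the function and argument names are illustrative):

```python
import torch

def moe_gate_sketch(hidden_states, gate_weight, top_k=2):
    # hidden_states: (tokens, dim); gate_weight: (num_experts, dim).
    # Score every expert, keep the top_k per token, renormalize weights.
    logits = hidden_states @ gate_weight.t()
    probs = logits.softmax(dim=-1)
    weights, indices = probs.topk(top_k, dim=-1)
    weights = weights / weights.sum(dim=-1, keepdim=True)
    return weights, indices
```

`moe_infer`-style dispatch would then gather tokens per expert index, run each expert MLP on its slice, and scatter the weighted outputs back.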

FILE: comfy/ldm/hunyuan3d/model.py
  class Hunyuan3Dv2 (line 13) | class Hunyuan3Dv2(nn.Module):
    method __init__ (line 14) | def __init__(
    method forward (line 70) | def forward(self, x, timestep, context, guidance=None, transformer_opt...
    method _forward (line 77) | def _forward(self, x, timestep, context, guidance=None, transformer_op...

FILE: comfy/ldm/hunyuan3d/vae.py
  function fps (line 18) | def fps(src: torch.Tensor, batch: torch.Tensor, sampling_ratio: float, s...
  class PointCrossAttention (line 62) | class PointCrossAttention(nn.Module):
    method __init__ (line 63) | def __init__(self,
    method sample_points_and_latents (line 113) | def sample_points_and_latents(self, point_cloud: torch.Tensor, feature...
    method forward (line 199) | def forward(self, point_cloud: torch.Tensor, features: torch.Tensor):
    method subsample (line 219) | def subsample(self, pc, num_query, input_pc_size: int):
    method handle_features (line 247) | def handle_features(self, features, idx_pc, input_pc_size, batch_size:...
  function normalize_mesh (line 258) | def normalize_mesh(mesh, scale = 0.9999):
  function sample_pointcloud (line 270) | def sample_pointcloud(mesh, num = 200000):
  function detect_sharp_edges (line 277) | def detect_sharp_edges(mesh, threshold=0.985):
  function sharp_sample_pointcloud (line 297) | def sharp_sample_pointcloud(mesh, num = 16384):
  function load_surface_sharpedge (line 317) | def load_surface_sharpedge(mesh, num_points=4096, num_sharp_points=4096,...
  class SharpEdgeSurfaceLoader (line 377) | class SharpEdgeSurfaceLoader:
    method __init__ (line 380) | def __init__(self, num_uniform_points = 8192, num_sharp_points = 8192):
    method __call__ (line 386) | def __call__(self, mesh_input, device = "cuda"):
    method _load_mesh (line 391) | def _load_mesh(mesh_input):
  class DiagonalGaussianDistribution (line 407) | class DiagonalGaussianDistribution:
    method __init__ (line 408) | def __init__(self, params: torch.Tensor, feature_dim: int = -1):
    method sample (line 416) | def sample(self):
  class VanillaVolumeDecoder (line 427) | class VanillaVolumeDecoder():
    method __call__ (line 429) | def __call__(self, latents: torch.Tensor, geo_decoder: callable, octre...
  class FourierEmbedder (line 459) | class FourierEmbedder(nn.Module):
    method __init__ (line 496) | def __init__(self,
    method get_dims (line 529) | def get_dims(self, input_dim):
    method forward (line 535) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CrossAttentionProcessor (line 555) | class CrossAttentionProcessor:
    method __call__ (line 556) | def __call__(self, attn, q, k, v):
  class DropPath (line 560) | class DropPath(nn.Module):
    method __init__ (line 564) | def __init__(self, drop_prob: float = 0., scale_by_keep: bool = True):
    method forward (line 569) | def forward(self, x):
    method extra_repr (line 588) | def extra_repr(self):
  class MLP (line 592) | class MLP(nn.Module):
    method __init__ (line 593) | def __init__(
    method forward (line 607) | def forward(self, x):
  class QKVMultiheadCrossAttention (line 610) | class QKVMultiheadCrossAttention(nn.Module):
    method __init__ (line 611) | def __init__(
    method forward (line 625) | def forward(self, q, kv):
  class MultiheadCrossAttention (line 646) | class MultiheadCrossAttention(nn.Module):
    method __init__ (line 647) | def __init__(
    method forward (line 674) | def forward(self, x, data):
  class ResidualCrossAttentionBlock (line 687) | class ResidualCrossAttentionBlock(nn.Module):
    method __init__ (line 688) | def __init__(
    method forward (line 717) | def forward(self, x: torch.Tensor, data: torch.Tensor):
  class QKVMultiheadAttention (line 723) | class QKVMultiheadAttention(nn.Module):
    method __init__ (line 724) | def __init__(
    method forward (line 737) | def forward(self, qkv):
  class MultiheadAttention (line 751) | class MultiheadAttention(nn.Module):
    method __init__ (line 752) | def __init__(
    method forward (line 774) | def forward(self, x):
  class ResidualAttentionBlock (line 781) | class ResidualAttentionBlock(nn.Module):
    method __init__ (line 782) | def __init__(
    method forward (line 805) | def forward(self, x: torch.Tensor):
  class Transformer (line 811) | class Transformer(nn.Module):
    method __init__ (line 812) | def __init__(
    method forward (line 840) | def forward(self, x: torch.Tensor):
  class CrossAttentionDecoder (line 846) | class CrossAttentionDecoder(nn.Module):
    method __init__ (line 848) | def __init__(
    method forward (line 886) | def forward(self, queries=None, query_embeddings=None, latents=None):
  class ShapeVAE (line 899) | class ShapeVAE(nn.Module):
    method __init__ (line 900) | def __init__(
    method decode (line 966) | def decode(self, latents, **kwargs):
    method encode (line 978) | def encode(self, surface):
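
The `DropPath` module above (with `drop_prob` and `scale_by_keep`) is stochastic depth. A sketch of the standard per-sample formulation (the free-function form is illustrative; the file wraps it in an `nn.Module`):

```python
import torch

def drop_path_sketch(x, drop_prob=0.0, training=True, scale_by_keep=True):
    # Stochastic depth: zero out whole samples of a residual branch with
    # probability drop_prob, rescaling survivors to keep the expectation.
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)  # broadcast past batch dim
    mask = x.new_empty(shape).bernoulli_(keep_prob)
    if scale_by_keep:
        mask = mask / keep_prob
    return x * mask
```

At eval time (or `drop_prob == 0`) it is the identity, so no separate inference path is needed.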

FILE: comfy/ldm/hunyuan3dv2_1/hunyuandit.py
  class GELU (line 8) | class GELU(nn.Module):
    method __init__ (line 10) | def __init__(self, dim_in: int, dim_out: int, operations, device, dtype):
    method gelu (line 14) | def gelu(self, gate: torch.Tensor) -> torch.Tensor:
    method forward (line 21) | def forward(self, hidden_states):
  class FeedForward (line 28) | class FeedForward(nn.Module):
    method __init__ (line 30) | def __init__(self, dim: int, dim_out = None, mult: int = 4,
    method forward (line 47) | def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
  class AddAuxLoss (line 52) | class AddAuxLoss(torch.autograd.Function):
    method forward (line 55) | def forward(ctx, x, loss):
    method backward (line 63) | def backward(ctx, grad_output):
  class MoEGate (line 73) | class MoEGate(nn.Module):
    method __init__ (line 75) | def __init__(self, embed_dim, num_experts=16, num_experts_per_tok=2, a...
    method forward (line 86) | def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
  class MoEBlock (line 114) | class MoEBlock(nn.Module):
    method __init__ (line 115) | def __init__(self, dim, num_experts: int = 6, moe_top_k: int = 2, drop...
    method forward (line 130) | def forward(self, hidden_states) -> torch.Tensor:
    method moe_infer (line 160) | def moe_infer(self, x, flat_expert_indices, flat_expert_weights):
  class Timesteps (line 190) | class Timesteps(nn.Module):
    method __init__ (line 191) | def __init__(self, num_channels: int, downscale_freq_shift: float = 0.0,
    method forward (line 214) | def forward(self, timesteps: torch.Tensor):
  class TimestepEmbedder (line 235) | class TimestepEmbedder(nn.Module):
    method __init__ (line 236) | def __init__(self, hidden_size, frequency_embedding_size = 256, cond_p...
    method forward (line 251) | def forward(self, timesteps, condition):
  class MLP (line 264) | class MLP(nn.Module):
    method __init__ (line 265) | def __init__(self, *, width: int, operations = None, device = None, dt...
    method forward (line 272) | def forward(self, x):
  class CrossAttention (line 275) | class CrossAttention(nn.Module):
    method __init__ (line 276) | def __init__(
    method forward (line 317) | def forward(self, x, y):
  class Attention (line 353) | class Attention(nn.Module):
    method __init__ (line 355) | def __init__(
    method forward (line 391) | def forward(self, x):
  class HunYuanDiTBlock (line 422) | class HunYuanDiTBlock(nn.Module):
    method __init__ (line 423) | def __init__(
    method forward (line 491) | def forward(self, hidden_states, conditioning=None, text_states=None, ...
  class FinalLayer (line 519) | class FinalLayer(nn.Module):
    method __init__ (line 521) | def __init__(self, final_hidden_size, out_channels, operations, use_fp...
    method forward (line 532) | def forward(self, x):
  class HunYuanDiTPlain (line 538) | class HunYuanDiTPlain(nn.Module):
    method __init__ (line 541) | def __init__(
    method forward (line 607) | def forward(self, x, t, context, transformer_options = {}, **kwargs):

FILE: comfy/ldm/hunyuan_video/model.py
  class HunyuanVideoParams (line 27) | class HunyuanVideoParams:
  class SelfAttentionRef (line 49) | class SelfAttentionRef(nn.Module):
    method __init__ (line 50) | def __init__(self, dim: int, qkv_bias: bool = False, dtype=None, devic...
  class TokenRefinerBlock (line 56) | class TokenRefinerBlock(nn.Module):
    method __init__ (line 57) | def __init__(
    method forward (line 85) | def forward(self, x, c, mask, transformer_options={}):
  class IndividualTokenRefiner (line 98) | class IndividualTokenRefiner(nn.Module):
    method __init__ (line 99) | def __init__(
    method forward (line 122) | def forward(self, x, c, mask, transformer_options={}):
  class TokenRefiner (line 134) | class TokenRefiner(nn.Module):
    method __init__ (line 135) | def __init__(
    method forward (line 152) | def forward(
  class ByT5Mapper (line 173) | class ByT5Mapper(nn.Module):
    method __init__ (line 174) | def __init__(self, in_dim, out_dim, hidden_dim, out_dim1, use_res=Fals...
    method forward (line 183) | def forward(self, x):
  class HunyuanVideo (line 196) | class HunyuanVideo(nn.Module):
    method __init__ (line 201) | def __init__(self, image_model=None, final_layer=True, dtype=None, dev...
    method forward_orig (line 289) | def forward_orig(
    method img_ids (line 459) | def img_ids(self, x):
    method img_ids_2d (line 471) | def img_ids_2d(self, x):
    method forward (line 481) | def forward(self, x, timestep, context, y=None, txt_byt5=None, clip_fe...
    method _forward (line 488) | def _forward(self, x, timestep, context, y=None, txt_byt5=None, clip_f...

FILE: comfy/ldm/hunyuan_video/upsampler.py
  class SRResidualCausalBlock3D (line 9) | class SRResidualCausalBlock3D(nn.Module):
    method __init__ (line 10) | def __init__(self, channels: int):
    method forward (line 20) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class SRModel3DV2 (line 23) | class SRModel3DV2(nn.Module):
    method __init__ (line 24) | def __init__(
    method forward (line 38) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class Upsampler (line 49) | class Upsampler(nn.Module):
    method __init__ (line 50) | def __init__(
    method forward (line 81) | def forward(self, z):
  class HunyuanVideo15SRModel (line 104) | class HunyuanVideo15SRModel():
    method __init__ (line 105) | def __init__(self, model_type, config):
    method load_sd (line 114) | def load_sd(self, sd):
    method get_sd (line 117) | def get_sd(self):
    method resample_latent (line 120) | def resample_latent(self, latent):

FILE: comfy/ldm/hunyuan_video/vae.py
  class PixelShuffle2D (line 8) | class PixelShuffle2D(nn.Module):
    method __init__ (line 9) | def __init__(self, in_dim, out_dim, op=ops.Conv2d):
    method forward (line 14) | def forward(self, x):
  class PixelUnshuffle2D (line 22) | class PixelUnshuffle2D(nn.Module):
    method __init__ (line 23) | def __init__(self, in_dim, out_dim, op=ops.Conv2d):
    method forward (line 28) | def forward(self, x):
  class Encoder (line 36) | class Encoder(nn.Module):
    method __init__ (line 37) | def __init__(self, in_channels, z_channels, block_out_channels, num_re...
    method forward (line 71) | def forward(self, x):
  class Decoder (line 89) | class Decoder(nn.Module):
    method __init__ (line 90) | def __init__(self, z_channels, out_channels, block_out_channels, num_r...
    method forward (line 126) | def forward(self, z):
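
`PixelShuffle2D` / `PixelUnshuffle2D` above (each taking an `op=ops.Conv2d`) combine a convolution with sub-pixel shuffling. A hypothetical sketch of the upsampling direction, assuming a 3x3 conv that expands channels by `factor**2` before `pixel_shuffle` (the class name and kernel choice are mine):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelShuffleUp2D(nn.Module):
    # Sub-pixel upsampling: the conv produces factor**2 channel groups,
    # and pixel_shuffle rearranges them into a factor-x larger grid.
    def __init__(self, in_dim, out_dim, factor=2):
        super().__init__()
        self.conv = nn.Conv2d(in_dim, out_dim * factor * factor, 3, padding=1)
        self.factor = factor

    def forward(self, x):
        return F.pixel_shuffle(self.conv(x), self.factor)
```

The unshuffle direction inverts this: `pixel_unshuffle` packs spatial blocks into channels, and a conv then reduces the channel count.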

FILE: comfy/ldm/hunyuan_video/vae_refiner.py
  class RMS_norm (line 11) | class RMS_norm(nn.Module):
    method __init__ (line 12) | def __init__(self, dim):
    method forward (line 18) | def forward(self, x):
  class DnSmpl (line 21) | class DnSmpl(nn.Module):
    method __init__ (line 22) | def __init__(self, ic, oc, tds, refiner_vae, op):
    method forward (line 32) | def forward(self, x, conv_carry_in=None, conv_carry_out=None):
  class UpSmpl (line 81) | class UpSmpl(nn.Module):
    method __init__ (line 82) | def __init__(self, ic, oc, tus, refiner_vae, op):
    method forward (line 91) | def forward(self, x, conv_carry_in=None, conv_carry_out=None):
  class Encoder (line 136) | class Encoder(nn.Module):
    method __init__ (line 137) | def __init__(self, in_channels, z_channels, block_out_channels, num_re...
    method forward (line 184) | def forward(self, x):
  class Decoder (line 232) | class Decoder(nn.Module):
    method __init__ (line 233) | def __init__(self, z_channels, out_channels, block_out_channels, num_r...
    method forward (line 278) | def forward(self, z):
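
The `RMS_norm(dim)` module above is RMSNorm. A sketch of the standard definition over the last dimension (the class name here and the `eps` default are illustrative):

```python
import torch
import torch.nn as nn

class RMSNormSketch(nn.Module):
    # RMSNorm: divide by the root-mean-square over the channel dim and
    # apply a learned per-channel gain; no mean subtraction, no bias.
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return x * rms * self.gamma
```

Unlike LayerNorm it keeps the input's direction, only rescaling its magnitude, which is why the output's per-vector RMS is approximately 1.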

FILE: comfy/ldm/hydit/attn_layers.py
  function reshape_for_broadcast (line 7) | def reshape_for_broadcast(freqs_cis: Union[torch.Tensor, Tuple[torch.Ten...
  function rotate_half (line 49) | def rotate_half(x):
  function apply_rotary_emb (line 54) | def apply_rotary_emb(
  class CrossAttention (line 96) | class CrossAttention(nn.Module):
    method __init__ (line 100) | def __init__(self,
    method forward (line 134) | def forward(self, x, y, freqs_cis_img=None):
  class Attention (line 174) | class Attention(nn.Module):
    method __init__ (line 178) | def __init__(self, dim, num_heads, qkv_bias=True, qk_norm=False, attn_...
    method forward (line 198) | def forward(self, x, freqs_cis_img=None):

FILE: comfy/ldm/hydit/controlnet.py
  class HunYuanControlNet (line 17) | class HunYuanControlNet(nn.Module):
    method __init__ (line 47) | def __init__(
    method forward (line 203) | def forward(

FILE: comfy/ldm/hydit/models.py
  function calc_rope (line 14) | def calc_rope(x, patch_size, head_size):
  function modulate (line 26) | def modulate(x, shift, scale):
  class HunYuanDiTBlock (line 30) | class HunYuanDiTBlock(nn.Module):
    method __init__ (line 34) | def __init__(self,
    method _forward (line 89) | def _forward(self, x, c=None, text_states=None, freq_cis_img=None, ski...
    method forward (line 117) | def forward(self, x, c=None, text_states=None, freq_cis_img=None, skip...
  class FinalLayer (line 123) | class FinalLayer(nn.Module):
    method __init__ (line 127) | def __init__(self, final_hidden_size, c_emb_size, patch_size, out_chan...
    method forward (line 136) | def forward(self, x, c):
  class HunYuanDiT (line 143) | class HunYuanDiT(nn.Module):
    method __init__ (line 173) | def __init__(self,
    method forward (line 274) | def forward(self,
    method unpatchify (line 404) | def unpatchify(self, x, h, w):
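
The `modulate(x, shift, scale)` helper in this file is the adaptive layer-norm modulation used by DiT-style blocks. A sketch assuming the common `(1 + scale)` convention with per-batch conditioning broadcast over tokens (the exact broadcast axes are an assumption):

```python
import torch

def modulate_sketch(x, shift, scale):
    # AdaLN modulation: the conditioning vector supplies a per-channel
    # scale and shift, broadcast over the token dimension.
    # x: (B, T, D); shift, scale: (B, D).
    return x * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
```

The `1 +` term makes a zero-initialized conditioning projection act as the identity, a common trick for stable training of the modulation layers.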

FILE: comfy/ldm/hydit/poolers.py
  class AttentionPool (line 6) | class AttentionPool(nn.Module):
    method __init__ (line 7) | def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, o...
    method forward (line 17) | def forward(self, x):

FILE: comfy/ldm/hydit/posemb_layers.py
  function _to_tuple (line 6) | def _to_tuple(x):
  function get_fill_resize_and_crop (line 13) | def get_fill_resize_and_crop(src, tgt):
  function get_meshgrid (line 34) | def get_meshgrid(start, *args):
  function get_2d_sincos_pos_embed (line 64) | def get_2d_sincos_pos_embed(embed_dim, start, *args, cls_token=False, ex...
  function get_2d_sincos_pos_embed_from_grid (line 83) | def get_2d_sincos_pos_embed_from_grid(embed_dim, grid):
  function get_1d_sincos_pos_embed_from_grid (line 94) | def get_1d_sincos_pos_embed_from_grid(embed_dim, pos):
  function get_2d_rotary_pos_embed (line 120) | def get_2d_rotary_pos_embed(embed_dim, start, *args, use_real=True):
  function get_2d_rotary_pos_embed_from_grid (line 145) | def get_2d_rotary_pos_embed_from_grid(embed_dim, grid, use_real=False):
  function get_1d_rotary_pos_embed (line 161) | def get_1d_rotary_pos_embed(dim: int, pos: Union[np.ndarray, int], theta...
  function calc_sizes (line 195) | def calc_sizes(rope_img, patch_size, th, tw):
  function init_image_posemb (line 209) | def init_image_posemb(rope_img,

FILE: comfy/ldm/kandinsky5/model.py
  function attention (line 10) | def attention(q, k, v, heads, transformer_options={}):
  function apply_scale_shift_norm (line 20) | def apply_scale_shift_norm(norm, x, scale, shift):
  function apply_gate_sum (line 23) | def apply_gate_sum(x, out, gate):
  function get_shift_scale_gate (line 26) | def get_shift_scale_gate(params):
  function get_freqs (line 30) | def get_freqs(dim, max_period=10000.0):
  class TimeEmbeddings (line 34) | class TimeEmbeddings(nn.Module):
    method __init__ (line 35) | def __init__(self, model_dim, time_dim, max_period=10000.0, operation_...
    method forward (line 46) | def forward(self, timestep, dtype):
  class TextEmbeddings (line 53) | class TextEmbeddings(nn.Module):
    method __init__ (line 54) | def __init__(self, text_dim, model_dim, operation_settings=None):
    method forward (line 60) | def forward(self, text_embed):
  class VisualEmbeddings (line 65) | class VisualEmbeddings(nn.Module):
    method __init__ (line 66) | def __init__(self, visual_dim, model_dim, patch_size, operation_settin...
    method forward (line 72) | def forward(self, x):
  class Modulation (line 88) | class Modulation(nn.Module):
    method __init__ (line 89) | def __init__(self, time_dim, model_dim, num_params, operation_settings...
    method forward (line 94) | def forward(self, x):
  class SelfAttention (line 98) | class SelfAttention(nn.Module):
    method __init__ (line 99) | def __init__(self, num_channels, head_dim, operation_settings=None):
    method _compute_qk (line 115) | def _compute_qk(self, x, freqs, proj_fn, norm_fn):
    method _forward (line 119) | def _forward(self, x, freqs, transformer_options={}):
    method _forward_chunked (line 126) | def _forward_chunked(self, x, freqs, transformer_options={}):
    method forward (line 141) | def forward(self, x, freqs, transformer_options={}):
  class CrossAttention (line 148) | class CrossAttention(SelfAttention):
    method get_qkv (line 149) | def get_qkv(self, x, context):
    method forward (line 155) | def forward(self, x, context, transformer_options={}):
  class FeedForward (line 161) | class FeedForward(nn.Module):
    method __init__ (line 162) | def __init__(self, dim, ff_dim, operation_settings=None):
    method _forward (line 170) | def _forward(self, x):
    method _forward_chunked (line 173) | def _forward_chunked(self, x):
    method forward (line 180) | def forward(self, x):
  class OutLayer (line 187) | class OutLayer(nn.Module):
    method __init__ (line 188) | def __init__(self, model_dim, time_dim, visual_dim, patch_size, operat...
    method forward (line 196) | def forward(self, visual_embed, time_embed):
  class TransformerEncoderBlock (line 213) | class TransformerEncoderBlock(nn.Module):
    method __init__ (line 214) | def __init__(self, model_dim, time_dim, ff_dim, head_dim, operation_se...
    method forward (line 225) | def forward(self, x, time_embed, freqs, transformer_options={}):
  class TransformerDecoderBlock (line 239) | class TransformerDecoderBlock(nn.Module):
    method __init__ (line 240) | def __init__(self, model_dim, time_dim, ff_dim, head_dim, operation_se...
    method forward (line 254) | def forward(self, visual_embed, text_embed, time_embed, freqs, transfo...
  class Kandinsky5 (line 274) | class Kandinsky5(nn.Module):
    method __init__ (line 275) | def __init__(
    method rope_encode_1d (line 311) | def rope_encode_1d(self, seq_len, seq_start=0, steps=None, device=None...
    method rope_encode_3d (line 318) | def rope_encode_3d(self, t, h, w, t_start=0, steps_t=None, steps_h=Non...
    method forward_orig (line 362) | def forward_orig(self, x, timestep, context, y, freqs, freqs_text, tra...
    method _forward (line 389) | def _forward(self, x, timestep, context, y, time_dim_replace=None, tra...
    method forward (line 408) | def forward(self, x, timestep, context, y, time_dim_replace=None, tran...
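
  The `apply_scale_shift_norm` and `apply_gate_sum` helpers listed above carry names typical of DiT-style adaLN modulation (out = norm(x)·(1 + scale) + shift, followed by a gated residual). A toy sketch of that pattern on plain lists — an assumption about the pattern these names suggest, not this file's tensor code:

```python
def apply_scale_shift(normed_x, scale, shift):
    # adaLN modulation: per-channel scale and shift predicted from the timestep
    return [v * (1.0 + s) + b for v, s, b in zip(normed_x, scale, shift)]

def apply_gate_sum(x, out, gate):
    # gated residual connection: x + gate * out
    return [xv + g * ov for xv, ov, g in zip(x, out, gate)]

y = apply_scale_shift([1.0, 2.0], [0.5, 0.0], [0.25, -0.25])
# → [1.75, 1.75]
z = apply_gate_sum([1.0, 1.0], [2.0, 4.0], [0.5, 0.25])
# → [2.0, 2.0]
```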

FILE: comfy/ldm/lightricks/av_model.py
  class CompressedTimestep (line 20) | class CompressedTimestep:
    method __init__ (line 24) | def __init__(self, tensor: torch.Tensor, patches_per_frame: int):
    method expand (line 46) | def expand(self):
    method expand_for_computation (line 55) | def expand_for_computation(self, scale_shift_table: torch.Tensor, batc...
  class BasicAVTransformerBlock (line 83) | class BasicAVTransformerBlock(nn.Module):
    method __init__ (line 84) | def __init__(
    method get_ada_values (line 202) | def get_ada_values(
    method get_av_ca_ada_values (line 216) | def get_av_ca_ada_values(
    method _apply_text_cross_attention (line 237) | def _apply_text_cross_attention(
    method forward (line 256) | def forward(
  class LTXAVModel (line 378) | class LTXAVModel(LTXVModel):
    method __init__ (line 381) | def __init__(
    method _init_model_components (line 448) | def _init_model_components(self, device, dtype, **kwargs):
    method preprocess_text_embeds (line 563) | def preprocess_text_embeds(self, context, unprocessed=False):
    method _init_transformer_blocks (line 582) | def _init_transformer_blocks(self, device, dtype, **kwargs):
    method _init_output_components (line 605) | def _init_output_components(self, device, dtype):
    method separate_audio_and_video_latents (line 621) | def separate_audio_and_video_latents(self, x, audio_length):
    method recombine_audio_and_video_latents (line 640) | def recombine_audio_and_video_latents(self, vx, ax, target_shape=None):
    method _process_input (line 663) | def _process_input(self, x, keyframe_idxs, denoise_mask, **kwargs):
    method _prepare_timestep (line 689) | def _prepare_timestep(self, timestep, batch_size, hidden_dtype, **kwar...
    method _prepare_context (line 789) | def _prepare_context(self, context, batch_size, x, attention_mask=None):
    method _prepare_positional_embeddings (line 811) | def _prepare_positional_embeddings(self, pixel_coords, frame_rate, x_d...
    method _process_transformer_blocks (line 850) | def _process_transformer_blocks(
    method _process_output (line 952) | def _process_output(self, x, embedded_timestep, keyframe_idxs, **kwargs):
    method forward (line 984) | def forward(

FILE: comfy/ldm/lightricks/embeddings_connector.py
  class BasicTransformerBlock1D (line 16) | class BasicTransformerBlock1D(nn.Module):
    method __init__ (line 46) | def __init__(
    method forward (line 83) | def forward(self, hidden_states, attention_mask=None, pe=None) -> torc...
  class Embeddings1DConnector (line 112) | class Embeddings1DConnector(nn.Module):
    method __init__ (line 115) | def __init__(
    method get_fractional_positions (line 169) | def get_fractional_positions(self, indices_grid):
    method precompute_freqs (line 179) | def precompute_freqs(self, indices_grid, spacing):
    method generate_freq_grid (line 212) | def generate_freq_grid(self, spacing, dtype, device):
    method precompute_freqs_cis (line 239) | def precompute_freqs_cis(self, indices_grid, spacing="exp", out_dtype=...
    method forward (line 254) | def forward(

FILE: comfy/ldm/lightricks/latent_upsampler.py
  function _rational_for_scale (line 8) | def _rational_for_scale(scale: float) -> Tuple[int, int]:
  class PixelShuffleND (line 17) | class PixelShuffleND(nn.Module):
    method __init__ (line 18) | def __init__(self, dims, upscale_factors=(2, 2, 2)):
    method forward (line 24) | def forward(self, x):
  class BlurDownsample (line 48) | class BlurDownsample(nn.Module):
    method __init__ (line 54) | def __init__(self, dims: int, stride: int):
    method forward (line 67) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class SpatialRationalResampler (line 92) | class SpatialRationalResampler(nn.Module):
    method __init__ (line 100) | def __init__(self, mid_channels: int, scale: float):
    method forward (line 110) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class ResBlock (line 120) | class ResBlock(nn.Module):
    method __init__ (line 121) | def __init__(
    method forward (line 136) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class LatentUpsampler (line 147) | class LatentUpsampler(nn.Module):
    method __init__ (line 160) | def __init__(
    method forward (line 223) | def forward(self, latent: torch.Tensor) -> torch.Tensor:
    method from_config (line 269) | def from_config(cls, config):
    method config (line 281) | def config(self):
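
  The `_rational_for_scale(scale: float) -> Tuple[int, int]` helper above presumably expresses a fractional spatial scale as an integer up/down factor pair, matching the `SpatialRationalResampler` it sits next to. A hypothetical stdlib sketch of that idea using `fractions.Fraction.limit_denominator` — the repository's actual logic may differ:

```python
from fractions import Fraction

def rational_for_scale(scale: float, max_den: int = 16) -> tuple[int, int]:
    """Approximate a float scale as (numerator, denominator), e.g. for an
    upsample-by-n then downsample-by-d rational resampler."""
    frac = Fraction(scale).limit_denominator(max_den)
    return frac.numerator, frac.denominator

rational_for_scale(1.5)   # → (3, 2)
rational_for_scale(0.75)  # → (3, 4)
```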

FILE: comfy/ldm/lightricks/model.py
  function _log_base (line 20) | def _log_base(x, base):
  class LTXRopeType (line 23) | class LTXRopeType(str, Enum):
    method from_dict (line 30) | def from_dict(cls, kwargs, default=None):
  class LTXFrequenciesPrecision (line 36) | class LTXFrequenciesPrecision(str, Enum):
    method from_dict (line 43) | def from_dict(cls, kwargs, default=None):
  function get_timestep_embedding (line 49) | def get_timestep_embedding(
  class TimestepEmbedding (line 101) | class TimestepEmbedding(nn.Module):
    method __init__ (line 102) | def __init__(
    method forward (line 139) | def forward(self, sample, condition=None):
  class Timesteps (line 154) | class Timesteps(nn.Module):
    method __init__ (line 155) | def __init__(self, num_channels: int, flip_sin_to_cos: bool, downscale...
    method forward (line 162) | def forward(self, timesteps):
  class PixArtAlphaCombinedTimestepSizeEmbeddings (line 173) | class PixArtAlphaCombinedTimestepSizeEmbeddings(nn.Module):
    method __init__ (line 181) | def __init__(
    method forward (line 198) | def forward(self, timestep, resolution, aspect_ratio, batch_size, hidd...
  class AdaLayerNormSingle (line 204) | class AdaLayerNormSingle(nn.Module):
    method __init__ (line 215) | def __init__(
    method forward (line 232) | def forward(
  class PixArtAlphaTextProjection (line 245) | class PixArtAlphaTextProjection(nn.Module):
    method __init__ (line 252) | def __init__(
    method forward (line 271) | def forward(self, caption):
  class NormSingleLinearTextProjection (line 278) | class NormSingleLinearTextProjection(nn.Module):
    method __init__ (line 281) | def __init__(
    method forward (line 296) | def forward(self, caption):
  class GELU_approx (line 302) | class GELU_approx(nn.Module):
    method __init__ (line 303) | def __init__(self, dim_in, dim_out, dtype=None, device=None, operation...
    method forward (line 307) | def forward(self, x):
  class FeedForward (line 311) | class FeedForward(nn.Module):
    method __init__ (line 312) | def __init__(self, dim, dim_out, mult=4, glu=False, dropout=0.0, dtype...
    method forward (line 321) | def forward(self, x):
  function apply_rotary_emb (line 324) | def apply_rotary_emb(input_tensor, freqs_cis):
  function apply_interleaved_rotary_emb (line 333) | def apply_interleaved_rotary_emb(input_tensor, cos_freqs, sin_freqs):  #...
  function apply_split_rotary_emb (line 343) | def apply_split_rotary_emb(input_tensor, cos, sin):
  class CrossAttention (line 361) | class CrossAttention(nn.Module):
    method __init__ (line 362) | def __init__(
    method forward (line 400) | def forward(self, x, context=None, mask=None, pe=None, k_pe=None, tran...
  class BasicTransformerBlock (line 433) | class BasicTransformerBlock(nn.Module):
    method __init__ (line 434) | def __init__(
    method forward (line 470) | def forward(self, x, context=None, attention_mask=None, timestep=None,...
  function compute_prompt_timestep (line 490) | def compute_prompt_timestep(adaln_module, timestep_scaled, batch_size, h...
  function apply_cross_attention_adaln (line 512) | def apply_cross_attention_adaln(
  function get_fractional_positions (line 531) | def get_fractional_positions(indices_grid, max_pos):
  function generate_freq_grid_np (line 542) | def generate_freq_grid_np(positional_embedding_theta, positional_embeddi...
  function generate_freq_grid_pytorch (line 559) | def generate_freq_grid_pytorch(positional_embedding_theta, positional_em...
  function generate_freqs (line 580) | def generate_freqs(indices, indices_grid, max_pos, use_middle_indices_gr...
  function interleaved_freqs_cis (line 599) | def interleaved_freqs_cis(freqs, pad_size):
  function split_freqs_cis (line 609) | def split_freqs_cis(freqs, pad_size, num_attention_heads):
  class LTXBaseModel (line 630) | class LTXBaseModel(torch.nn.Module, ABC):
    method __init__ (line 638) | def __init__(
    method _init_common_components (line 701) | def _init_common_components(self, device, dtype):
    method _init_model_components (line 744) | def _init_model_components(self, device, dtype, **kwargs):
    method _init_transformer_blocks (line 749) | def _init_transformer_blocks(self, device, dtype, **kwargs):
    method _init_output_components (line 754) | def _init_output_components(self, device, dtype):
    method _process_input (line 759) | def _process_input(self, x, keyframe_idxs, denoise_mask, **kwargs):
    method _build_guide_self_attention_mask (line 763) | def _build_guide_self_attention_mask(self, x, transformer_options, mer...
    method _process_transformer_blocks (line 772) | def _process_transformer_blocks(self, x, context, attention_mask, time...
    method _process_output (line 777) | def _process_output(self, x, embedded_timestep, keyframe_idxs, **kwargs):
    method _prepare_timestep (line 781) | def _prepare_timestep(self, timestep, batch_size, hidden_dtype, **kwar...
    method _prepare_context (line 805) | def _prepare_context(self, context, batch_size, x, attention_mask=None):
    method _precompute_freqs_cis (line 813) | def _precompute_freqs_cis(
    method _prepare_positional_embeddings (line 838) | def _prepare_positional_embeddings(self, pixel_coords, frame_rate, x_d...
    method _prepare_attention_mask (line 852) | def _prepare_attention_mask(self, attention_mask, x_dtype):
    method forward (line 860) | def forward(
    method _forward (line 887) | def _forward(
  class LTXVModel (line 944) | class LTXVModel(LTXBaseModel):
    method __init__ (line 947) | def __init__(
    method _init_model_components (line 989) | def _init_model_components(self, device, dtype, **kwargs):
    method _init_transformer_blocks (line 993) | def _init_transformer_blocks(self, device, dtype, **kwargs):
    method _init_output_components (line 1011) | def _init_output_components(self, device, dtype):
    method _process_input (line 1020) | def _process_input(self, x, keyframe_idxs, denoise_mask, **kwargs):
    method _build_guide_self_attention_mask (line 1074) | def _build_guide_self_attention_mask(self, x, transformer_options, mer...
    method _downsample_mask_to_latent (line 1174) | def _downsample_mask_to_latent(mask, f_lat, h_lat, w_lat):
    method _build_self_attention_mask (line 1238) | def _build_self_attention_mask(total_tokens, num_guide_tokens, tracked...
    method _process_transformer_blocks (line 1276) | def _process_transformer_blocks(self, x, context, attention_mask, time...
    method _process_output (line 1306) | def _process_output(self, x, embedded_timestep, keyframe_idxs, **kwargs):
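
  The `apply_interleaved_rotary_emb` function listed above applies rotary position embedding (RoPE), which rotates each consecutive channel pair by a position-dependent angle. A toy sketch of the interleaved convention on a flat list — illustrating RoPE in general, not this file's exact tensor layout:

```python
import math

def rope_interleaved(x, angle):
    """Rotate each consecutive pair (x[2i], x[2i+1]) by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    out = []
    for i in range(0, len(x), 2):
        a, b = x[i], x[i + 1]
        out.append(a * c - b * s)  # rotated first component
        out.append(a * s + b * c)  # rotated second component
    return out

rope_interleaved([1.0, 0.0], math.pi / 2)
# rotating (1, 0) by 90 degrees gives (0, 1), up to float error
```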

FILE: comfy/ldm/lightricks/symmetric_patchifier.py
  function latent_to_pixel_coords (line 9) | def latent_to_pixel_coords(
  class Patchifier (line 36) | class Patchifier(ABC):
    method __init__ (line 37) | def __init__(self, patch_size: int, start_end: bool=False):
    method patchify (line 43) | def patchify(
    method unpatchify (line 49) | def unpatchify(
    method patch_size (line 60) | def patch_size(self):
    method get_latent_coords (line 63) | def get_latent_coords(
  class SymmetricPatchifier (line 97) | class SymmetricPatchifier(Patchifier):
    method patchify (line 98) | def patchify(
    method unpatchify (line 113) | def unpatchify(
  class AudioPatchifier (line 135) | class AudioPatchifier(Patchifier):
    method __init__ (line 136) | def __init__(self, patch_size: int,
    method copy_with_shift (line 151) | def copy_with_shift(self, shift):
    method _get_audio_latent_time_in_sec (line 157) | def _get_audio_latent_time_in_sec(self, start_latent, end_latent: int,...
    method patchify (line 165) | def patchify(self, audio_latents: torch.Tensor) -> Tuple[torch.Tensor,...
    method unpatchify (line 185) | def unpatchify(self, audio_latents: torch.Tensor, channels: int, freq:...

FILE: comfy/ldm/lightricks/vae/audio_vae.py
  class AudioVAEComponentConfig (line 22) | class AudioVAEComponentConfig:
    method from_metadata (line 29) | def from_metadata(cls, metadata: dict) -> "AudioVAEComponentConfig":
  class ModelDeviceManager (line 47) | class ModelDeviceManager:
    method __init__ (line 50) | def __init__(self, module: torch.nn.Module):
    method ensure_model_loaded (line 55) | def ensure_model_loaded(self) -> None:
    method move_to_load_device (line 62) | def move_to_load_device(self, tensor: torch.Tensor) -> torch.Tensor:
    method load_device (line 66) | def load_device(self):
  class AudioLatentNormalizer (line 70) | class AudioLatentNormalizer:
    method __init__ (line 73) | def __init__(self, patchfier: AudioPatchifier, statistics_processor: t...
    method normalize (line 77) | def normalize(self, latents: torch.Tensor) -> torch.Tensor:
    method denormalize (line 84) | def denormalize(self, latents: torch.Tensor) -> torch.Tensor:
  class AudioPreprocessor (line 92) | class AudioPreprocessor:
    method __init__ (line 95) | def __init__(self, target_sample_rate: int, mel_bins: int, mel_hop_len...
    method resample (line 101) | def resample(self, waveform: torch.Tensor, source_rate: int) -> torch....
    method waveform_to_mel (line 106) | def waveform_to_mel(
  class AudioVAE (line 132) | class AudioVAE(torch.nn.Module):
    method __init__ (line 135) | def __init__(self, state_dict: dict, metadata: dict):
    method encode (line 173) | def encode(self, audio: dict) -> torch.Tensor:
    method decode (line 203) | def decode(self, latents: torch.Tensor) -> torch.Tensor:
    method target_shape_from_latents (line 219) | def target_shape_from_latents(self, latents_shape):
    method num_of_latents_from_frames (line 231) | def num_of_latents_from_frames(self, frames_number: int, frame_rate: i...
    method run_vocoder (line 234) | def run_vocoder(self, mel_spec: torch.Tensor) -> torch.Tensor:
    method sample_rate (line 246) | def sample_rate(self) -> int:
    method mel_hop_length (line 250) | def mel_hop_length(self) -> int:
    method mel_bins (line 254) | def mel_bins(self) -> int:
    method latent_channels (line 258) | def latent_channels(self) -> int:
    method latent_frequency_bins (line 262) | def latent_frequency_bins(self) -> int:
    method latents_per_second (line 266) | def latents_per_second(self) -> float:
    method output_sample_rate (line 270) | def output_sample_rate(self) -> int:
    method memory_required (line 281) | def memory_required(self, input_shape):

FILE: comfy/ldm/lightricks/vae/causal_audio_autoencoder.py
  class StringConvertibleEnum (line 14) | class StringConvertibleEnum(Enum):
    method str_to_enum (line 23) | def str_to_enum(cls, value):
  class AttentionType (line 76) | class AttentionType(StringConvertibleEnum):
  class CausalityAxis (line 84) | class CausalityAxis(StringConvertibleEnum):
  function Normalize (line 93) | def Normalize(in_channels, *, num_groups=32, normtype="group"):
  class CausalConv2d (line 102) | class CausalConv2d(nn.Module):
    method __init__ (line 111) | def __init__(
    method forward (line 157) | def forward(self, x):
  function make_conv2d (line 163) | def make_conv2d(
  class Upsample (line 213) | class Upsample(nn.Module):
    method __init__ (line 214) | def __init__(self, in_channels, with_conv, causality_axis: CausalityAx...
    method forward (line 221) | def forward(self, x):
  class Downsample (line 254) | class Downsample(nn.Module):
    method __init__ (line 261) | def __init__(self, in_channels, with_conv, causality_axis: CausalityAx...
    method forward (line 274) | def forward(self, x):
  class ResnetBlock (line 298) | class ResnetBlock(nn.Module):
    method __init__ (line 299) | def __init__(
    method forward (line 338) | def forward(self, x, temb):
  class AttnBlock (line 361) | class AttnBlock(nn.Module):
    method __init__ (line 362) | def __init__(self, in_channels, norm_type="group"):
    method forward (line 372) | def forward(self, x):
  function make_attn (line 399) | def make_attn(in_channels, attn_type="vanilla", norm_type="group"):
  class Encoder (line 419) | class Encoder(nn.Module):
    method __init__ (line 420) | def __init__(
    method forward (line 532) | def forward(self, x):
  class Decoder (line 588) | class Decoder(nn.Module):
    method __init__ (line 589) | def __init__(
    method _adjust_output_shape (line 691) | def _adjust_output_shape(self, decoded_output, target_shape):
    method get_config (line 735) | def get_config(self):
    method forward (line 746) | def forward(self, latent_features, target_shape=None):
  class processor (line 807) | class processor(nn.Module):
    method __init__ (line 808) | def __init__(self):
    method un_normalize (line 813) | def un_normalize(self, x):
    method normalize (line 816) | def normalize(self, x):
  class CausalAudioAutoencoder (line 820) | class CausalAudioAutoencoder(nn.Module):
    method __init__ (line 821) | def __init__(self, config=None):
    method get_default_config (line 850) | def get_default_config(self):
    method get_config (line 886) | def get_config(self):
    method encode (line 896) | def encode(self, x):
    method decode (line 899) | def decode(self, x, target_shape=None):

FILE: comfy/ldm/lightricks/vae/causal_conv3d.py
  class CausalConv3d (line 9) | class CausalConv3d(nn.Module):
    method __init__ (line 10) | def __init__(
    method forward (line 52) | def forward(self, x, causal: bool = True):
    method weight (line 89) | def weight(self):
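
  `CausalConv3d` pads only toward the past along the time axis, so the output at frame t never depends on frames after t. The idea reduces to one dimension as follows — a toy sketch (replicate padding is one common choice; the module's actual padding mode may differ):

```python
def causal_conv1d(x, kernel):
    """Left-pad by (len(kernel) - 1) so y[t] depends only on x[<= t]."""
    k = len(kernel)
    padded = [x[0]] * (k - 1) + list(x)  # replicate the first frame into the past
    return [sum(kernel[j] * padded[t + j] for j in range(k)) for t in range(len(x))]

y = causal_conv1d([1.0, 2.0, 3.0], [0.5, 0.5])
# moving average of the previous and current frame; y[0] sees only frame 0
# → [1.0, 1.5, 2.5]
```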

FILE: comfy/ldm/lightricks/vae/causal_video_autoencoder.py
  function in_meta_context (line 19) | def in_meta_context():
  function mark_conv3d_ended (line 22) | def mark_conv3d_ended(module):
  function split2 (line 29) | def split2(tensor, split_point, dim=2):
  function add_exchange_cache (line 32) | def add_exchange_cache(dest, cache_in, new_input, dim=2):
  class Encoder (line 43) | class Encoder(nn.Module):
    method __init__ (line 68) | def __init__(
    method _forward_chunk (line 236) | def _forward_chunk(self, sample: torch.FloatTensor) -> Optional[torch....
    method forward_orig (line 286) | def forward_orig(self, sample: torch.FloatTensor, device=None) -> torc...
    method forward (line 314) | def forward(self, *args, **kwargs):
  function get_max_chunk_size (line 332) | def get_max_chunk_size(device: torch.device) -> int:
  class Decoder (line 345) | class Decoder(nn.Module):
    method __init__ (line 370) | def __init__(
    method decode_output_shape (line 535) | def decode_output_shape(self, input_shape):
    method run_up (line 539) | def run_up(self, idx, sample_ref, ended, timestep_shift_scale, scaled_...
    method forward_orig (line 586) | def forward_orig(
    method forward (line 646) | def forward(self, *args, **kwargs):
  class UNetMidBlock3D (line 658) | class UNetMidBlock3D(nn.Module):
    method __init__ (line 682) | def __init__(
    method forward (line 725) | def forward(
  class SpaceToDepthDownsample (line 754) | class SpaceToDepthDownsample(nn.Module):
    method __init__ (line 755) | def __init__(self, dims, in_channels, out_channels, stride, spatial_pa...
    method forward (line 770) | def forward(self, x, causal: bool = True):
  class DepthToSpaceUpsample (line 825) | class DepthToSpaceUpsample(nn.Module):
    method __init__ (line 826) | def __init__(
    method forward (line 853) | def forward(self, x, causal: bool = True, timestep: Optional[torch.Ten...
  class LayerNorm (line 893) | class LayerNorm(nn.Module):
    method __init__ (line 894) | def __init__(self, dim, eps, elementwise_affine=True) -> None:
    method forward (line 898) | def forward(self, x):
  class ResnetBlock3D (line 905) | class ResnetBlock3D(nn.Module):
    method __init__ (line 918) | def __init__(
    method _feed_spatial_noise (line 1019) | def _feed_spatial_noise(
    method forward (line 1033) | def forward(
  function patchify (line 1100) | def patchify(x, patch_size_hw, patch_size_t=1):
  function unpatchify (line 1121) | def unpatchify(x, patch_size_hw, patch_size_t=1):
  class processor (line 1140) | class processor(nn.Module):
    method __init__ (line 1141) | def __init__(self):
    method un_normalize (line 1146) | def un_normalize(self, x):
    method normalize (line 1149) | def normalize(self, x):
  class VideoVAE (line 1152) | class VideoVAE(nn.Module):
    method __init__ (line 1155) | def __init__(self, version=0, config=None):
    method get_default_config (line 1197) | def get_default_config(self, version):
    method encode (line 1297) | def encode(self, x, device=None):
    method decode_output_shape (line 1302) | def decode_output_shape(self, input_shape):
    method decode (line 1305) | def decode(self, x, output_buffer=None):

FILE: comfy/ldm/lightricks/vae/conv_nd_factory.py
  function make_conv_nd (line 9) | def make_conv_nd(
  function make_linear_nd (line 75) | def make_linear_nd(

FILE: comfy/ldm/lightricks/vae/dual_conv3d.py
  class DualConv3d (line 10) | class DualConv3d(nn.Module):
    method __init__ (line 11) | def __init__(
    method reset_parameters (line 86) | def reset_parameters(self):
    method forward (line 97) | def forward(self, x, use_conv3d=False, skip_time_conv=False):
    method forward_with_3d (line 103) | def forward_with_3d(self, x, skip_time_conv):
    method forward_with_2d (line 133) | def forward_with_2d(self, x, skip_time_conv):
    method weight (line 185) | def weight(self):
  function test_dual_conv3d_consistency (line 189) | def test_dual_conv3d_consistency():

FILE: comfy/ldm/lightricks/vae/pixel_norm.py
  class PixelNorm (line 5) | class PixelNorm(nn.Module):
    method __init__ (line 6) | def __init__(self, dim=1, eps=1e-8):
    method forward (line 11) | def forward(self, x):
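
  `PixelNorm` normalizes each feature vector to unit RMS across the channel dimension: x / sqrt(mean(x²) + eps). A scalar-list sketch with the listed default eps=1e-8:

```python
import math

def pixel_norm(x, eps=1e-8):
    # divide by the root-mean-square of the channel vector
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms for v in x]

out = pixel_norm([3.0, 4.0])
# the output vector has RMS ~1 regardless of the input's magnitude
```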

FILE: comfy/ldm/lightricks/vocoders/vocoder.py
  function get_padding (line 13) | def get_padding(kernel_size, dilation=1):
  function _sinc (line 23) | def _sinc(x: torch.Tensor):
  function kaiser_sinc_filter1d (line 31) | def kaiser_sinc_filter1d(cutoff, half_width, kernel_size):
  class LowPassFilter1d (line 56) | class LowPassFilter1d(nn.Module):
    method __init__ (line 57) | def __init__(
    method forward (line 81) | def forward(self, x):
  class UpSample1d (line 88) | class UpSample1d(nn.Module):
    method __init__ (line 89) | def __init__(self, ratio=2, kernel_size=None, persistent=True, window_...
    method forward (line 125) | def forward(self, x):
  class DownSample1d (line 135) | class DownSample1d(nn.Module):
    method __init__ (line 136) | def __init__(self, ratio=2, kernel_size=None):
    method forward (line 149) | def forward(self, x):
  class Activation1d (line 153) | class Activation1d(nn.Module):
    method __init__ (line 154) | def __init__(
    method forward (line 167) | def forward(self, x):
  class Snake (line 179) | class Snake(nn.Module):
    method __init__ (line 180) | def __init__(
    method forward (line 193) | def forward(self, x):
  class SnakeBeta (line 200) | class SnakeBeta(nn.Module):
    method __init__ (line 201) | def __init__(
    method forward (line 220) | def forward(self, x):
  class AMPBlock1 (line 234) | class AMPBlock1(torch.nn.Module):
    method __init__ (line 235) | def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5), activa...
    method forward (line 303) | def forward(self, x):
  class ResBlock1 (line 318) | class ResBlock1(torch.nn.Module):
    method __init__ (line 319) | def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
    method forward (line 379) | def forward(self, x):
  class ResBlock2 (line 389) | class ResBlock2(torch.nn.Module):
    method __init__ (line 390) | def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
    method forward (line 413) | def forward(self, x):
  class Vocoder (line 421) | class Vocoder(torch.nn.Module):
    method __init__ (line 428) | def __init__(self, config=None):
    method get_default_config (line 504) | def get_default_config(self):
    method forward (line 522) | def forward(self, x):
  class _STFTFn (line 563) | class _STFTFn(nn.Module):
    method __init__ (line 572) | def __init__(self, filter_length: int, hop_length: int, win_length: int):
    method forward (line 580) | def forward(self, y: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
  class MelSTFT (line 608) | class MelSTFT(nn.Module):
    method __init__ (line 618) | def __init__(
    method mel_spectrogram (line 634) | def mel_spectrogram(
  class VocoderWithBWE (line 656) | class VocoderWithBWE(torch.nn.Module):
    method __init__ (line 663) | def __init__(self, config):
    method _compute_mel (line 692) | def _compute_mel(self, audio):
    method forward (line 699) | def forward(self, mel_spec):
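
  In the usual windowed-sinc filter construction that `kaiser_sinc_filter1d` names, the `_sinc` helper is the normalized sinc, sin(πx)/(πx), with the removable singularity at 0 defined as 1. A sketch of that standard definition (assumed, since only the signature is shown):

```python
import math

def sinc(x: float) -> float:
    # normalized sinc: sin(pi*x) / (pi*x), with sinc(0) defined as 1
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

sinc(0.0)  # → 1.0
sinc(0.5)  # ≈ 0.6366 (i.e. 2/pi)
```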

FILE: comfy/ldm/lumina/controlnet.py
  class ZImageControlTransformerBlock (line 6) | class ZImageControlTransformerBlock(JointTransformerBlock):
    method __init__ (line 7) | def __init__(
    method forward (line 27) | def forward(self, c, x, **kwargs):
  class ZImage_Control (line 34) | class ZImage_Control(torch.nn.Module):
    method __init__ (line 35) | def __init__(
    method forward (line 127) | def forward(self, cap_feats, control_context, x_freqs_cis, adaln_input):
    method forward_noise_refiner_block (line 141) | def forward_noise_refiner_block(self, layer_id, control_context, x, x_...
    method forward_control_block (line 159) | def forward_control_block(self, layer_id, control_context, x, x_attn_m...

FILE: comfy/ldm/lumina/model.py
  function invert_slices (line 20) | def invert_slices(slices, length):
  function modulate (line 36) | def modulate(x, scale, timestep_zero_index=None):
  function apply_gate (line 51) | def apply_gate(gate, x, timestep_zero_index=None):
  function clamp_fp16 (line 69) | def clamp_fp16(x):
  class JointAttention (line 74) | class JointAttention(nn.Module):
    method __init__ (line 77) | def __init__(
    method forward (line 123) | def forward(
  class FeedForward (line 169) | class FeedForward(nn.Module):
    method __init__ (line 170) | def __init__(
    method _forward_silu_gating (line 219) | def _forward_silu_gating(self, x1, x3):
    method forward (line 222) | def forward(self, x):
  class JointTransformerBlock (line 226) | class JointTransformerBlock(nn.Module):
    method __init__ (line 227) | def __init__(
    method forward (line 299) | def forward(
  class FinalLayer (line 355) | class FinalLayer(nn.Module):
    method __init__ (line 360) | def __init__(self, hidden_size, patch_size, out_channels, z_image_modu...
    method forward (line 393) | def forward(self, x, c, timestep_zero_index=None):
  function p
Condensed preview — 718 files, each showing path, character count, and a content snippet (full structured content: 23,219K chars).
[
  {
    "path": ".ci/update_windows/update.py",
    "chars": 5782,
    "preview": "import pygit2\r\nfrom datetime import datetime\r\nimport sys\r\nimport os\r\nimport shutil\r\nimport filecmp\r\n\r\ndef pull(repo, rem"
  },
  {
    "path": ".ci/update_windows/update_comfyui.bat",
    "chars": 276,
    "preview": "@echo off\r\n..\\python_embeded\\python.exe .\\update.py ..\\ComfyUI\\\r\nif exist update_new.py (\r\n  move /y update_new.py updat"
  },
  {
    "path": ".ci/update_windows/update_comfyui_stable.bat",
    "chars": 294,
    "preview": "@echo off\r\n..\\python_embeded\\python.exe .\\update.py ..\\ComfyUI\\ --stable\r\nif exist update_new.py (\r\n  move /y update_new"
  },
  {
    "path": ".ci/windows_amd_base_files/README_VERY_IMPORTANT.txt",
    "chars": 958,
    "preview": "As of the time of writing this you need this driver for best results:\r\nhttps://www.amd.com/en/resources/support-articles"
  },
  {
    "path": ".ci/windows_amd_base_files/run_amd_gpu.bat",
    "chars": 82,
    "preview": ".\\python_embeded\\python.exe -s ComfyUI\\main.py --windows-standalone-build\r\npause\r\n"
  },
  {
    "path": ".ci/windows_amd_base_files/run_amd_gpu_disable_smart_memory.bat",
    "chars": 105,
    "preview": ".\\python_embeded\\python.exe -s ComfyUI\\main.py --windows-standalone-build --disable-smart-memory\r\npause\r\n"
  },
  {
    "path": ".ci/windows_nightly_base_files/run_nvidia_gpu_fast.bat",
    "chars": 89,
    "preview": ".\\python_embeded\\python.exe -s ComfyUI\\main.py --windows-standalone-build --fast\r\npause\r\n"
  },
  {
    "path": ".ci/windows_nvidia_base_files/README_VERY_IMPORTANT.txt",
    "chars": 1030,
    "preview": "HOW TO RUN:\r\n\r\nif you have a NVIDIA gpu:\r\n\r\nrun_nvidia_gpu.bat\r\n\r\nif you want to enable the fast fp16 accumulation (fast"
  },
  {
    "path": ".ci/windows_nvidia_base_files/advanced/run_nvidia_gpu_disable_api_nodes.bat",
    "chars": 316,
    "preview": "..\\python_embeded\\python.exe -s ..\\ComfyUI\\main.py --windows-standalone-build --disable-api-nodes\r\necho If you see this "
  },
  {
    "path": ".ci/windows_nvidia_base_files/run_cpu.bat",
    "chars": 88,
    "preview": ".\\python_embeded\\python.exe -s ComfyUI\\main.py --cpu --windows-standalone-build\r\npause\r\n"
  },
  {
    "path": ".ci/windows_nvidia_base_files/run_nvidia_gpu.bat",
    "chars": 292,
    "preview": ".\\python_embeded\\python.exe -s ComfyUI\\main.py --windows-standalone-build\r\necho If you see this and ComfyUI did not star"
  },
  {
    "path": ".ci/windows_nvidia_base_files/run_nvidia_gpu_fast_fp16_accumulation.bat",
    "chars": 317,
    "preview": ".\\python_embeded\\python.exe -s ComfyUI\\main.py --windows-standalone-build --fast fp16_accumulation\r\necho If you see this"
  },
  {
    "path": ".coderabbit.yaml",
    "chars": 3875,
    "preview": "# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json\nlanguage: \"en-US\"\nearly_access: false\n"
  },
  {
    "path": ".gitattributes",
    "chars": 112,
    "preview": "/web/assets/** linguist-generated\n/web/** linguist-vendored\ncomfy_api_nodes/apis/__init__.py linguist-generated\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug-report.yml",
    "chars": 3105,
    "preview": "name: Bug Report\ndescription: \"Something is broken inside of ComfyUI. (Do not use this if you're just having issues and "
  },
  {
    "path": ".github/ISSUE_TEMPLATE/config.yml",
    "chars": 689,
    "preview": "blank_issues_enabled: true\ncontact_links:\n  - name: ComfyUI Frontend Issues\n    url: https://github.com/Comfy-Org/ComfyU"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature-request.yml",
    "chars": 1584,
    "preview": "name: Feature Request\ndescription: \"You have an idea for something new you would like to see added to ComfyUI's core.\"\nl"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/user-support.yml",
    "chars": 1988,
    "preview": "name: User Support\ndescription: \"Use this if you need help with something, or you're experiencing an issue.\"\nlabels: [ \""
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE/api-node.md",
    "chars": 396,
    "preview": "<!-- API_NODE_PR_CHECKLIST: do not remove -->\n\n## API Node PR Checklist\n\n### Scope\n- [ ] **Is API Node Change**\n\n### Pri"
  },
  {
    "path": ".github/scripts/check-ai-co-authors.sh",
    "chars": 3200,
    "preview": "#!/usr/bin/env bash\n# Checks pull request commits for AI agent Co-authored-by trailers.\n# Exits non-zero when any are fo"
  },
  {
    "path": ".github/workflows/api-node-template.yml",
    "chars": 2244,
    "preview": "name: Append API Node PR template\n\non:\n  pull_request_target:\n    types: [opened, reopened, synchronize, ready_for_revie"
  },
  {
    "path": ".github/workflows/check-ai-co-authors.yml",
    "chars": 485,
    "preview": "name: Check AI Co-Authors\n\non:\n  pull_request:\n    branches: ['*']\n\njobs:\n  check-ai-co-authors:\n    name: Check for AI "
  },
  {
    "path": ".github/workflows/check-line-endings.yml",
    "chars": 1235,
    "preview": "name: Check for Windows Line Endings\n\non:\n  pull_request:\n    branches: ['*'] # Trigger on all pull requests to any bran"
  },
  {
    "path": ".github/workflows/pullrequest-ci-run.yml",
    "chars": 1937,
    "preview": "# This is the GitHub Workflow that drives full-GPU-enabled tests of pull requests to ComfyUI, when the 'Run-CI-Test' lab"
  },
  {
    "path": ".github/workflows/release-stable-all.yml",
    "chars": 1870,
    "preview": "name: \"Release Stable All Portable Versions\"\n\non:\n  workflow_dispatch:\n    inputs:\n      git_tag:\n        description: '"
  },
  {
    "path": ".github/workflows/release-webhook.yml",
    "chars": 6601,
    "preview": "name: Release Webhook\n\non:\n  release:\n    types: [published]\n\njobs:\n  send-webhook:\n    runs-on: ubuntu-latest\n    env:\n"
  },
  {
    "path": ".github/workflows/ruff.yml",
    "chars": 973,
    "preview": "name: Python Linting\n\non: [push, pull_request]\n\njobs:\n  ruff:\n    name: Run Ruff\n    runs-on: ubuntu-latest\n\n    steps:\n"
  },
  {
    "path": ".github/workflows/stable-release.yml",
    "chars": 5358,
    "preview": "\nname: \"Release Stable Version\"\n\non:\n  workflow_call:\n    inputs:\n      git_tag:\n        description: 'Git tag'\n        "
  },
  {
    "path": ".github/workflows/stale-issues.yml",
    "chars": 691,
    "preview": "name: 'Close stale issues'\non:\n  schedule:\n    # Run daily at 430 am PT\n    - cron: '30 11 * * *'\npermissions:\n  issues:"
  },
  {
    "path": ".github/workflows/test-build.yml",
    "chars": 765,
    "preview": "name: Build package\n\n#\n# This workflow is a test of the python package build.\n# Install Python dependencies across diffe"
  },
  {
    "path": ".github/workflows/test-ci.yml",
    "chars": 3101,
    "preview": "# This is the GitHub Workflow that drives automatic full-GPU-enabled tests of all new commits to the master branch of Co"
  },
  {
    "path": ".github/workflows/test-execution.yml",
    "chars": 839,
    "preview": "name: Execution Tests\n\non:\n  push:\n    branches: [ main, master, release/** ]\n  pull_request:\n    branches: [ main, mast"
  },
  {
    "path": ".github/workflows/test-launch.yml",
    "chars": 1788,
    "preview": "name: Test server launches without errors\n\non:\n  push:\n    branches: [ main, master, release/** ]\n  pull_request:\n    br"
  },
  {
    "path": ".github/workflows/test-unit.yml",
    "chars": 798,
    "preview": "name: Unit Tests\n\non:\n  push:\n    branches: [ main, master, release/** ]\n  pull_request:\n    branches: [ main, master, r"
  },
  {
    "path": ".github/workflows/update-api-stubs.yml",
    "chars": 1833,
    "preview": "name: Generate Pydantic Stubs from api.comfy.org\n\non:\n  schedule:\n    - cron: '0 0 * * 1'\n  workflow_dispatch:\n\njobs:\n  "
  },
  {
    "path": ".github/workflows/update-ci-container.yml",
    "chars": 2113,
    "preview": "name: \"CI: Update CI Container\"\n\non:\n  release:\n    types: [published]\n  workflow_dispatch:\n    inputs:\n      version:\n "
  },
  {
    "path": ".github/workflows/update-version.yml",
    "chars": 1859,
    "preview": "name: Update Version File\n\non:\n  pull_request:\n    paths:\n      - \"pyproject.toml\"\n    branches:\n      - master\n      - "
  },
  {
    "path": ".github/workflows/windows_release_dependencies.yml",
    "chars": 2530,
    "preview": "name: \"Windows Release dependencies\"\n\non:\n  workflow_dispatch:\n    inputs:\n      xformers:\n        description: 'xformer"
  },
  {
    "path": ".github/workflows/windows_release_dependencies_manual.yml",
    "chars": 2295,
    "preview": "name: \"Windows Release dependencies Manual\"\n\non:\n  workflow_dispatch:\n    inputs:\n      torch_dependencies:\n        desc"
  },
  {
    "path": ".github/workflows/windows_release_nightly_pytorch.yml",
    "chars": 3737,
    "preview": "name: \"Windows Release Nightly pytorch\"\n\non:\n  workflow_dispatch:\n    inputs:\n      cu:\n        description: 'cuda versi"
  },
  {
    "path": ".github/workflows/windows_release_package.yml",
    "chars": 3666,
    "preview": "name: \"Windows Release packaging\"\n\non:\n  workflow_dispatch:\n    inputs:\n      cu:\n        description: 'cuda version'\n  "
  },
  {
    "path": ".gitignore",
    "chars": 383,
    "preview": "__pycache__/\n*.py[cod]\n/output/\n/input/\n!/input/example.png\n/models/\n/temp/\n/custom_nodes/\n!custom_nodes/example_node.py"
  },
  {
    "path": "CODEOWNERS",
    "chars": 47,
    "preview": "# Admins\n* @comfyanonymous @kosinkadink @guill\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "chars": 1876,
    "preview": "# Contributing to ComfyUI\n\nWelcome, and thank you for your interest in contributing to ComfyUI!\n\nThere are several ways "
  },
  {
    "path": "LICENSE",
    "chars": 35149,
    "preview": "                    GNU GENERAL PUBLIC LICENSE\n                       Version 3, 29 June 2007\n\n Copyright (C) 2007 Free "
  },
  {
    "path": "QUANTIZATION.md",
    "chars": 7690,
    "preview": "# The Comfy guide to Quantization\n\n\n## How does quantization work?\n\nQuantization aims to map a high-precision value x_f "
  },
  {
    "path": "README.md",
    "chars": 28839,
    "preview": "<div align=\"center\">\n\n# ComfyUI\n**The most powerful and modular visual AI engine and application.**\n\n\n[![Website][websit"
  },
  {
    "path": "alembic.ini",
    "chars": 3230,
    "preview": "# A generic, single database configuration.\n\n[alembic]\n# path to migration scripts\n# Use forward slashes (/) also on win"
  },
  {
    "path": "alembic_db/README.md",
    "chars": 133,
    "preview": "## Generate new revision\n\n1. Update models in `/app/database/models.py`\n2. Run `alembic revision --autogenerate -m \"{you"
  },
  {
    "path": "alembic_db/env.py",
    "chars": 1914,
    "preview": "from sqlalchemy import engine_from_config\nfrom sqlalchemy import pool\n\nfrom alembic import context\n\n# this is the Alembi"
  },
  {
    "path": "alembic_db/script.py.mako",
    "chars": 689,
    "preview": "\"\"\"${message}\n\nRevision ID: ${up_revision}\nRevises: ${down_revision | comma,n}\nCreate Date: ${create_date}\n\n\"\"\"\nfrom typ"
  },
  {
    "path": "alembic_db/versions/0001_assets.py",
    "chars": 8823,
    "preview": "\"\"\"\nInitial assets schema\nRevision ID: 0001_assets\nRevises: None\nCreate Date: 2025-12-10 00:00:00\n\"\"\"\n\nfrom alembic impo"
  },
  {
    "path": "alembic_db/versions/0002_merge_to_asset_references.py",
    "chars": 13024,
    "preview": "\"\"\"\nMerge AssetInfo and AssetCacheState into unified asset_references table.\n\nThis migration drops old tables and create"
  },
  {
    "path": "alembic_db/versions/0003_add_metadata_job_id.py",
    "chars": 3450,
    "preview": "\"\"\"\nAdd system_metadata and job_id columns to asset_references.\nChange preview_id FK from assets.id to asset_references."
  },
  {
    "path": "api_server/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "api_server/routes/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "api_server/routes/internal/README.md",
    "chars": 220,
    "preview": "# ComfyUI Internal Routes\n\nAll routes under the `/internal` path are designated for **internal use by ComfyUI only**. Th"
  },
  {
    "path": "api_server/routes/internal/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "api_server/routes/internal/internal_routes.py",
    "chars": 3048,
    "preview": "from aiohttp import web\nfrom typing import Optional\nfrom folder_paths import folder_names_and_paths, get_directory_by_ty"
  },
  {
    "path": "api_server/services/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "api_server/services/terminal_service.py",
    "chars": 1709,
    "preview": "from app.logger import on_flush\nimport os\nimport shutil\n\n\nclass TerminalService:\n    def __init__(self, server):\n       "
  },
  {
    "path": "api_server/utils/file_operations.py",
    "chars": 1367,
    "preview": "import os\nfrom typing import List, Union, TypedDict, Literal\nfrom typing_extensions import TypeGuard\nclass FileInfo(Type"
  },
  {
    "path": "app/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "app/app_settings.py",
    "chars": 2255,
    "preview": "import os\nimport json\nfrom aiohttp import web\nimport logging\n\n\nclass AppSettings():\n    def __init__(self, user_manager)"
  },
  {
    "path": "app/assets/api/routes.py",
    "chars": 28115,
    "preview": "import asyncio\nimport functools\nimport json\nimport logging\nimport os\nimport urllib.parse\nimport uuid\nfrom typing import "
  },
  {
    "path": "app/assets/api/schemas_in.py",
    "chars": 10990,
    "preview": "import json\nfrom dataclasses import dataclass\nfrom typing import Any, Literal\n\nfrom app.assets.helpers import validate_b"
  },
  {
    "path": "app/assets/api/schemas_out.py",
    "chars": 2067,
    "preview": "from datetime import datetime\nfrom typing import Any\n\nfrom pydantic import BaseModel, ConfigDict, Field, field_serialize"
  },
  {
    "path": "app/assets/api/upload.py",
    "chars": 6338,
    "preview": "import logging\nimport os\nimport uuid\nfrom typing import Callable\n\nfrom aiohttp import web\n\nimport folder_paths\nfrom app."
  },
  {
    "path": "app/assets/database/models.py",
    "chars": 8998,
    "preview": "from __future__ import annotations\n\nimport uuid\nfrom datetime import datetime\nfrom typing import Any\n\nfrom sqlalchemy im"
  },
  {
    "path": "app/assets/database/queries/__init__.py",
    "chars": 4012,
    "preview": "from app.assets.database.queries.asset import (\n    asset_exists_by_hash,\n    bulk_insert_assets,\n    get_asset_by_hash,"
  },
  {
    "path": "app/assets/database/queries/asset.py",
    "chars": 3818,
    "preview": "import sqlalchemy as sa\nfrom sqlalchemy import select\nfrom sqlalchemy.dialects import sqlite\nfrom sqlalchemy.orm import "
  },
  {
    "path": "app/assets/database/queries/asset_reference.py",
    "chars": 30213,
    "preview": "\"\"\"Query functions for the unified AssetReference table.\n\nThis module replaces the separate asset_info.py and cache_stat"
  },
  {
    "path": "app/assets/database/queries/common.py",
    "chars": 4299,
    "preview": "\"\"\"Shared utilities for database query modules.\"\"\"\n\nimport os\nfrom decimal import Decimal\nfrom typing import Iterable, S"
  },
  {
    "path": "app/assets/database/queries/tags.py",
    "chars": 13042,
    "preview": "from dataclasses import dataclass\nfrom typing import Iterable, Sequence\n\nimport sqlalchemy as sa\nfrom sqlalchemy import "
  },
  {
    "path": "app/assets/helpers.py",
    "chars": 2032,
    "preview": "import os\nfrom datetime import datetime, timezone\nfrom typing import Sequence\n\n\ndef select_best_live_path(states: Sequen"
  },
  {
    "path": "app/assets/scanner.py",
    "chars": 19665,
    "preview": "import logging\nimport os\nfrom pathlib import Path\nfrom typing import Callable, Literal, TypedDict\n\nimport folder_paths\nf"
  },
  {
    "path": "app/assets/seeder.py",
    "chars": 27202,
    "preview": "\"\"\"Background asset seeder with thread management and cancellation support.\"\"\"\n\nimport logging\nimport os\nimport threadin"
  },
  {
    "path": "app/assets/services/__init__.py",
    "chars": 1994,
    "preview": "from app.assets.services.asset_management import (\n    asset_exists,\n    delete_asset_reference,\n    get_asset_by_hash,\n"
  },
  {
    "path": "app/assets/services/asset_management.py",
    "chars": 11372,
    "preview": "import contextlib\nimport mimetypes\nimport os\nfrom typing import Sequence\n\n\nfrom app.assets.database.models import Asset\n"
  },
  {
    "path": "app/assets/services/bulk_ingest.py",
    "chars": 8369,
    "preview": "from __future__ import annotations\n\nimport os\nimport uuid\nfrom dataclasses import dataclass\nfrom datetime import datetim"
  },
  {
    "path": "app/assets/services/file_utils.py",
    "chars": 2267,
    "preview": "import os\n\n\ndef get_mtime_ns(stat_result: os.stat_result) -> int:\n    \"\"\"Extract mtime in nanoseconds from a stat result"
  },
  {
    "path": "app/assets/services/hashing.py",
    "chars": 3076,
    "preview": "import io\nimport os\nfrom contextlib import contextmanager\nfrom dataclasses import dataclass\nfrom typing import IO, Any, "
  },
  {
    "path": "app/assets/services/ingest.py",
    "chars": 14205,
    "preview": "import contextlib\nimport logging\nimport mimetypes\nimport os\nfrom typing import Any, Sequence\n\nfrom sqlalchemy.orm import"
  },
  {
    "path": "app/assets/services/metadata_extract.py",
    "chars": 10987,
    "preview": "\"\"\"Metadata extraction for asset scanning.\n\nTier 1: Filesystem metadata (zero parsing)\nTier 2: Safetensors header metada"
  },
  {
    "path": "app/assets/services/path_utils.py",
    "chars": 6188,
    "preview": "import os\nfrom pathlib import Path\nfrom typing import Literal\n\nimport folder_paths\nfrom app.assets.helpers import normal"
  },
  {
    "path": "app/assets/services/schemas.py",
    "chars": 2349,
    "preview": "from dataclasses import dataclass\nfrom datetime import datetime\nfrom typing import Any, NamedTuple\n\nfrom app.assets.data"
  },
  {
    "path": "app/assets/services/tagging.py",
    "chars": 2598,
    "preview": "from typing import Sequence\n\nfrom app.assets.database.queries import (\n    AddTagsResult,\n    RemoveTagsResult,\n    add_"
  },
  {
    "path": "app/custom_node_manager.py",
    "chars": 5330,
    "preview": "from __future__ import annotations\n\nimport os\nimport folder_paths\nimport glob\nfrom aiohttp import web\nimport json\nimport"
  },
  {
    "path": "app/database/db.py",
    "chars": 5839,
    "preview": "import logging\nimport os\nimport shutil\nfrom app.logger import log_startup_warning\nfrom utils.install_util import get_mis"
  },
  {
    "path": "app/database/models.py",
    "chars": 932,
    "preview": "from typing import Any\nfrom datetime import datetime\nfrom sqlalchemy import MetaData\nfrom sqlalchemy.orm import Declarat"
  },
  {
    "path": "app/frontend_management.py",
    "chars": 13886,
    "preview": "from __future__ import annotations\nimport argparse\nimport logging\nimport os\nimport re\nimport sys\nimport tempfile\nimport "
  },
  {
    "path": "app/logger.py",
    "chars": 2849,
    "preview": "from collections import deque\nfrom datetime import datetime\nimport io\nimport logging\nimport sys\nimport threading\n\nlogs ="
  },
  {
    "path": "app/model_manager.py",
    "chars": 7887,
    "preview": "from __future__ import annotations\n\nimport os\nimport base64\nimport json\nimport time\nimport logging\nimport folder_paths\ni"
  },
  {
    "path": "app/node_replace_manager.py",
    "chars": 4997,
    "preview": "from __future__ import annotations\n\nfrom aiohttp import web\n\nfrom typing import TYPE_CHECKING, TypedDict\nif TYPE_CHECKIN"
  },
  {
    "path": "app/subgraph_manager.py",
    "chars": 5245,
    "preview": "from __future__ import annotations\n\nfrom typing import TypedDict\nimport os\nimport folder_paths\nimport glob\nfrom aiohttp "
  },
  {
    "path": "app/user_manager.py",
    "chars": 19001,
    "preview": "from __future__ import annotations\nimport json\nimport os\nimport re\nimport uuid\nimport glob\nimport shutil\nimport logging\n"
  },
  {
    "path": "blueprints/.glsl/Brightness_and_Contrast_1.frag",
    "chars": 981,
    "preview": "#version 300 es\nprecision highp float;\n\nuniform sampler2D u_image0;\nuniform float u_float0; // Brightness slider -100..1"
  },
  {
    "path": "blueprints/.glsl/Chromatic_Aberration_16.frag",
    "chars": 2158,
    "preview": "#version 300 es\nprecision highp float;\n\nuniform sampler2D u_image0;\nuniform vec2 u_resolution;\nuniform int u_int0;      "
  },
  {
    "path": "blueprints/.glsl/Color_Adjustment_15.frag",
    "chars": 2730,
    "preview": "#version 300 es\nprecision highp float;\n\nuniform sampler2D u_image0;\nuniform float u_float0; // temperature (-100 to 100)"
  },
  {
    "path": "blueprints/.glsl/Edge-Preserving_Blur_128.frag",
    "chars": 2744,
    "preview": "#version 300 es\nprecision highp float;\n\nuniform sampler2D u_image0;\nuniform float u_float0;   // Blur radius (0–20, defa"
  },
  {
    "path": "blueprints/.glsl/Film_Grain_15.frag",
    "chars": 3837,
    "preview": "#version 300 es\nprecision highp float;\n\nuniform sampler2D u_image0;\nuniform vec2 u_resolution;\nuniform float u_float0; /"
  },
  {
    "path": "blueprints/.glsl/Glow_30.frag",
    "chars": 3504,
    "preview": "#version 300 es\nprecision mediump float;\n\nuniform sampler2D u_image0;\nuniform vec2 u_resolution;\nuniform int u_int0;    "
  },
  {
    "path": "blueprints/.glsl/Hue_and_Saturation_1.frag",
    "chars": 6311,
    "preview": "#version 300 es\nprecision highp float;\n\nuniform sampler2D u_image0;\nuniform int u_int0;      // Mode: 0=Master, 1=Reds, "
  },
  {
    "path": "blueprints/.glsl/Image_Blur_1.frag",
    "chars": 3017,
    "preview": "#version 300 es\n#pragma passes 2\nprecision highp float;\n\n// Blur type constants\nconst int BLUR_GAUSSIAN = 0;\nconst int B"
  },
  {
    "path": "blueprints/.glsl/Image_Channels_23.frag",
    "chars": 618,
    "preview": "#version 300 es\nprecision highp float;\n\nuniform sampler2D u_image0;\n\nin vec2 v_texCoord;\nlayout(location = 0) out vec4 f"
  },
  {
    "path": "blueprints/.glsl/Image_Levels_1.frag",
    "chars": 2335,
    "preview": "#version 300 es\nprecision highp float;\n\n// Levels Adjustment\n// u_int0:   channel      (0=RGB, 1=R, 2=G, 3=B)         de"
  },
  {
    "path": "blueprints/.glsl/README.md",
    "chars": 742,
    "preview": "# GLSL Shader Sources\n\nThis folder contains the GLSL fragment shaders extracted from blueprint JSON files for easier edi"
  },
  {
    "path": "blueprints/.glsl/Sharpen_23.frag",
    "chars": 920,
    "preview": "#version 300 es\nprecision highp float;\n\nuniform sampler2D u_image0;\nuniform vec2 u_resolution;\nuniform float u_float0;  "
  },
  {
    "path": "blueprints/.glsl/Unsharp_Mask_26.frag",
    "chars": 1879,
    "preview": "#version 300 es\nprecision highp float;\n\nuniform sampler2D u_image0;\nuniform vec2 u_resolution;\nuniform float u_float0;  "
  },
  {
    "path": "blueprints/.glsl/update_blueprints.py",
    "chars": 5197,
    "preview": "#!/usr/bin/env python3\n\"\"\"\nShader Blueprint Updater\n\nSyncs GLSL shader files between this folder and blueprint JSON file"
  },
  {
    "path": "blueprints/Brightness and Contrast.json",
    "chars": 5784,
    "preview": "{\"revision\": 0, \"last_node_id\": 140, \"last_link_id\": 0, \"nodes\": [{\"id\": 140, \"type\": \"916dff42-6166-4d45-b028-04eaf69fb"
  },
  {
    "path": "blueprints/Canny to Image (Z-Image-Turbo).json",
    "chars": 19621,
    "preview": "{\"id\": \"e046dd74-e2a7-4f31-a75b-5e11a8c72d4e\", \"revision\": 0, \"last_node_id\": 18, \"last_link_id\": 32, \"nodes\": [{\"id\": 1"
  },
  {
    "path": "blueprints/Canny to Video (LTX 2.0).json",
    "chars": 45351,
    "preview": "{\"id\": \"02f6166f-32f8-4673-b861-76be1464cba5\", \"revision\": 0, \"last_node_id\": 155, \"last_link_id\": 391, \"nodes\": [{\"id\":"
  },
  {
    "path": "blueprints/Chromatic Aberration.json",
    "chars": 6647,
    "preview": "{\"revision\": 0, \"last_node_id\": 19, \"last_link_id\": 0, \"nodes\": [{\"id\": 19, \"type\": \"2c5ef154-2bde-496d-bc8b-9dcf42f2913"
  },
  {
    "path": "blueprints/Color Adjustment.json",
    "chars": 9232,
    "preview": "{\"revision\": 0, \"last_node_id\": 14, \"last_link_id\": 0, \"nodes\": [{\"id\": 14, \"type\": \"36677b92-5dd8-47a5-9380-4da982c1894"
  },
  {
    "path": "blueprints/Depth to Image (Z-Image-Turbo).json",
    "chars": 31219,
    "preview": "{\"id\": \"e046dd74-e2a7-4f31-a75b-5e11a8c72d4e\", \"revision\": 0, \"last_node_id\": 76, \"last_link_id\": 259, \"nodes\": [{\"id\": "
  },
  {
    "path": "blueprints/Depth to Video (ltx 2.0).json",
    "chars": 67090,
    "preview": "{\"id\": \"ec176c82-4db5-4ab9-b5a0-8aa8e5684a81\", \"revision\": 0, \"last_node_id\": 191, \"last_link_id\": 433, \"nodes\": [{\"id\":"
  },
  {
    "path": "blueprints/Edge-Preserving Blur.json",
    "chars": 8129,
    "preview": "{\"revision\": 0, \"last_node_id\": 136, \"last_link_id\": 0, \"nodes\": [{\"id\": 136, \"type\": \"c6dc0f88-416b-4db1-bed1-442d793de"
  },
  {
    "path": "blueprints/Film Grain.json",
    "chars": 10742,
    "preview": "{\"revision\": 0, \"last_node_id\": 22, \"last_link_id\": 0, \"nodes\": [{\"id\": 22, \"type\": \"3324cf54-bcff-405f-a4bf-c5122c72fe5"
  },
  {
    "path": "blueprints/Glow.json",
    "chars": 10127,
    "preview": "{\"revision\": 0, \"last_node_id\": 37, \"last_link_id\": 0, \"nodes\": [{\"id\": 37, \"type\": \"0a99445a-aaf8-4a7f-aec3-d7d710ae149"
  },
  {
    "path": "blueprints/Hue and Saturation.json",
    "chars": 14476,
    "preview": "{\"revision\": 0, \"last_node_id\": 11, \"last_link_id\": 0, \"nodes\": [{\"id\": 11, \"type\": \"c64f83e9-aa5d-4031-89f1-0704e39299f"
  },
  {
    "path": "blueprints/Image Blur.json",
    "chars": 7506,
    "preview": "{\"revision\": 0, \"last_node_id\": 8, \"last_link_id\": 0, \"nodes\": [{\"id\": 8, \"type\": \"198632a3-ee76-4aab-9ce7-a69c624eaff9\""
  },
  {
    "path": "blueprints/Image Captioning (gemini).json",
    "chars": 5341,
    "preview": "{\"revision\": 0, \"last_node_id\": 231, \"last_link_id\": 0, \"nodes\": [{\"id\": 231, \"type\": \"e3e78497-720e-45a2-b4fb-c7bfdb80d"
  },
  {
    "path": "blueprints/Image Channels.json",
    "chars": 4413,
    "preview": "{\"revision\": 0, \"last_node_id\": 29, \"last_link_id\": 0, \"nodes\": [{\"id\": 29, \"type\": \"4c9d6ea4-b912-40e5-8766-6793a9758c5"
  },
  {
    "path": "blueprints/Image Edit (Flux.2 Klein 4B).json",
    "chars": 22997,
    "preview": "{\"id\": \"6686cb78-8003-4289-b969-929755e9a84d\", \"revision\": 0, \"last_node_id\": 81, \"last_link_id\": 179, \"nodes\": [{\"id\": "
  },
  {
    "path": "blueprints/Image Edit (Qwen 2511).json",
    "chars": 19963,
    "preview": "{\"id\": \"d84b7d1a-a73f-4e31-bd16-983ac0cf5f1b\", \"revision\": 0, \"last_node_id\": 17, \"last_link_id\": 32, \"nodes\": [{\"id\": 1"
  },
  {
    "path": "blueprints/Image Inpainting (Qwen-image).json",
    "chars": 23094,
    "preview": "{\"id\": \"84318cde-a839-41d4-8632-df6d7c50ffc5\", \"revision\": 0, \"last_node_id\": 256, \"last_link_id\": 403, \"nodes\": [{\"id\":"
  },
  {
    "path": "blueprints/Image Levels.json",
    "chars": 10250,
    "preview": "{\"revision\": 0, \"last_node_id\": 139, \"last_link_id\": 0, \"nodes\": [{\"id\": 139, \"type\": \"75bf8a72-aad8-4f3e-83ee-380e70248"
  },
  {
    "path": "blueprints/Image Outpainting (Qwen-Image).json",
    "chars": 31642,
    "preview": "{\"id\": \"8f79c27f-bec4-412e-9b82-7c5b3b778ecf\", \"revision\": 0, \"last_node_id\": 255, \"last_link_id\": 401, \"nodes\": [{\"id\":"
  },
  {
    "path": "blueprints/Image Upscale(Z-image-Turbo).json",
    "chars": 16400,
    "preview": "{\"id\": \"bf8108f3-d857-46c9-aef5-0e8ad2a64bf5\", \"revision\": 0, \"last_node_id\": 95, \"last_link_id\": 115, \"nodes\": [{\"id\": "
  },
  {
    "path": "blueprints/Image to Depth Map (Lotus).json",
    "chars": 11713,
    "preview": "{\"id\": \"6af0a6c1-0161-4528-8685-65776e838d44\", \"revision\": 0, \"last_node_id\": 75, \"last_link_id\": 245, \"nodes\": [{\"id\": "
  },
  {
    "path": "blueprints/Image to Layers(Qwen-Image Layered).json",
    "chars": 17502,
    "preview": "{\"id\": \"1a761372-7c82-4016-b9bf-fa285967e1e9\", \"revision\": 0, \"last_node_id\": 83, \"last_link_id\": 0, \"nodes\": [{\"id\": 83"
  },
  {
    "path": "blueprints/Image to Model (Hunyuan3d 2.1).json",
    "chars": 9216,
    "preview": "{\"id\": \"8fe311ec-2147-47a8-b618-7bd6fb6d4f9d\", \"revision\": 0, \"last_node_id\": 23, \"last_link_id\": 24, \"nodes\": [{\"id\": 1"
  },
  {
    "path": "blueprints/Image to Video (Wan 2.2).json",
    "chars": 28973,
    "preview": "{\"id\": \"ec7da562-7e21-4dac-a0d2-f4441e1efd3b\", \"revision\": 0, \"last_node_id\": 119, \"last_link_id\": 231, \"nodes\": [{\"id\":"
  },
  {
    "path": "blueprints/Pose to Image (Z-Image-Turbo).json",
    "chars": 16594,
    "preview": "{\"id\": \"e046dd74-e2a7-4f31-a75b-5e11a8c72d4e\", \"revision\": 0, \"last_node_id\": 26, \"last_link_id\": 46, \"nodes\": [{\"id\": 1"
  },
  {
    "path": "blueprints/Pose to Video (LTX 2.0).json",
    "chars": 48922,
    "preview": "{\"id\": \"01cd475b-52df-43bf-aafa-484a5976d2d2\", \"revision\": 0, \"last_node_id\": 160, \"last_link_id\": 410, \"nodes\": [{\"id\":"
  },
  {
    "path": "blueprints/Prompt Enhance.json",
    "chars": 3316,
    "preview": "{\"revision\": 0, \"last_node_id\": 15, \"last_link_id\": 0, \"nodes\": [{\"id\": 15, \"type\": \"24d8bbfd-39d4-4774-bff0-3de40cc7a47"
  },
  {
    "path": "blueprints/Sharpen.json",
    "chars": 4557,
    "preview": "{\"revision\": 0, \"last_node_id\": 25, \"last_link_id\": 0, \"nodes\": [{\"id\": 25, \"type\": \"621ba4e2-22a8-482d-a369-023753198b7"
  },
  {
    "path": "blueprints/Text to Audio (ACE-Step 1.5).json",
    "chars": 18513,
    "preview": "{\"id\": \"67979fed-a490-450a-83f4-c7c0105d450e\", \"revision\": 0, \"last_node_id\": 110, \"last_link_id\": 288, \"nodes\": [{\"id\":"
  },
  {
    "path": "blueprints/Text to Image (Z-Image-Turbo).json",
    "chars": 13356,
    "preview": "{\"id\": \"1c3eaa76-5cfa-4dc7-8571-97a570324e01\", \"revision\": 0, \"last_node_id\": 34, \"last_link_id\": 40, \"nodes\": [{\"id\": 5"
  },
  {
    "path": "blueprints/Text to Video (Wan 2.2).json",
    "chars": 22547,
    "preview": "{\"id\": \"ec7da562-7e21-4dac-a0d2-f4441e1efd3b\", \"revision\": 0, \"last_node_id\": 116, \"last_link_id\": 188, \"nodes\": [{\"id\":"
  },
  {
    "path": "blueprints/Unsharp Mask.json",
    "chars": 6954,
    "preview": "{\"revision\": 0, \"last_node_id\": 30, \"last_link_id\": 0, \"nodes\": [{\"id\": 30, \"type\": \"d99ba3f5-8a56-4365-8e45-3f3ea7c572a"
  },
  {
    "path": "blueprints/Video Captioning (Gemini).json",
    "chars": 7237,
    "preview": "{\"revision\": 0, \"last_node_id\": 233, \"last_link_id\": 0, \"nodes\": [{\"id\": 233, \"type\": \"dcf32045-0ee4-4efc-9aca-9f26f3a15"
  },
  {
    "path": "blueprints/Video Inpaint(Wan2.1 VACE).json",
    "chars": 30007,
    "preview": "{\"id\": \"2f429c60-2e03-4117-908b-31e1fab04bba\", \"revision\": 0, \"last_node_id\": 229, \"last_link_id\": 366, \"nodes\": [{\"id\":"
  },
  {
    "path": "blueprints/Video Stitch.json",
    "chars": 6738,
    "preview": "{\"revision\": 0, \"last_node_id\": 84, \"last_link_id\": 0, \"nodes\": [{\"id\": 84, \"type\": \"8e8aa94a-647e-436d-8440-8ee4691864d"
  },
  {
    "path": "blueprints/Video Upscale(GAN x4).json",
    "chars": 4791,
    "preview": "{\"revision\": 0, \"last_node_id\": 13, \"last_link_id\": 0, \"nodes\": [{\"id\": 13, \"type\": \"cf95b747-3e17-46cb-8097-cac60ff9b2e"
  },
  {
    "path": "blueprints/put_blueprints_here",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "comfy/audio_encoders/audio_encoders.py",
    "chars": 3489,
    "preview": "from .wav2vec2 import Wav2Vec2Model\nfrom .whisper import WhisperLargeV3\nimport comfy.model_management\nimport comfy.ops\ni"
  },
  {
    "path": "comfy/audio_encoders/wav2vec2.py",
    "chars": 10639,
    "preview": "import torch\nimport torch.nn as nn\nfrom comfy.ldm.modules.attention import optimized_attention_masked\n\n\nclass LayerNormC"
  },
  {
    "path": "comfy/audio_encoders/whisper.py",
    "chars": 5953,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchaudio\nfrom typing import Optional\nfrom co"
  },
  {
    "path": "comfy/cldm/cldm.py",
    "chars": 19828,
    "preview": "#taken from: https://github.com/lllyasviel/ControlNet\n#and modified\n\nimport torch\nimport torch.nn as nn\n\nfrom ..ldm.modu"
  },
  {
    "path": "comfy/cldm/control_types.py",
    "chars": 207,
    "preview": "UNION_CONTROLNET_TYPES = {\n    \"openpose\": 0,\n    \"depth\": 1,\n    \"hed/pidi/scribble/ted\": 2,\n    \"canny/lineart/anime_l"
  },
  {
    "path": "comfy/cldm/dit_embedder.py",
    "chars": 4204,
    "preview": "import math\nfrom typing import List, Optional, Tuple\n\nimport torch\nimport torch.nn as nn\nfrom torch import Tensor\n\nfrom "
  },
  {
    "path": "comfy/cldm/mmdit.py",
    "chars": 2555,
    "preview": "import torch\nfrom typing import Optional\nimport comfy.ldm.modules.diffusionmodules.mmdit\n\nclass ControlNet(comfy.ldm.mod"
  },
  {
    "path": "comfy/cli_args.py",
    "chars": 17661,
    "preview": "import argparse\nimport enum\nimport os\nimport comfy.options\n\n\nclass EnumAction(argparse.Action):\n    \"\"\"\n    Argparse act"
  },
  {
    "path": "comfy/clip_config_bigg.json",
    "chars": 524,
    "preview": "{\n  \"architectures\": [\n    \"CLIPTextModel\"\n  ],\n  \"attention_dropout\": 0.0,\n  \"bos_token_id\": 0,\n  \"dropout\": 0.0,\n  \"eo"
  },
  {
    "path": "comfy/clip_model.py",
    "chars": 16257,
    "preview": "import torch\nfrom comfy.ldm.modules.attention import optimized_attention_for_device\nimport comfy.ops\nimport math\n\ndef cl"
  },
  {
    "path": "comfy/clip_vision.py",
    "chars": 7665,
    "preview": "from .utils import load_torch_file, transformers_convert, state_dict_prefix_replace\nimport os\nimport json\nimport logging"
  },
  {
    "path": "comfy/clip_vision_config_g.json",
    "chars": 419,
    "preview": "{\n  \"attention_dropout\": 0.0,\n  \"dropout\": 0.0,\n  \"hidden_act\": \"gelu\",\n  \"hidden_size\": 1664,\n  \"image_size\": 224,\n  \"i"
  },
  {
    "path": "comfy/clip_vision_config_h.json",
    "chars": 419,
    "preview": "{\n  \"attention_dropout\": 0.0,\n  \"dropout\": 0.0,\n  \"hidden_act\": \"gelu\",\n  \"hidden_size\": 1280,\n  \"image_size\": 224,\n  \"i"
  },
  {
    "path": "comfy/clip_vision_config_vitl.json",
    "chars": 424,
    "preview": "{\n  \"attention_dropout\": 0.0,\n  \"dropout\": 0.0,\n  \"hidden_act\": \"quick_gelu\",\n  \"hidden_size\": 1024,\n  \"image_size\": 224"
  },
  {
    "path": "comfy/clip_vision_config_vitl_336.json",
    "chars": 423,
    "preview": "{\n  \"attention_dropout\": 0.0,\n  \"dropout\": 0.0,\n  \"hidden_act\": \"quick_gelu\",\n  \"hidden_size\": 1024,\n  \"image_size\": 336"
  },
  {
    "path": "comfy/clip_vision_config_vitl_336_llava.json",
    "chars": 453,
    "preview": "{\n  \"attention_dropout\": 0.0,\n  \"dropout\": 0.0,\n  \"hidden_act\": \"quick_gelu\",\n  \"hidden_size\": 1024,\n  \"image_size\": 336"
  },
  {
    "path": "comfy/clip_vision_siglip2_base_naflex.json",
    "chars": 336,
    "preview": "{\n  \"num_channels\": 3,\n  \"hidden_act\": \"gelu_pytorch_tanh\",\n  \"hidden_size\": 1152,\n  \"image_size\": -1,\n  \"intermediate_s"
  },
  {
    "path": "comfy/clip_vision_siglip_384.json",
    "chars": 314,
    "preview": "{\n  \"num_channels\": 3,\n  \"hidden_act\": \"gelu_pytorch_tanh\",\n  \"hidden_size\": 1152,\n  \"image_size\": 384,\n  \"intermediate_"
  },
  {
    "path": "comfy/clip_vision_siglip_512.json",
    "chars": 314,
    "preview": "{\n  \"num_channels\": 3,\n  \"hidden_act\": \"gelu_pytorch_tanh\",\n  \"hidden_size\": 1152,\n  \"image_size\": 512,\n  \"intermediate_"
  },
  {
    "path": "comfy/comfy_types/README.md",
    "chars": 1350,
    "preview": "# Comfy Typing\n## Type hinting for ComfyUI Node development\n\nThis module provides type hinting and concrete convenience "
  },
  {
    "path": "comfy/comfy_types/__init__.py",
    "chars": 1251,
    "preview": "import torch\nfrom typing import Callable, Protocol, TypedDict, Optional, List\nfrom .node_typing import IO, InputTypeDict"
  },
  {
    "path": "comfy/comfy_types/examples/example_nodes.py",
    "chars": 781,
    "preview": "from comfy.comfy_types import IO, ComfyNodeABC, InputTypeDict\nfrom inspect import cleandoc\n\n\nclass ExampleNode(ComfyNode"
  },
  {
    "path": "comfy/comfy_types/node_typing.py",
    "chars": 15256,
    "preview": "\"\"\"Comfy-specific type hinting\"\"\"\n\nfrom __future__ import annotations\nfrom typing import Literal, TypedDict, Optional\nfr"
  },
  {
    "path": "comfy/conds.py",
    "chars": 4522,
    "preview": "import torch\nimport math\nimport comfy.utils\nimport logging\n\n\ndef is_equal(x, y):\n    if torch.is_tensor(x) and torch.is_"
  },
  {
    "path": "comfy/context_windows.py",
    "chars": 33967,
    "preview": "from __future__ import annotations\nfrom typing import TYPE_CHECKING, Callable\nimport torch\nimport numpy as np\nimport col"
  },
  {
    "path": "comfy/controlnet.py",
    "chars": 44581,
    "preview": "\"\"\"\n    This file is part of ComfyUI.\n    Copyright (C) 2024 Comfy\n\n    This program is free software: you can redistrib"
  },
  {
    "path": "comfy/diffusers_convert.py",
    "chars": 6796,
    "preview": "import re\nimport torch\nimport logging\n\n# conversion code from https://github.com/huggingface/diffusers/blob/main/scripts"
  },
  {
    "path": "comfy/diffusers_load.py",
    "chars": 1437,
    "preview": "import os\n\nimport comfy.sd\n\ndef first_file(path, filenames):\n    for f in filenames:\n        p = os.path.join(path, f)\n "
  },
  {
    "path": "comfy/extra_samplers/uni_pc.py",
    "chars": 38202,
    "preview": "#code taken from: https://github.com/wl-zhao/UniPC and modified\n\nimport torch\nimport math\nimport logging\n\nfrom tqdm.auto"
  },
  {
    "path": "comfy/float.py",
    "chars": 9109,
    "preview": "import torch\n\ndef calc_mantissa(abs_x, exponent, normal_mask, MANTISSA_BITS, EXPONENT_BIAS, generator=None):\n    mantiss"
  },
  {
    "path": "comfy/gligen.py",
    "chars": 10800,
    "preview": "import math\nimport torch\nfrom torch import nn\nfrom .ldm.modules.attention import CrossAttention, FeedForward\nimport comf"
  },
  {
    "path": "comfy/hooks.py",
    "chars": 32752,
    "preview": "from __future__ import annotations\nfrom typing import TYPE_CHECKING, Callable\nimport enum\nimport math\nimport torch\nimpor"
  },
  {
    "path": "comfy/image_encoders/dino2.py",
    "chars": 7120,
    "preview": "import torch\nfrom comfy.text_encoders.bert import BertAttention\nimport comfy.model_management\nfrom comfy.ldm.modules.att"
  },
  {
    "path": "comfy/image_encoders/dino2_giant.json",
    "chars": 512,
    "preview": "{\n  \"attention_probs_dropout_prob\": 0.0,\n  \"drop_path_rate\": 0.0,\n  \"hidden_act\": \"gelu\",\n  \"hidden_dropout_prob\": 0.0,\n"
  },
  {
    "path": "comfy/image_encoders/dino2_large.json",
    "chars": 538,
    "preview": "{\n  \"hidden_size\": 1024,\n  \"use_mask_token\": true,\n  \"patch_size\": 14,\n  \"image_size\": 518,\n  \"num_channels\": 3,\n  \"num_"
  },
  {
    "path": "comfy/k_diffusion/deis.py",
    "chars": 5799,
    "preview": "#Taken from: https://github.com/zju-pi/diff-sampler/blob/main/gits-main/solver_utils.py\n#under Apache 2 license\nimport t"
  },
  {
    "path": "comfy/k_diffusion/sa_solver.py",
    "chars": 5620,
    "preview": "# SA-Solver: Stochastic Adams Solver (NeurIPS 2023, arXiv:2309.05019)\n# Conference: https://proceedings.neurips.cc/paper"
  },
  {
    "path": "comfy/k_diffusion/sampling.py",
    "chars": 82593,
    "preview": "import math\nfrom functools import partial\n\nfrom scipy import integrate\nimport torch\nfrom torch import nn\nimport torchsde"
  },
  {
    "path": "comfy/k_diffusion/utils.py",
    "chars": 12718,
    "preview": "from contextlib import contextmanager\nimport hashlib\nimport math\nfrom pathlib import Path\nimport shutil\nimport urllib\nim"
  },
  {
    "path": "comfy/latent_formats.py",
    "chars": 30530,
    "preview": "import torch\n\nclass LatentFormat:\n    scale_factor = 1.0\n    latent_channels = 4\n    latent_dimensions = 2\n    latent_rg"
  },
  {
    "path": "comfy/ldm/ace/ace_step15.py",
    "chars": 46957,
    "preview": "import math\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport itertools\nfrom comfy.ldm.modules.a"
  },
  {
    "path": "comfy/ldm/ace/attention.py",
    "chars": 30379,
    "preview": "# Original from: https://github.com/ace-step/ACE-Step/blob/main/models/attention.py\n# Copyright 2024 The HuggingFace Tea"
  },
  {
    "path": "comfy/ldm/ace/lyric_encoder.py",
    "chars": 43151,
    "preview": "# Original from: https://github.com/ace-step/ACE-Step/blob/main/models/lyrics_utils/lyric_encoder.py\nfrom typing import "
  },
  {
    "path": "comfy/ldm/ace/model.py",
    "chars": 17216,
    "preview": "# Original from: https://github.com/ace-step/ACE-Step/blob/main/models/ace_step_transformer.py\n\n# Copyright 2024 The Hug"
  },
  {
    "path": "comfy/ldm/ace/vae/autoencoder_dc.py",
    "chars": 22909,
    "preview": "# Rewritten from diffusers\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom typing import Tuple, "
  },
  {
    "path": "comfy/ldm/ace/vae/music_dcae_pipeline.py",
    "chars": 3622,
    "preview": "# Original from: https://github.com/ace-step/ACE-Step/blob/main/music_dcae/music_dcae_pipeline.py\nimport torch\nfrom .aut"
  },
  {
    "path": "comfy/ldm/ace/vae/music_log_mel.py",
    "chars": 3090,
    "preview": "# Original from: https://github.com/ace-step/ACE-Step/blob/main/music_dcae/music_log_mel.py\nimport torch\nimport torch.nn"
  },
  {
    "path": "comfy/ldm/ace/vae/music_vocoder.py",
    "chars": 18149,
    "preview": "# Original from: https://github.com/ace-step/ACE-Step/blob/main/music_dcae/music_vocoder.py\nimport torch\nfrom torch impo"
  },
  {
    "path": "comfy/ldm/anima/model.py",
    "chars": 9793,
    "preview": "from comfy.ldm.cosmos.predict2 import MiniTrainDIT\nimport torch\nfrom torch import nn\nimport torch.nn.functional as F\n\n\nd"
  },
  {
    "path": "comfy/ldm/audio/autoencoder.py",
    "chars": 9969,
    "preview": "# code adapted from: https://github.com/Stability-AI/stable-audio-tools\n\nimport torch\nfrom torch import nn\nfrom typing i"
  },
  {
    "path": "comfy/ldm/audio/dit.py",
    "chars": 31491,
    "preview": "# code adapted from: https://github.com/Stability-AI/stable-audio-tools\n\nfrom comfy.ldm.modules.attention import optimiz"
  },
  {
    "path": "comfy/ldm/audio/embedders.py",
    "chars": 3406,
    "preview": "# code adapted from: https://github.com/Stability-AI/stable-audio-tools\n\nimport torch\nimport torch.nn as nn\nfrom torch i"
  },
  {
    "path": "comfy/ldm/aura/mmdit.py",
    "chars": 20461,
    "preview": "#AuraFlow MMDiT\n#Originally written by the AuraFlow Authors\n\nimport math\n\nimport torch\nimport torch.nn as nn\nimport torc"
  }
]

// ... and 518 more files (download for full content)

About this extraction

This page contains the full source code of the Comfy-Org/ComfyUI GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 718 files (23.3 MB), approximately 5.2M tokens, and a symbol index with 5205 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.
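Each entry in the JSON index above is an object with a `path`, a `chars` field (file size in characters), and a truncated `preview` of the file's first lines. A minimal sketch of querying that index, using a few entries copied from the listing above (the real extraction embeds the full 718-entry array inline in the downloaded .txt):

```python
import json

# A small sample of the index, copied verbatim from the listing above
# (previews shortened here for brevity).
index_json = """
[
  {"path": "comfy/cli_args.py", "chars": 17661, "preview": "import argparse"},
  {"path": "comfy/ldm/ace/model.py", "chars": 17216, "preview": "# Original from"},
  {"path": "comfy/ldm/audio/dit.py", "chars": 31491, "preview": "# code adapted"}
]
"""

entries = json.loads(index_json)

# Each entry is {"path": str, "chars": int, "preview": str}.
# Example query: find the largest file under comfy/ldm/.
ldm = [e for e in entries if e["path"].startswith("comfy/ldm/")]
largest = max(ldm, key=lambda e: e["chars"])
print(largest["path"], largest["chars"])  # → comfy/ldm/audio/dit.py 31491
```

This kind of pre-filtering by path prefix or size is useful when the full 5.2M-token extraction exceeds an AI tool's context window and only a subset of files is needed.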

Extracted by GitExtract, a free GitHub-repo-to-text converter for AI, built by Nikandr Surkov.
