Full Code of Stability-AI/diffusers

Repository: Stability-AI/diffusers
Branch: main
Commit: 2e3541d7f40b
Files: 545
Total size: 5.6 MB

Directory structure:
diffusers/

├── .github/
│   ├── ISSUE_TEMPLATE/
│   │   ├── bug-report.yml
│   │   ├── config.yml
│   │   ├── feature_request.md
│   │   ├── feedback.md
│   │   └── new-model-addition.yml
│   ├── actions/
│   │   └── setup-miniconda/
│   │       └── action.yml
│   └── workflows/
│       ├── build_docker_images.yml
│       ├── build_documentation.yml
│       ├── build_pr_documentation.yml
│       ├── delete_doc_comment.yml
│       ├── nightly_tests.yml
│       ├── pr_quality.yml
│       ├── pr_tests.yml
│       ├── push_tests.yml
│       ├── push_tests_fast.yml
│       ├── stale.yml
│       └── typos.yml
├── .gitignore
├── CITATION.cff
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── MANIFEST.in
├── Makefile
├── README.md
├── _typos.toml
├── docker/
│   ├── diffusers-flax-cpu/
│   │   └── Dockerfile
│   ├── diffusers-flax-tpu/
│   │   └── Dockerfile
│   ├── diffusers-onnxruntime-cpu/
│   │   └── Dockerfile
│   ├── diffusers-onnxruntime-cuda/
│   │   └── Dockerfile
│   ├── diffusers-pytorch-cpu/
│   │   └── Dockerfile
│   └── diffusers-pytorch-cuda/
│       └── Dockerfile
├── docs/
│   ├── README.md
│   ├── TRANSLATING.md
│   └── source/
│       ├── en/
│       │   ├── _toctree.yml
│       │   ├── api/
│       │   │   ├── configuration.mdx
│       │   │   ├── diffusion_pipeline.mdx
│       │   │   ├── experimental/
│       │   │   │   └── rl.mdx
│       │   │   ├── loaders.mdx
│       │   │   ├── logging.mdx
│       │   │   ├── models.mdx
│       │   │   ├── outputs.mdx
│       │   │   ├── pipelines/
│       │   │   │   ├── alt_diffusion.mdx
│       │   │   │   ├── audio_diffusion.mdx
│       │   │   │   ├── cycle_diffusion.mdx
│       │   │   │   ├── dance_diffusion.mdx
│       │   │   │   ├── ddim.mdx
│       │   │   │   ├── ddpm.mdx
│       │   │   │   ├── dit.mdx
│       │   │   │   ├── latent_diffusion.mdx
│       │   │   │   ├── latent_diffusion_uncond.mdx
│       │   │   │   ├── overview.mdx
│       │   │   │   ├── paint_by_example.mdx
│       │   │   │   ├── pndm.mdx
│       │   │   │   ├── repaint.mdx
│       │   │   │   ├── score_sde_ve.mdx
│       │   │   │   ├── semantic_stable_diffusion.mdx
│       │   │   │   ├── stable_diffusion/
│       │   │   │   │   ├── attend_and_excite.mdx
│       │   │   │   │   ├── controlnet.mdx
│       │   │   │   │   ├── depth2img.mdx
│       │   │   │   │   ├── image_variation.mdx
│       │   │   │   │   ├── img2img.mdx
│       │   │   │   │   ├── inpaint.mdx
│       │   │   │   │   ├── latent_upscale.mdx
│       │   │   │   │   ├── overview.mdx
│       │   │   │   │   ├── panorama.mdx
│       │   │   │   │   ├── pix2pix.mdx
│       │   │   │   │   ├── pix2pix_zero.mdx
│       │   │   │   │   ├── self_attention_guidance.mdx
│       │   │   │   │   ├── text2img.mdx
│       │   │   │   │   └── upscale.mdx
│       │   │   │   ├── stable_diffusion_2.mdx
│       │   │   │   ├── stable_diffusion_safe.mdx
│       │   │   │   ├── stable_unclip.mdx
│       │   │   │   ├── stochastic_karras_ve.mdx
│       │   │   │   ├── unclip.mdx
│       │   │   │   ├── versatile_diffusion.mdx
│       │   │   │   └── vq_diffusion.mdx
│       │   │   └── schedulers/
│       │   │       ├── ddim.mdx
│       │   │       ├── ddim_inverse.mdx
│       │   │       ├── ddpm.mdx
│       │   │       ├── deis.mdx
│       │   │       ├── dpm_discrete.mdx
│       │   │       ├── dpm_discrete_ancestral.mdx
│       │   │       ├── euler.mdx
│       │   │       ├── euler_ancestral.mdx
│       │   │       ├── heun.mdx
│       │   │       ├── ipndm.mdx
│       │   │       ├── lms_discrete.mdx
│       │   │       ├── multistep_dpm_solver.mdx
│       │   │       ├── overview.mdx
│       │   │       ├── pndm.mdx
│       │   │       ├── repaint.mdx
│       │   │       ├── score_sde_ve.mdx
│       │   │       ├── score_sde_vp.mdx
│       │   │       ├── singlestep_dpm_solver.mdx
│       │   │       ├── stochastic_karras_ve.mdx
│       │   │       ├── unipc.mdx
│       │   │       └── vq_diffusion.mdx
│       │   ├── conceptual/
│       │   │   ├── contribution.mdx
│       │   │   ├── ethical_guidelines.mdx
│       │   │   └── philosophy.mdx
│       │   ├── index.mdx
│       │   ├── installation.mdx
│       │   ├── optimization/
│       │   │   ├── fp16.mdx
│       │   │   ├── habana.mdx
│       │   │   ├── mps.mdx
│       │   │   ├── onnx.mdx
│       │   │   ├── open_vino.mdx
│       │   │   ├── torch2.0.mdx
│       │   │   └── xformers.mdx
│       │   ├── quicktour.mdx
│       │   ├── stable_diffusion.mdx
│       │   ├── training/
│       │   │   ├── dreambooth.mdx
│       │   │   ├── lora.mdx
│       │   │   ├── overview.mdx
│       │   │   ├── text2image.mdx
│       │   │   ├── text_inversion.mdx
│       │   │   └── unconditional_training.mdx
│       │   ├── tutorials/
│       │   │   └── basic_training.mdx
│       │   └── using-diffusers/
│       │       ├── audio.mdx
│       │       ├── conditional_image_generation.mdx
│       │       ├── configuration.mdx
│       │       ├── contribute_pipeline.mdx
│       │       ├── controlling_generation.mdx
│       │       ├── custom_pipeline_examples.mdx
│       │       ├── custom_pipeline_overview.mdx
│       │       ├── depth2img.mdx
│       │       ├── img2img.mdx
│       │       ├── inpaint.mdx
│       │       ├── kerascv.mdx
│       │       ├── loading.mdx
│       │       ├── other-modalities.mdx
│       │       ├── reproducibility.mdx
│       │       ├── reusing_seeds.mdx
│       │       ├── rl.mdx
│       │       ├── schedulers.mdx
│       │       ├── unconditional_image_generation.mdx
│       │       ├── using_safetensors
│       │       └── using_safetensors.mdx
│       └── ko/
│           ├── _toctree.yml
│           ├── in_translation.mdx
│           ├── index.mdx
│           ├── installation.mdx
│           └── quicktour.mdx
├── examples/
│   ├── README.md
│   ├── community/
│   │   ├── README.md
│   │   ├── bit_diffusion.py
│   │   ├── checkpoint_merger.py
│   │   ├── clip_guided_stable_diffusion.py
│   │   ├── composable_stable_diffusion.py
│   │   ├── imagic_stable_diffusion.py
│   │   ├── img2img_inpainting.py
│   │   ├── interpolate_stable_diffusion.py
│   │   ├── lpw_stable_diffusion.py
│   │   ├── lpw_stable_diffusion_onnx.py
│   │   ├── magic_mix.py
│   │   ├── multilingual_stable_diffusion.py
│   │   ├── one_step_unet.py
│   │   ├── sd_text2img_k_diffusion.py
│   │   ├── seed_resize_stable_diffusion.py
│   │   ├── speech_to_image_diffusion.py
│   │   ├── stable_diffusion_comparison.py
│   │   ├── stable_diffusion_mega.py
│   │   ├── stable_unclip.py
│   │   ├── text_inpainting.py
│   │   ├── tiled_upscaling.py
│   │   ├── unclip_image_interpolation.py
│   │   ├── unclip_text_interpolation.py
│   │   └── wildcard_stable_diffusion.py
│   ├── conftest.py
│   ├── dreambooth/
│   │   ├── README.md
│   │   ├── requirements.txt
│   │   ├── requirements_flax.txt
│   │   ├── train_dreambooth.py
│   │   ├── train_dreambooth_flax.py
│   │   └── train_dreambooth_lora.py
│   ├── inference/
│   │   ├── README.md
│   │   ├── image_to_image.py
│   │   └── inpainting.py
│   ├── research_projects/
│   │   ├── README.md
│   │   ├── colossalai/
│   │   │   ├── README.md
│   │   │   ├── inference.py
│   │   │   ├── requirement.txt
│   │   │   └── train_dreambooth_colossalai.py
│   │   ├── dreambooth_inpaint/
│   │   │   ├── README.md
│   │   │   ├── requirements.txt
│   │   │   ├── train_dreambooth_inpaint.py
│   │   │   └── train_dreambooth_inpaint_lora.py
│   │   ├── intel_opts/
│   │   │   ├── README.md
│   │   │   ├── inference_bf16.py
│   │   │   └── textual_inversion/
│   │   │       ├── README.md
│   │   │       ├── requirements.txt
│   │   │       └── textual_inversion_bf16.py
│   │   ├── multi_subject_dreambooth/
│   │   │   ├── README.md
│   │   │   ├── requirements.txt
│   │   │   └── train_multi_subject_dreambooth.py
│   │   └── onnxruntime/
│   │       ├── README.md
│   │       ├── text_to_image/
│   │       │   ├── README.md
│   │       │   ├── requirements.txt
│   │       │   └── train_text_to_image.py
│   │       ├── textual_inversion/
│   │       │   ├── README.md
│   │       │   ├── requirements.txt
│   │       │   └── textual_inversion.py
│   │       └── unconditional_image_generation/
│   │           ├── README.md
│   │           ├── requirements.txt
│   │           └── train_unconditional.py
│   ├── rl/
│   │   ├── README.md
│   │   └── run_diffuser_locomotion.py
│   ├── test_examples.py
│   ├── text_to_image/
│   │   ├── README.md
│   │   ├── requirements.txt
│   │   ├── requirements_flax.txt
│   │   ├── train_text_to_image.py
│   │   ├── train_text_to_image_flax.py
│   │   └── train_text_to_image_lora.py
│   ├── textual_inversion/
│   │   ├── README.md
│   │   ├── requirements.txt
│   │   ├── requirements_flax.txt
│   │   ├── textual_inversion.py
│   │   └── textual_inversion_flax.py
│   └── unconditional_image_generation/
│       ├── README.md
│       ├── requirements.txt
│       └── train_unconditional.py
├── pyproject.toml
├── scripts/
│   ├── __init__.py
│   ├── change_naming_configs_and_checkpoints.py
│   ├── conversion_ldm_uncond.py
│   ├── convert_dance_diffusion_to_diffusers.py
│   ├── convert_ddpm_original_checkpoint_to_diffusers.py
│   ├── convert_diffusers_to_original_stable_diffusion.py
│   ├── convert_dit_to_diffusers.py
│   ├── convert_k_upscaler_to_diffusers.py
│   ├── convert_kakao_brain_unclip_to_diffusers.py
│   ├── convert_ldm_original_checkpoint_to_diffusers.py
│   ├── convert_models_diffuser_to_diffusers.py
│   ├── convert_ncsnpp_original_checkpoint_to_diffusers.py
│   ├── convert_original_stable_diffusion_to_diffusers.py
│   ├── convert_stable_diffusion_checkpoint_to_onnx.py
│   ├── convert_unclip_txt2img_to_image_variation.py
│   ├── convert_vae_pt_to_diffusers.py
│   ├── convert_versatile_diffusion_to_diffusers.py
│   ├── convert_vq_diffusion_to_diffusers.py
│   └── generate_logits.py
├── setup.cfg
├── setup.py
├── src/
│   └── diffusers/
│       ├── __init__.py
│       ├── commands/
│       │   ├── __init__.py
│       │   ├── diffusers_cli.py
│       │   └── env.py
│       ├── configuration_utils.py
│       ├── dependency_versions_check.py
│       ├── dependency_versions_table.py
│       ├── experimental/
│       │   ├── README.md
│       │   ├── __init__.py
│       │   └── rl/
│       │       ├── __init__.py
│       │       └── value_guided_sampling.py
│       ├── loaders.py
│       ├── models/
│       │   ├── README.md
│       │   ├── __init__.py
│       │   ├── attention.py
│       │   ├── attention_flax.py
│       │   ├── autoencoder_kl.py
│       │   ├── controlnet.py
│       │   ├── cross_attention.py
│       │   ├── dual_transformer_2d.py
│       │   ├── embeddings.py
│       │   ├── embeddings_flax.py
│       │   ├── modeling_flax_pytorch_utils.py
│       │   ├── modeling_flax_utils.py
│       │   ├── modeling_pytorch_flax_utils.py
│       │   ├── modeling_utils.py
│       │   ├── prior_transformer.py
│       │   ├── resnet.py
│       │   ├── resnet_flax.py
│       │   ├── transformer_2d.py
│       │   ├── unet_1d.py
│       │   ├── unet_1d_blocks.py
│       │   ├── unet_2d.py
│       │   ├── unet_2d_blocks.py
│       │   ├── unet_2d_blocks_flax.py
│       │   ├── unet_2d_condition.py
│       │   ├── unet_2d_condition_flax.py
│       │   ├── vae.py
│       │   ├── vae_flax.py
│       │   └── vq_model.py
│       ├── optimization.py
│       ├── pipeline_utils.py
│       ├── pipelines/
│       │   ├── README.md
│       │   ├── __init__.py
│       │   ├── alt_diffusion/
│       │   │   ├── __init__.py
│       │   │   ├── modeling_roberta_series.py
│       │   │   ├── pipeline_alt_diffusion.py
│       │   │   └── pipeline_alt_diffusion_img2img.py
│       │   ├── audio_diffusion/
│       │   │   ├── __init__.py
│       │   │   ├── mel.py
│       │   │   └── pipeline_audio_diffusion.py
│       │   ├── dance_diffusion/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_dance_diffusion.py
│       │   ├── ddim/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_ddim.py
│       │   ├── ddpm/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_ddpm.py
│       │   ├── dit/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_dit.py
│       │   ├── latent_diffusion/
│       │   │   ├── __init__.py
│       │   │   ├── pipeline_latent_diffusion.py
│       │   │   └── pipeline_latent_diffusion_superresolution.py
│       │   ├── latent_diffusion_uncond/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_latent_diffusion_uncond.py
│       │   ├── onnx_utils.py
│       │   ├── paint_by_example/
│       │   │   ├── __init__.py
│       │   │   ├── image_encoder.py
│       │   │   └── pipeline_paint_by_example.py
│       │   ├── pipeline_flax_utils.py
│       │   ├── pipeline_utils.py
│       │   ├── pndm/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_pndm.py
│       │   ├── repaint/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_repaint.py
│       │   ├── score_sde_ve/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_score_sde_ve.py
│       │   ├── semantic_stable_diffusion/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_semantic_stable_diffusion.py
│       │   ├── stable_diffusion/
│       │   │   ├── README.md
│       │   │   ├── __init__.py
│       │   │   ├── convert_from_ckpt.py
│       │   │   ├── pipeline_cycle_diffusion.py
│       │   │   ├── pipeline_flax_stable_diffusion.py
│       │   │   ├── pipeline_flax_stable_diffusion_img2img.py
│       │   │   ├── pipeline_flax_stable_diffusion_inpaint.py
│       │   │   ├── pipeline_onnx_stable_diffusion.py
│       │   │   ├── pipeline_onnx_stable_diffusion_img2img.py
│       │   │   ├── pipeline_onnx_stable_diffusion_inpaint.py
│       │   │   ├── pipeline_onnx_stable_diffusion_inpaint_legacy.py
│       │   │   ├── pipeline_stable_diffusion.py
│       │   │   ├── pipeline_stable_diffusion_attend_and_excite.py
│       │   │   ├── pipeline_stable_diffusion_controlnet.py
│       │   │   ├── pipeline_stable_diffusion_depth2img.py
│       │   │   ├── pipeline_stable_diffusion_image_variation.py
│       │   │   ├── pipeline_stable_diffusion_img2img.py
│       │   │   ├── pipeline_stable_diffusion_inpaint.py
│       │   │   ├── pipeline_stable_diffusion_inpaint_legacy.py
│       │   │   ├── pipeline_stable_diffusion_instruct_pix2pix.py
│       │   │   ├── pipeline_stable_diffusion_k_diffusion.py
│       │   │   ├── pipeline_stable_diffusion_latent_upscale.py
│       │   │   ├── pipeline_stable_diffusion_panorama.py
│       │   │   ├── pipeline_stable_diffusion_pix2pix_zero.py
│       │   │   ├── pipeline_stable_diffusion_sag.py
│       │   │   ├── pipeline_stable_diffusion_upscale.py
│       │   │   ├── pipeline_stable_unclip.py
│       │   │   ├── pipeline_stable_unclip_img2img.py
│       │   │   ├── safety_checker.py
│       │   │   ├── safety_checker_flax.py
│       │   │   └── stable_unclip_image_normalizer.py
│       │   ├── stable_diffusion_safe/
│       │   │   ├── __init__.py
│       │   │   ├── pipeline_stable_diffusion_safe.py
│       │   │   └── safety_checker.py
│       │   ├── stochastic_karras_ve/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_stochastic_karras_ve.py
│       │   ├── unclip/
│       │   │   ├── __init__.py
│       │   │   ├── pipeline_unclip.py
│       │   │   ├── pipeline_unclip_image_variation.py
│       │   │   └── text_proj.py
│       │   ├── versatile_diffusion/
│       │   │   ├── __init__.py
│       │   │   ├── modeling_text_unet.py
│       │   │   ├── pipeline_versatile_diffusion.py
│       │   │   ├── pipeline_versatile_diffusion_dual_guided.py
│       │   │   ├── pipeline_versatile_diffusion_image_variation.py
│       │   │   └── pipeline_versatile_diffusion_text_to_image.py
│       │   └── vq_diffusion/
│       │       ├── __init__.py
│       │       └── pipeline_vq_diffusion.py
│       ├── schedulers/
│       │   ├── README.md
│       │   ├── __init__.py
│       │   ├── scheduling_ddim.py
│       │   ├── scheduling_ddim_flax.py
│       │   ├── scheduling_ddim_inverse.py
│       │   ├── scheduling_ddpm.py
│       │   ├── scheduling_ddpm_flax.py
│       │   ├── scheduling_deis_multistep.py
│       │   ├── scheduling_dpmsolver_multistep.py
│       │   ├── scheduling_dpmsolver_multistep_flax.py
│       │   ├── scheduling_dpmsolver_singlestep.py
│       │   ├── scheduling_euler_ancestral_discrete.py
│       │   ├── scheduling_euler_discrete.py
│       │   ├── scheduling_heun_discrete.py
│       │   ├── scheduling_ipndm.py
│       │   ├── scheduling_k_dpm_2_ancestral_discrete.py
│       │   ├── scheduling_k_dpm_2_discrete.py
│       │   ├── scheduling_karras_ve.py
│       │   ├── scheduling_karras_ve_flax.py
│       │   ├── scheduling_lms_discrete.py
│       │   ├── scheduling_lms_discrete_flax.py
│       │   ├── scheduling_pndm.py
│       │   ├── scheduling_pndm_flax.py
│       │   ├── scheduling_repaint.py
│       │   ├── scheduling_sde_ve.py
│       │   ├── scheduling_sde_ve_flax.py
│       │   ├── scheduling_sde_vp.py
│       │   ├── scheduling_unclip.py
│       │   ├── scheduling_unipc_multistep.py
│       │   ├── scheduling_utils.py
│       │   ├── scheduling_utils_flax.py
│       │   └── scheduling_vq_diffusion.py
│       ├── training_utils.py
│       └── utils/
│           ├── __init__.py
│           ├── accelerate_utils.py
│           ├── constants.py
│           ├── deprecation_utils.py
│           ├── doc_utils.py
│           ├── dummy_flax_and_transformers_objects.py
│           ├── dummy_flax_objects.py
│           ├── dummy_onnx_objects.py
│           ├── dummy_pt_objects.py
│           ├── dummy_torch_and_librosa_objects.py
│           ├── dummy_torch_and_scipy_objects.py
│           ├── dummy_torch_and_transformers_and_k_diffusion_objects.py
│           ├── dummy_torch_and_transformers_and_onnx_objects.py
│           ├── dummy_torch_and_transformers_objects.py
│           ├── dynamic_modules_utils.py
│           ├── hub_utils.py
│           ├── import_utils.py
│           ├── logging.py
│           ├── model_card_template.md
│           ├── outputs.py
│           ├── pil_utils.py
│           ├── testing_utils.py
│           └── torch_utils.py
├── tests/
│   ├── __init__.py
│   ├── conftest.py
│   ├── fixtures/
│   │   └── custom_pipeline/
│   │       ├── pipeline.py
│   │       └── what_ever.py
│   ├── models/
│   │   ├── __init__.py
│   │   ├── test_models_unet_1d.py
│   │   ├── test_models_unet_2d.py
│   │   ├── test_models_unet_2d_condition.py
│   │   ├── test_models_unet_2d_flax.py
│   │   ├── test_models_vae.py
│   │   ├── test_models_vae_flax.py
│   │   └── test_models_vq.py
│   ├── pipeline_params.py
│   ├── pipelines/
│   │   ├── __init__.py
│   │   ├── altdiffusion/
│   │   │   ├── __init__.py
│   │   │   ├── test_alt_diffusion.py
│   │   │   └── test_alt_diffusion_img2img.py
│   │   ├── audio_diffusion/
│   │   │   ├── __init__.py
│   │   │   └── test_audio_diffusion.py
│   │   ├── dance_diffusion/
│   │   │   ├── __init__.py
│   │   │   └── test_dance_diffusion.py
│   │   ├── ddim/
│   │   │   ├── __init__.py
│   │   │   └── test_ddim.py
│   │   ├── ddpm/
│   │   │   ├── __init__.py
│   │   │   └── test_ddpm.py
│   │   ├── dit/
│   │   │   ├── __init__.py
│   │   │   └── test_dit.py
│   │   ├── karras_ve/
│   │   │   ├── __init__.py
│   │   │   └── test_karras_ve.py
│   │   ├── latent_diffusion/
│   │   │   ├── __init__.py
│   │   │   ├── test_latent_diffusion.py
│   │   │   ├── test_latent_diffusion_superresolution.py
│   │   │   └── test_latent_diffusion_uncond.py
│   │   ├── paint_by_example/
│   │   │   ├── __init__.py
│   │   │   └── test_paint_by_example.py
│   │   ├── pndm/
│   │   │   ├── __init__.py
│   │   │   └── test_pndm.py
│   │   ├── repaint/
│   │   │   ├── __init__.py
│   │   │   └── test_repaint.py
│   │   ├── score_sde_ve/
│   │   │   ├── __init__.py
│   │   │   └── test_score_sde_ve.py
│   │   ├── semantic_stable_diffusion/
│   │   │   ├── __init__.py
│   │   │   └── test_semantic_diffusion.py
│   │   ├── stable_diffusion/
│   │   │   ├── __init__.py
│   │   │   ├── test_cycle_diffusion.py
│   │   │   ├── test_onnx_stable_diffusion.py
│   │   │   ├── test_onnx_stable_diffusion_img2img.py
│   │   │   ├── test_onnx_stable_diffusion_inpaint.py
│   │   │   ├── test_onnx_stable_diffusion_inpaint_legacy.py
│   │   │   ├── test_stable_diffusion.py
│   │   │   ├── test_stable_diffusion_controlnet.py
│   │   │   ├── test_stable_diffusion_image_variation.py
│   │   │   ├── test_stable_diffusion_img2img.py
│   │   │   ├── test_stable_diffusion_inpaint.py
│   │   │   ├── test_stable_diffusion_inpaint_legacy.py
│   │   │   ├── test_stable_diffusion_instruction_pix2pix.py
│   │   │   ├── test_stable_diffusion_k_diffusion.py
│   │   │   ├── test_stable_diffusion_panorama.py
│   │   │   ├── test_stable_diffusion_pix2pix_zero.py
│   │   │   └── test_stable_diffusion_sag.py
│   │   ├── stable_diffusion_2/
│   │   │   ├── __init__.py
│   │   │   ├── test_stable_diffusion.py
│   │   │   ├── test_stable_diffusion_attend_and_excite.py
│   │   │   ├── test_stable_diffusion_depth.py
│   │   │   ├── test_stable_diffusion_flax.py
│   │   │   ├── test_stable_diffusion_flax_inpaint.py
│   │   │   ├── test_stable_diffusion_inpaint.py
│   │   │   ├── test_stable_diffusion_latent_upscale.py
│   │   │   ├── test_stable_diffusion_upscale.py
│   │   │   └── test_stable_diffusion_v_pred.py
│   │   ├── stable_diffusion_safe/
│   │   │   ├── __init__.py
│   │   │   └── test_safe_diffusion.py
│   │   ├── stable_unclip/
│   │   │   ├── __init__.py
│   │   │   ├── test_stable_unclip.py
│   │   │   └── test_stable_unclip_img2img.py
│   │   ├── test_pipeline_utils.py
│   │   ├── unclip/
│   │   │   ├── __init__.py
│   │   │   ├── test_unclip.py
│   │   │   └── test_unclip_image_variation.py
│   │   ├── versatile_diffusion/
│   │   │   ├── __init__.py
│   │   │   ├── test_versatile_diffusion_dual_guided.py
│   │   │   ├── test_versatile_diffusion_image_variation.py
│   │   │   ├── test_versatile_diffusion_mega.py
│   │   │   └── test_versatile_diffusion_text_to_image.py
│   │   └── vq_diffusion/
│   │       ├── __init__.py
│   │       └── test_vq_diffusion.py
│   ├── repo_utils/
│   │   ├── test_check_copies.py
│   │   └── test_check_dummies.py
│   ├── test_config.py
│   ├── test_hub_utils.py
│   ├── test_layers_utils.py
│   ├── test_modeling_common.py
│   ├── test_modeling_common_flax.py
│   ├── test_outputs.py
│   ├── test_pipelines.py
│   ├── test_pipelines_common.py
│   ├── test_pipelines_flax.py
│   ├── test_pipelines_onnx_common.py
│   ├── test_scheduler.py
│   ├── test_scheduler_flax.py
│   ├── test_training.py
│   ├── test_unet_2d_blocks.py
│   ├── test_unet_blocks_common.py
│   └── test_utils.py
└── utils/
    ├── check_config_docstrings.py
    ├── check_copies.py
    ├── check_doc_toc.py
    ├── check_dummies.py
    ├── check_inits.py
    ├── check_repo.py
    ├── check_table.py
    ├── custom_init_isort.py
    ├── get_modified_files.py
    ├── overwrite_expected_slice.py
    ├── print_env.py
    ├── release.py
    └── stale.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/ISSUE_TEMPLATE/bug-report.yml
================================================
name: "\U0001F41B Bug Report"
description: Report a bug on diffusers
labels: [ "bug" ]
body:
  - type: markdown
    attributes:
      value: |
        Thanks a lot for taking the time to file this issue 🤗.
        Issues not only help to improve the library, but also publicly document common problems, questions, and workflows for the whole community!
        Thus, issues are of the same importance as pull requests when contributing to this library ❤️.
        In order to make your issue as **useful for the community as possible**, let's try to stick to some simple guidelines:
        - 1. Please try to be as precise and concise as possible.
             *Give your issue a fitting title. Assume that someone with very limited knowledge of diffusers can understand your issue. Add links to the source code, documentation, other issues, pull requests, etc.*
        - 2. If your issue is about something not working, **always** provide a reproducible code snippet. The reader should be able to reproduce your issue by **only copy-pasting your code snippet into a Python shell**.
             *The community cannot solve your issue if it cannot reproduce it. If your bug is related to training, add your training script and make everything needed to train public. Otherwise, just add a simple Python code snippet.*
        - 3. Add the **minimum amount of code / context that is needed to understand, reproduce your issue**.
             *Make the life of maintainers easy. `diffusers` gets many issues every day. Make sure your issue is about one bug and one bug only. Make sure you add only the context and code needed to understand your issue - nothing more. Generally, every issue is a way of documenting this library; try to make it a good documentation entry.*
  - type: markdown
    attributes:
      value: |
        For more detailed information on how to write good issues, you can have a look [here](https://huggingface.co/course/chapter8/5?fw=pt)
  - type: textarea
    id: bug-description
    attributes:
      label: Describe the bug
      description: A clear and concise description of what the bug is. If you intend to submit a pull request for this issue, tell us in the description. Thanks!
      placeholder: Bug description
    validations:
      required: true
  - type: textarea
    id: reproduction
    attributes:
      label: Reproduction
      description: Please provide a minimal reproducible code snippet which we can copy/paste to reproduce the issue.
      placeholder: Reproduction
    validations:
      required: true
  - type: textarea
    id: logs
    attributes:
      label: Logs
      description: "Please include the Python logs if you can."
      render: shell
  - type: textarea
    id: system-info
    attributes:
      label: System Info
      description: Please share your system info with us. You can run the command `diffusers-cli env` and copy-paste its output below.
      placeholder: diffusers version, platform, python version, ...
    validations:
      required: true
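
For illustration, a reproduction snippet of the kind this template asks for might look like the sketch below. It is a hedged example, not taken from this repository: the checkpoint id and prompt are placeholders, and only public diffusers APIs (StableDiffusionPipeline.from_pretrained, calling the pipeline) are used.

import torch
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint id: substitute the model that triggers your bug.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The failing call, reduced to the minimum needed to reproduce the issue.
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("output.png")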


================================================
FILE: .github/ISSUE_TEMPLATE/config.yml
================================================
contact_links:
  - name: Blank issue
    url: https://github.com/huggingface/diffusers/issues/new
    about: Other
  - name: Forum
    url: https://discuss.huggingface.co/
    about: General usage questions and community discussions

================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.md
================================================
---
name: "\U0001F680 Feature request"
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.


================================================
FILE: .github/ISSUE_TEMPLATE/feedback.md
================================================
---
name: "💬 Feedback about API Design"
about: Give feedback about the current API design
title: ''
labels: ''
assignees: ''

---

**What API design would you like to have changed or added to the library? Why?**

**What use case would this enable or better enable? Can you give us a code example?**


================================================
FILE: .github/ISSUE_TEMPLATE/new-model-addition.yml
================================================
name: "\U0001F31F New model/pipeline/scheduler addition"
description: Submit a proposal/request to implement a new diffusion model / pipeline / scheduler
labels: [ "New model/pipeline/scheduler" ]

body:
  - type: textarea
    id: description-request
    validations:
      required: true
    attributes:
      label: Model/Pipeline/Scheduler description
      description: |
        Put any and all important information relevant to the model/pipeline/scheduler.

  - type: checkboxes
    id: information-tasks
    attributes:
      label: Open source status
      description: |
          Please note that if the model implementation isn't available or if the weights aren't open-source, we are less likely to implement it in `diffusers`.
      options:
        - label: "The model implementation is available"
        - label: "The model weights are available (Only relevant if addition is not a scheduler)."

  - type: textarea
    id: additional-info
    attributes:
      label: Provide useful links for the implementation
      description: |
        Please provide information regarding the implementation, the weights, and the authors.
        Please mention the authors by @gh-username if you're aware of their usernames.


================================================
FILE: .github/actions/setup-miniconda/action.yml
================================================
name: Set up conda environment for testing

description: Sets up miniconda in your ${RUNNER_TEMP} environment and gives you the ${CONDA_RUN} environment variable, so you don't have to worry about polluting non-ephemeral runners anymore

inputs:
  python-version:
    description: Python version to install
    required: false
    type: string
    default: "3.9"
  miniconda-version:
    description: Miniconda version to install
    required: false
    type: string
    default: "4.12.0"
  environment-file:
    description: Environment file to install dependencies from
    required: false
    type: string
    default: ""

runs:
  using: composite
  steps:
      # Use the same trick from https://github.com/marketplace/actions/setup-miniconda
      # to refresh the cache daily. This is kind of optional though
      - name: Get date
        id: get-date
        shell: bash
        run: echo "::set-output name=today::$(/bin/date -u '+%Y%m%d')d"
      - name: Setup miniconda cache
        id: miniconda-cache
        uses: actions/cache@v2
        with:
          path: ${{ runner.temp }}/miniconda
          key: miniconda-${{ runner.os }}-${{ runner.arch }}-${{ inputs.python-version }}-${{ steps.get-date.outputs.today }}
      - name: Install miniconda (${{ inputs.miniconda-version }})
        if: steps.miniconda-cache.outputs.cache-hit != 'true'
        env:
          MINICONDA_VERSION: ${{ inputs.miniconda-version }}
        shell: bash -l {0}
        run: |
          MINICONDA_INSTALL_PATH="${RUNNER_TEMP}/miniconda"
          mkdir -p "${MINICONDA_INSTALL_PATH}"
          case ${RUNNER_OS}-${RUNNER_ARCH} in
            Linux-X64)
              MINICONDA_ARCH="Linux-x86_64"
              ;;
            macOS-ARM64)
              MINICONDA_ARCH="MacOSX-arm64"
              ;;
            macOS-X64)
              MINICONDA_ARCH="MacOSX-x86_64"
              ;;
            *)
              echo "::error::Platform ${RUNNER_OS}-${RUNNER_ARCH} is currently unsupported by this action"
              exit 1
              ;;
          esac
          MINICONDA_URL="https://repo.anaconda.com/miniconda/Miniconda3-py39_${MINICONDA_VERSION}-${MINICONDA_ARCH}.sh"
          curl -fsSL "${MINICONDA_URL}" -o "${MINICONDA_INSTALL_PATH}/miniconda.sh"
          bash "${MINICONDA_INSTALL_PATH}/miniconda.sh" -b -u -p "${MINICONDA_INSTALL_PATH}"
          rm -rf "${MINICONDA_INSTALL_PATH}/miniconda.sh"
      - name: Update GitHub path to include miniconda install
        shell: bash
        run: |
          MINICONDA_INSTALL_PATH="${RUNNER_TEMP}/miniconda"
          echo "${MINICONDA_INSTALL_PATH}/bin" >> $GITHUB_PATH
      - name: Setup miniconda env cache (with env file)
        id: miniconda-env-cache-env-file
        if: ${{ runner.os == 'macOS' && inputs.environment-file != '' }}
        uses: actions/cache@v2
        with:
          path: ${{ runner.temp }}/conda-python-${{ inputs.python-version }}
          key: miniconda-env-${{ runner.os }}-${{ runner.arch }}-${{ inputs.python-version }}-${{ steps.get-date.outputs.today }}-${{ hashFiles(inputs.environment-file) }}
      - name: Setup miniconda env cache (without env file)
        id: miniconda-env-cache
        if: ${{ runner.os == 'macOS' && inputs.environment-file == '' }}
        uses: actions/cache@v2
        with:
          path: ${{ runner.temp }}/conda-python-${{ inputs.python-version }}
          key: miniconda-env-${{ runner.os }}-${{ runner.arch }}-${{ inputs.python-version }}-${{ steps.get-date.outputs.today }}
      - name: Setup conda environment with python (v${{ inputs.python-version }})
        if: steps.miniconda-env-cache-env-file.outputs.cache-hit != 'true' && steps.miniconda-env-cache.outputs.cache-hit != 'true'
        shell: bash
        env:
          PYTHON_VERSION: ${{ inputs.python-version }}
          ENV_FILE: ${{ inputs.environment-file }}
        run: |
          CONDA_BASE_ENV="${RUNNER_TEMP}/conda-python-${PYTHON_VERSION}"
          ENV_FILE_FLAG=""
          if [[ -f "${ENV_FILE}" ]]; then
            ENV_FILE_FLAG="--file ${ENV_FILE}"
          elif [[ -n "${ENV_FILE}" ]]; then
            echo "::warning::Specified env file (${ENV_FILE}) not found, not going to include it"
          fi
          conda create \
            --yes \
            --prefix "${CONDA_BASE_ENV}" \
            "python=${PYTHON_VERSION}" \
            ${ENV_FILE_FLAG} \
            cmake=3.22 \
            conda-build=3.21 \
            ninja=1.10 \
            pkg-config=0.29 \
            wheel=0.37
      - name: Clone the base conda environment and update GitHub env
        shell: bash
        env:
          PYTHON_VERSION: ${{ inputs.python-version }}
          CONDA_BASE_ENV: ${{ runner.temp }}/conda-python-${{ inputs.python-version }}
        run: |
          CONDA_ENV="${RUNNER_TEMP}/conda_environment_${GITHUB_RUN_ID}"
          conda create \
            --yes \
            --prefix "${CONDA_ENV}" \
            --clone "${CONDA_BASE_ENV}"
          # TODO: conda-build could not be cloned because it hardcodes the path, so it
          # could not be cached
          conda install --yes -p ${CONDA_ENV} conda-build=3.21
          echo "CONDA_ENV=${CONDA_ENV}" >> "${GITHUB_ENV}"
          echo "CONDA_RUN=conda run -p ${CONDA_ENV} --no-capture-output" >> "${GITHUB_ENV}"
          echo "CONDA_BUILD=conda run -p ${CONDA_ENV} conda-build" >> "${GITHUB_ENV}"
          echo "CONDA_INSTALL=conda install -p ${CONDA_ENV}" >> "${GITHUB_ENV}"
      - name: Get disk space usage and throw an error for low disk space
        shell: bash
        run: |
          echo "Print the available disk space for manual inspection"
          df -h
          # Set the minimum required space to 4GB
          MINIMUM_AVAILABLE_SPACE_IN_GB=4
          MINIMUM_AVAILABLE_SPACE_IN_KB=$(($MINIMUM_AVAILABLE_SPACE_IN_GB * 1024 * 1024))
          # Use KB to avoid floating point values like 3.1GB
          df -k | tr -s ' ' | cut -d' ' -f 4,9 | while read -r LINE;
          do
            AVAIL=$(echo $LINE | cut -f1 -d' ')
            MOUNT=$(echo $LINE | cut -f2 -d' ')
            if [ "$MOUNT" = "/" ]; then
              if [ "$AVAIL" -lt "$MINIMUM_AVAILABLE_SPACE_IN_KB" ]; then
                echo "There is only ${AVAIL}KB free space left in $MOUNT, which is less than the minimum requirement of ${MINIMUM_AVAILABLE_SPACE_IN_KB}KB. Please file an issue with PyTorch Release Engineering via https://github.com/pytorch/test-infra/issues and provide a link to the workflow run."
                exit 1;
              else
                echo "There is ${AVAIL}KB free space left in $MOUNT, continue"
              fi
            fi
          done
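
The final step above aborts the job when the root filesystem has less than 4 GB free. As a point of reference, the same check can be sketched in a few lines of Python using only the standard library (the threshold below simply mirrors MINIMUM_AVAILABLE_SPACE_IN_GB from the action):

import shutil

MINIMUM_AVAILABLE_SPACE_IN_BYTES = 4 * 1024 ** 3  # 4 GB, as in the action

# shutil.disk_usage returns (total, used, free) in bytes for the given mount.
usage = shutil.disk_usage("/")
if usage.free < MINIMUM_AVAILABLE_SPACE_IN_BYTES:
    raise RuntimeError(
        f"Only {usage.free / 1024 ** 3:.1f} GB free on /, below the 4 GB minimum."
    )
print(f"{usage.free / 1024 ** 3:.1f} GB free on /, continuing.")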

================================================
FILE: .github/workflows/build_docker_images.yml
================================================
name: Build Docker images (nightly)

on:
  workflow_dispatch:
  schedule:
    - cron: "0 0 * * *" # every day at midnight

concurrency:
  group: docker-image-builds
  cancel-in-progress: false

env:
  REGISTRY: diffusers

jobs:
  build-docker-images:
    runs-on: ubuntu-latest

    permissions:
      contents: read
      packages: write

    strategy:
      fail-fast: false
      matrix:
        image-name:
          - diffusers-pytorch-cpu
          - diffusers-pytorch-cuda
          - diffusers-flax-cpu
          - diffusers-flax-tpu
          - diffusers-onnxruntime-cpu
          - diffusers-onnxruntime-cuda

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ env.REGISTRY }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v3
        with:
          no-cache: true
          context: ./docker/${{ matrix.image-name }}
          push: true
          tags: ${{ env.REGISTRY }}/${{ matrix.image-name }}:latest


================================================
FILE: .github/workflows/build_documentation.yml
================================================
name: Build documentation

on:
  push:
    branches:
      - main
      - doc-builder*
      - v*-release

jobs:
  build:
    uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@main
    with:
      commit_sha: ${{ github.sha }}
      package: diffusers
      languages: en ko
    secrets:
      token: ${{ secrets.HUGGINGFACE_PUSH }}


================================================
FILE: .github/workflows/build_pr_documentation.yml
================================================
name: Build PR Documentation

on:
  pull_request:

concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  build:
    uses: huggingface/doc-builder/.github/workflows/build_pr_documentation.yml@main
    with:
      commit_sha: ${{ github.event.pull_request.head.sha }}
      pr_number: ${{ github.event.number }}
      package: diffusers
      languages: en ko


================================================
FILE: .github/workflows/delete_doc_comment.yml
================================================
name: Delete dev documentation

on:
  pull_request:
    types: [ closed ]


jobs:
  delete:
    uses: huggingface/doc-builder/.github/workflows/delete_doc_comment.yml@main
    with:
      pr_number: ${{ github.event.number }}
      package: diffusers


================================================
FILE: .github/workflows/nightly_tests.yml
================================================
name: Nightly tests on main

on:
  schedule:
    - cron: "0 0 * * *" # every day at midnight

env:
  DIFFUSERS_IS_CI: yes
  HF_HOME: /mnt/cache
  OMP_NUM_THREADS: 8
  MKL_NUM_THREADS: 8
  PYTEST_TIMEOUT: 600
  RUN_SLOW: yes
  RUN_NIGHTLY: yes

jobs:
  run_nightly_tests:
    strategy:
      fail-fast: false
      matrix:
        config:
          - name: Nightly PyTorch CUDA tests on Ubuntu
            framework: pytorch
            runner: docker-gpu
            image: diffusers/diffusers-pytorch-cuda
            report: torch_cuda
          - name: Nightly Flax TPU tests on Ubuntu
            framework: flax
            runner: docker-tpu
            image: diffusers/diffusers-flax-tpu
            report: flax_tpu
          - name: Nightly ONNXRuntime CUDA tests on Ubuntu
            framework: onnxruntime
            runner: docker-gpu
            image: diffusers/diffusers-onnxruntime-cuda
            report: onnx_cuda

    name: ${{ matrix.config.name }}

    runs-on: ${{ matrix.config.runner }}

    container:
      image: ${{ matrix.config.image }}
      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ ${{ matrix.config.runner == 'docker-tpu' && '--privileged' || '--gpus 0'}}

    defaults:
      run:
        shell: bash

    steps:
      - name: Checkout diffusers
        uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: NVIDIA-SMI
        if: ${{ matrix.config.runner == 'docker-gpu' }}
        run: |
          nvidia-smi

      - name: Install dependencies
        run: |
          python -m pip install -e .[quality,test]
          python -m pip install -U git+https://github.com/huggingface/transformers
          python -m pip install git+https://github.com/huggingface/accelerate

      - name: Environment
        run: |
          python utils/print_env.py

      - name: Run nightly PyTorch CUDA tests
        if: ${{ matrix.config.framework == 'pytorch' }}
        env:
          HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
        run: |
          python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
            -s -v -k "not Flax and not Onnx" \
            --make-reports=tests_${{ matrix.config.report }} \
            tests/

      - name: Run nightly Flax TPU tests
        if: ${{ matrix.config.framework == 'flax' }}
        env:
          HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
        run: |
          python -m pytest -n 0 \
            -s -v -k "Flax" \
            --make-reports=tests_${{ matrix.config.report }} \
            tests/

      - name: Run nightly ONNXRuntime CUDA tests
        if: ${{ matrix.config.framework == 'onnxruntime' }}
        env:
          HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
        run: |
          python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
            -s -v -k "Onnx" \
            --make-reports=tests_${{ matrix.config.report }} \
            tests/

      - name: Failure short reports
        if: ${{ failure() }}
        run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt

      - name: Test suite reports artifacts
        if: ${{ always() }}
        uses: actions/upload-artifact@v2
        with:
          name: ${{ matrix.config.report }}_test_reports
          path: reports

  run_nightly_tests_apple_m1:
    name: Nightly PyTorch MPS tests on MacOS
    runs-on: [ self-hosted, apple-m1 ]

    steps:
      - name: Checkout diffusers
        uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Clean checkout
        shell: arch -arch arm64 bash {0}
        run: |
          git clean -fxd

      - name: Setup miniconda
        uses: ./.github/actions/setup-miniconda
        with:
          python-version: 3.9

      - name: Install dependencies
        shell: arch -arch arm64 bash {0}
        run: |
          ${CONDA_RUN} python -m pip install --upgrade pip
          ${CONDA_RUN} python -m pip install -e .[quality,test]
          ${CONDA_RUN} python -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu
          ${CONDA_RUN} python -m pip install git+https://github.com/huggingface/accelerate

      - name: Environment
        shell: arch -arch arm64 bash {0}
        run: |
          ${CONDA_RUN} python utils/print_env.py

      - name: Run nightly PyTorch tests on M1 (MPS)
        shell: arch -arch arm64 bash {0}
        env:
          HF_HOME: /System/Volumes/Data/mnt/cache
          HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
        run: |
          ${CONDA_RUN} python -m pytest -n 1 -s -v --make-reports=tests_torch_mps tests/

      - name: Failure short reports
        if: ${{ failure() }}
        run: cat reports/tests_torch_mps_failures_short.txt

      - name: Test suite reports artifacts
        if: ${{ always() }}
        uses: actions/upload-artifact@v2
        with:
          name: torch_mps_test_reports
          path: reports
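
The RUN_SLOW and RUN_NIGHTLY variables set at the top of this workflow do not change pytest itself; they are read by the test suite to decide which tests to skip. A minimal sketch of such an environment gate, assuming a plain unittest setup (diffusers keeps its real decorators in src/diffusers/utils/testing_utils.py; the names below are hypothetical):

import os
import unittest

def parse_flag(key: str) -> bool:
    # Treat "yes", "true" or "1" (any case) as enabled; default is disabled.
    return os.getenv(key, "no").lower() in ("yes", "true", "1")

# Hypothetical decorator mirroring a slow-test gate.
slow = unittest.skipUnless(parse_flag("RUN_SLOW"), "slow test; set RUN_SLOW=yes to run")

class PipelineIntegrationTests(unittest.TestCase):
    @slow
    def test_full_pipeline(self):
        ...  # expensive end-to-end check, collected only when RUN_SLOW=yes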


================================================
FILE: .github/workflows/pr_quality.yml
================================================
name: Run code quality checks

on:
  pull_request:
    branches:
      - main
  push:
    branches:
      - main

concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  check_code_quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.7"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install .[quality]
      - name: Check quality
        run: |
          black --check examples tests src utils scripts
          ruff examples tests src utils scripts
          doc-builder style src/diffusers docs/source --max_len 119 --check_only --path_to_docs docs/source

  check_repository_consistency:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.7"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install .[quality]
      - name: Check quality
        run: |
          python utils/check_copies.py
          python utils/check_dummies.py
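
The consistency job runs utils/check_copies.py, which verifies that code blocks marked with the repository's "# Copied from ..." comments still match their source. The real script works on file text; the sketch below only illustrates the idea, using inspect to compare two function bodies (function names are hypothetical):

import inspect

def is_faithful_copy(copy_fn, source_fn) -> bool:
    """Return True if a function marked '# Copied from' still matches its source."""
    copy_body = [l.strip() for l in inspect.getsource(copy_fn).splitlines()[1:]]
    source_body = [l.strip() for l in inspect.getsource(source_fn).splitlines()[1:]]
    # Compare line by line, ignoring the def line and indentation.
    return copy_body == source_body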


================================================
FILE: .github/workflows/pr_tests.yml
================================================
name: Fast tests for PRs

on:
  pull_request:
    branches:
      - main

concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

env:
  DIFFUSERS_IS_CI: yes
  OMP_NUM_THREADS: 4
  MKL_NUM_THREADS: 4
  PYTEST_TIMEOUT: 60

jobs:
  run_fast_tests:
    strategy:
      fail-fast: false
      matrix:
        config:
          - name: Fast PyTorch CPU tests on Ubuntu
            framework: pytorch
            runner: docker-cpu
            image: diffusers/diffusers-pytorch-cpu
            report: torch_cpu
          - name: Fast Flax CPU tests on Ubuntu
            framework: flax
            runner: docker-cpu
            image: diffusers/diffusers-flax-cpu
            report: flax_cpu
          - name: Fast ONNXRuntime CPU tests on Ubuntu
            framework: onnxruntime
            runner: docker-cpu
            image: diffusers/diffusers-onnxruntime-cpu
            report: onnx_cpu
          - name: PyTorch Example CPU tests on Ubuntu
            framework: pytorch_examples
            runner: docker-cpu
            image: diffusers/diffusers-pytorch-cpu
            report: torch_cpu

    name: ${{ matrix.config.name }}

    runs-on: ${{ matrix.config.runner }}

    container:
      image: ${{ matrix.config.image }}
      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/

    defaults:
      run:
        shell: bash

    steps:
    - name: Checkout diffusers
      uses: actions/checkout@v3
      with:
        fetch-depth: 2

    - name: Install dependencies
      run: |
        apt-get update && apt-get install libsndfile1-dev -y
        python -m pip install -e .[quality,test]
        python -m pip install -U git+https://github.com/huggingface/transformers
        python -m pip install git+https://github.com/huggingface/accelerate

    - name: Environment
      run: |
        python utils/print_env.py

    - name: Run fast PyTorch CPU tests
      if: ${{ matrix.config.framework == 'pytorch' }}
      run: |
        python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
          -s -v -k "not Flax and not Onnx" \
          --make-reports=tests_${{ matrix.config.report }} \
          tests/

    - name: Run fast Flax TPU tests
      if: ${{ matrix.config.framework == 'flax' }}
      run: |
        python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
          -s -v -k "Flax" \
          --make-reports=tests_${{ matrix.config.report }} \
          tests/

    - name: Run fast ONNXRuntime CPU tests
      if: ${{ matrix.config.framework == 'onnxruntime' }}
      run: |
        python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
          -s -v -k "Onnx" \
          --make-reports=tests_${{ matrix.config.report }} \
          tests/

    - name: Run example PyTorch CPU tests
      if: ${{ matrix.config.framework == 'pytorch_examples' }}
      run: |
        python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
          --make-reports=tests_${{ matrix.config.report }} \
          examples/test_examples.py 

    - name: Failure short reports
      if: ${{ failure() }}
      run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt

    - name: Test suite reports artifacts
      if: ${{ always() }}
      uses: actions/upload-artifact@v2
      with:
        name: pr_${{ matrix.config.report }}_test_reports
        path: reports

  run_fast_tests_apple_m1:
    name: Fast PyTorch MPS tests on MacOS
    runs-on: [ self-hosted, apple-m1 ]

    steps:
    - name: Checkout diffusers
      uses: actions/checkout@v3
      with:
        fetch-depth: 2

    - name: Clean checkout
      shell: arch -arch arm64 bash {0}
      run: |
        git clean -fxd

    - name: Setup miniconda
      uses: ./.github/actions/setup-miniconda
      with:
        python-version: 3.9

    - name: Install dependencies
      shell: arch -arch arm64 bash {0}
      run: |
        ${CONDA_RUN} python -m pip install --upgrade pip
        ${CONDA_RUN} python -m pip install -e .[quality,test]
        ${CONDA_RUN} python -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu
        ${CONDA_RUN} python -m pip install git+https://github.com/huggingface/accelerate
        ${CONDA_RUN} python -m pip install -U git+https://github.com/huggingface/transformers

    - name: Environment
      shell: arch -arch arm64 bash {0}
      run: |
        ${CONDA_RUN} python utils/print_env.py

    - name: Run fast PyTorch tests on M1 (MPS)
      shell: arch -arch arm64 bash {0}
      env:
        HF_HOME: /System/Volumes/Data/mnt/cache
        HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
      run: |
        ${CONDA_RUN} python -m pytest -n 0 -s -v --make-reports=tests_torch_mps tests/

    - name: Failure short reports
      if: ${{ failure() }}
      run: cat reports/tests_torch_mps_failures_short.txt

    - name: Test suite reports artifacts
      if: ${{ always() }}
      uses: actions/upload-artifact@v2
      with:
        name: pr_torch_mps_test_reports
        path: reports
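
The -k expressions used above ("Flax", "Onnx", "not Flax and not Onnx") select tests purely by substring match on test names, so the split between frameworks relies on a naming convention. A sketch of that convention with illustrative (hypothetical) class names:

import unittest

# pytest -k "Flax" collects this class because "Flax" appears in its name ...
class FlaxStableDiffusionPipelineFastTests(unittest.TestCase):
    def test_inference(self):
        ...

# ... while -k "not Flax and not Onnx" collects this PyTorch one instead.
class StableDiffusionPipelineFastTests(unittest.TestCase):
    def test_inference(self):
        ...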


================================================
FILE: .github/workflows/push_tests.yml
================================================
name: Slow tests on main

on:
  push:
    branches:
      - main

env:
  DIFFUSERS_IS_CI: yes
  HF_HOME: /mnt/cache
  OMP_NUM_THREADS: 8
  MKL_NUM_THREADS: 8
  PYTEST_TIMEOUT: 600
  RUN_SLOW: yes

jobs:
  run_slow_tests:
    strategy:
      fail-fast: false
      matrix:
        config:
          - name: Slow PyTorch CUDA tests on Ubuntu
            framework: pytorch
            runner: docker-gpu
            image: diffusers/diffusers-pytorch-cuda
            report: torch_cuda
          - name: Slow Flax TPU tests on Ubuntu
            framework: flax
            runner: docker-tpu
            image: diffusers/diffusers-flax-tpu
            report: flax_tpu
          - name: Slow ONNXRuntime CUDA tests on Ubuntu
            framework: onnxruntime
            runner: docker-gpu
            image: diffusers/diffusers-onnxruntime-cuda
            report: onnx_cuda

    name: ${{ matrix.config.name }}

    runs-on: ${{ matrix.config.runner }}

    container:
      image: ${{ matrix.config.image }}
      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ ${{ matrix.config.runner == 'docker-tpu' && '--privileged' || '--gpus 0'}}

    defaults:
      run:
        shell: bash

    steps:
    - name: Checkout diffusers
      uses: actions/checkout@v3
      with:
        fetch-depth: 2

    - name: NVIDIA-SMI
      if: ${{ matrix.config.runner == 'docker-gpu' }}
      run: |
        nvidia-smi

    - name: Install dependencies
      run: |
        python -m pip install -e .[quality,test]
        python -m pip install -U git+https://github.com/huggingface/transformers
        python -m pip install git+https://github.com/huggingface/accelerate

    - name: Environment
      run: |
        python utils/print_env.py

    - name: Run slow PyTorch CUDA tests
      if: ${{ matrix.config.framework == 'pytorch' }}
      env:
        HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
      run: |
        python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
          -s -v -k "not Flax and not Onnx" \
          --make-reports=tests_${{ matrix.config.report }} \
          tests/

    - name: Run slow Flax TPU tests
      if: ${{ matrix.config.framework == 'flax' }}
      env:
        HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
      run: |
        python -m pytest -n 0 \
          -s -v -k "Flax" \
          --make-reports=tests_${{ matrix.config.report }} \
          tests/

    - name: Run slow ONNXRuntime CUDA tests
      if: ${{ matrix.config.framework == 'onnxruntime' }}
      env:
        HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
      run: |
        python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
          -s -v -k "Onnx" \
          --make-reports=tests_${{ matrix.config.report }} \
          tests/

    - name: Failure short reports
      if: ${{ failure() }}
      run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt

    - name: Test suite reports artifacts
      if: ${{ always() }}
      uses: actions/upload-artifact@v2
      with:
        name: ${{ matrix.config.report }}_test_reports
        path: reports

  run_examples_tests:
    name: Examples PyTorch CUDA tests on Ubuntu

    runs-on: docker-gpu

    container:
      image: diffusers/diffusers-pytorch-cuda
      options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/

    steps:
    - name: Checkout diffusers
      uses: actions/checkout@v3
      with:
        fetch-depth: 2

    - name: NVIDIA-SMI
      run: |
        nvidia-smi

    - name: Install dependencies
      run: |
        python -m pip install -e .[quality,test,training]
        python -m pip install git+https://github.com/huggingface/accelerate
        python -m pip install -U git+https://github.com/huggingface/transformers

    - name: Environment
      run: |
        python utils/print_env.py

    - name: Run example tests on GPU
      env:
        HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
      run: |
        python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v --make-reports=examples_torch_cuda examples/

    - name: Failure short reports
      if: ${{ failure() }}
      run: cat reports/examples_torch_cuda_failures_short.txt

    - name: Test suite reports artifacts
      if: ${{ always() }}
      uses: actions/upload-artifact@v2
      with:
        name: examples_test_reports
        path: reports


================================================
FILE: .github/workflows/push_tests_fast.yml
================================================
name: Fast tests on main

on:
  push:
    branches:
      - main

env:
  DIFFUSERS_IS_CI: yes
  HF_HOME: /mnt/cache
  OMP_NUM_THREADS: 8
  MKL_NUM_THREADS: 8
  PYTEST_TIMEOUT: 600
  RUN_SLOW: no

jobs:
  run_fast_tests:
    strategy:
      fail-fast: false
      matrix:
        config:
          - name: Fast PyTorch CPU tests on Ubuntu
            framework: pytorch
            runner: docker-cpu
            image: diffusers/diffusers-pytorch-cpu
            report: torch_cpu
          - name: Fast Flax CPU tests on Ubuntu
            framework: flax
            runner: docker-cpu
            image: diffusers/diffusers-flax-cpu
            report: flax_cpu
          - name: Fast ONNXRuntime CPU tests on Ubuntu
            framework: onnxruntime
            runner: docker-cpu
            image: diffusers/diffusers-onnxruntime-cpu
            report: onnx_cpu
          - name: PyTorch Example CPU tests on Ubuntu
            framework: pytorch_examples
            runner: docker-cpu
            image: diffusers/diffusers-pytorch-cpu
            report: torch_cpu

    name: ${{ matrix.config.name }}

    runs-on: ${{ matrix.config.runner }}

    container:
      image: ${{ matrix.config.image }}
      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/

    defaults:
      run:
        shell: bash

    steps:
    - name: Checkout diffusers
      uses: actions/checkout@v3
      with:
        fetch-depth: 2

    - name: Install dependencies
      run: |
        apt-get update && apt-get install libsndfile1-dev -y
        python -m pip install -e .[quality,test]
        python -m pip install -U git+https://github.com/huggingface/transformers
        python -m pip install git+https://github.com/huggingface/accelerate

    - name: Environment
      run: |
        python utils/print_env.py

    - name: Run fast PyTorch CPU tests
      if: ${{ matrix.config.framework == 'pytorch' }}
      run: |
        python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
          -s -v -k "not Flax and not Onnx" \
          --make-reports=tests_${{ matrix.config.report }} \
          tests/

    - name: Run fast Flax CPU tests
      if: ${{ matrix.config.framework == 'flax' }}
      run: |
        python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
          -s -v -k "Flax" \
          --make-reports=tests_${{ matrix.config.report }} \
          tests/

    - name: Run fast ONNXRuntime CPU tests
      if: ${{ matrix.config.framework == 'onnxruntime' }}
      run: |
        python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
          -s -v -k "Onnx" \
          --make-reports=tests_${{ matrix.config.report }} \
          tests/

    - name: Run example PyTorch CPU tests
      if: ${{ matrix.config.framework == 'pytorch_examples' }}
      run: |
        python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
          --make-reports=tests_${{ matrix.config.report }} \
          examples/test_examples.py 

    - name: Failure short reports
      if: ${{ failure() }}
      run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt

    - name: Test suite reports artifacts
      if: ${{ always() }}
      uses: actions/upload-artifact@v2
      with:
        name: pr_${{ matrix.config.report }}_test_reports
        path: reports

  run_fast_tests_apple_m1:
    name: Fast PyTorch MPS tests on MacOS
    runs-on: [ self-hosted, apple-m1 ]

    steps:
    - name: Checkout diffusers
      uses: actions/checkout@v3
      with:
        fetch-depth: 2

    - name: Clean checkout
      shell: arch -arch arm64 bash {0}
      run: |
        git clean -fxd

    - name: Setup miniconda
      uses: ./.github/actions/setup-miniconda
      with:
        python-version: 3.9

    - name: Install dependencies
      shell: arch -arch arm64 bash {0}
      run: |
        ${CONDA_RUN} python -m pip install --upgrade pip
        ${CONDA_RUN} python -m pip install -e .[quality,test]
        ${CONDA_RUN} python -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu
        ${CONDA_RUN} python -m pip install git+https://github.com/huggingface/accelerate
        ${CONDA_RUN} python -m pip install -U git+https://github.com/huggingface/transformers

    - name: Environment
      shell: arch -arch arm64 bash {0}
      run: |
        ${CONDA_RUN} python utils/print_env.py

    - name: Run fast PyTorch tests on M1 (MPS)
      shell: arch -arch arm64 bash {0}
      env:
        HF_HOME: /System/Volumes/Data/mnt/cache
        HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
      run: |
        ${CONDA_RUN} python -m pytest -n 0 -s -v --make-reports=tests_torch_mps tests/

    - name: Failure short reports
      if: ${{ failure() }}
      run: cat reports/tests_torch_mps_failures_short.txt

    - name: Test suite reports artifacts
      if: ${{ always() }}
      uses: actions/upload-artifact@v2
      with:
        name: pr_torch_mps_test_reports
        path: reports


================================================
FILE: .github/workflows/stale.yml
================================================
name: Stale Bot

on:
  schedule:
    - cron: "0 15 * * *"

jobs:
  close_stale_issues:
    name: Close Stale Issues
    if: github.repository == 'huggingface/diffusers'
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
    - uses: actions/checkout@v2

    - name: Setup Python
      uses: actions/setup-python@v1
      with:
        python-version: 3.7

    - name: Install requirements
      run: |
        pip install PyGithub
    - name: Close stale issues
      run: |
        python utils/stale.py


================================================
FILE: .github/workflows/typos.yml
================================================
name: Check typos

on:
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - name: typos-action
        uses: crate-ci/typos@v1.12.4


================================================
FILE: .gitignore
================================================
# Initially taken from Github's Python gitignore file

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# tests and logs
tests/fixtures/cached_*_text.txt
logs/
lightning_logs/
lang_code_data/

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# vscode
.vs
.vscode

# Pycharm
.idea

# TF code
tensorflow_code

# Models
proc_data

# examples
runs
/runs_old
/wandb
/examples/runs
/examples/**/*.args
/examples/rag/sweep

# data
/data
serialization_dir

# emacs
*.*~
debug.env

# vim
.*.swp

#ctags
tags

# pre-commit
.pre-commit*

# .lock
*.lock

# DS_Store (MacOS)
.DS_Store
# RL pipelines may produce mp4 outputs
*.mp4

# dependencies
/transformers

# ruff
.ruff_cache


================================================
FILE: CITATION.cff
================================================
cff-version: 1.2.0
title: 'Diffusers: State-of-the-art diffusion models'
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Patrick
    family-names: von Platen
  - given-names: Suraj
    family-names: Patil
  - given-names: Anton
    family-names: Lozhkov
  - given-names: Pedro
    family-names: Cuenca
  - given-names: Nathan
    family-names: Lambert
  - given-names: Kashif
    family-names: Rasul
  - given-names: Mishig
    family-names: Davaadorj
  - given-names: Thomas
    family-names: Wolf
repository-code: 'https://github.com/huggingface/diffusers'
abstract: >-
  Diffusers provides pretrained diffusion models across
  multiple modalities, such as vision and audio, and serves
  as a modular toolbox for inference and training of
  diffusion models.
keywords:
  - deep-learning
  - pytorch
  - image-generation
  - diffusion
  - text2image
  - image2image
  - score-based-generative-modeling
  - stable-diffusion
license: Apache-2.0
version: 0.12.1


================================================
FILE: CODE_OF_CONDUCT.md
================================================

# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
  overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or
  advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
  address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
feedback@huggingface.co.
All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series
of actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within
the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.


================================================
FILE: CONTRIBUTING.md
================================================
<!---
Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# How to contribute to diffusers?

Everyone is welcome to contribute, and we value everybody's contribution. Code
is thus not the only way to help the community. Answering questions, helping
others, reaching out, and improving the documentation are immensely valuable
contributions as well.

It also helps us if you spread the word: reference the library from blog posts
on the awesome projects it made possible, shout out on Twitter every time it has
helped you, or simply star the repo to say "thank you".

Whichever way you choose to contribute, please be mindful to respect our
[code of conduct](https://github.com/huggingface/diffusers/blob/main/CODE_OF_CONDUCT.md).

## You can contribute in so many ways!

There are 4 ways you can contribute to diffusers:
* Fixing outstanding issues with the existing code;
* Implementing [new diffusion pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines#contribution), [new schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) or [new models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models);
* [Contributing to the examples](https://github.com/huggingface/diffusers/tree/main/examples) or to the documentation;
* Submitting issues related to bugs or desired new features.

In particular, there is a special [Good First Issue](https://github.com/huggingface/diffusers/contribute) listing.
It gives you a list of open issues that anybody can work on. Just comment on the issue that you'd like to work on it.
The same listing also contains issues with the `Good Second Issue` label. These are
typically slightly more involved than the ones labeled `Good First Issue`. But if you
feel you know what you're doing, go for it.

*All are equally valuable to the community.*

## Submitting a new issue or feature request

Do your best to follow these guidelines when submitting an issue or a feature
request. It will make it easier for us to come back to you quickly and with good
feedback.

### Did you find a bug?

The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of
the problems they encounter. So thank you for reporting an issue.

First, we would really appreciate it if you could **make sure the bug was not
already reported** (use the search bar on Github under Issues).

### Do you want to implement a new diffusion pipeline / diffusion model?

Awesome! Please provide the following information:

* Short description of the diffusion pipeline and link to the paper;
* Link to the implementation if it is open-source;
* Link to the model weights if they are available.

If you are willing to contribute the model yourself, let us know so we can best
guide you.

### Do you want a new feature (that is not a model)?

A world-class feature request addresses the following points:

1. Motivation first:
  * Is it related to a problem/frustration with the library? If so, please explain
    why. Providing a code snippet that demonstrates the problem is best.
  * Is it related to something you would need for a project? We'd love to hear
    about it!
  * Is it something you worked on and think could benefit the community?
    Awesome! Tell us what problem it solved for you.
2. Write a *full paragraph* describing the feature;
3. Provide a **code snippet** that demonstrates its future use;
4. In case this is related to a paper, please attach a link;
5. Attach any additional information (drawings, screenshots, etc.) you think may help.

If your issue is well written, we're already 80% of the way there by the time you
post it.

## Start contributing! (Pull Requests)

Before writing code, we strongly advise you to search through the existing PRs or
issues to make sure that nobody is already working on the same thing. If you are
unsure, it is always a good idea to open an issue to get some feedback.

You will need basic `git` proficiency to be able to contribute to
🧨 Diffusers. `git` is not the easiest tool to use but it has the greatest
manual. Type `git --help` in a shell and enjoy. If you prefer books, [Pro
Git](https://git-scm.com/book/en/v2) is a very good reference.

Follow these steps to start contributing ([supported Python versions](https://github.com/huggingface/diffusers/blob/main/setup.py#L426)):

1. Fork the [repository](https://github.com/huggingface/diffusers) by
   clicking on the 'Fork' button on the repository's page. This creates a copy of the code
   under your GitHub user account.

2. Clone your fork to your local disk, and add the base repository as a remote:

   ```bash
   $ git clone git@github.com:<your Github handle>/diffusers.git
   $ cd diffusers
   $ git remote add upstream https://github.com/huggingface/diffusers.git
   ```

3. Create a new branch to hold your development changes:

   ```bash
   $ git checkout -b a-descriptive-name-for-my-changes
   ```

   **Do not** work on the `main` branch.

4. Set up a development environment by running the following command in a virtual environment:

   ```bash
   $ pip install -e ".[dev]"
   ```

   (If diffusers was already installed in the virtual environment, remove
   it with `pip uninstall diffusers` before reinstalling it in editable
   mode with the `-e` flag.)

   To run the full test suite, you might need the additional dependencies on `transformers` and `datasets`, which require a separate source
   install:

   ```bash
   $ git clone https://github.com/huggingface/transformers
   $ cd transformers
   $ pip install -e .
   ```

   ```bash
   $ git clone https://github.com/huggingface/datasets
   $ cd datasets
   $ pip install -e .
   ```

   If you have already cloned those repos, you might need to `git pull` to get the most recent changes in the `transformers`
   and `datasets` libraries.

5. Develop the features on your branch.

   As you work on the features, you should make sure that the test suite
   passes. You should run the tests impacted by your changes like this:

   ```bash
   $ pytest tests/<TEST_TO_RUN>.py
   ```

   You can also run the full suite with the following command, but now that
   Diffusers has grown a lot, it takes a beefy machine to produce a result in a
   decent amount of time:

   ```bash
   $ make test
   ```

   For more information about tests, check out the
   [dedicated documentation](https://huggingface.co/docs/diffusers/testing)

   🧨 Diffusers relies on `black` and `isort` to format its source code
   consistently. After you make changes, apply automatic style corrections and the
   code verifications that can't be automated, all in one go, with:

   ```bash
   $ make style
   ```

   🧨 Diffusers also uses `ruff` and a few custom scripts to check for coding mistakes. Quality
   control runs in CI; however, you can also run the same checks locally with:

   ```bash
   $ make quality
   ```

   Once you're happy with your changes, add changed files using `git add` and
   make a commit with `git commit` to record your changes locally:

   ```bash
   $ git add modified_file.py
   $ git commit
   ```

   It is a good idea to sync your copy of the code with the original
   repository regularly. This way you can quickly account for changes:

   ```bash
   $ git fetch upstream
   $ git rebase upstream/main
   ```

   Push the changes to your account using:

   ```bash
   $ git push -u origin a-descriptive-name-for-my-changes
   ```

6. Once you are satisfied (**and the checklist below is happy too**), go to the
   webpage of your fork on GitHub. Click on 'Pull request' to send your changes
   to the project maintainers for review.

7. It's ok if maintainers ask you for changes. It happens to core contributors
   too! So everyone can see the changes in the Pull request, work in your local
   branch and push the changes to your fork. They will automatically appear in
   the pull request.


### Checklist

1. The title of your pull request should be a summary of its contribution;
2. If your pull request addresses an issue, please mention the issue number in
   the pull request description to make sure the two are linked (and so that people
   consulting the issue know you are working on it);
3. To indicate a work in progress please prefix the title with `[WIP]`. These
   are useful to avoid duplicated work, and to differentiate it from PRs ready
   to be merged;
4. Make sure existing tests pass;
5. Add high-coverage tests. No quality testing = no merge.
   - If you are adding new `@slow` tests, make sure they pass using
     `RUN_SLOW=1 python -m pytest tests/test_my_new_model.py`.
   - If you are adding a new tokenizer, write tests, and make sure
     `RUN_SLOW=1 python -m pytest tests/test_tokenization_{your_model_name}.py` passes.
   CircleCI does not run the slow tests, but GitHub Actions does every night!
6. All public methods must have informative docstrings that work nicely with sphinx. See `modeling_bert.py` for an
   example.
7. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. Instead, we prefer to place such files in a hf.co hosted `dataset`, like
   the ones hosted on [`hf-internal-testing`](https://huggingface.co/hf-internal-testing), and reference
   them by URL. We recommend putting them in the following dataset: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images).
   If you are an external contributor, feel free to add the images to your PR and ask a Hugging Face member to migrate them
   to this dataset.

### Tests

An extensive test suite is included to test the library behavior and several examples. Library tests can be found in
the [tests folder](https://github.com/huggingface/diffusers/tree/main/tests).

We like `pytest` and `pytest-xdist` because running tests in parallel is faster. From the root of the
repository, here's how to run tests with `pytest` for the library:

```bash
$ python -m pytest -n auto --dist=loadfile -s -v ./tests/
```

In fact, that's how `make test` is implemented (sans the `pip install` line)!

You can specify a smaller set of tests in order to test only the feature
you're working on.
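
For example (the test paths below are illustrative; pick the modules that match your change):

```bash
# run a single test module
$ python -m pytest tests/test_scheduler.py

# run only the tests whose names match a keyword expression
$ python -m pytest -k "ddim" ./tests/
```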

By default, slow tests are skipped. Set the `RUN_SLOW` environment variable to
`yes` to run them. This will download many gigabytes of models — make sure you
have enough disk space and a good Internet connection, or a lot of patience!

```bash
$ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/
```

`unittest` is fully supported as well. Here's how to run tests with
`unittest`:

```bash
$ python -m unittest discover -s tests -t . -v
$ python -m unittest discover -s examples -t examples -v
```


### Style guide

For documentation strings, 🧨 Diffusers follows the [Google style](https://google.github.io/styleguide/pyguide.html).
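
As a minimal, hypothetical illustration (the helper below is not part of the library):

```python
def rescale_latents(latents, scaling_factor=0.18215):
    """Rescales latents before decoding them with a VAE.

    Args:
        latents (`torch.FloatTensor`): The latent tensor to rescale.
        scaling_factor (`float`, *optional*, defaults to 0.18215):
            The factor the latents are divided by before decoding.

    Returns:
        `torch.FloatTensor`: The rescaled latents.
    """
    return latents / scaling_factor
```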

**This guide was heavily inspired by the awesome [scikit-learn guide to contributing](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md).**

### Syncing forked main with upstream (HuggingFace) main

To avoid pinging the upstream repository, which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs,
please follow these steps when syncing the main branch of a forked repository:
1. When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead merge directly into the forked main.
2. If a PR is absolutely necessary, use the following steps after checking out your branch:
```
$ git checkout -b your-branch-for-syncing
$ git pull --squash --no-commit upstream main
$ git commit -m '<your message without GitHub references>'
$ git push --set-upstream origin your-branch-for-syncing
```


================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: MANIFEST.in
================================================
include LICENSE
include src/diffusers/utils/model_card_template.md


================================================
FILE: Makefile
================================================
.PHONY: deps_table_update modified_only_fixup extra_style_checks quality style fixup fix-copies test test-examples

# make sure to test the local checkout in scripts and not the pre-installed one (don't use quotes!)
export PYTHONPATH = src

check_dirs := examples scripts src tests utils

modified_only_fixup:
	$(eval modified_py_files := $(shell python utils/get_modified_files.py $(check_dirs)))
	@if test -n "$(modified_py_files)"; then \
		echo "Checking/fixing $(modified_py_files)"; \
		black $(modified_py_files); \
		ruff $(modified_py_files); \
	else \
		echo "No library .py files were modified"; \
	fi

# Update src/diffusers/dependency_versions_table.py

deps_table_update:
	@python setup.py deps_table_update

deps_table_check_updated:
	@md5sum src/diffusers/dependency_versions_table.py > md5sum.saved
	@python setup.py deps_table_update
	@md5sum -c --quiet md5sum.saved || (printf "\nError: the version dependency table is outdated.\nPlease run 'make fixup' or 'make style' and commit the changes.\n\n" && exit 1)
	@rm md5sum.saved

# autogenerating code

autogenerate_code: deps_table_update

# Check that the repo is in a good state

repo-consistency:
	python utils/check_dummies.py
	python utils/check_repo.py
	python utils/check_inits.py

# this target runs checks on all files

quality:
	black --check $(check_dirs)
	ruff $(check_dirs)
	doc-builder style src/diffusers docs/source --max_len 119 --check_only --path_to_docs docs/source
	python utils/check_doc_toc.py

# Format source code automatically and check if there are any problems left that need manual fixing

extra_style_checks:
	python utils/custom_init_isort.py
	doc-builder style src/diffusers docs/source --max_len 119 --path_to_docs docs/source
	python utils/check_doc_toc.py --fix_and_overwrite

# this target runs checks on all files and potentially modifies some of them

style:
	black $(check_dirs)
	ruff $(check_dirs) --fix
	${MAKE} autogenerate_code
	${MAKE} extra_style_checks

# Super fast fix and check target that only works on relevant modified files since the branch was made

fixup: modified_only_fixup extra_style_checks autogenerate_code repo-consistency

# Make marked copies of snippets of codes conform to the original

fix-copies:
	python utils/check_copies.py --fix_and_overwrite
	python utils/check_dummies.py --fix_and_overwrite

# Run tests for the library

test:
	python -m pytest -n auto --dist=loadfile -s -v ./tests/

# Run tests for examples

test-examples:
	python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/


# Release stuff

pre-release:
	python utils/release.py

pre-patch:
	python utils/release.py --patch

post-release:
	python utils/release.py --post_release

post-patch:
	python utils/release.py --post_release --patch


================================================
FILE: README.md
================================================
<p align="center">
    <br>
    <img src="./docs/source/en/imgs/diffusers_library.jpg" width="400"/>
    <br>
</p>
<p align="center">
    <a href="https://github.com/huggingface/diffusers/blob/main/LICENSE">
        <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/diffusers.svg?color=blue">
    </a>
    <a href="https://github.com/huggingface/diffusers/releases">
        <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/diffusers.svg">
    </a>
    <a href="CODE_OF_CONDUCT.md">
        <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg">
    </a>
</p>

🤗 Diffusers provides pretrained diffusion models across multiple modalities, such as vision and audio, and serves
as a modular toolbox for inference and training of diffusion models.

More precisely, 🤗 Diffusers offers:

- State-of-the-art diffusion pipelines that can be run in inference with just a couple of lines of code (see [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines)). Check [this overview](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/README.md#pipelines-summary) to see all supported pipelines and their corresponding official papers. A minimal inference sketch follows this list.
- Various noise schedulers that can be used interchangeably for the preferred speed vs. quality trade-off in inference (see [src/diffusers/schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers)).
- Multiple types of models, such as UNet, that can be used as building blocks in an end-to-end diffusion system (see [src/diffusers/models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models)).
- Training examples to show how to train the most popular diffusion model tasks (see [examples](https://github.com/huggingface/diffusers/tree/main/examples), *e.g.* [unconditional-image-generation](https://github.com/huggingface/diffusers/tree/main/examples/unconditional_image_generation)).
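
As the minimal inference sketch mentioned above (using `google/ddpm-cat-256` as an assumed example checkpoint; any pipeline hosted on the Hub loads the same way):

```python
from diffusers import DDPMPipeline

# download a pretrained unconditional pipeline and generate one image
pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256")
image = pipe(num_inference_steps=50).images[0]
image.save("cat.png")
```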

## Installation

### For PyTorch

**With `pip`** (official package)
    
```bash
pip install --upgrade diffusers[torch]
```

**With `conda`** (maintained by the community)

```sh
conda install -c conda-forge diffusers
```

### For Flax

**With `pip`**

```bash
pip install --upgrade diffusers[flax]
```

**Apple Silicon (M1/M2) support**

Please refer to [the documentation](https://huggingface.co/docs/diffusers/optimization/mps). A minimal sketch is shown below.
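
This is only a sketch, assuming a machine with PyTorch MPS support; see the linked documentation for the full list of caveats:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")

# attention slicing reduces peak memory usage, which helps on Apple Silicon
pipe.enable_attention_slicing()

prompt = "a photo of an astronaut riding a horse on mars"
# one-step warmup pass, a documented workaround for a first-inference issue on MPS
_ = pipe(prompt, num_inference_steps=1)
image = pipe(prompt).images[0]
```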

## Contributing

We ❤️  contributions from the open-source community! 
If you want to contribute to this library, please check out our [Contribution guide](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md).
You can look through the open [issues](https://github.com/huggingface/diffusers/issues) for ones you'd like to tackle.
- See [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) for general opportunities to contribute
- See [New model/pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) to contribute exciting new diffusion models / diffusion pipelines
- See [New scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22)

Also, say 👋 in our public Discord channel <a href="https://discord.gg/G7tWnz98XR"><img alt="Join us on Discord" src="https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white"></a>. We discuss the hottest trends about diffusion models, help each other with contributions and personal projects, or
just hang out ☕.

## Quickstart

In order to get started, we recommend taking a look at two notebooks:

- The [Getting started with Diffusers](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) notebook, which showcases an end-to-end example of usage for diffusion models, schedulers and pipelines.
  Take a look at this notebook to learn how to use the pipeline abstraction, which takes care of everything (model, scheduler, noise handling) for you, and also to understand each independent building block in the library.
- The [Training a diffusers model](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) notebook summarizes diffusion model training methods. This notebook takes a step-by-step approach to training your
  diffusion models on an image dataset, with explanatory graphics.
  
## Stable Diffusion is fully compatible with `diffusers`!  

Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [LAION](https://laion.ai/) and [RunwayML](https://runwayml.com/). It's trained on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 4GB VRAM.
See the [model card](https://huggingface.co/CompVis/stable-diffusion) for more information.


### Text-to-Image generation with Stable Diffusion

First let's install

```bash
pip install --upgrade diffusers transformers accelerate
```

We recommend using the model in [half-precision (`fp16`)](https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/) as it almost always gives the same results as full
precision while being roughly twice as fast and requiring half the amount of GPU RAM.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]  
```

#### Running the model locally

You can also simply download the model folder and pass the path to the local folder to the `StableDiffusionPipeline`.

```bash
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
```

Assuming the folder is stored locally under `./stable-diffusion-v1-5`, you can run stable diffusion
as follows:

```python
pipe = StableDiffusionPipeline.from_pretrained("./stable-diffusion-v1-5")
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]  
```

If you are limited by GPU memory, you might want to consider chunking the attention computation in addition 
to using `fp16`.
The following snippet should require less than 4GB of VRAM.

```python
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_attention_slicing()
image = pipe(prompt).images[0]  
```

If you wish to use a different scheduler (e.g. DDIM, LMS, PNDM/PLMS), you can either pass it
to `from_pretrained` via the `scheduler` argument or swap it on an existing pipeline:
    
```python
from diffusers import LMSDiscreteScheduler

# `pipe` is the StableDiffusionPipeline instantiated in the snippets above
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]  
    
image.save("astronaut_rides_horse.png")
```

If you want to run Stable Diffusion on CPU or you want to have maximum precision on GPU, 
please run the model in the default *full-precision* setting:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# disable the following line if you run on CPU
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]  
    
image.save("astronaut_rides_horse.png")
```

### JAX/Flax

Diffusers offers a JAX / Flax implementation of Stable Diffusion for very fast inference. JAX shines especially on TPU hardware because each TPU server has 8 accelerators working in parallel, but it runs great on GPUs too.

Running the pipeline with the default PNDMScheduler:

```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard

from diffusers import FlaxStableDiffusionPipeline

pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", revision="flax", dtype=jax.numpy.bfloat16
)

prompt = "a photo of an astronaut riding a horse on mars"

prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50

num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)

# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids)

images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```

**Note**:
If you are limited by TPU memory, please make sure to load the `FlaxStableDiffusionPipeline` in `bfloat16` precision instead of the default `float32` precision as done above. You can do so by telling diffusers to load the weights from the `bf16` branch.

```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard

from diffusers import FlaxStableDiffusionPipeline

pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16
)

prompt = "a photo of an astronaut riding a horse on mars"

prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50

num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)

# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids)

images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```

Diffusers also has an Image-to-Image generation pipeline with Flax/JAX:
```python
import jax
import numpy as np
import jax.numpy as jnp
from flax.jax_utils import replicate
from flax.training.common_utils import shard
import requests
from io import BytesIO
from PIL import Image
from diffusers import FlaxStableDiffusionImg2ImgPipeline

def create_key(seed=0):
    return jax.random.PRNGKey(seed)
rng = create_key(0)

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
init_img = Image.open(BytesIO(response.content)).convert("RGB")
init_img = init_img.resize((768, 512))

prompts = "A fantasy landscape, trending on artstation"

pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", revision="flax",
    dtype=jnp.bfloat16,
)

num_samples = jax.device_count()
rng = jax.random.split(rng, jax.device_count())
prompt_ids, processed_image = pipeline.prepare_inputs(prompt=[prompts] * num_samples, image=[init_img] * num_samples)
p_params = replicate(params)
prompt_ids = shard(prompt_ids)
processed_image = shard(processed_image)

output = pipeline(
    prompt_ids=prompt_ids, 
    image=processed_image, 
    params=p_params, 
    prng_seed=rng, 
    strength=0.75, 
    num_inference_steps=50, 
    jit=True, 
    height=512,
    width=768).images

output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:])))
```

Diffusers also has a Text-guided inpainting pipeline with Flax/JAX:

```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
import PIL
import requests
from io import BytesIO


from diffusers import FlaxStableDiffusionInpaintPipeline

def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

pipeline, params = FlaxStableDiffusionInpaintPipeline.from_pretrained("xvjiarui/stable-diffusion-2-inpainting")

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50

num_samples = jax.device_count()
prompt = num_samples * [prompt]
init_image = num_samples * [init_image]
mask_image = num_samples * [mask_image]
prompt_ids, processed_masked_images, processed_masks = pipeline.prepare_inputs(prompt, init_image, mask_image)


# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids)
processed_masked_images = shard(processed_masked_images)
processed_masks = shard(processed_masks)

images = pipeline(prompt_ids, processed_masks, processed_masked_images, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```

### Image-to-Image text-guided generation with Stable Diffusion

The `StableDiffusionImg2ImgPipeline` lets you pass a text prompt and an initial image to condition the generation of new images.

```python
import requests
import torch
from PIL import Image
from io import BytesIO

from diffusers import StableDiffusionImg2ImgPipeline

# load the pipeline
device = "cuda"
model_id_or_path = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)

# or download via git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
# and pass `model_id_or_path="./stable-diffusion-v1-5"`.
pipe = pipe.to(device)

# let's download an initial image
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((768, 512))

prompt = "A fantasy landscape, trending on artstation"

images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images

images[0].save("fantasy_landscape.png")
```
You can also run this example on colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)

### In-painting using Stable Diffusion

The `StableDiffusionInpaintPipeline` lets you edit specific parts of an image by providing a mask and a text prompt.

```python
import PIL
import requests
import torch
from io import BytesIO

from diffusers import StableDiffusionInpaintPipeline

def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```

### Tweak prompts reusing seeds and latents

You can generate your own latents to reproduce results, or tweak your prompt on a specific result you liked.
Please have a look at [Reusing seeds for deterministic generation](https://huggingface.co/docs/diffusers/main/en/using-diffusers/reusing_seeds).
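
For example, passing a seeded `torch.Generator` pins the initial latents, so re-running with a slightly tweaked prompt keeps the overall composition. A minimal sketch, assuming the `runwayml/stable-diffusion-v1-5` checkpoint and a CUDA device:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "Labrador in the style of Vermeer"

# a fixed seed makes the result reproducible
generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe(prompt, generator=generator).images[0]

# re-seeding with the same value lets you tweak the prompt
# while starting from the same latents
generator = torch.Generator(device="cuda").manual_seed(0)
tweaked = pipe(prompt + ", oil on canvas", generator=generator).images[0]
```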

## Fine-Tuning Stable Diffusion

Fine-tuning techniques make it possible to adapt Stable Diffusion to your own dataset, or add new subjects to it. These are some of the techniques supported in `diffusers`:

- Textual Inversion. Capture novel concepts from a small number of example images by learning new "words" in the embedding space of the pipeline's text encoder. These special words can then be used within text prompts to achieve very fine-grained control of the resulting images (see the sketch after this list). Please refer to [our training examples](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion) or [documentation](https://huggingface.co/docs/diffusers/training/text_inversion) to try it for yourself.

- Dreambooth. Another technique to capture new concepts in Stable Diffusion. This method fine-tunes the UNet (and, optionally, also the text encoder) of the pipeline to achieve impressive results. Please refer to [our training example](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) and [training report](https://huggingface.co/blog/dreambooth) for additional details and training recommendations.

- Full Stable Diffusion fine-tuning. If you have a more sizable dataset with a specific look or style, you can fine-tune Stable Diffusion so that it outputs images following those examples. This was the approach taken to create [a Pokémon Stable Diffusion model](https://huggingface.co/justinpinkney/pokemon-stable-diffusion) (by Justin Pinkney / Lambda Labs) and [a Japanese specific version of Stable Diffusion](https://huggingface.co/spaces/rinna/japanese-stable-diffusion) (by [Rinna Co.](https://github.com/rinnakk/japanese-stable-diffusion/) and others). You can start at [our text-to-image fine-tuning example](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) and go from there.
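
For instance, after training with the Textual Inversion example, the learned placeholder token can be used like any other word in a prompt. A minimal sketch, assuming the training script saved a full pipeline to `./textual_inversion_output` and that `<cat-toy>` was the placeholder token used during training (both names are illustrative):

```python
from diffusers import StableDiffusionPipeline

# the textual inversion training script saves a complete pipeline,
# so it loads like any other checkpoint (path is a placeholder)
pipe = StableDiffusionPipeline.from_pretrained("./textual_inversion_output").to("cuda")

# the learned pseudo-word can appear anywhere in the prompt
image = pipe("A <cat-toy> backpack").images[0]
image.save("cat_toy_backpack.png")
```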


## Stable Diffusion Community Pipelines

The release of Stable Diffusion as an open source model has fostered a lot of interesting ideas and experimentation. 
Our [Community Examples folder](https://github.com/huggingface/diffusers/tree/main/examples/community) contains many ideas worth exploring, like interpolating to create animated videos, using CLIP Guidance for additional prompt fidelity, term weighting, and much more! [Take a look](https://huggingface.co/docs/diffusers/using-diffusers/custom_pipeline_overview) and [contribute your own](https://huggingface.co/docs/diffusers/using-diffusers/contribute_pipeline).
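
Community pipelines are loaded by passing the file name of the pipeline (without the `.py` extension) via the `custom_pipeline` argument. A minimal sketch, assuming the `lpw_stable_diffusion` (long prompt weighting) pipeline from the community folder:

```python
from diffusers import DiffusionPipeline

# `custom_pipeline` fetches the matching file from the community examples folder
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="lpw_stable_diffusion",
)
image = pipe("a photo of a castle at sunset, highly detailed").images[0]
```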

## Other Examples

There are many ways to try running Diffusers! Here we outline code-focused tools (primarily using `DiffusionPipeline`s and Google Colab) and interactive web tools.

### Running Code

If you want to run the code yourself 💻, you can try out:
- [Text-to-Image Latent Diffusion](https://huggingface.co/CompVis/ldm-text2im-large-256)
```python
# !pip install diffusers["torch"] transformers
from diffusers import DiffusionPipeline

device = "cuda"
model_id = "CompVis/ldm-text2im-large-256"

# load model and scheduler
ldm = DiffusionPipeline.from_pretrained(model_id)
ldm = ldm.to(device)

# run pipeline in inference (sample random noise and denoise)
prompt = "A painting of a squirrel eating a burger"
image = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6).images[0]

# save image
image.save("squirrel.png")
```
- [Unconditional Diffusion with discrete scheduler](https://huggingface.co/google/ddpm-celebahq-256)
```python
# !pip install diffusers["torch"]
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline

model_id = "google/ddpm-celebahq-256"
device = "cuda"

# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id)  # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference
ddpm.to(device)

# run pipeline in inference (sample random noise and denoise)
image = ddpm().images[0]

# save image
image.save("ddpm_generated_image.png")
```
- [Unconditional Latent Diffusion](https://huggingface.co/CompVis/ldm-celebahq-256)
- [Unconditional Diffusion with continuous scheduler](https://huggingface.co/google/ncsnpp-ffhq-1024)

**Other Image Notebooks**:
* [image-to-image generation with Stable Diffusion](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb) ![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg),
* [tweak images via repeated Stable Diffusion seeds](https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb) ![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg),

**Diffusers for Other Modalities**:
* [Molecule conformation generation](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/geodiff_molecule_conformation.ipynb) ![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg),
* [Model-based reinforcement learning](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/reinforcement_learning_with_diffusers.ipynb) ![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg),

### Web Demos
If you just want to play around with some web demos, you can try out the following 🚀 Spaces:
| Model                          	| Hugging Face Spaces                                                                                                                                               	|
|--------------------------------	|-------------------------------------------------------------------------------------------------------------------------------------------------------------------	|
| Text-to-Image Latent Diffusion 	| [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/CompVis/text2img-latent-diffusion) 	|
| Faces generator                	| [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/CompVis/celeba-latent-diffusion)    	|
| DDPM with different schedulers 	| [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/fusing/celeba-diffusion)           	|
| Conditional generation from sketch  	| [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/huggingface/diffuse-the-rest)           	|
| Composable diffusion | [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Shuang59/Composable-Diffusion)           	|

## Definitions

**Models**: Neural network that models $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$ (see image below) and is trained end-to-end to *denoise* a noisy input to an image.
*Examples*: UNet, Conditioned UNet, 3D UNet, Transformer UNet

<p align="center">
    <img src="https://user-images.githubusercontent.com/10695622/174349667-04e9e485-793b-429a-affe-096e8199ad5b.png" width="800"/>
    <br>
    <em> Figure from DDPM paper (https://arxiv.org/abs/2006.11239). </em>
</p>
    
**Schedulers**: Algorithm class for both **inference** and **training**.
The class provides functionality to compute a previous, less noisy sample according to the alpha/beta schedule during inference, and to add noise to a sample for training. Also known as **Samplers**.
*Examples*: [DDPM](https://arxiv.org/abs/2006.11239), [DDIM](https://arxiv.org/abs/2010.02502), [PNDM](https://arxiv.org/abs/2202.09778), [DEIS](https://arxiv.org/abs/2204.13902)

<p align="center">
    <img src="https://user-images.githubusercontent.com/10695622/174349706-53d58acc-a4d1-4cda-b3e8-432d9dc7ad38.png" width="800"/>
    <br>
    <em> Sampling and training algorithms. Figure from DDPM paper (https://arxiv.org/abs/2006.11239). </em>
</p>
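
To make the model/scheduler split concrete, here is a minimal denoising-loop sketch, assuming the `google/ddpm-celebahq-256` checkpoint and a default DDPM schedule (the pipelines shown above wrap exactly this kind of loop):

```python
import torch
from diffusers import DDPMScheduler, UNet2DModel

# the model predicts the noise residual; the scheduler turns that
# prediction into the previous, slightly less noisy sample
model = UNet2DModel.from_pretrained("google/ddpm-celebahq-256")
scheduler = DDPMScheduler()

scheduler.set_timesteps(50)
sample = torch.randn(1, 3, 256, 256)  # start from pure noise

for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample
    sample = scheduler.step(noise_pred, t, sample).prev_sample
```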
    

**Diffusion Pipeline**: End-to-end pipeline that includes multiple diffusion models, possibly text encoders, and other components.
*Examples*: Glide, Latent-Diffusion, Imagen, DALL-E 2

<p align="center">
    <img src="https://user-images.githubusercontent.com/10695622/174348898-481bd7c2-5457-4830-89bc-f0907756f64c.jpeg" width="550"/>
    <br>
    <em> Figure from Imagen (https://imagen.research.google/). </em>
</p>
    
## Philosophy

- Readability and clarity are preferred over highly optimized code. Strong importance is placed on providing readable, intuitive, and elementary code design. *E.g.*, the provided [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) are separated from the provided [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and provide well-commented code that can be read alongside the original paper.
- Diffusers is **modality independent** and focuses on providing pretrained models and tools to build systems that generate **continuous outputs**, *e.g.* vision and audio.
- Diffusion models and schedulers are provided as concise, elementary building blocks. In contrast, diffusion pipelines are a collection of end-to-end diffusion systems that can be used out-of-the-box, should stay as close as possible to their original implementation and can include components of another library, such as text-encoders. Examples for diffusion pipelines are [Glide](https://github.com/openai/glide-text2im) and [Latent Diffusion](https://github.com/CompVis/latent-diffusion).

## In the works

For the first release, 🤗 Diffusers focuses on text-to-image diffusion techniques. However, diffusers can be used for much more than that! Over the upcoming releases, we'll be focusing on:

- Diffusers for audio
- Diffusers for reinforcement learning (initial work happening in https://github.com/huggingface/diffusers/pull/105).
- Diffusers for video generation
- Diffusers for molecule generation (initial work happening in https://github.com/huggingface/diffusers/pull/54)

A few pipeline components are already being worked on, namely:

- BDDMPipeline for spectrogram-to-sound vocoding
- GLIDEPipeline to support OpenAI's GLIDE model
- Grad-TTS for text to audio generation / conditional audio generation

We want diffusers to be a useful toolbox for diffusion models in general; if you find yourself limited in any way by the current API, or would like to see additional models, schedulers, or techniques, please open a [GitHub issue](https://github.com/huggingface/diffusers/issues) mentioning what you would like to see.

## Credits

This library concretizes previous work by many different authors and would not have been possible without their great research and implementations. We'd like to thank, in particular, the following implementations which have helped us in our development and without which the API could not have been as polished as it is today:

- @CompVis' latent diffusion models library, available [here](https://github.com/CompVis/latent-diffusion)
- @hojonathanho's original DDPM implementation, available [here](https://github.com/hojonathanho/diffusion), as well as the extremely useful translation into PyTorch by @pesser, available [here](https://github.com/pesser/pytorch_diffusion)
- @ermongroup's DDIM implementation, available [here](https://github.com/ermongroup/ddim).
- @yang-song's Score-VE and Score-VP implementations, available [here](https://github.com/yang-song/score_sde_pytorch)

We also want to thank @heejkoo for the very helpful overview of papers, code and resources on diffusion models, available [here](https://github.com/heejkoo/Awesome-Diffusion-Models) as well as @crowsonkb and @rromb for useful discussions and insights.

## Citation

```bibtex
@misc{von-platen-etal-2022-diffusers,
  author = {Patrick von Platen and Suraj Patil and Anton Lozhkov and Pedro Cuenca and Nathan Lambert and Kashif Rasul and Mishig Davaadorj and Thomas Wolf},
  title = {Diffusers: State-of-the-art diffusion models},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/diffusers}}
}
```


================================================
FILE: _typos.toml
================================================
# Files for typos
# Instruction:  https://github.com/marketplace/actions/typos-action#getting-started

[default.extend-identifiers]

[default.extend-words]
NIN="NIN" # NIN is used in scripts/convert_ncsnpp_original_checkpoint_to_diffusers.py
nd="np" # nd may be np (numpy)
parms="parms" # parms is used in scripts/convert_original_stable_diffusion_to_diffusers.py


[files]
extend-exclude = ["_typos.toml"]


================================================
FILE: docker/diffusers-flax-cpu/Dockerfile
================================================
FROM ubuntu:20.04
LABEL maintainer="Hugging Face"
LABEL repository="diffusers"

ENV DEBIAN_FRONTEND=noninteractive

RUN apt update && \
    apt install -y bash \
                   build-essential \
                   git \
                   git-lfs \
                   curl \
                   ca-certificates \
                   libsndfile1-dev \
                   python3.8 \
                   python3-pip \
                   python3.8-venv && \
    rm -rf /var/lib/apt/lists

# make sure to use venv
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
# follow the instructions here: https://cloud.google.com/tpu/docs/run-in-container#train_a_jax_model_in_a_docker_container
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
    python3 -m pip install --upgrade --no-cache-dir \
        clu \
        "jax[cpu]>=0.2.16,!=0.3.2" \
        "flax>=0.4.1" \
        "jaxlib>=0.1.65" && \
    python3 -m pip install --no-cache-dir \
        accelerate \
        datasets \
        hf-doc-builder \
        huggingface-hub \
        Jinja2 \
        librosa \
        numpy \
        scipy \
        tensorboard \
        transformers

CMD ["/bin/bash"]

================================================
FILE: docker/diffusers-flax-tpu/Dockerfile
================================================
FROM ubuntu:20.04
LABEL maintainer="Hugging Face"
LABEL repository="diffusers"

ENV DEBIAN_FRONTEND=noninteractive

RUN apt update && \
    apt install -y bash \
                   build-essential \
                   git \
                   git-lfs \
                   curl \
                   ca-certificates \
                   libsndfile1-dev \
                   python3.8 \
                   python3-pip \
                   python3.8-venv && \
    rm -rf /var/lib/apt/lists

# make sure to use venv
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
# follow the instructions here: https://cloud.google.com/tpu/docs/run-in-container#train_a_jax_model_in_a_docker_container
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
    python3 -m pip install --no-cache-dir \
        "jax[tpu]>=0.2.16,!=0.3.2" \
        -f https://storage.googleapis.com/jax-releases/libtpu_releases.html && \
    python3 -m pip install --upgrade --no-cache-dir \
        clu \
        "flax>=0.4.1" \
        "jaxlib>=0.1.65" && \
    python3 -m pip install --no-cache-dir \
        accelerate \
        datasets \
        hf-doc-builder \
        huggingface-hub \
        Jinja2 \
        librosa \
        numpy \
        scipy \
        tensorboard \
        transformers

CMD ["/bin/bash"]

================================================
FILE: docker/diffusers-onnxruntime-cpu/Dockerfile
================================================
FROM ubuntu:20.04
LABEL maintainer="Hugging Face"
LABEL repository="diffusers"

ENV DEBIAN_FRONTEND=noninteractive

RUN apt update && \
    apt install -y bash \
                   build-essential \
                   git \
                   git-lfs \
                   curl \
                   ca-certificates \
                   libsndfile1-dev \
                   python3.8 \
                   python3-pip \
                   python3.8-venv && \
    rm -rf /var/lib/apt/lists

# make sure to use venv
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
    python3 -m pip install --no-cache-dir \
        torch \
        torchvision \
        torchaudio \
        onnxruntime \
        --extra-index-url https://download.pytorch.org/whl/cpu && \
    python3 -m pip install --no-cache-dir \
        accelerate \
        datasets \
        hf-doc-builder \
        huggingface-hub \
        Jinja2 \
        librosa \
        numpy \
        scipy \
        tensorboard \
        transformers

CMD ["/bin/bash"]

================================================
FILE: docker/diffusers-onnxruntime-cuda/Dockerfile
================================================
FROM nvidia/cuda:11.6.2-cudnn8-devel-ubuntu20.04
LABEL maintainer="Hugging Face"
LABEL repository="diffusers"

ENV DEBIAN_FRONTEND=noninteractive

RUN apt update && \
    apt install -y bash \
                   build-essential \
                   git \
                   git-lfs \
                   curl \
                   ca-certificates \
                   libsndfile1-dev \
                   python3.8 \
                   python3-pip \
                   python3.8-venv && \
    rm -rf /var/lib/apt/lists

# make sure to use venv
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
    python3 -m pip install --no-cache-dir \
        torch \
        torchvision \
        torchaudio \
        "onnxruntime-gpu>=1.13.1" \
        --extra-index-url https://download.pytorch.org/whl/cu117 && \
    python3 -m pip install --no-cache-dir \
        accelerate \
        datasets \
        hf-doc-builder \
        huggingface-hub \
        Jinja2 \
        librosa \
        numpy \
        scipy \
        tensorboard \
        transformers

CMD ["/bin/bash"]

================================================
FILE: docker/diffusers-pytorch-cpu/Dockerfile
================================================
FROM ubuntu:20.04
LABEL maintainer="Hugging Face"
LABEL repository="diffusers"

ENV DEBIAN_FRONTEND=noninteractive

RUN apt update && \
    apt install -y bash \
                   build-essential \
                   git \
                   git-lfs \
                   curl \
                   ca-certificates \
                   libsndfile1-dev \
                   python3.8 \
                   python3-pip \
                   python3.8-venv && \
    rm -rf /var/lib/apt/lists

# make sure to use venv
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
    python3 -m pip install --no-cache-dir \
        torch \
        torchvision \
        torchaudio \
        --extra-index-url https://download.pytorch.org/whl/cpu && \
    python3 -m pip install --no-cache-dir \
        accelerate \
        datasets \
        hf-doc-builder \
        huggingface-hub \
        Jinja2 \
        librosa \
        numpy \
        scipy \
        tensorboard \
        transformers

CMD ["/bin/bash"]

================================================
FILE: docker/diffusers-pytorch-cuda/Dockerfile
================================================
FROM nvidia/cuda:11.7.1-cudnn8-runtime-ubuntu20.04
LABEL maintainer="Hugging Face"
LABEL repository="diffusers"

ENV DEBIAN_FRONTEND=noninteractive

RUN apt update && \
    apt install -y bash \
                   build-essential \
                   git \
                   git-lfs \
                   curl \
                   ca-certificates \
                   libsndfile1-dev \
                   python3.8 \
                   python3-pip \
                   python3.8-venv && \
    rm -rf /var/lib/apt/lists

# make sure to use venv
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
    python3 -m pip install --no-cache-dir \
        torch \
        torchvision \
        torchaudio \
        --extra-index-url https://download.pytorch.org/whl/cu117 && \
    python3 -m pip install --no-cache-dir \
        accelerate \
        datasets \
        hf-doc-builder \
        huggingface-hub \
        Jinja2 \
        librosa \
        numpy \
        scipy \
        tensorboard \
        transformers

CMD ["/bin/bash"]

================================================
FILE: docs/README.md
================================================
<!---
Copyright 2023- The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Generating the documentation

To generate the documentation, you first have to build it. Several packages are necessary to build the doc;
you can install them with the following command, at the root of the code repository:

```bash
pip install -e ".[docs]"
```

Then you need to install our open source documentation builder tool:

```bash
pip install git+https://github.com/huggingface/doc-builder
```

---
**NOTE**

You only need to generate the documentation to inspect it locally (if you're planning changes and want to
check how they look before committing for instance). You don't have to commit the built documentation.

---

## Previewing the documentation

To preview the docs, first install the `watchdog` module with:

```bash
pip install watchdog
```

Then run the following command:

```bash
doc-builder preview {package_name} {path_to_docs}
```

For example:

```bash
doc-builder preview diffusers docs/source/en
```

The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR. You will see a bot add a comment to a link where the documentation with your changes lives.

---
**NOTE**

The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` and restart the `preview` command (`ctrl-c` to stop it and call `doc-builder preview ...` again).

---

## Adding a new element to the navigation bar

Accepted files are Markdown (.md or .mdx).

Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting
the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/diffusers/blob/main/docs/source/_toctree.yml) file.

## Renaming section headers and moving sections

It helps to keep the old links working when renaming a section header and/or moving sections from one document to another. This is because the old links are likely to be used in Issues, Forums, and social media, and it makes for a much better user experience if users reading those months later can still easily navigate to the originally intended information.

Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.

So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file:

```
Sections that were moved:

[ <a href="#section-b">Section A</a><a id="section-a"></a> ]
```
and of course, if you moved it to another file, then:

```
Sections that were moved:

[ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]
```

Use the relative style to link to the new file so that the versioned docs continue to work.

For an example of a rich moved section set please see the very end of [the transformers Trainer doc](https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/trainer.mdx).


## Writing Documentation - Specification

The `huggingface/diffusers` documentation follows the
[Google documentation](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) style for docstrings,
although we can write them directly in Markdown.

### Adding a new tutorial

Adding a new tutorial or section is done in two steps:

- Add a new file under `docs/source`. This file can either be ReStructuredText (.rst) or Markdown (.md).
- Link that file in `docs/source/_toctree.yml` on the correct toc-tree.

Make sure to put your new file under the proper section. It's unlikely to go in the first section (*Get Started*), so
depending on the intended targets (beginners, more advanced users, or researchers) it should go in sections two, three, or four.

### Adding a new pipeline/scheduler

When adding a new pipeline:

- Create a file `xxx.mdx` under `docs/source/api/pipelines` (don't hesitate to copy an existing file as a template).
- Link that file in the (*Diffusers Summary*) section in `docs/source/api/pipelines/overview.mdx`, along with the link to the paper, and a colab notebook (if available).
- Write a short overview of the diffusion model:
    - Overview with paper & authors
    - Paper abstract
    - Tips and tricks and how to use it best
    - Possibly an end-to-end example of how to use it
- Add all the pipeline classes that should be linked in the diffusion model. These classes should be added using our Markdown syntax. By default as follows:

```
## XXXPipeline

[[autodoc]] XXXPipeline
    - all
    - __call__
```

This will include every public method of the pipeline that is documented, as well as the `__call__` method, which is not documented by default. If you want to document additional methods, add them to the list that contains `all`:

```
[[autodoc]] XXXPipeline
    - all
    - __call__
    - enable_attention_slicing
    - disable_attention_slicing
    - enable_xformers_memory_efficient_attention
    - disable_xformers_memory_efficient_attention
```

You can follow the same process to create a new scheduler under the `docs/source/api/schedulers` folder.

### Writing source documentation

Values that should be put in `code` should either be surrounded by backticks: \`like so\`. Note that argument names
and objects like True, None, or any strings should usually be put in `code`.

When mentioning a class, function, or method, it is recommended to use our syntax for internal links so that our tool
adds a link to its documentation with this syntax: \[\`XXXClass\`\] or \[\`function\`\]. This requires the class or 
function to be in the main package.

If you want to create a link to some internal class or function, you need to
provide its path. For instance: \[\`pipelines.ImagePipelineOutput\`\]. This will be converted into a link with
`pipelines.ImagePipelineOutput` in the description. To get rid of the path and only keep the name of the object you are
linking to in the description, add a ~: \[\`~pipelines.ImagePipelineOutput\`\] will generate a link with `ImagePipelineOutput` in the description.

The same works for methods, so you can use either \[\`XXXClass.method\`\] or \[\`~XXXClass.method\`\].

#### Defining arguments in a method

Arguments should be defined with the `Args:` (or `Arguments:` or `Parameters:`) prefix, followed by a line return and
an indentation. The argument should be followed by its type, with its shape if it is a tensor, a colon, and its
description:

```
    Args:
        n_layers (`int`): The number of layers of the model.
```

If the description is too long to fit in one line, another indentation is necessary before writing the description
after the argument.

Here's an example showcasing everything so far:

```
    Args:
        input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
            Indices of input sequence tokens in the vocabulary.

            Indices can be obtained using [`AlbertTokenizer`]. See [`~PreTrainedTokenizer.encode`] and
            [`~PreTrainedTokenizer.__call__`] for details.

            [What are input IDs?](../glossary#input-ids)
```

For optional arguments or arguments with defaults we follow the following syntax: imagine we have a function with the
following signature:

```
def my_function(x: str = None, a: float = 1):
```

then its documentation should look like this:

```
    Args:
        x (`str`, *optional*):
            This argument controls ...
        a (`float`, *optional*, defaults to 1):
            This argument is used to ...
```

Note that we always omit the "defaults to \`None\`" when None is the default for any argument. Also note that even
if the first line describing your argument type and its default gets long, you can't break it on several lines. You can
however write as many lines as you want in the indented description (see the example above with `input_ids`).

#### Writing a multi-line code block

Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:


````
```
# first line of code
# second line
# etc
```
````

#### Writing a return block

The return block should be introduced with the `Returns:` prefix, followed by a line return and an indentation.
The first line should be the type of the return, followed by a line return. No need to indent further for the elements
building the return.

Here's an example of a single value return:

```
    Returns:
        `List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token.
```

Here's an example of a tuple return, comprising several objects:

```
    Returns:
        `tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs:
        - **loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` --
          Total loss is the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
        - **prediction_scores** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) --
          Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
```

#### Adding an image

Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted `dataset`, like
the ones hosted on [`hf-internal-testing`](https://huggingface.co/hf-internal-testing), to place these files and reference
them by URL. We recommend putting them in the following dataset: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images).
If you are an external contributor, feel free to add the images to your PR and ask a Hugging Face member to migrate your images
to this dataset.

## Styling the docstring

We have an automatic script running with the `make style` command that will make sure that:
- the docstrings fully take advantage of the line width
- all code examples are formatted using black, like the code of the Transformers library

This script may have some weird failures if you made a syntax mistake or if you uncover a bug. Therefore, it's
recommended to commit your changes before running `make style`, so you can revert the changes done by that script
easily.



================================================
FILE: docs/TRANSLATING.md
================================================
### Translating the Diffusers documentation into your language

As part of our mission to democratize machine learning, we'd love to make the Diffusers library available in many more languages! Follow the steps below if you want to help translate the documentation into your language 🙏.

**🗞️ Open an issue**

To get started, navigate to the [Issues](https://github.com/huggingface/diffusers/issues) page of this repo and check if anyone else has opened an issue for your language. If not, open a new issue by selecting the "Translation template" from the "New issue" button.

Once an issue exists, post a comment to indicate which chapters you'd like to work on, and we'll add your name to the list.


**🍴 Fork the repository**

First, you'll need to [fork the Diffusers repo](https://docs.github.com/en/get-started/quickstart/fork-a-repo). You can do this by clicking on the **Fork** button on the top-right corner of this repo's page.

Once you've forked the repo, you'll want to get the files on your local machine for editing. You can do that by cloning the fork with Git as follows:

```bash
git clone https://github.com/YOUR-USERNAME/diffusers.git
```

**📋 Copy-paste the English version with a new language code**

The documentation files live under one main directory:

- [`docs/source`](https://github.com/huggingface/diffusers/tree/main/docs/source): All the documentation materials are organized here by language.

You'll only need to copy the files in the [`docs/source/en`](https://github.com/huggingface/diffusers/tree/main/docs/source/en) directory, so first navigate to your fork of the repo and run the following:

```bash
cd ~/path/to/diffusers/docs
cp -r source/en source/LANG-ID
```

Here, `LANG-ID` should be one of the ISO 639-1 or ISO 639-2 language codes -- see [here](https://www.loc.gov/standards/iso639-2/php/code_list.php) for a handy table.

**✍️ Start translating**

Now comes the fun part: translating the text!

The first thing we recommend is translating the part of the `_toctree.yml` file that corresponds to your doc chapter. This file is used to render the table of contents on the website. 

> 🙋 If the `_toctree.yml` file doesn't yet exist for your language, you can create one by copy-pasting from the English version and deleting the sections unrelated to your chapter. Just make sure it exists in the `docs/source/LANG-ID/` directory!

The fields you should add are `local` (with the name of the file containing the translation; e.g. `autoclass_tutorial`), and `title` (with the title of the doc in your language; e.g. `Load pretrained instances with an AutoClass`) -- as a reference, here is the `_toctree.yml` for [English](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml):

```yaml
- sections:
  - local: pipeline_tutorial # Do not change this! Use the same name for your .md file
    title: Pipelines for inference # Translate this!
    ...
  title: Tutorials # Translate this!
```

Once you have translated the `_toctree.yml` file, you can start translating the [MDX](https://mdxjs.com/) files associated with your docs chapter.

> 🙋 If you'd like others to help you with the translation, you should [open an issue](https://github.com/huggingface/diffusers/issues) and tag @patrickvonplaten.


================================================
FILE: docs/source/en/_toctree.yml
================================================
- sections:
  - local: index
    title: 🧨 Diffusers
  - local: quicktour
    title: Quicktour
  - local: stable_diffusion
    title: Stable Diffusion
  - local: installation
    title: Installation
  title: Get started
- sections:
  - local: tutorials/basic_training
    title: Train a diffusion model
  title: Tutorials
- sections:
  - sections:
    - local: using-diffusers/loading
      title: Loading Pipelines, Models, and Schedulers
    - local: using-diffusers/schedulers
      title: Using different Schedulers
    - local: using-diffusers/configuration
      title: Configuring Pipelines, Models, and Schedulers
    - local: using-diffusers/custom_pipeline_overview
      title: Loading and Adding Custom Pipelines
    - local: using-diffusers/kerascv
      title: Using KerasCV Stable Diffusion Checkpoints in Diffusers
    title: Loading & Hub
  - sections:
    - local: using-diffusers/unconditional_image_generation
      title: Unconditional Image Generation
    - local: using-diffusers/conditional_image_generation
      title: Text-to-Image Generation
    - local: using-diffusers/img2img
      title: Text-Guided Image-to-Image
    - local: using-diffusers/inpaint
      title: Text-Guided Image-Inpainting
    - local: using-diffusers/depth2img
      title: Text-Guided Depth-to-Image
    - local: using-diffusers/controlling_generation
      title: Controlling generation
    - local: using-diffusers/reusing_seeds
      title: Reusing seeds for deterministic generation
    - local: using-diffusers/reproducibility
      title: Reproducibility
    - local: using-diffusers/custom_pipeline_examples
      title: Community Pipelines
    - local: using-diffusers/contribute_pipeline
      title: How to contribute a Pipeline
    - local: using-diffusers/using_safetensors
      title: Using safetensors
    title: Pipelines for Inference
  - sections:
    - local: using-diffusers/rl
      title: Reinforcement Learning
    - local: using-diffusers/audio
      title: Audio
    - local: using-diffusers/other-modalities
      title: Other Modalities
    title: Taking Diffusers Beyond Images
  title: Using Diffusers
- sections:
  - local: optimization/fp16
    title: Memory and Speed
  - local: optimization/torch2.0
    title: Torch2.0 support
  - local: optimization/xformers
    title: xFormers
  - local: optimization/onnx
    title: ONNX
  - local: optimization/open_vino
    title: OpenVINO
  - local: optimization/mps
    title: MPS
  - local: optimization/habana
    title: Habana Gaudi
  title: Optimization/Special Hardware
- sections:
  - local: training/overview
    title: Overview
  - local: training/unconditional_training
    title: Unconditional Image Generation
  - local: training/text_inversion
    title: Textual Inversion
  - local: training/dreambooth
    title: Dreambooth
  - local: training/text2image
    title: Text-to-image fine-tuning
  - local: training/lora
    title: LoRA Support in Diffusers
  title: Training
- sections:
  - local: conceptual/philosophy
    title: Philosophy
  - local: conceptual/contribution
    title: How to contribute?
  - local: conceptual/ethical_guidelines
    title: Diffusers' Ethical Guidelines
  title: Conceptual Guides
- sections:
  - sections:
    - local: api/models
      title: Models
    - local: api/diffusion_pipeline
      title: Diffusion Pipeline
    - local: api/logging
      title: Logging
    - local: api/configuration
      title: Configuration
    - local: api/outputs
      title: Outputs
    - local: api/loaders
      title: Loaders
    title: Main Classes
  - sections:
    - local: api/pipelines/overview
      title: Overview
    - local: api/pipelines/alt_diffusion
      title: AltDiffusion
    - local: api/pipelines/audio_diffusion
      title: Audio Diffusion
    - local: api/pipelines/cycle_diffusion
      title: Cycle Diffusion
    - local: api/pipelines/dance_diffusion
      title: Dance Diffusion
    - local: api/pipelines/ddim
      title: DDIM
    - local: api/pipelines/ddpm
      title: DDPM
    - local: api/pipelines/dit
      title: DiT
    - local: api/pipelines/latent_diffusion
      title: Latent Diffusion
    - local: api/pipelines/paint_by_example
      title: PaintByExample
    - local: api/pipelines/pndm
      title: PNDM
    - local: api/pipelines/repaint
      title: RePaint
    - local: api/pipelines/stable_diffusion_safe
      title: Safe Stable Diffusion
    - local: api/pipelines/score_sde_ve
      title: Score SDE VE
    - local: api/pipelines/semantic_stable_diffusion
      title: Semantic Guidance
    - sections:
      - local: api/pipelines/stable_diffusion/overview
        title: Overview
      - local: api/pipelines/stable_diffusion/text2img
        title: Text-to-Image
      - local: api/pipelines/stable_diffusion/img2img
        title: Image-to-Image
      - local: api/pipelines/stable_diffusion/inpaint
        title: Inpaint
      - local: api/pipelines/stable_diffusion/depth2img
        title: Depth-to-Image
      - local: api/pipelines/stable_diffusion/image_variation
        title: Image-Variation
      - local: api/pipelines/stable_diffusion/upscale
        title: Super-Resolution
      - local: api/pipelines/stable_diffusion/latent_upscale
        title: Stable-Diffusion-Latent-Upscaler
      - local: api/pipelines/stable_diffusion/pix2pix
        title: InstructPix2Pix
      - local: api/pipelines/stable_diffusion/attend_and_excite
        title: Attend and Excite
      - local: api/pipelines/stable_diffusion/pix2pix_zero
        title: Pix2Pix Zero
      - local: api/pipelines/stable_diffusion/self_attention_guidance
        title: Self-Attention Guidance
      - local: api/pipelines/stable_diffusion/panorama
        title: MultiDiffusion Panorama
      - local: api/pipelines/stable_diffusion/controlnet
        title: Text-to-Image Generation with ControlNet Conditioning
      title: Stable Diffusion
    - local: api/pipelines/stable_diffusion_2
      title: Stable Diffusion 2
    - local: api/pipelines/stable_unclip
      title: Stable unCLIP
    - local: api/pipelines/stochastic_karras_ve
      title: Stochastic Karras VE
    - local: api/pipelines/unclip
      title: UnCLIP
    - local: api/pipelines/latent_diffusion_uncond
      title: Unconditional Latent Diffusion
    - local: api/pipelines/versatile_diffusion
      title: Versatile Diffusion
    - local: api/pipelines/vq_diffusion
      title: VQ Diffusion
    title: Pipelines
  - sections:
    - local: api/schedulers/overview
      title: Overview
    - local: api/schedulers/ddim
      title: DDIM
    - local: api/schedulers/ddim_inverse
      title: DDIMInverse
    - local: api/schedulers/ddpm
      title: DDPM
    - local: api/schedulers/deis
      title: DEIS
    - local: api/schedulers/dpm_discrete
      title: DPM Discrete Scheduler
    - local: api/schedulers/dpm_discrete_ancestral
      title: DPM Discrete Scheduler with ancestral sampling
    - local: api/schedulers/euler_ancestral
      title: Euler Ancestral Scheduler
    - local: api/schedulers/euler
      title: Euler scheduler
    - local: api/schedulers/heun
      title: Heun Scheduler
    - local: api/schedulers/ipndm
      title: IPNDM
    - local: api/schedulers/lms_discrete
      title: Linear Multistep
    - local: api/schedulers/multistep_dpm_solver
      title: Multistep DPM-Solver
    - local: api/schedulers/pndm
      title: PNDM
    - local: api/schedulers/repaint
      title: RePaint Scheduler
    - local: api/schedulers/singlestep_dpm_solver
      title: Singlestep DPM-Solver
    - local: api/schedulers/stochastic_karras_ve
      title: Stochastic Karras VE
    - local: api/schedulers/unipc
      title: UniPCMultistepScheduler
    - local: api/schedulers/score_sde_ve
      title: VE-SDE
    - local: api/schedulers/score_sde_vp
      title: VP-SDE
    - local: api/schedulers/vq_diffusion
      title: VQDiffusionScheduler
    title: Schedulers
  - sections:
    - local: api/experimental/rl
      title: RL Planning
    title: Experimental Features
  title: API


================================================
FILE: docs/source/en/api/configuration.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Configuration

In Diffusers, schedulers of type [`schedulers.scheduling_utils.SchedulerMixin`] and models of type [`ModelMixin`] inherit from [`ConfigMixin`], which conveniently takes care of storing all the parameters that are
passed to the respective `__init__` methods in a JSON configuration file.
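
A small sketch of the round trip (the argument values here are illustrative):

```python
from diffusers import DDIMScheduler

# every argument passed to __init__ is recorded in `scheduler.config`
scheduler = DDIMScheduler(num_train_timesteps=1000, beta_schedule="linear")
print(scheduler.config.beta_schedule)  # "linear"

# the config can be written to (and read back from) a JSON file
scheduler.save_config("./ddim-config")
config = DDIMScheduler.load_config("./ddim-config")
scheduler_again = DDIMScheduler.from_config(config)
```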

## ConfigMixin

[[autodoc]] ConfigMixin
	- load_config
	- from_config
	- save_config


================================================
FILE: docs/source/en/api/diffusion_pipeline.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Pipelines

The [`DiffusionPipeline`] is the easiest way to load any pretrained diffusion pipeline from the [Hub](https://huggingface.co/models?library=diffusers) and to use it in inference.

<Tip>
	
	One should not use the Diffusion Pipeline class for training or fine-tuning a diffusion model. Individual 
	components of diffusion pipelines are usually trained individually, so we suggest working directly 
	with [`UNet2DModel`] and [`UNet2DConditionModel`].

</Tip>

Any diffusion pipeline that is loaded with [`~DiffusionPipeline.from_pretrained`] will automatically 
detect the pipeline type, *e.g.* [`StableDiffusionPipeline`] and consequently load each component of the 
pipeline and pass them into the `__init__` function of the pipeline, *e.g.* [`~StableDiffusionPipeline.__init__`].

Any pipeline object can be saved locally with [`~DiffusionPipeline.save_pretrained`].
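
A short sketch of this round trip, assuming the `runwayml/stable-diffusion-v1-5` checkpoint:

```python
from diffusers import DiffusionPipeline

# from_pretrained reads the repository's model_index.json to detect the
# concrete pipeline class (here, StableDiffusionPipeline)
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
print(pipe.__class__.__name__)

# save_pretrained writes every component into its own subfolder
pipe.save_pretrained("./local-stable-diffusion")
```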

## DiffusionPipeline
[[autodoc]] DiffusionPipeline
	- all
	- __call__
	- device
	- to
	- components

## ImagePipelineOutput
By default, diffusion pipelines that generate images return an object of class:

[[autodoc]] pipelines.ImagePipelineOutput

## AudioPipelineOutput
By default, diffusion pipelines that generate audio return an object of class:

[[autodoc]] pipelines.AudioPipelineOutput


================================================
FILE: docs/source/en/api/experimental/rl.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# TODO

Coming soon!

================================================
FILE: docs/source/en/api/loaders.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Loaders

There are many ways to train adapter neural networks for diffusion models, such as 
- [Textual Inversion](./training/text_inversion.mdx)
- [LoRA](https://github.com/cloneofsimo/lora)
- [Hypernetworks](https://arxiv.org/abs/1609.09106)

Such adapter neural networks often contain only a fraction of the weights of the pretrained model
and are therefore very portable. The Diffusers library offers an easy-to-use
API to load such adapter neural networks via the [`loaders.py` module](https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders.py).

**Note**: This module is still highly experimental and prone to future changes.
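
As a minimal sketch, LoRA attention-processor weights can be loaded into the UNet of a pipeline with `UNet2DConditionLoadersMixin.load_attn_procs` (the weight path below is a placeholder for a directory produced by, e.g., the LoRA training example):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# load adapter weights into the UNet's attention processors
pipe.unet.load_attn_procs("path-to-lora-weights")

image = pipe("a prompt in the style the adapter was trained on").images[0]
```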

## LoaderMixins

### UNet2DConditionLoadersMixin

[[autodoc]] loaders.UNet2DConditionLoadersMixin


================================================
FILE: docs/source/en/api/logging.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Logging

🧨 Diffusers has a centralized logging system, so that you can set up the verbosity of the library easily.

Currently the default verbosity of the library is `WARNING`.

To change the level of verbosity, just use one of the direct setters. For instance, here is how to change the verbosity
to the INFO level.

```python
import diffusers

diffusers.logging.set_verbosity_info()
```

You can also use the environment variable `DIFFUSERS_VERBOSITY` to override the default verbosity. You can set it
to one of the following: `debug`, `info`, `warning`, `error`, `critical`. For example:

```bash
DIFFUSERS_VERBOSITY=error ./myprogram.py
```

Additionally, some `warnings` can be disabled by setting the environment variable
`DIFFUSERS_NO_ADVISORY_WARNINGS` to a true value, like *1*. This will disable any warning that is logged using
[`logger.warning_advice`]. For example:

```bash
DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py
```

Here is an example of how to use the same logger as the library in your own module or script:

```python
from diffusers.utils import logging

logging.set_verbosity_info()
logger = logging.get_logger("diffusers")
logger.info("INFO")
logger.warning("WARN")
```


All the methods of this logging module are documented below; the main ones are
[`logging.get_verbosity`] to get the current level of verbosity in the logger and
[`logging.set_verbosity`] to set the verbosity to the level of your choice. In order (from the least
verbose to the most verbose), those levels (with their corresponding int values in parentheses) are:

- `diffusers.logging.CRITICAL` or `diffusers.logging.FATAL` (int value, 50): only report the most
  critical errors.
- `diffusers.logging.ERROR` (int value, 40): only report errors.
- `diffusers.logging.WARNING` or `diffusers.logging.WARN` (int value, 30): only reports errors and
  warnings. This is the default level used by the library.
- `diffusers.logging.INFO` (int value, 20): reports errors, warnings, and basic information.
- `diffusers.logging.DEBUG` (int value, 10): reports all information.

By default, `tqdm` progress bars will be displayed during model download. [`logging.disable_progress_bar`] and [`logging.enable_progress_bar`] can be used to suppress or unsuppress this behavior.

## Base setters

[[autodoc]] logging.set_verbosity_error

[[autodoc]] logging.set_verbosity_warning

[[autodoc]] logging.set_verbosity_info

[[autodoc]] logging.set_verbosity_debug

## Other functions

[[autodoc]] logging.get_verbosity

[[autodoc]] logging.set_verbosity

[[autodoc]] logging.get_logger

[[autodoc]] logging.enable_default_handler

[[autodoc]] logging.disable_default_handler

[[autodoc]] logging.enable_explicit_format

[[autodoc]] logging.reset_format

[[autodoc]] logging.enable_progress_bar

[[autodoc]] logging.disable_progress_bar


================================================
FILE: docs/source/en/api/models.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Models

Diffusers contains pretrained models for popular algorithms and modules for creating the next set of diffusion models.
The primary function of these models is to denoise an input sample, by modeling the distribution $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$.
The models are built on the base class [`ModelMixin`], which is a `torch.nn.Module` with basic functionality for saving and loading models both locally and from the Hugging Face Hub.
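
A minimal sketch of a single denoising step, assuming the `google/ddpm-celebahq-256` checkpoint:

```python
import torch
from diffusers import UNet2DModel

model = UNet2DModel.from_pretrained("google/ddpm-celebahq-256")

# predict the noise residual for a noisy sample at a given timestep
noisy_sample = torch.randn(
    1, model.config.in_channels, model.config.sample_size, model.config.sample_size
)
with torch.no_grad():
    noise_pred = model(noisy_sample, timestep=10).sample  # same shape as the input
```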

## ModelMixin
[[autodoc]] ModelMixin

## UNet2DOutput
[[autodoc]] models.unet_2d.UNet2DOutput

## UNet2DModel
[[autodoc]] UNet2DModel

## UNet1DOutput
[[autodoc]] models.unet_1d.UNet1DOutput

## UNet1DModel
[[autodoc]] UNet1DModel

## UNet2DConditionOutput
[[autodoc]] models.unet_2d_condition.UNet2DConditionOutput

## UNet2DConditionModel
[[autodoc]] UNet2DConditionModel

## DecoderOutput
[[autodoc]] models.vae.DecoderOutput

## VQEncoderOutput
[[autodoc]] models.vq_model.VQEncoderOutput

## VQModel
[[autodoc]] VQModel

## AutoencoderKLOutput
[[autodoc]] models.autoencoder_kl.AutoencoderKLOutput

## AutoencoderKL
[[autodoc]] AutoencoderKL

## Transformer2DModel
[[autodoc]] Transformer2DModel

## Transformer2DModelOutput
[[autodoc]] models.transformer_2d.Transformer2DModelOutput

## PriorTransformer
[[autodoc]] models.prior_transformer.PriorTransformer

## PriorTransformerOutput
[[autodoc]] models.prior_transformer.PriorTransformerOutput

## ControlNetOutput
[[autodoc]] models.controlnet.ControlNetOutput

## ControlNetModel
[[autodoc]] ControlNetModel

## FlaxModelMixin
[[autodoc]] FlaxModelMixin

## FlaxUNet2DConditionOutput
[[autodoc]] models.unet_2d_condition_flax.FlaxUNet2DConditionOutput

## FlaxUNet2DConditionModel
[[autodoc]] FlaxUNet2DConditionModel

## FlaxDecoderOutput
[[autodoc]] models.vae_flax.FlaxDecoderOutput

## FlaxAutoencoderKLOutput
[[autodoc]] models.vae_flax.FlaxAutoencoderKLOutput

## FlaxAutoencoderKL
[[autodoc]] FlaxAutoencoderKL


================================================
FILE: docs/source/en/api/outputs.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# BaseOutputs

All models have outputs that are instances of subclasses of [`~utils.BaseOutput`]. Those are
data structures containing all the information returned by the model, but that can also be used as tuples or
dictionaries.

Let's see how this looks in an example:

```python
from diffusers import DDIMPipeline

pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
outputs = pipeline()
```

The `outputs` object is an [`~pipelines.ImagePipelineOutput`]; as we can see in the
documentation of that class below, this means it has an `images` attribute.

You can access each attribute as you normally would, and if that attribute has not been returned by the model, you will get `None`:

```python
outputs.images
```

or via keyword lookup

```python
outputs["images"]
```

When treating our `outputs` object as a tuple, only the attributes that don't have `None` values are considered.
Here, for instance, we could retrieve the images via indexing:

```python
outputs[:1]
```

which will return the tuple `(outputs.images,)`.
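[`~utils.BaseOutput`] also exposes an explicit `to_tuple` conversion (documented below). A minimal sketch:

```python
# returns a plain tuple of all non-`None` attributes; here just the images
(images,) = outputs.to_tuple()
```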

## BaseOutput

[[autodoc]] utils.BaseOutput
    - to_tuple


================================================
FILE: docs/source/en/api/pipelines/alt_diffusion.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# AltDiffusion

AltDiffusion was proposed in [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu.

The abstract of the paper is the following:

*In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flickr30k-CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.*


*Overview*:

| Pipeline | Tasks | Colab | Demo
|---|---|:---:|:---:|
| [pipeline_alt_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py) | *Text-to-Image Generation* | - | -
| [pipeline_alt_diffusion_img2img.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion_img2img.py) | *Image-to-Image Text-Guided Generation* | - |-

## Tips

- AltDiffusion is conceptually exactly the same as [Stable Diffusion](./api/pipelines/stable_diffusion/overview).

- *Run AltDiffusion*

AltDiffusion can be tested very easily with the [`AltDiffusionPipeline`], the [`AltDiffusionImg2ImgPipeline`], and the `"BAAI/AltDiffusion-m9"` checkpoint, in exactly the same way as shown in the [Conditional Image Generation Guide](./using-diffusers/conditional_image_generation) and the [Image-to-Image Generation Guide](./using-diffusers/img2img).

- *How to load and use different schedulers.*

The AltDiffusion pipeline uses the [`DDIMScheduler`] by default, but `diffusers` provides many other schedulers that can be used with it, such as [`PNDMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`], etc.
To use a different scheduler, you can either change it via the [`ConfigMixin.from_config`] method or pass the `scheduler` argument to the `from_pretrained` method of the pipeline. For example, to use the [`EulerDiscreteScheduler`], you can do the following:

```python
>>> from diffusers import AltDiffusionPipeline, EulerDiscreteScheduler

>>> pipeline = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9")
>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)

>>> # or
>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("BAAI/AltDiffusion-m9", subfolder="scheduler")
>>> pipeline = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", scheduler=euler_scheduler)
```


- *How to cover multiple use cases with a single set of components*

If you want to cover all possible use cases without loading the weights more than once, we recommend using the `components` functionality to instantiate all pipeline components in the most memory-efficient way:

```python
>>> from diffusers import (
...     AltDiffusionPipeline,
...     AltDiffusionImg2ImgPipeline,
... )

>>> text2img = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9")
>>> img2img = AltDiffusionImg2ImgPipeline(**text2img.components)

>>> # now you can use text2img(...) and img2img(...) just like the call methods of each respective pipeline
```

## AltDiffusionPipelineOutput
[[autodoc]] pipelines.alt_diffusion.AltDiffusionPipelineOutput
	- all
	- __call__

## AltDiffusionPipeline
[[autodoc]] AltDiffusionPipeline
	- all
	- __call__

## AltDiffusionImg2ImgPipeline
[[autodoc]] AltDiffusionImg2ImgPipeline
	- all
	- __call__


================================================
FILE: docs/source/en/api/pipelines/audio_diffusion.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Audio Diffusion

## Overview

[Audio Diffusion](https://github.com/teticio/audio-diffusion) by Robert Dargavel Smith.

Audio Diffusion leverages the recent advances in image generation using diffusion models by converting audio samples to
and from mel spectrogram images.

The original codebase of this implementation can be found [here](https://github.com/teticio/audio-diffusion), including
training scripts and example notebooks.

## Available Pipelines:

| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_audio_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py) | *Unconditional Audio Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/teticio/audio-diffusion/blob/master/notebooks/audio_diffusion_pipeline.ipynb) |


## Examples:

### Audio Diffusion

```python
import torch
from IPython.display import Audio, display
from diffusers import DiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to(device)

output = pipe()
display(output.images[0])
display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
```

### Latent Audio Diffusion

```python
import torch
from IPython.display import Audio, display
from diffusers import DiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DiffusionPipeline.from_pretrained("teticio/latent-audio-diffusion-256").to(device)

output = pipe()
display(output.images[0])
display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
```

### Audio Diffusion with DDIM (faster)

```python
import torch
from IPython.display import Audio, display
from diffusers import DiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-ddim-256").to(device)

output = pipe()
display(output.images[0])
display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
```

### Variations, in-painting, out-painting etc.

```python
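# continues from the previous examples, reusing `pipe` and its `output`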
output = pipe(
    raw_audio=output.audios[0, 0],
    start_step=int(pipe.get_default_steps() / 2),
    mask_start_secs=1,
    mask_end_secs=1,
)
display(output.images[0])
display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
```

## AudioDiffusionPipeline
[[autodoc]] AudioDiffusionPipeline
	- all
	- __call__

## Mel
[[autodoc]] Mel


================================================
FILE: docs/source/en/api/pipelines/cycle_diffusion.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Cycle Diffusion

## Overview

Cycle Diffusion is a Text-Guided Image-to-Image Generation model proposed in [Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance](https://arxiv.org/abs/2210.05559) by Chen Henry Wu, Fernando De la Torre.

The abstract of the paper is the following:

*Diffusion models have achieved unprecedented performance in generative modeling. The commonly-adopted formulation of the latent code of diffusion models is a sequence of gradually denoised samples, as opposed to the simpler (e.g., Gaussian) latent space of GANs, VAEs, and normalizing flows. This paper provides an alternative, Gaussian formulation of the latent space of various diffusion models, as well as an invertible DPM-Encoder that maps images into the latent space. While our formulation is purely based on the definition of diffusion models, we demonstrate several intriguing consequences. (1) Empirically, we observe that a common latent space emerges from two diffusion models trained independently on related domains. In light of this finding, we propose CycleDiffusion, which uses DPM-Encoder for unpaired image-to-image translation. Furthermore, applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors. (2) One can guide pre-trained diffusion models and GANs by controlling the latent codes in a unified, plug-and-play formulation based on energy-based models. Using the CLIP model and a face recognition model as guidance, we demonstrate that diffusion models have better coverage of low-density sub-populations and individuals than GANs.*

*Tips*:
- The Cycle Diffusion pipeline is fully compatible with any [Stable Diffusion](./stable_diffusion) checkpoint.
- Currently Cycle Diffusion only works with the [`DDIMScheduler`].

*Example*:

In the following we show how to best use the [`CycleDiffusionPipeline`]:

```python
import requests
import torch
from PIL import Image
from io import BytesIO

from diffusers import CycleDiffusionPipeline, DDIMScheduler

# load the pipeline
# make sure you're logged in with `huggingface-cli login`
model_id_or_path = "CompVis/stable-diffusion-v1-4"
scheduler = DDIMScheduler.from_pretrained(model_id_or_path, subfolder="scheduler")
pipe = CycleDiffusionPipeline.from_pretrained(model_id_or_path, scheduler=scheduler).to("cuda")

# let's download an initial image
url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/An%20astronaut%20riding%20a%20horse.png"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((512, 512))
init_image.save("horse.png")

# let's specify a prompt
source_prompt = "An astronaut riding a horse"
prompt = "An astronaut riding an elephant"

# call the pipeline
image = pipe(
    prompt=prompt,
    source_prompt=source_prompt,
    image=init_image,
    num_inference_steps=100,
    eta=0.1,
    strength=0.8,
    guidance_scale=2,
    source_guidance_scale=1,
).images[0]

image.save("horse_to_elephant.png")

# let's try another example
# See more samples at the original repo: https://github.com/ChenWu98/cycle-diffusion
url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/A%20black%20colored%20car.png"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((512, 512))
init_image.save("black.png")

source_prompt = "A black colored car"
prompt = "A blue colored car"

# call the pipeline
torch.manual_seed(0)
image = pipe(
    prompt=prompt,
    source_prompt=source_prompt,
    image=init_image,
    num_inference_steps=100,
    eta=0.1,
    strength=0.85,
    guidance_scale=3,
    source_guidance_scale=1,
).images[0]

image.save("black_to_blue.png")
```

## CycleDiffusionPipeline
[[autodoc]] CycleDiffusionPipeline
	- all
	- __call__


================================================
FILE: docs/source/en/api/pipelines/dance_diffusion.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Dance Diffusion

## Overview

[Dance Diffusion](https://github.com/Harmonai-org/sample-generator) by Zach Evans.

Dance Diffusion is the first in a suite of generative audio tools for producers and musicians to be released by Harmonai.
For more info or to get involved in the development of these tools, please visit https://harmonai.org and fill out the form on the front page.

The original codebase of this implementation can be found [here](https://github.com/Harmonai-org/sample-generator).

## Available Pipelines:

| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_dance_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py) | *Unconditional Audio Generation* | - |
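
A minimal usage sketch; `harmonai/maestro-150k` is one of the Dance Diffusion checkpoints published on the Hub and is an assumption here:

```python
from diffusers import DanceDiffusionPipeline

pipe = DanceDiffusionPipeline.from_pretrained("harmonai/maestro-150k")

# generate four seconds of audio; `output.audios` is a numpy array of waveforms
output = pipe(audio_length_in_s=4.0)
audio = output.audios[0]
```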


## DanceDiffusionPipeline
[[autodoc]] DanceDiffusionPipeline
	- all
	- __call__


================================================
FILE: docs/source/en/api/pipelines/ddim.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# DDIM

## Overview

[Denoising Diffusion Implicit Models](https://arxiv.org/abs/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.

The abstract of the paper is the following:

*Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.*

The original codebase of this paper can be found here: [ermongroup/ddim](https://github.com/ermongroup/ddim).
For questions, feel free to contact the author on [tsong.me](https://tsong.me/).

## Available Pipelines:

| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_ddim.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ddim/pipeline_ddim.py) | *Unconditional Image Generation* | - |
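
A minimal usage sketch, assuming the `google/ddpm-cifar10-32` checkpoint that is also used with [`DDIMPipeline`] elsewhere in these docs:

```python
from diffusers import DDIMPipeline

pipe = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")

# DDIM can trade off speed for quality via the number of inference steps
image = pipe(num_inference_steps=50).images[0]
image.save("ddim_generated_image.png")
```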


## DDIMPipeline
[[autodoc]] DDIMPipeline
	- all
	- __call__


================================================
FILE: docs/source/en/api/pipelines/ddpm.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# DDPM

## Overview

[Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239) (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes the diffusion-based model of the same name. In the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline.

The abstract of the paper is the following:

*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.*

The original codebase of this paper can be found [here](https://github.com/hojonathanho/diffusion).


## Available Pipelines:

| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_ddpm.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ddpm/pipeline_ddpm.py) | *Unconditional Image Generation* | - |
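
A minimal usage sketch; `google/ddpm-cat-256` is one of the DDPM checkpoints published on the Hub and is an assumption here:

```python
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256")

# DDPM runs the full number of denoising steps by default, so sampling is slow
image = pipe().images[0]
image.save("ddpm_generated_image.png")
```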


## DDPMPipeline
[[autodoc]] DDPMPipeline
	- all
	- __call__


================================================
FILE: docs/source/en/api/pipelines/dit.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Scalable Diffusion Models with Transformers (DiT)

## Overview

[Scalable Diffusion Models with Transformers](https://arxiv.org/abs/2212.09748) (DiT) by William Peebles and Saining Xie.

The abstract of the paper is the following:

*We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops -- through increased transformer depth/width or increased number of input tokens -- consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter.*

The original codebase of this paper can be found here: [facebookresearch/dit](https://github.com/facebookresearch/dit).

## Available Pipelines:

| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_dit.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/dit/pipeline_dit.py) | *Conditional Image Generation* | - |


## Usage example

```python
from diffusers import DiTPipeline, DPMSolverMultistepScheduler
import torch

pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# pick words from ImageNet class labels
pipe.labels  # to print all available words

# pick words that exist in ImageNet
words = ["white shark", "umbrella"]

class_ids = pipe.get_label_ids(words)

generator = torch.manual_seed(33)
output = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator)

image = output.images[0]  # label 'white shark'
```

## DiTPipeline
[[autodoc]] DiTPipeline
	- all
	- __call__


================================================
FILE: docs/source/en/api/pipelines/latent_diffusion.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Latent Diffusion

## Overview

Latent Diffusion was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer.

The abstract of the paper is the following:

*By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.*

The original codebase can be found [here](https://github.com/CompVis/latent-diffusion).

## Available Pipelines:

| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_latent_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py) | *Text-to-Image Generation* | - |
| [pipeline_latent_diffusion_superresolution.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py) | *Super Resolution* | - |

## Examples:
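
A minimal text-to-image sketch, assuming the `CompVis/ldm-text2im-large-256` checkpoint released alongside the paper:

```python
from diffusers import LDMTextToImagePipeline

pipe = LDMTextToImagePipeline.from_pretrained("CompVis/ldm-text2im-large-256")

prompt = "a painting of a squirrel eating a burger"
image = pipe(prompt, num_inference_steps=50, guidance_scale=6.0).images[0]
image.save("ldm_squirrel.png")
```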


## LDMTextToImagePipeline
[[autodoc]] LDMTextToImagePipeline
	- all
	- __call__

## LDMSuperResolutionPipeline
[[autodoc]] LDMSuperResolutionPipeline
	- all
	- __call__


================================================
FILE: docs/source/en/api/pipelines/latent_diffusion_uncond.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Unconditional Latent Diffusion

## Overview

Unconditional Latent Diffusion was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer.

The abstract of the paper is the following:

*By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.*

The original codebase can be found [here](https://github.com/CompVis/latent-diffusion).

## Available Pipelines:

| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_latent_diffusion_uncond.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion_uncond/pipeline_latent_diffusion_uncond.py) | *Unconditional Image Generation* | - |

## Examples:
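
A minimal unconditional-generation sketch, assuming the `CompVis/ldm-celebahq-256` checkpoint released alongside the paper:

```python
from diffusers import LDMPipeline

pipe = LDMPipeline.from_pretrained("CompVis/ldm-celebahq-256")

image = pipe(num_inference_steps=200).images[0]
image.save("ldm_generated_image.png")
```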

## LDMPipeline
[[autodoc]] LDMPipeline
	- all
	- __call__


================================================
FILE: docs/source/en/api/pipelines/overview.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Pipelines

Pipelines provide a simple way to run state-of-the-art diffusion models in inference.
Most diffusion systems consist of multiple independently-trained models and highly adaptable scheduler
components, all of which are needed to have a functioning end-to-end diffusion system.

As an example, [Stable Diffusion](https://huggingface.co/blog/stable_diffusion) consists of three independently trained models:
- [Autoencoder](./api/models#vae)
- [Conditional UNet](./api/models#UNet2DConditionModel)
- [CLIP text encoder](https://huggingface.co/docs/transformers/v4.21.2/en/model_doc/clip#transformers.CLIPTextModel)

as well as several other components:
- a [scheduler](./api/scheduler#pndm)
- a [CLIPFeatureExtractor](https://huggingface.co/docs/transformers/v4.21.2/en/model_doc/clip#transformers.CLIPFeatureExtractor)
- a [safety checker](./stable_diffusion#safety_checker)

All of these components are necessary to run Stable Diffusion in inference even though they were trained
or created independently from each other.

To that end, we strive to offer all open-sourced, state-of-the-art diffusion systems under a unified API.
More specifically, we strive to provide pipelines that
1. can load the officially published weights and yield the same outputs, 1-to-1, as the original implementation described in the corresponding paper (*e.g.* [LDMTextToImagePipeline](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/latent_diffusion) uses the officially released weights of [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)),
2. have a simple user interface to run the model in inference (see the [Pipelines API](#pipelines-api) section),
3. are easy to understand, with code that is self-explanatory and can be read alongside the official paper (see [Pipelines summary](#pipelines-summary)), and
4. can easily be contributed by the community (see the [Contribution](#contribution) section).

**Note** that pipelines do not (and should not) offer any training functionality. 
If you are looking for *official* training examples, please have a look at [examples](https://github.com/huggingface/diffusers/tree/main/examples).

## 🧨 Diffusers Summary

The following table summarizes all officially supported pipelines, their corresponding paper, and, if
available, a colab notebook to directly try them out.


| Pipeline | Paper | Tasks | Colab
|---|---|:---:|:---:|
| [alt_diffusion](./alt_diffusion) | [**AltDiffusion**](https://arxiv.org/abs/2211.06679) | Image-to-Image Text-Guided Generation | -
| [audio_diffusion](./audio_diffusion) | [**Audio Diffusion**](https://github.com/teticio/audio-diffusion) | Unconditional Audio Generation |
| [controlnet](./api/pipelines/stable_diffusion/controlnet) | [**ControlNet with Stable Diffusion**](https://arxiv.org/abs/2302.05543) | Image-to-Image Text-Guided Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/controlnet.ipynb)
| [cycle_diffusion](./cycle_diffusion) | [**Cycle Diffusion**](https://arxiv.org/abs/2210.05559) | Image-to-Image Text-Guided Generation |
| [dance_diffusion](./dance_diffusion) | [**Dance Diffusion**](https://github.com/Harmonai-org/sample-generator) | Unconditional Audio Generation |
| [ddpm](./ddpm) | [**Denoising Diffusion Probabilistic Models**](https://arxiv.org/abs/2006.11239) | Unconditional Image Generation |
| [ddim](./ddim) | [**Denoising Diffusion Implicit Models**](https://arxiv.org/abs/2010.02502) | Unconditional Image Generation |
| [latent_diffusion](./latent_diffusion) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752)| Text-to-Image Generation | 
| [latent_diffusion](./latent_diffusion) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752)| Super Resolution Image-to-Image | 
| [latent_diffusion_uncond](./latent_diffusion_uncond) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752) | Unconditional Image Generation | 
| [paint_by_example](./paint_by_example) | [**Paint by Example: Exemplar-based Image Editing with Diffusion Models**](https://arxiv.org/abs/2211.13227) | Image-Guided Image Inpainting | 
| [pndm](./pndm) | [**Pseudo Numerical Methods for Diffusion Models on Manifolds**](https://arxiv.org/abs/2202.09778) | Unconditional Image Generation | 
| [score_sde_ve](./score_sde_ve) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation | 
| [score_sde_vp](./score_sde_vp) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation | 
| [semantic_stable_diffusion](./semantic_stable_diffusion) | [**SEGA: Instructing Diffusion using Semantic Dimensions**](https://arxiv.org/abs/2301.12247) | Text-to-Image Generation |
| [stable_diffusion_text2img](./stable_diffusion/text2img) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-to-Image Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
| [stable_diffusion_img2img](./stable_diffusion/img2img) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Image-to-Image Text-Guided Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)
| [stable_diffusion_inpaint](./stable_diffusion/inpaint) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-Guided Image Inpainting | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
| [stable_diffusion_panorama](./stable_diffusion/panorama) | [**MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation**](https://arxiv.org/abs/2302.08113) | Text-Guided Panorama View Generation |
| [stable_diffusion_pix2pix](./stable_diffusion/pix2pix) | [**InstructPix2Pix: Learning to Follow Image Editing Instructions**](https://arxiv.org/abs/2211.09800) | Text-Based Image Editing |
| [stable_diffusion_pix2pix_zero](./stable_diffusion/pix2pix_zero) | [**Zero-shot Image-to-Image Translation**](https://arxiv.org/abs/2302.03027) | Text-Based Image Editing |
| [stable_diffusion_attend_and_excite](./stable_diffusion/attend_and_excite) | [**Attend and Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models**](https://arxiv.org/abs/2301.13826) | Text-to-Image Generation |
| [stable_diffusion_self_attention_guidance](./stable_diffusion/self_attention_guidance) | [**Self-Attention Guidance**](https://arxiv.org/abs/2210.00939) | Text-to-Image Generation |
| [stable_diffusion_image_variation](./stable_diffusion/image_variation) | [**Stable Diffusion Image Variations**](https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations) | Image-to-Image Generation |
| [stable_diffusion_latent_upscale](./stable_diffusion/latent_upscale) | [**Stable Diffusion Latent Upscaler**](https://twitter.com/StabilityAI/status/1590531958815064065) | Text-Guided Super Resolution Image-to-Image |
| [stable_diffusion_2](./stable_diffusion_2/) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-to-Image Generation | 
| [stable_diffusion_2](./stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Image Inpainting | 
| [stable_diffusion_2](./stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Depth-to-Image Text-Guided Generation |
| [stable_diffusion_2](./stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Super Resolution Image-to-Image |
| [stable_diffusion_safe](./stable_diffusion_safe) | [**Safe Stable Diffusion**](https://arxiv.org/abs/2211.05105) | Text-Guided Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ml-research/safe-latent-diffusion/blob/main/examples/Safe%20Latent%20Diffusion.ipynb)
| [stable_unclip](./stable_unclip) | **Stable unCLIP** | Text-to-Image Generation |
| [stable_unclip](./stable_unclip) | **Stable unCLIP** | Image-to-Image Text-Guided Generation |
| [stochastic_karras_ve](./stochastic_karras_ve) | [**Elucidating the Design Space of Diffusion-Based Generative Models**](https://arxiv.org/abs/2206.00364) | Unconditional Image Generation |
| [unclip](./unclip) | [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125) | Text-to-Image Generation |
| [versatile_diffusion](./versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Text-to-Image Generation | 
| [versatile_diffusion](./versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Image Variations Generation | 
| [versatile_diffusion](./versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Dual Image and Text Guided Generation | 
| [vq_diffusion](./vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation | 


**Note**: Pipelines are simple examples of how to play around with the diffusion systems as described in the corresponding papers. 

However, most of them can be adapted to use different scheduler components or even different model components. Some pipeline examples are shown in the [Examples](#examples) below.

## Pipelines API

Diffusion systems often consist of multiple independently-trained models and other previously existing components.

Each model has been trained independently on a different task, and the scheduler can easily be swapped out and replaced with a different one.
During inference, however, we want to be able to easily load all components and use them together, even if one component, *e.g.* CLIP's text encoder, originates from a different library such as [Transformers](https://github.com/huggingface/transformers). To that end, all pipelines provide the following functionality (a combined usage sketch follows the list):

- [`from_pretrained` method](../diffusion_pipeline) that accepts a Hugging Face Hub repository id, *e.g.* [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) or a path to a local directory, *e.g.*
"./stable-diffusion". To correctly retrieve which models and components should be loaded, one has to provide a `model_index.json` file, *e.g.* [runwayml/stable-diffusion-v1-5/model_index.json](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/model_index.json), which defines all components that should be
loaded into the pipelines. More specifically, for each model/component one needs to define the format `<name>: ["<library>", "<class name>"]`. `<name>` is the attribute name given to the loaded instance of `<class name>` which can be found in the library or pipeline folder called `"<library>"`.
- [`save_pretrained`](../diffusion_pipeline) that accepts a local path, *e.g.* `./stable-diffusion` under which all models/components of the pipeline will be saved. For each component/model a folder is created inside the local path that is named after the given attribute name, *e.g.* `./stable_diffusion/unet`. 
In addition, a `model_index.json` file is created at the root of the local path, *e.g.* `./stable_diffusion/model_index.json` so that the complete pipeline can again be instantiated 
from the local path.
- [`to`](../diffusion_pipeline) which accepts a `string` or `torch.device` to move all models that are of type `torch.nn.Module` to the passed device. The behavior is fully analogous to [PyTorch's `to` method](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.to).
- [`__call__`] method to use the pipeline in inference. `__call__` defines the inference logic of the pipeline and should ideally encompass all of it, from pre-processing to forwarding tensors through the different models and schedulers to post-processing. The API of the `__call__` method can vary strongly from pipeline to pipeline: a text-to-image pipeline, such as [`StableDiffusionPipeline`](./stable_diffusion), should accept, among other things, the text prompt used to generate the image, while a pure image generation pipeline, such as [DDPMPipeline](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/ddpm), can be run without providing any inputs. To better understand which inputs can be adapted for each pipeline, look directly into the respective pipeline.
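
Putting these together, a minimal round-trip might look like this (a sketch, reusing the Stable Diffusion checkpoint referenced above):

```python
from diffusers import DiffusionPipeline

# load all components declared in the repo's model_index.json
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# move every torch.nn.Module component to the GPU
pipe = pipe.to("cuda")

# save all components plus a model_index.json to a local directory ...
pipe.save_pretrained("./stable-diffusion")

# ... from which the complete pipeline can be re-instantiated
pipe = DiffusionPipeline.from_pretrained("./stable-diffusion")
```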

**Note**: All pipelines have PyTorch's autograd disabled by decorating the `__call__` method with a [`torch.no_grad`](https://pytorch.org/docs/stable/generated/torch.no_grad.html) decorator because pipelines should
not be used for training. If you want to store gradients during the forward pass, we recommend writing your own pipeline; see also our [community examples](https://github.com/huggingface/diffusers/tree/main/examples/community).

## Contribution

We are more than happy about any contribution to the officially supported pipelines 🤗. We aspire
for all of our pipelines to be **self-contained**, **easy-to-use**, **easy-to-tweak**, and **one-purpose-only**.

- **Self-contained**: A pipeline shall be as self-contained as possible. More specifically, this means that all functionality should be either directly defined in the pipeline file itself, inherited from (and only from) the [`DiffusionPipeline` class](../diffusion_pipeline), or directly attached to the model and scheduler components of the pipeline.
- **Easy-to-use**: Pipelines should be extremely easy to use - one should be able to load the pipeline and 
use it for its designated task, *e.g.* text-to-image generation, in just a couple of lines of code. Most 
logic including pre-processing, an unrolled diffusion loop, and post-processing should all happen inside the `__call__` method.
- **Easy-to-tweak**: Certain pipelines will not be able to handle all use cases and tasks that you might like them to. If you want to use a certain pipeline for a specific use case that is not yet supported, you might have to copy the pipeline file and tweak the code to your needs. We try to make the pipeline code as readable as possible so that each part (from pre-processing to diffusing to post-processing) can easily be adapted. If you would like the community to benefit from your customized pipeline, we would love to see a contribution to our [community-examples](https://github.com/huggingface/diffusers/tree/main/examples/community). If you feel that an important pipeline should be part of the official pipelines but isn't, a contribution to the [official pipelines](./overview) would be even better.
- **One-purpose-only**: Pipelines should be used for one task and one task only. Even if two tasks are very similar from a modeling point of view, *e.g.* image2image translation and in-painting, pipelines shall be used for one task only to keep them *easy-to-tweak* and *readable*.

## Examples

### Text-to-Image generation with Stable Diffusion

```python
# make sure you're logged in with `huggingface-cli login`
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]

image.save("astronaut_rides_horse.png")
```

### Image-to-Image text-guided generation with Stable Diffusion

The `StableDiffusionImg2ImgPipeline` lets you pass a text prompt and an initial image to condition the generation of new images.

```python
import requests
import torch
from PIL import Image
from io import BytesIO

from diffusers import StableDiffusionImg2ImgPipeline

# load the pipeline
device = "cuda"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to(
    device
)

# let's download an initial image
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((768, 512))

prompt = "A fantasy landscape, trending on artstation"

images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images

images[0].save("fantasy_landscape.png")
```
You can also run this example on colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)

### Tweak prompts reusing seeds and latents

You can generate your own latents to reproduce results, or tweak your prompt on a specific result you liked. [This notebook](https://github.com/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb) shows how to do it step by step. You can also run it in Google Colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb).
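
The core idea, sketched minimally with a fixed `torch.Generator` (parameter names as in the Stable Diffusion pipeline above):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

# a fixed seed makes the sampled latents, and thus the image, reproducible
generator = torch.Generator("cuda").manual_seed(1024)
image = pipe("a photo of an astronaut riding a horse on mars", generator=generator).images[0]
```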


### In-painting using Stable Diffusion

The `StableDiffusionInpaintPipeline` lets you edit specific parts of an image by providing a mask and text prompt.

```python
import PIL
import requests
import torch
from io import BytesIO

from diffusers import StableDiffusionInpaintPipeline


def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")


img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```

You can also run this example on colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)


================================================
FILE: docs/source/en/api/pipelines/paint_by_example.mdx
================================================
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# PaintByExample

## Overview

[Paint by Example: Exemplar-based Image Editing with Diffusion Models](https://arxiv.org/abs/2211.13227) by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen

The abstract of the paper is the following:

*Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity.*

The original codebase can be found [here](https://github.com/Fantasy-Studio/Paint-by-Example).

## Available Pipelines:

| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_paint_by_example.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py) | *Image-Guided Image Inpainting* | - |

## Tips

- PaintByExample is supported by the official [Fantasy-Studio/Paint-by-Example](https://huggingface.co/Fantasy-Studio/Paint-by-Example) checkpoint. The checkpoint was warm-started from [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) and trained with the objective of inpainting partly masked images conditioned on example / reference images.
- To quickly try out *PaintByExample*, have a look at [this demo](https://huggingface.co/spaces/Fantasy-Studio/Paint-by-Example).
- You can run the following code snippet as an example:


```python
# !pip install diffusers transformers

import PIL
import requests
import torch
from io import BytesIO
from diffusers import DiffusionPipeline


def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")


img_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png"
mask_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png"
example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
example_image = download_image(example_url).resize((512, 512))

pipe = DiffusionPipeline.from_pretrained(
    "Fantasy-Studio/Paint-by-Example",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0]
image
```

## PaintByExamplePipeline
[[autodoc]] PaintByExamplePipeline
	- all
	- __call__
│   │   │       └── textual_inversion_bf16.py
│   │   ├── multi_subject_dreambooth/
│   │   │   ├── README.md
│   │   │   ├── requirements.txt
│   │   │   └── train_multi_subject_dreambooth.py
│   │   └── onnxruntime/
│   │       ├── README.md
│   │       ├── text_to_image/
│   │       │   ├── README.md
│   │       │   ├── requirements.txt
│   │       │   └── train_text_to_image.py
│   │       ├── textual_inversion/
│   │       │   ├── README.md
│   │       │   ├── requirements.txt
│   │       │   └── textual_inversion.py
│   │       └── unconditional_image_generation/
│   │           ├── README.md
│   │           ├── requirements.txt
│   │           └── train_unconditional.py
│   ├── rl/
│   │   ├── README.md
│   │   └── run_diffuser_locomotion.py
│   ├── test_examples.py
│   ├── text_to_image/
│   │   ├── README.md
│   │   ├── requirements.txt
│   │   ├── requirements_flax.txt
│   │   ├── train_text_to_image.py
│   │   ├── train_text_to_image_flax.py
│   │   └── train_text_to_image_lora.py
│   ├── textual_inversion/
│   │   ├── README.md
│   │   ├── requirements.txt
│   │   ├── requirements_flax.txt
│   │   ├── textual_inversion.py
│   │   └── textual_inversion_flax.py
│   └── unconditional_image_generation/
│       ├── README.md
│       ├── requirements.txt
│       └── train_unconditional.py
├── pyproject.toml
├── scripts/
│   ├── __init__.py
│   ├── change_naming_configs_and_checkpoints.py
│   ├── conversion_ldm_uncond.py
│   ├── convert_dance_diffusion_to_diffusers.py
│   ├── convert_ddpm_original_checkpoint_to_diffusers.py
│   ├── convert_diffusers_to_original_stable_diffusion.py
│   ├── convert_dit_to_diffusers.py
│   ├── convert_k_upscaler_to_diffusers.py
│   ├── convert_kakao_brain_unclip_to_diffusers.py
│   ├── convert_ldm_original_checkpoint_to_diffusers.py
│   ├── convert_models_diffuser_to_diffusers.py
│   ├── convert_ncsnpp_original_checkpoint_to_diffusers.py
│   ├── convert_original_stable_diffusion_to_diffusers.py
│   ├── convert_stable_diffusion_checkpoint_to_onnx.py
│   ├── convert_unclip_txt2img_to_image_variation.py
│   ├── convert_vae_pt_to_diffusers.py
│   ├── convert_versatile_diffusion_to_diffusers.py
│   ├── convert_vq_diffusion_to_diffusers.py
│   └── generate_logits.py
├── setup.cfg
├── setup.py
├── src/
│   └── diffusers/
│       ├── __init__.py
│       ├── commands/
│       │   ├── __init__.py
│       │   ├── diffusers_cli.py
│       │   └── env.py
│       ├── configuration_utils.py
│       ├── dependency_versions_check.py
│       ├── dependency_versions_table.py
│       ├── experimental/
│       │   ├── README.md
│       │   ├── __init__.py
│       │   └── rl/
│       │       ├── __init__.py
│       │       └── value_guided_sampling.py
│       ├── loaders.py
│       ├── models/
│       │   ├── README.md
│       │   ├── __init__.py
│       │   ├── attention.py
│       │   ├── attention_flax.py
│       │   ├── autoencoder_kl.py
│       │   ├── controlnet.py
│       │   ├── cross_attention.py
│       │   ├── dual_transformer_2d.py
│       │   ├── embeddings.py
│       │   ├── embeddings_flax.py
│       │   ├── modeling_flax_pytorch_utils.py
│       │   ├── modeling_flax_utils.py
│       │   ├── modeling_pytorch_flax_utils.py
│       │   ├── modeling_utils.py
│       │   ├── prior_transformer.py
│       │   ├── resnet.py
│       │   ├── resnet_flax.py
│       │   ├── transformer_2d.py
│       │   ├── unet_1d.py
│       │   ├── unet_1d_blocks.py
│       │   ├── unet_2d.py
│       │   ├── unet_2d_blocks.py
│       │   ├── unet_2d_blocks_flax.py
│       │   ├── unet_2d_condition.py
│       │   ├── unet_2d_condition_flax.py
│       │   ├── vae.py
│       │   ├── vae_flax.py
│       │   └── vq_model.py
│       ├── optimization.py
│       ├── pipeline_utils.py
│       ├── pipelines/
│       │   ├── README.md
│       │   ├── __init__.py
│       │   ├── alt_diffusion/
│       │   │   ├── __init__.py
│       │   │   ├── modeling_roberta_series.py
│       │   │   ├── pipeline_alt_diffusion.py
│       │   │   └── pipeline_alt_diffusion_img2img.py
│       │   ├── audio_diffusion/
│       │   │   ├── __init__.py
│       │   │   ├── mel.py
│       │   │   └── pipeline_audio_diffusion.py
│       │   ├── dance_diffusion/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_dance_diffusion.py
│       │   ├── ddim/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_ddim.py
│       │   ├── ddpm/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_ddpm.py
│       │   ├── dit/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_dit.py
│       │   ├── latent_diffusion/
│       │   │   ├── __init__.py
│       │   │   ├── pipeline_latent_diffusion.py
│       │   │   └── pipeline_latent_diffusion_superresolution.py
│       │   ├── latent_diffusion_uncond/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_latent_diffusion_uncond.py
│       │   ├── onnx_utils.py
│       │   ├── paint_by_example/
│       │   │   ├── __init__.py
│       │   │   ├── image_encoder.py
│       │   │   └── pipeline_paint_by_example.py
│       │   ├── pipeline_flax_utils.py
│       │   ├── pipeline_utils.py
│       │   ├── pndm/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_pndm.py
│       │   ├── repaint/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_repaint.py
│       │   ├── score_sde_ve/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_score_sde_ve.py
│       │   ├── semantic_stable_diffusion/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_semantic_stable_diffusion.py
│       │   ├── stable_diffusion/
│       │   │   ├── README.md
│       │   │   ├── __init__.py
│       │   │   ├── convert_from_ckpt.py
│       │   │   ├── pipeline_cycle_diffusion.py
│       │   │   ├── pipeline_flax_stable_diffusion.py
│       │   │   ├── pipeline_flax_stable_diffusion_img2img.py
│       │   │   ├── pipeline_flax_stable_diffusion_inpaint.py
│       │   │   ├── pipeline_onnx_stable_diffusion.py
│       │   │   ├── pipeline_onnx_stable_diffusion_img2img.py
│       │   │   ├── pipeline_onnx_stable_diffusion_inpaint.py
│       │   │   ├── pipeline_onnx_stable_diffusion_inpaint_legacy.py
│       │   │   ├── pipeline_stable_diffusion.py
│       │   │   ├── pipeline_stable_diffusion_attend_and_excite.py
│       │   │   ├── pipeline_stable_diffusion_controlnet.py
│       │   │   ├── pipeline_stable_diffusion_depth2img.py
│       │   │   ├── pipeline_stable_diffusion_image_variation.py
│       │   │   ├── pipeline_stable_diffusion_img2img.py
│       │   │   ├── pipeline_stable_diffusion_inpaint.py
│       │   │   ├── pipeline_stable_diffusion_inpaint_legacy.py
│       │   │   ├── pipeline_stable_diffusion_instruct_pix2pix.py
│       │   │   ├── pipeline_stable_diffusion_k_diffusion.py
│       │   │   ├── pipeline_stable_diffusion_latent_upscale.py
│       │   │   ├── pipeline_stable_diffusion_panorama.py
│       │   │   ├── pipeline_stable_diffusion_pix2pix_zero.py
│       │   │   ├── pipeline_stable_diffusion_sag.py
│       │   │   ├── pipeline_stable_diffusion_upscale.py
│       │   │   ├── pipeline_stable_unclip.py
│       │   │   ├── pipeline_stable_unclip_img2img.py
│       │   │   ├── safety_checker.py
│       │   │   ├── safety_checker_flax.py
│       │   │   └── stable_unclip_image_normalizer.py
│       │   ├── stable_diffusion_safe/
│       │   │   ├── __init__.py
│       │   │   ├── pipeline_stable_diffusion_safe.py
│       │   │   └── safety_checker.py
│       │   ├── stochastic_karras_ve/
│       │   │   ├── __init__.py
│       │   │   └── pipeline_stochastic_karras_ve.py
│       │   ├── unclip/
│       │   │   ├── __init__.py
│       │   │   ├── pipeline_unclip.py
│       │   │   ├── pipeline_unclip_image_variation.py
│       │   │   └── text_proj.py
│       │   ├── versatile_diffusion/
│       │   │   ├── __init__.py
│       │   │   ├── modeling_text_unet.py
│       │   │   ├── pipeline_versatile_diffusion.py
│       │   │   ├── pipeline_versatile_diffusion_dual_guided.py
│       │   │   ├── pipeline_versatile_diffusion_image_variation.py
│       │   │   └── pipeline_versatile_diffusion_text_to_image.py
│       │   └── vq_diffusion/
│       │       ├── __init__.py
│       │       └── pipeline_vq_diffusion.py
│       ├── schedulers/
│       │   ├── README.md
│       │   ├── __init__.py
│       │   ├── scheduling_ddim.py
│       │   ├── scheduling_ddim_flax.py
│       │   ├── scheduling_ddim_inverse.py
│       │   ├── scheduling_ddpm.py
│       │   ├── scheduling_ddpm_flax.py
│       │   ├── scheduling_deis_multistep.py
│       │   ├── scheduling_dpmsolver_multistep.py
│       │   ├── scheduling_dpmsolver_multistep_flax.py
│       │   ├── scheduling_dpmsolver_singlestep.py
│       │   ├── scheduling_euler_ancestral_discrete.py
│       │   ├── scheduling_euler_discrete.py
│       │   ├── scheduling_heun_discrete.py
│       │   ├── scheduling_ipndm.py
│       │   ├── scheduling_k_dpm_2_ancestral_discrete.py
│       │   ├── scheduling_k_dpm_2_discrete.py
│       │   ├── scheduling_karras_ve.py
│       │   ├── scheduling_karras_ve_flax.py
│       │   ├── scheduling_lms_discrete.py
│       │   ├── scheduling_lms_discrete_flax.py
│       │   ├── scheduling_pndm.py
│       │   ├── scheduling_pndm_flax.py
│       │   ├── scheduling_repaint.py
│       │   ├── scheduling_sde_ve.py
│       │   ├── scheduling_sde_ve_flax.py
│       │   ├── scheduling_sde_vp.py
│       │   ├── scheduling_unclip.py
│       │   ├── scheduling_unipc_multistep.py
│       │   ├── scheduling_utils.py
│       │   ├── scheduling_utils_flax.py
│       │   └── scheduling_vq_diffusion.py
│       ├── training_utils.py
│       └── utils/
│           ├── __init__.py
│           ├── accelerate_utils.py
│           ├── constants.py
│           ├── deprecation_utils.py
│           ├── doc_utils.py
│           ├── dummy_flax_and_transformers_objects.py
│           ├── dummy_flax_objects.py
│           ├── dummy_onnx_objects.py
│           ├── dummy_pt_objects.py
│           ├── dummy_torch_and_librosa_objects.py
│           ├── dummy_torch_and_scipy_objects.py
│           ├── dummy_torch_and_transformers_and_k_diffusion_objects.py
│           ├── dummy_torch_and_transformers_and_onnx_objects.py
│           ├── dummy_torch_and_transformers_objects.py
│           ├── dynamic_modules_utils.py
│           ├── hub_utils.py
│           ├── import_utils.py
│           ├── logging.py
│           ├── model_card_template.md
│           ├── outputs.py
│           ├── pil_utils.py
│           ├── testing_utils.py
│           └── torch_utils.py
├── tests/
│   ├── __init__.py
│   ├── conftest.py
│   ├── fixtures/
│   │   └── custom_pipeline/
│   │       ├── pipeline.py
│   │       └── what_ever.py
│   ├── models/
│   │   ├── __init__.py
│   │   ├── test_models_unet_1d.py
│   │   ├── test_models_unet_2d.py
│   │   ├── test_models_unet_2d_condition.py
│   │   ├── test_models_unet_2d_flax.py
│   │   ├── test_models_vae.py
│   │   ├── test_models_vae_flax.py
│   │   └── test_models_vq.py
│   ├── pipeline_params.py
│   ├── pipelines/
│   │   ├── __init__.py
│   │   ├── altdiffusion/
│   │   │   ├── __init__.py
│   │   │   ├── test_alt_diffusion.py
│   │   │   └── test_alt_diffusion_img2img.py
│   │   ├── audio_diffusion/
│   │   │   ├── __init__.py
│   │   │   └── test_audio_diffusion.py
│   │   ├── dance_diffusion/
│   │   │   ├── __init__.py
│   │   │   └── test_dance_diffusion.py
│   │   ├── ddim/
│   │   │   ├── __init__.py
│   │   │   └── test_ddim.py
│   │   ├── ddpm/
│   │   │   ├── __init__.py
│   │   │   └── test_ddpm.py
│   │   ├── dit/
│   │   │   ├── __init__.py
│   │   │   └── test_dit.py
│   │   ├── karras_ve/
│   │   │   ├── __init__.py
│   │   │   └── test_karras_ve.py
│   │   ├── latent_diffusion/
│   │   │   ├── __init__.py
│   │   │   ├── test_latent_diffusion.py
│   │   │   ├── test_latent_diffusion_superresolution.py
│   │   │   └── test_latent_diffusion_uncond.py
│   │   ├── paint_by_example/
│   │   │   ├── __init__.py
│   │   │   └── test_paint_by_example.py
│   │   ├── pndm/
│   │   │   ├── __init__.py
│   │   │   └── test_pndm.py
│   │   ├── repaint/
│   │   │   ├── __init__.py
│   │   │   └── test_repaint.py
│   │   ├── score_sde_ve/
│   │   │   ├── __init__.py
│   │   │   └── test_score_sde_ve.py
│   │   ├── semantic_stable_diffusion/
│   │   │   ├── __init__.py
│   │   │   └── test_semantic_diffusion.py
│   │   ├── stable_diffusion/
│   │   │   ├── __init__.py
│   │   │   ├── test_cycle_diffusion.py
│   │   │   ├── test_onnx_stable_diffusion.py
│   │   │   ├── test_onnx_stable_diffusion_img2img.py
│   │   │   ├── test_onnx_stable_diffusion_inpaint.py
│   │   │   ├── test_onnx_stable_diffusion_inpaint_legacy.py
│   │   │   ├── test_stable_diffusion.py
│   │   │   ├── test_stable_diffusion_controlnet.py
│   │   │   ├── test_stable_diffusion_image_variation.py
│   │   │   ├── test_stable_diffusion_img2img.py
│   │   │   ├── test_stable_diffusion_inpaint.py
│   │   │   ├── test_stable_diffusion_inpaint_legacy.py
│   │   │   ├── test_stable_diffusion_instruction_pix2pix.py
│   │   │   ├── test_stable_diffusion_k_diffusion.py
│   │   │   ├── test_stable_diffusion_panorama.py
│   │   │   ├── test_stable_diffusion_pix2pix_zero.py
│   │   │   └── test_stable_diffusion_sag.py
│   │   ├── stable_diffusion_2/
│   │   │   ├── __init__.py
│   │   │   ├── test_stable_diffusion.py
│   │   │   ├── test_stable_diffusion_attend_and_excite.py
│   │   │   ├── test_stable_diffusion_depth.py
│   │   │   ├── test_stable_diffusion_flax.py
│   │   │   ├── test_stable_diffusion_flax_inpaint.py
│   │   │   ├── test_stable_diffusion_inpaint.py
│   │   │   ├── test_stable_diffusion_latent_upscale.py
│   │   │   ├── test_stable_diffusion_upscale.py
│   │   │   └── test_stable_diffusion_v_pred.py
│   │   ├── stable_diffusion_safe/
│   │   │   ├── __init__.py
│   │   │   └── test_safe_diffusion.py
│   │   ├── stable_unclip/
│   │   │   ├── __init__.py
│   │   │   ├── test_stable_unclip.py
│   │   │   └── test_stable_unclip_img2img.py
│   │   ├── test_pipeline_utils.py
│   │   ├── unclip/
│   │   │   ├── __init__.py
│   │   │   ├── test_unclip.py
│   │   │   └── test_unclip_image_variation.py
│   │   ├── versatile_diffusion/
│   │   │   ├── __init__.py
│   │   │   ├── test_versatile_diffusion_dual_guided.py
│   │   │   ├── test_versatile_diffusion_image_variation.py
│   │   │   ├── test_versatile_diffusion_mega.py
│   │   │   └── test_versatile_diffusion_text_to_image.py
│   │   └── vq_diffusion/
│   │       ├── __init__.py
│   │       └── test_vq_diffusion.py
│   ├── repo_utils/
│   │   ├── test_check_copies.py
│   │   └── test_check_dummies.py
│   ├── test_config.py
│   ├── test_hub_utils.py
│   ├── test_layers_utils.py
│   ├── test_modeling_common.py
│   ├── test_modeling_common_flax.py
│   ├── test_outputs.py
│   ├── test_pipelines.py
│   ├── test_pipelines_common.py
│   ├── test_pipelines_flax.py
│   ├── test_pipelines_onnx_common.py
│   ├── test_scheduler.py
│   ├── test_scheduler_flax.py
│   ├── test_training.py
│   ├── test_unet_2d_blocks.py
│   ├── test_unet_blocks_common.py
│   └── test_utils.py
└── utils/
    ├── check_config_docstrings.py
    ├── check_copies.py
    ├── check_doc_toc.py
    ├── check_dummies.py
    ├── check_inits.py
    ├── check_repo.py
    ├── check_table.py
    ├── custom_init_isort.py
    ├── get_modified_files.py
    ├── overwrite_expected_slice.py
    ├── print_env.py
    ├── release.py
    └── stale.py
SYMBOL INDEX (4032 symbols across 300 files)

FILE: examples/community/bit_diffusion.py
  function decimal_to_bits (line 15) | def decimal_to_bits(x, bits=BITS):
  function bits_to_decimal (line 31) | def bits_to_decimal(x, bits=BITS):
  function ddim_bit_scheduler_step (line 45) | def ddim_bit_scheduler_step(
  function ddpm_bit_scheduler_step (line 135) | def ddpm_bit_scheduler_step(
  class BitDiffusion (line 213) | class BitDiffusion(DiffusionPipeline):
    method __init__ (line 214) | def __init__(
    method __call__ (line 229) | def __call__(
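
Bit Diffusion's trick is to diffuse "analog bits": pixel values are binarized and the bits rescaled to {-1, +1} so a continuous diffusion model can denoise them. A minimal sketch of the two converters, assuming BITS = 8 and keeping the bits in a trailing dimension (the file itself packs them into the channel dimension):

    import torch

    BITS = 8  # assumed bit depth, as in the Bit Diffusion paper

    def decimal_to_bits(x, bits=BITS):
        # map pixels in [0, 1] to integers, split into binary digits,
        # and rescale the digits from {0, 1} to analog bits in {-1, +1}
        x = (x * (2 ** bits - 1)).long().clamp(0, 2 ** bits - 1)
        mask = 2 ** torch.arange(bits - 1, -1, -1, device=x.device)
        digits = ((x.unsqueeze(-1) & mask) != 0).float()
        return digits * 2 - 1

    def bits_to_decimal(x, bits=BITS):
        # threshold analog bits back to {0, 1} and recompose the integer
        mask = 2 ** torch.arange(bits - 1, -1, -1, device=x.device)
        digits = (x > 0).long()
        return (digits * mask).sum(-1).float() / (2 ** bits - 1)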

FILE: examples/community/checkpoint_merger.py
  class CheckpointMergerPipeline (line 20) | class CheckpointMergerPipeline(DiffusionPipeline):
    method __init__ (line 41) | def __init__(self):
    method _compare_model_configs (line 45) | def _compare_model_configs(self, dict0, dict1):
    method _remove_meta_keys (line 56) | def _remove_meta_keys(self, config_dict: Dict):
    method merge (line 66) | def merge(self, pretrained_model_name_or_path_list: List[Union[str, os...
    method weighted_sum (line 271) | def weighted_sum(theta0, theta1, theta2, alpha):
    method sigmoid (line 276) | def sigmoid(theta0, theta1, theta2, alpha):
    method inv_sigmoid (line 282) | def inv_sigmoid(theta0, theta1, theta2, alpha):
    method add_difference (line 289) | def add_difference(theta0, theta1, theta2, alpha):
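
The merge methods at the bottom of CheckpointMergerPipeline are applied tensor-by-tensor across the checkpoints' state dicts (the file exposes them as static methods selected by the merge's interpolation argument). A sketch of the two simplest ones, where theta0/theta1/theta2 are matching parameter tensors and alpha is the blend weight:

    def weighted_sum(theta0, theta1, theta2, alpha):
        # plain linear interpolation between two checkpoints; theta2 is unused
        return ((1 - alpha) * theta0) + (alpha * theta1)

    def add_difference(theta0, theta1, theta2, alpha):
        # graft the (theta1 - theta2) delta onto theta0, scaled by alpha
        return theta0 + (theta1 - theta2) * alpha

add_difference is what makes three-way merges useful: theta2 acts as a common ancestor whose contribution is subtracted out of theta1 before blending.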

FILE: examples/community/clip_guided_stable_diffusion.py
  class MakeCutouts (line 21) | class MakeCutouts(nn.Module):
    method __init__ (line 22) | def __init__(self, cut_size, cut_power=1.0):
    method forward (line 28) | def forward(self, pixel_values, num_cutouts):
  function spherical_dist_loss (line 42) | def spherical_dist_loss(x, y):
  function set_requires_grad (line 48) | def set_requires_grad(model, value):
  class CLIPGuidedStableDiffusion (line 53) | class CLIPGuidedStableDiffusion(DiffusionPipeline):
    method __init__ (line 59) | def __init__(
    method enable_attention_slicing (line 91) | def enable_attention_slicing(self, slice_size: Optional[Union[str, int...
    method disable_attention_slicing (line 98) | def disable_attention_slicing(self):
    method freeze_vae (line 101) | def freeze_vae(self):
    method unfreeze_vae (line 104) | def unfreeze_vae(self):
    method freeze_unet (line 107) | def freeze_unet(self):
    method unfreeze_unet (line 110) | def unfreeze_unet(self):
    method cond_fn (line 114) | def cond_fn(
    method __call__ (line 183) | def __call__(

FILE: examples/community/composable_stable_diffusion.py
  class ComposableStableDiffusionPipeline (line 43) | class ComposableStableDiffusionPipeline(DiffusionPipeline):
    method __init__ (line 72) | def __init__(
    method enable_vae_slicing (line 168) | def enable_vae_slicing(self):
    method disable_vae_slicing (line 177) | def disable_vae_slicing(self):
    method enable_sequential_cpu_offload (line 184) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 207) | def _execution_device(self):
    method _encode_prompt (line 224) | def _encode_prompt(self, prompt, device, num_images_per_prompt, do_cla...
    method run_safety_checker (line 329) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 339) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 347) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 364) | def check_inputs(self, prompt, height, width, callback_steps):
    method prepare_latents (line 379) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method __call__ (line 397) | def __call__(

FILE: examples/community/imagic_stable_diffusion.py
  function preprocess (line 49) | def preprocess(image):
  class ImagicStableDiffusionPipeline (line 59) | class ImagicStableDiffusionPipeline(DiffusionPipeline):
    method __init__ (line 87) | def __init__(
    method enable_attention_slicing (line 108) | def enable_attention_slicing(self, slice_size: Optional[Union[str, int...
    method disable_attention_slicing (line 125) | def disable_attention_slicing(self):
    method train (line 133) | def train(
    method __call__ (line 334) | def __call__(

FILE: examples/community/img2img_inpainting.py
  function prepare_mask_and_masked_image (line 21) | def prepare_mask_and_masked_image(image, mask):
  function check_size (line 38) | def check_size(image, height, width):
  function overlay_inner_image (line 48) | def overlay_inner_image(image, inner_image, paste_offset: Tuple[int] = (...
  class ImageToImageInpaintingPipeline (line 58) | class ImageToImageInpaintingPipeline(DiffusionPipeline):
    method __init__ (line 86) | def __init__(
    method enable_attention_slicing (line 132) | def enable_attention_slicing(self, slice_size: Optional[Union[str, int...
    method disable_attention_slicing (line 151) | def disable_attention_slicing(self):
    method __call__ (line 160) | def __call__(

FILE: examples/community/interpolate_stable_diffusion.py
  function slerp (line 22) | def slerp(t, v0, v1, DOT_THRESHOLD=0.9995):
  class StableDiffusionWalkPipeline (line 49) | class StableDiffusionWalkPipeline(DiffusionPipeline):
    method __init__ (line 77) | def __init__(
    method enable_attention_slicing (line 123) | def enable_attention_slicing(self, slice_size: Optional[Union[str, int...
    method disable_attention_slicing (line 142) | def disable_attention_slicing(self):
    method __call__ (line 151) | def __call__(
    method embed_text (line 403) | def embed_text(self, text):
    method get_noise (line 416) | def get_noise(self, seed, dtype=torch.float32, height=512, width=512):
    method walk (line 425) | def walk(
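
slerp interpolates along the great circle between two vectors instead of along the chord, which keeps interpolated noise latents at a plausible norm for Gaussian samples; walk() chains many such interpolations into a latent-space video. A sketch close to the numpy-based variant these community pipelines use, with the DOT_THRESHOLD fallback to plain lerp for nearly colinear inputs:

    import numpy as np
    import torch

    def slerp(t, v0, v1, DOT_THRESHOLD=0.9995):
        # spherical linear interpolation; accepts numpy arrays or torch tensors
        inputs_are_torch = isinstance(v0, torch.Tensor)
        if inputs_are_torch:
            device = v0.device
            v0 = v0.cpu().numpy()
            v1 = v1.cpu().numpy()
        dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
        if np.abs(dot) > DOT_THRESHOLD:
            v2 = (1 - t) * v0 + t * v1  # nearly colinear: fall back to lerp
        else:
            theta_0 = np.arccos(dot)
            sin_theta_0 = np.sin(theta_0)
            theta_t = theta_0 * t
            v2 = (np.sin(theta_0 - theta_t) / sin_theta_0) * v0 + (np.sin(theta_t) / sin_theta_0) * v1
        if inputs_are_torch:
            v2 = torch.from_numpy(v2).to(device)
        return v2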

FILE: examples/community/lpw_stable_diffusion.py
  function parse_prompt_attention (line 61) | def parse_prompt_attention(text):
  function get_prompts_with_weights (line 147) | def get_prompts_with_weights(pipe: StableDiffusionPipeline, prompt: List...
  function pad_tokens_and_weights (line 182) | def pad_tokens_and_weights(tokens, weights, max_length, bos, eos, no_bos...
  function get_unweighted_text_embeddings (line 207) | def get_unweighted_text_embeddings(
  function get_weighted_text_embeddings (line 247) | def get_weighted_text_embeddings(
  function preprocess_image (line 377) | def preprocess_image(image):
  function preprocess_mask (line 387) | def preprocess_mask(mask, scale_factor=8):
  class StableDiffusionLongPromptWeightingPipeline (line 400) | class StableDiffusionLongPromptWeightingPipeline(StableDiffusionPipeline):
    method __init__ (line 431) | def __init__(
    method __init__ (line 456) | def __init__(
    method __init__additional__ (line 477) | def __init__additional__(self):
    method _execution_device (line 482) | def _execution_device(self):
    method _encode_prompt (line 499) | def _encode_prompt(
    method check_inputs (line 557) | def check_inputs(self, prompt, height, width, strength, callback_steps):
    method get_timesteps (line 575) | def get_timesteps(self, num_inference_steps, strength, device, is_text...
    method run_safety_checker (line 588) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 598) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 606) | def prepare_extra_step_kwargs(self, generator, eta):
    method prepare_latents (line 623) | def prepare_latents(self, image, timestep, batch_size, height, width, ...
    method __call__ (line 663) | def __call__(
    method text2img (line 863) | def text2img(
    method img2img (line 958) | def img2img(
    method inpaint (line 1052) | def inpaint(
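
Before the img2img and inpaint entry points hand images to the VAE, preprocess_image converts a PIL image to the expected tensor layout. A sketch under the usual Stable Diffusion conventions (snap the size to a multiple of 32, scale to [-1, 1], NCHW); the file's exact resampling choice may differ:

    import numpy as np
    import PIL.Image
    import torch

    def preprocess_image(image):
        # PIL HWC uint8 -> torch NCHW float32 in [-1, 1]
        w, h = image.size
        w, h = (x - x % 32 for x in (w, h))
        image = image.resize((w, h), resample=PIL.Image.LANCZOS)
        arr = np.array(image).astype(np.float32) / 255.0
        arr = arr[None].transpose(0, 3, 1, 2)
        return torch.from_numpy(2.0 * arr - 1.0)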

FILE: examples/community/lpw_stable_diffusion_onnx.py
  function parse_prompt_attention (line 78) | def parse_prompt_attention(text):
  function get_prompts_with_weights (line 164) | def get_prompts_with_weights(pipe, prompt: List[str], max_length: int):
  function pad_tokens_and_weights (line 199) | def pad_tokens_and_weights(tokens, weights, max_length, bos, eos, no_bos...
  function get_unweighted_text_embeddings (line 224) | def get_unweighted_text_embeddings(
  function get_weighted_text_embeddings (line 265) | def get_weighted_text_embeddings(
  function preprocess_image (line 404) | def preprocess_image(image):
  function preprocess_mask (line 413) | def preprocess_mask(mask, scale_factor=8):
  class OnnxStableDiffusionLongPromptWeightingPipeline (line 425) | class OnnxStableDiffusionLongPromptWeightingPipeline(OnnxStableDiffusion...
    method __init__ (line 435) | def __init__(
    method __init__ (line 462) | def __init__(
    method __init__additional__ (line 485) | def __init__additional__(self):
    method _encode_prompt (line 489) | def _encode_prompt(
    method check_inputs (line 540) | def check_inputs(self, prompt, height, width, strength, callback_steps):
    method get_timesteps (line 558) | def get_timesteps(self, num_inference_steps, strength, is_text2img):
    method run_safety_checker (line 571) | def run_safety_checker(self, image):
    method decode_latents (line 589) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 600) | def prepare_extra_step_kwargs(self, generator, eta):
    method prepare_latents (line 617) | def prepare_latents(self, image, timestep, batch_size, height, width, ...
    method __call__ (line 650) | def __call__(
    method text2img (line 865) | def text2img(
    method img2img (line 957) | def img2img(
    method inpaint (line 1048) | def inpaint(

FILE: examples/community/magic_mix.py
  class MagicMixPipeline (line 19) | class MagicMixPipeline(DiffusionPipeline):
    method __init__ (line 20) | def __init__(
    method encode (line 33) | def encode(self, img):
    method decode (line 40) | def decode(self, latent):
    method prep_text (line 50) | def prep_text(self, prompt):
    method __call__ (line 73) | def __call__(

FILE: examples/community/multilingual_stable_diffusion.py
  function detect_language (line 26) | def detect_language(pipe, prompt, batch_size):
  function translate_prompt (line 41) | def translate_prompt(prompt, translation_tokenizer, translation_model, d...
  class MultilingualStableDiffusion (line 51) | class MultilingualStableDiffusion(DiffusionPipeline):
    method __init__ (line 86) | def __init__(
    method enable_attention_slicing (line 138) | def enable_attention_slicing(self, slice_size: Optional[Union[str, int...
    method disable_attention_slicing (line 157) | def disable_attention_slicing(self):
    method __call__ (line 166) | def __call__(

FILE: examples/community/one_step_unet.py
  class UnetSchedulerOneForwardPipeline (line 7) | class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
    method __init__ (line 8) | def __init__(self, unet, scheduler):
    method __call__ (line 13) | def __call__(self):
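
one_step_unet is the "hello world" of custom pipelines: a single UNet forward pass on random noise followed by one scheduler step. A minimal reconstruction consistent with the symbols above (config attribute names assumed to follow UNet2DModel):

    import torch
    from diffusers import DiffusionPipeline

    class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
        def __init__(self, unet, scheduler):
            super().__init__()
            self.register_modules(unet=unet, scheduler=scheduler)

        def __call__(self):
            sample_size = self.unet.config.sample_size
            image = torch.randn((1, self.unet.config.in_channels, sample_size, sample_size))
            timestep = 1
            model_output = self.unet(image, timestep).sample
            # one reverse-diffusion step; hardly an image, but it exercises the API
            return self.scheduler.step(model_output, timestep, image).prev_sample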

FILE: examples/community/sd_text2img_k_diffusion.py
  class ModelWrapper (line 30) | class ModelWrapper:
    method __init__ (line 31) | def __init__(self, model, alphas_cumprod):
    method apply_model (line 35) | def apply_model(self, *args, **kwargs):
  class StableDiffusionPipeline (line 44) | class StableDiffusionPipeline(DiffusionPipeline):
    method __init__ (line 73) | def __init__(
    method set_sampler (line 113) | def set_sampler(self, scheduler_type: str):
    method set_scheduler (line 117) | def set_scheduler(self, scheduler_type: str):
    method enable_attention_slicing (line 122) | def enable_attention_slicing(self, slice_size: Optional[Union[str, int...
    method disable_attention_slicing (line 141) | def disable_attention_slicing(self):
    method enable_sequential_cpu_offload (line 149) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 167) | def _execution_device(self):
    method _encode_prompt (line 184) | def _encode_prompt(self, prompt, device, num_images_per_prompt, do_cla...
    method run_safety_checker (line 289) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 299) | def decode_latents(self, latents):
    method check_inputs (line 307) | def check_inputs(self, prompt, height, width, callback_steps):
    method prepare_latents (line 322) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method __call__ (line 339) | def __call__(

FILE: examples/community/seed_resize_stable_diffusion.py
  class SeedResizeStableDiffusionPipeline (line 21) | class SeedResizeStableDiffusionPipeline(DiffusionPipeline):
    method __init__ (line 49) | def __init__(
    method enable_attention_slicing (line 70) | def enable_attention_slicing(self, slice_size: Optional[Union[str, int...
    method disable_attention_slicing (line 89) | def disable_attention_slicing(self):
    method __call__ (line 98) | def __call__(

FILE: examples/community/speech_to_image_diffusion.py
  class SpeechToImagePipeline (line 29) | class SpeechToImagePipeline(DiffusionPipeline):
    method __init__ (line 30) | def __init__(
    method enable_attention_slicing (line 65) | def enable_attention_slicing(self, slice_size: Optional[Union[str, int...
    method disable_attention_slicing (line 70) | def disable_attention_slicing(self):
    method __call__ (line 74) | def __call__(

FILE: examples/community/stable_diffusion_comparison.py
  class StableDiffusionComparisonPipeline (line 25) | class StableDiffusionComparisonPipeline(DiffusionPipeline):
    method __init__ (line 53) | def __init__(
    method layers (line 83) | def layers(self) -> Dict[str, Any]:
    method enable_attention_slicing (line 86) | def enable_attention_slicing(self, slice_size: Optional[Union[str, int...
    method disable_attention_slicing (line 103) | def disable_attention_slicing(self):
    method text2img_sd1_1 (line 112) | def text2img_sd1_1(
    method text2img_sd1_2 (line 149) | def text2img_sd1_2(
    method text2img_sd1_3 (line 186) | def text2img_sd1_3(
    method text2img_sd1_4 (line 223) | def text2img_sd1_4(
    method _call_ (line 260) | def _call_(

FILE: examples/community/stable_diffusion_mega.py
  class StableDiffusionMegaPipeline (line 26) | class StableDiffusionMegaPipeline(DiffusionPipeline):
    method __init__ (line 55) | def __init__(
    method components (line 93) | def components(self) -> Dict[str, Any]:
    method enable_attention_slicing (line 96) | def enable_attention_slicing(self, slice_size: Optional[Union[str, int...
    method disable_attention_slicing (line 115) | def disable_attention_slicing(self):
    method inpaint (line 124) | def inpaint(
    method img2img (line 159) | def img2img(
    method text2img (line 194) | def text2img(

FILE: examples/community/stable_unclip.py
  function _encode_image (line 17) | def _encode_image(self, image, device, num_images_per_prompt, do_classif...
  class StableUnCLIPPipeline (line 38) | class StableUnCLIPPipeline(DiffusionPipeline):
    method __init__ (line 39) | def __init__(
    method _encode_prompt (line 67) | def _encode_prompt(
    method _execution_device (line 152) | def _execution_device(self):
    method prepare_latents (line 169) | def prepare_latents(self, shape, dtype, device, generator, latents, sc...
    method to (line 180) | def to(self, torch_device: Optional[Union[str, torch.device]] = None):
    method __call__ (line 185) | def __call__(

FILE: examples/community/text_inpainting.py
  class TextInpainting (line 25) | class TextInpainting(DiffusionPipeline):
    method __init__ (line 59) | def __init__(
    method enable_attention_slicing (line 123) | def enable_attention_slicing(self, slice_size: Optional[Union[str, int...
    method disable_attention_slicing (line 142) | def disable_attention_slicing(self):
    method enable_sequential_cpu_offload (line 150) | def enable_sequential_cpu_offload(self):
    method _execution_device (line 169) | def _execution_device(self):
    method __call__ (line 187) | def __call__(

FILE: examples/community/tiled_upscaling.py
  function make_transparency_mask (line 29) | def make_transparency_mask(size, overlap_pixels, remove_borders=[]):
  function clamp (line 52) | def clamp(n, smallest, largest):
  function clamp_rect (line 56) | def clamp_rect(rect: [int], min: [int], max: [int]):
  function add_overlap_rect (line 65) | def add_overlap_rect(rect: [int], overlap: int, image_size: [int]):
  function squeeze_tile (line 75) | def squeeze_tile(tile, original_image, original_slice, slice_x):
  function unsqueeze_tile (line 87) | def unsqueeze_tile(tile, original_image_slice):
  function next_divisible (line 93) | def next_divisible(n, d):
  class StableDiffusionTiledUpscalePipeline (line 98) | class StableDiffusionTiledUpscalePipeline(StableDiffusionUpscalePipeline):
    method __init__ (line 125) | def __init__(
    method _process_tile (line 145) | def _process_tile(self, original_image_slice, x, y, tile_size, tile_bo...
    method __call__ (line 185) | def __call__(
  function main (line 282) | def main():
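
The small geometry helpers keep tile rectangles inside the source image and aligned with the upscaler's stride. Sketches of the two simplest ones (next_divisible is assumed to round down to the nearest multiple):

    def clamp(n, smallest, largest):
        # confine n to the closed interval [smallest, largest]
        return max(smallest, min(n, largest))

    def next_divisible(n, d):
        # round n down to the nearest multiple of d, so tile edges stay
        # compatible with the model's downsampling factor
        return n - (n % d)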

FILE: examples/community/unclip_image_interpolation.py
  function slerp (line 28) | def slerp(val, low, high):
  class UnCLIPImageInterpolationPipeline (line 40) | class UnCLIPImageInterpolationPipeline(DiffusionPipeline):
    method __init__ (line 87) | def __init__(
    method prepare_latents (line 116) | def prepare_latents(self, shape, dtype, device, generator, latents, sc...
    method _encode_prompt (line 128) | def _encode_prompt(self, prompt, device, num_images_per_prompt, do_cla...
    method _encode_image (line 192) | def _encode_image(self, image, device, num_images_per_prompt, image_em...
    method enable_sequential_cpu_offload (line 207) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 233) | def _execution_device(self):
    method __call__ (line 251) | def __call__(

FILE: examples/community/unclip_text_interpolation.py
  function slerp (line 24) | def slerp(val, low, high):
  class UnCLIPTextInterpolationPipeline (line 36) | class UnCLIPTextInterpolationPipeline(DiffusionPipeline):
    method __init__ (line 82) | def __init__(
    method prepare_latents (line 111) | def prepare_latents(self, shape, dtype, device, generator, latents, sc...
    method _encode_prompt (line 123) | def _encode_prompt(
    method enable_sequential_cpu_offload (line 215) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 242) | def _execution_device(self):
    method __call__ (line 260) | def __call__(

FILE: examples/community/wildcard_stable_diffusion.py
  function get_filename (line 25) | def get_filename(path: str):
  function read_wildcard_values (line 30) | def read_wildcard_values(path: str):
  function grab_wildcard_values (line 35) | def grab_wildcard_values(wildcard_option_dict: Dict[str, List[str]] = {}...
  function replace_prompt_with_wildcards (line 45) | def replace_prompt_with_wildcards(
  class WildcardStableDiffusionOutput (line 62) | class WildcardStableDiffusionOutput(StableDiffusionPipelineOutput):
  class WildcardStableDiffusionPipeline (line 66) | class WildcardStableDiffusionPipeline(DiffusionPipeline):
    method __init__ (line 111) | def __init__(
    method __call__ (line 158) | def __call__(
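
replace_prompt_with_wildcards swaps __token__ placeholders in a prompt for randomly chosen options before text encoding. A hypothetical, simplified variant (the real file can also read option lists from wildcard .txt files via read_wildcard_values, and its placeholder grammar may be more permissive):

    import random
    import re

    def replace_prompt_with_wildcards(prompt, wildcard_option_dict=None):
        # simplified sketch: each __token__ is replaced by one random option
        wildcard_option_dict = wildcard_option_dict or {}
        for token in re.findall(r"__([^_]+)__", prompt):
            options = wildcard_option_dict.get(token)
            if options:
                prompt = prompt.replace(f"__{token}__", random.choice(options), 1)
        return prompt

For example, replace_prompt_with_wildcards("a __animal__ in space", {"animal": ["red fox", "barn owl"]}) yields "a red fox in space" or "a barn owl in space".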

FILE: examples/conftest.py
  function pytest_addoption (line 34) | def pytest_addoption(parser):
  function pytest_terminal_summary (line 40) | def pytest_terminal_summary(terminalreporter):

FILE: examples/dreambooth/train_dreambooth.py
  function import_model_class_from_model_name_or_path (line 55) | def import_model_class_from_model_name_or_path(pretrained_model_name_or_...
  function parse_args (line 75) | def parse_args(input_args=None):
  class DreamBoothDataset (line 368) | class DreamBoothDataset(Dataset):
    method __init__ (line 374) | def __init__(
    method __len__ (line 416) | def __len__(self):
    method __getitem__ (line 419) | def __getitem__(self, index):
  function collate_fn (line 449) | def collate_fn(examples, with_prior_preservation=False):
  class PromptDataset (line 471) | class PromptDataset(Dataset):
    method __init__ (line 474) | def __init__(self, prompt, num_samples):
    method __len__ (line 478) | def __len__(self):
    method __getitem__ (line 481) | def __getitem__(self, index):
  function get_full_repo_name (line 488) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function main (line 498) | def main(args):
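
PromptDataset recurs across all the DreamBooth variants: it wraps the single class prompt so a DataLoader can batch the generation of prior-preservation class images. A sketch matching the indexed signatures:

    from torch.utils.data import Dataset

    class PromptDataset(Dataset):
        # repeats one prompt num_samples times, tagging each copy with its index
        def __init__(self, prompt, num_samples):
            self.prompt = prompt
            self.num_samples = num_samples

        def __len__(self):
            return self.num_samples

        def __getitem__(self, index):
            return {"prompt": self.prompt, "index": index}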

FILE: examples/dreambooth/train_dreambooth_flax.py
  function parse_args (line 47) | def parse_args():
  class DreamBoothDataset (line 221) | class DreamBoothDataset(Dataset):
    method __init__ (line 227) | def __init__(
    method __len__ (line 269) | def __len__(self):
    method __getitem__ (line 272) | def __getitem__(self, index):
  class PromptDataset (line 300) | class PromptDataset(Dataset):
    method __init__ (line 303) | def __init__(self, prompt, num_samples):
    method __len__ (line 307) | def __len__(self):
    method __getitem__ (line 310) | def __getitem__(self, index):
  function get_full_repo_name (line 317) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function get_params_to_save (line 327) | def get_params_to_save(params):
  function main (line 331) | def main():

FILE: examples/dreambooth/train_dreambooth_lora.py
  function save_model_card (line 62) | def save_model_card(repo_name, images=None, base_model=str, prompt=str, ...
  function import_model_class_from_model_name_or_path (line 92) | def import_model_class_from_model_name_or_path(pretrained_model_name_or_...
  function parse_args (line 112) | def parse_args(input_args=None):
  class DreamBoothDataset (line 407) | class DreamBoothDataset(Dataset):
    method __init__ (line 413) | def __init__(
    method __len__ (line 455) | def __len__(self):
    method __getitem__ (line 458) | def __getitem__(self, index):
  function collate_fn (line 488) | def collate_fn(examples, with_prior_preservation=False):
  class PromptDataset (line 510) | class PromptDataset(Dataset):
    method __init__ (line 513) | def __init__(self, prompt, num_samples):
    method __len__ (line 517) | def __len__(self):
    method __getitem__ (line 520) | def __getitem__(self, index):
  function get_full_repo_name (line 527) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function main (line 537) | def main(args):

FILE: examples/research_projects/colossalai/train_dreambooth_colossalai.py
  function import_model_class_from_model_name_or_path (line 34) | def import_model_class_from_model_name_or_path(pretrained_model_name_or_...
  function parse_args (line 54) | def parse_args(input_args=None):
  class DreamBoothDataset (line 251) | class DreamBoothDataset(Dataset):
    method __init__ (line 257) | def __init__(
    method __len__ (line 299) | def __len__(self):
    method __getitem__ (line 302) | def __getitem__(self, index):
  class PromptDataset (line 330) | class PromptDataset(Dataset):
    method __init__ (line 333) | def __init__(self, prompt, num_samples):
    method __len__ (line 337) | def __len__(self):
    method __getitem__ (line 340) | def __getitem__(self, index):
  function get_full_repo_name (line 347) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function gemini_zero_dpp (line 358) | def gemini_zero_dpp(model: torch.nn.Module, placememt_policy: str = "aut...
  function main (line 367) | def main(args):

FILE: examples/research_projects/dreambooth_inpaint/train_dreambooth_inpaint.py
  function prepare_mask_and_masked_image (line 41) | def prepare_mask_and_masked_image(image, mask):
  function random_mask (line 59) | def random_mask(im_shape, ratio=1, mask_full_image=False):
  function parse_args (line 83) | def parse_args():
  class DreamBoothDataset (line 298) | class DreamBoothDataset(Dataset):
    method __init__ (line 304) | def __init__(
    method __len__ (line 351) | def __len__(self):
    method __getitem__ (line 354) | def __getitem__(self, index):
  class PromptDataset (line 388) | class PromptDataset(Dataset):
    method __init__ (line 391) | def __init__(self, prompt, num_samples):
    method __len__ (line 395) | def __len__(self):
    method __getitem__ (line 398) | def __getitem__(self, index):
  function get_full_repo_name (line 405) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function main (line 415) | def main():

FILE: examples/research_projects/dreambooth_inpaint/train_dreambooth_inpaint_lora.py
  function prepare_mask_and_masked_image (line 37) | def prepare_mask_and_masked_image(image, mask):
  function random_mask (line 55) | def random_mask(im_shape, ratio=1, mask_full_image=False):
  function parse_args (line 79) | def parse_args():
  class DreamBoothDataset (line 297) | class DreamBoothDataset(Dataset):
    method __init__ (line 303) | def __init__(
    method __len__ (line 350) | def __len__(self):
    method __getitem__ (line 353) | def __getitem__(self, index):
  class PromptDataset (line 387) | class PromptDataset(Dataset):
    method __init__ (line 390) | def __init__(self, prompt, num_samples):
    method __len__ (line 394) | def __len__(self):
    method __getitem__ (line 397) | def __getitem__(self, index):
  function get_full_repo_name (line 404) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function main (line 414) | def main():

FILE: examples/research_projects/intel_opts/inference_bf16.py
  function image_grid (line 8) | def image_grid(imgs, rows, cols):
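
image_grid is the usual contact-sheet helper for inspecting a batch of generated images. A sketch, assuming every image in the list has the same size:

    from PIL import Image

    def image_grid(imgs, rows, cols):
        # paste rows*cols equally sized PIL images into one grid image
        assert len(imgs) == rows * cols
        w, h = imgs[0].size
        grid = Image.new("RGB", size=(cols * w, rows * h))
        for i, img in enumerate(imgs):
            grid.paste(img, box=(i % cols * w, i // cols * h))
        return grid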

FILE: examples/research_projects/intel_opts/textual_inversion/textual_inversion_bf16.py
  function save_progress (line 60) | def save_progress(text_encoder, placeholder_token_id, accelerator, args,...
  function parse_args (line 67) | def parse_args():
  class TextualInversionDataset (line 273) | class TextualInversionDataset(Dataset):
    method __init__ (line 274) | def __init__(
    method __len__ (line 313) | def __len__(self):
    method __getitem__ (line 316) | def __getitem__(self, i):
  function get_full_repo_name (line 359) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function freeze_params (line 369) | def freeze_params(params):
  function main (line 374) | def main():

FILE: examples/research_projects/multi_subject_dreambooth/train_multi_subject_dreambooth.py
  function import_model_class_from_model_name_or_path (line 39) | def import_model_class_from_model_name_or_path(pretrained_model_name_or_...
  function parse_args (line 59) | def parse_args(input_args=None):
  class DreamBoothDataset (line 326) | class DreamBoothDataset(Dataset):
    method __init__ (line 332) | def __init__(
    method __len__ (line 387) | def __len__(self):
    method __getitem__ (line 390) | def __getitem__(self, index):
  function collate_fn (line 422) | def collate_fn(num_instances, examples, with_prior_preservation=False):
  class PromptDataset (line 449) | class PromptDataset(Dataset):
    method __init__ (line 452) | def __init__(self, prompt, num_samples):
    method __len__ (line 456) | def __len__(self):
    method __getitem__ (line 459) | def __getitem__(self, index):
  function get_full_repo_name (line 466) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function main (line 476) | def main(args):

FILE: examples/research_projects/onnxruntime/text_to_image/train_text_to_image.py
  function parse_args (line 54) | def parse_args():
  function get_full_repo_name (line 316) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function main (line 331) | def main():

FILE: examples/research_projects/onnxruntime/textual_inversion/textual_inversion.py
  function save_progress (line 84) | def save_progress(text_encoder, placeholder_token_id, accelerator, args,...
  function parse_args (line 91) | def parse_args():
  class TextualInversionDataset (line 380) | class TextualInversionDataset(Dataset):
    method __init__ (line 381) | def __init__(
    method __len__ (line 420) | def __len__(self):
    method __getitem__ (line 423) | def __getitem__(self, i):
  function get_full_repo_name (line 466) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function main (line 476) | def main():

FILE: examples/research_projects/onnxruntime/unconditional_image_generation/train_unconditional.py
  function _extract_into_tensor (line 34) | def _extract_into_tensor(arr, timesteps, broadcast_shape):
  function parse_args (line 51) | def parse_args():
  function get_full_repo_name (line 266) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function main (line 276) | def main(args):

FILE: examples/test_examples.py
  class SubprocessCallException (line 37) | class SubprocessCallException(Exception):
  function run_command (line 41) | def run_command(command: List[str], return_stdout=False):
  class ExamplesTestsAccelerate (line 62) | class ExamplesTestsAccelerate(unittest.TestCase):
    method setUpClass (line 64) | def setUpClass(cls):
    method tearDownClass (line 73) | def tearDownClass(cls):
    method test_train_unconditional (line 77) | def test_train_unconditional(self):
    method test_textual_inversion (line 98) | def test_textual_inversion(self):
    method test_dreambooth (line 122) | def test_dreambooth(self):
    method test_dreambooth_checkpointing (line 145) | def test_dreambooth_checkpointing(self):
    method test_text_to_image (line 224) | def test_text_to_image(self):
    method test_text_to_image_checkpointing (line 248) | def test_text_to_image_checkpointing(self):
    method test_text_to_image_checkpointing_use_ema (line 328) | def test_text_to_image_checkpointing_use_ema(self):

FILE: examples/text_to_image/train_text_to_image.py
  function parse_args (line 55) | def parse_args():
  function get_full_repo_name (line 317) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function main (line 332) | def main():
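
get_full_repo_name is copy-pasted across the training scripts; it resolves a bare model_id into a namespaced Hub repo name for push_to_hub. A sketch using huggingface_hub (token resolution assumed to follow the usual HfFolder convention):

    from typing import Optional

    from huggingface_hub import HfFolder, whoami

    def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
        # "my-model" -> "username/my-model" (or "org/my-model")
        if token is None:
            token = HfFolder.get_token()
        if organization is None:
            username = whoami(token)["name"]
            return f"{username}/{model_id}"
        return f"{organization}/{model_id}"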

FILE: examples/text_to_image/train_text_to_image_flax.py
  function parse_args (line 42) | def parse_args():
  function get_full_repo_name (line 218) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function get_params_to_save (line 233) | def get_params_to_save(params):
  function main (line 237) | def main():

FILE: examples/text_to_image/train_text_to_image_lora.py
  function save_model_card (line 56) | def save_model_card(repo_name, images=None, base_model=str, dataset_name...
  function parse_args (line 84) | def parse_args():
  function get_full_repo_name (line 349) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function main (line 364) | def main():

FILE: examples/textual_inversion/textual_inversion.py
  function log_validation (line 86) | def log_validation(text_encoder, tokenizer, unet, vae, args, accelerator...
  function save_progress (line 130) | def save_progress(text_encoder, placeholder_token_id, accelerator, args,...
  function parse_args (line 137) | def parse_args():
  class TextualInversionDataset (line 436) | class TextualInversionDataset(Dataset):
    method __init__ (line 437) | def __init__(
    method __len__ (line 476) | def __len__(self):
    method __getitem__ (line 479) | def __getitem__(self, i):
  function get_full_repo_name (line 522) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function main (line 532) | def main():

FILE: examples/textual_inversion/textual_inversion_flax.py
  function parse_args (line 65) | def parse_args():
  class TextualInversionDataset (line 243) | class TextualInversionDataset(Dataset):
    method __init__ (line 244) | def __init__(
    method __len__ (line 283) | def __len__(self):
    method __getitem__ (line 286) | def __getitem__(self, i):
  function get_full_repo_name (line 329) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function resize_token_embeddings (line 339) | def resize_token_embeddings(model, new_num_tokens, initializer_token_id,...
  function get_params_to_save (line 359) | def get_params_to_save(params):
  function main (line 363) | def main():

FILE: examples/unconditional_image_generation/train_unconditional.py
  function _extract_into_tensor (line 36) | def _extract_into_tensor(arr, timesteps, broadcast_shape):
  function parse_args (line 54) | def parse_args():
  function get_full_repo_name (line 278) | def get_full_repo_name(model_id: str, organization: Optional[str] = None...
  function main (line 288) | def main(args):
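
_extract_into_tensor gathers per-example schedule values (e.g. alphas_cumprod at each sampled timestep) out of a 1-D numpy array and right-pads the result with singleton dims so it broadcasts against an image batch. A sketch of this standard helper:

    import torch

    def _extract_into_tensor(arr, timesteps, broadcast_shape):
        # arr: 1-D numpy array; timesteps: long tensor of indices into arr
        res = torch.from_numpy(arr).to(device=timesteps.device)[timesteps].float()
        while len(res.shape) < len(broadcast_shape):
            res = res[..., None]
        return res.expand(broadcast_shape)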

FILE: scripts/conversion_ldm_uncond.py
  function convert_ldm_original (line 9) | def convert_ldm_original(checkpoint_path, config_path, output_path):

FILE: scripts/convert_dance_diffusion_to_diffusers.py
  function alpha_sigma_to_t (line 49) | def alpha_sigma_to_t(alpha, sigma):
  function get_crash_schedule (line 55) | def get_crash_schedule(t):
  class Object (line 61) | class Object(object):
  class DiffusionUncond (line 65) | class DiffusionUncond(nn.Module):
    method __init__ (line 66) | def __init__(self, global_args):
  function download (line 74) | def download(model_name):
  function convert_resconv_naming (line 135) | def convert_resconv_naming(name):
  function convert_attn_naming (line 146) | def convert_attn_naming(name):
  function rename (line 155) | def rename(input_string, max_depth=13):
  function rename_orig_weights (line 214) | def rename_orig_weights(state_dict):
  function transform_conv_attns (line 232) | def transform_conv_attns(new_state_dict, new_k, v):
  function main (line 252) | def main(args):
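
Dance Diffusion parameterizes noise levels the v-diffusion way: alpha = cos(t·pi/2) scales the clean signal and sigma = sin(t·pi/2) scales the noise. alpha_sigma_to_t inverts that pair back to t in [0, 1], and get_crash_schedule warps the cosine schedule into the "crash" schedule the original models were trained with. A sketch, assuming those conventions:

    import math

    import torch

    def alpha_sigma_to_t(alpha, sigma):
        # recover t in [0, 1] from the (signal, noise) scaling pair
        return torch.atan2(sigma, alpha) / math.pi * 2

    def get_crash_schedule(t):
        # remap a cosine-schedule time onto the crash schedule
        sigma = torch.sin(t * math.pi / 2) ** 2
        alpha = (1 - sigma ** 2) ** 0.5
        return alpha_sigma_to_t(alpha, sigma)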

FILE: scripts/convert_ddpm_original_checkpoint_to_diffusers.py
  function shave_segments (line 9) | def shave_segments(path, n_shave_prefix_segments=1):
  function renew_resnet_paths (line 19) | def renew_resnet_paths(old_list, n_shave_prefix_segments=0):
  function renew_attention_paths (line 35) | def renew_attention_paths(old_list, n_shave_prefix_segments=0, in_mid=Fa...
  function assign_to_checkpoint (line 56) | def assign_to_checkpoint(
  function convert_ddpm_checkpoint (line 99) | def convert_ddpm_checkpoint(checkpoint, config):
  function convert_vq_autoenc_checkpoint (line 235) | def convert_vq_autoenc_checkpoint(checkpoint, config):
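
Most of these conversion scripts are key-renaming passes over state dicts, and shave_segments is their workhorse: it strips leading (or, for negative n, trailing) dot-separated segments from a checkpoint key:

    def shave_segments(path, n_shave_prefix_segments=1):
        # "down.0.block.1.weight" with n=2 -> "block.1.weight"
        if n_shave_prefix_segments >= 0:
            return ".".join(path.split(".")[n_shave_prefix_segments:])
        return ".".join(path.split(".")[:n_shave_prefix_segments])

renew_resnet_paths and renew_attention_paths then map each shaved key onto the corresponding diffusers module name before assign_to_checkpoint copies the tensors over.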

FILE: scripts/convert_diffusers_to_original_stable_diffusion.py
  function convert_unet_state_dict (line 92) | def convert_unet_state_dict(unet_state_dict):
  function reshape_weight_for_sd (line 163) | def reshape_weight_for_sd(w):
  function convert_vae_state_dict (line 168) | def convert_vae_state_dict(vae_state_dict):
  function convert_text_enc_state_dict_v20 (line 213) | def convert_text_enc_state_dict_v20(text_enc_dict):
  function convert_text_enc_state_dict (line 260) | def convert_text_enc_state_dict(text_enc_dict):

FILE: scripts/convert_dit_to_diffusers.py
  function download_model (line 13) | def download_model(model_name):
  function main (line 26) | def main(args):

FILE: scripts/convert_k_upscaler_to_diffusers.py
  function resnet_to_diffusers_checkpoint (line 13) | def resnet_to_diffusers_checkpoint(resnet, checkpoint, *, diffusers_resn...
  function self_attn_to_diffusers_checkpoint (line 39) | def self_attn_to_diffusers_checkpoint(checkpoint, *, diffusers_attention...
  function cross_attn_to_diffusers_checkpoint (line 65) | def cross_attn_to_diffusers_checkpoint(
  function block_to_diffusers_checkpoint (line 115) | def block_to_diffusers_checkpoint(block, checkpoint, block_idx, block_ty...
  function unet_to_diffusers_checkpoint (line 175) | def unet_to_diffusers_checkpoint(model, checkpoint):
  function unet_model_from_original_config (line 217) | def unet_model_from_original_config(original_config):
  function main (line 269) | def main(args):

FILE: scripts/convert_kakao_brain_unclip_to_diffusers.py
  function prior_model_from_original_config (line 45) | def prior_model_from_original_config():
  function prior_original_checkpoint_to_diffusers_checkpoint (line 51) | def prior_original_checkpoint_to_diffusers_checkpoint(model, checkpoint,...
  function prior_attention_to_diffusers (line 172) | def prior_attention_to_diffusers(
  function prior_ff_to_diffusers (line 207) | def prior_ff_to_diffusers(checkpoint, *, diffusers_ff_prefix, original_f...
  function decoder_model_from_original_config (line 255) | def decoder_model_from_original_config():
  function decoder_original_checkpoint_to_diffusers_checkpoint (line 261) | def decoder_original_checkpoint_to_diffusers_checkpoint(model, checkpoint):
  function text_proj_from_original_config (line 330) | def text_proj_from_original_config():
  function text_proj_original_checkpoint_to_diffusers_checkpoint (line 343) | def text_proj_original_checkpoint_to_diffusers_checkpoint(checkpoint):
  function super_res_unet_first_steps_model_from_original_config (line 399) | def super_res_unet_first_steps_model_from_original_config():
  function super_res_unet_first_steps_original_checkpoint_to_diffusers_checkpoint (line 405) | def super_res_unet_first_steps_original_checkpoint_to_diffusers_checkpoi...
  function super_res_unet_last_step_model_from_original_config (line 494) | def super_res_unet_last_step_model_from_original_config():
  function super_res_unet_last_step_original_checkpoint_to_diffusers_checkpoint (line 500) | def super_res_unet_last_step_original_checkpoint_to_diffusers_checkpoint...
  function unet_time_embeddings (line 568) | def unet_time_embeddings(checkpoint, original_unet_prefix):
  function unet_conv_in (line 584) | def unet_conv_in(checkpoint, original_unet_prefix):
  function unet_conv_norm_out (line 598) | def unet_conv_norm_out(checkpoint, original_unet_prefix):
  function unet_conv_out (line 612) | def unet_conv_out(checkpoint, original_unet_prefix):
  function unet_downblock_to_diffusers_checkpoint (line 626) | def unet_downblock_to_diffusers_checkpoint(
  function unet_midblock_to_diffusers_checkpoint (line 685) | def unet_midblock_to_diffusers_checkpoint(model, checkpoint, *, original...
  function unet_upblock_to_diffusers_checkpoint (line 729) | def unet_upblock_to_diffusers_checkpoint(
  function resnet_to_diffusers_checkpoint (line 799) | def resnet_to_diffusers_checkpoint(checkpoint, *, diffusers_resnet_prefi...
  function attention_to_diffusers_checkpoint (line 826) | def attention_to_diffusers_checkpoint(checkpoint, *, diffusers_attention...
  function split_attentions (line 887) | def split_attentions(*, weight, bias, split, chunk_size):
  function text_encoder (line 919) | def text_encoder():
  function prior (line 942) | def prior(*, args, checkpoint_map_location):
  function decoder (line 966) | def decoder(*, args, checkpoint_map_location):
  function super_res_unet (line 1001) | def super_res_unet(*, args, checkpoint_map_location):
  function load_checkpoint_to_model (line 1033) | def load_checkpoint_to_model(checkpoint, model, strict=False):
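
Note: several converters above (e.g. split_attentions) unpack a fused qkv projection from the original checkpoint; the core move is chunking along the output dimension. A standalone sketch with illustrative shapes, not the repo's exact helper:

    import torch

    # UnCLIP-style checkpoints store q, k, v stacked in a single matrix.
    fused_weight = torch.randn(3 * 64, 64)
    fused_bias = torch.randn(3 * 64)

    q_w, k_w, v_w = fused_weight.chunk(3, dim=0)  # each (64, 64)
    q_b, k_b, v_b = fused_bias.chunk(3, dim=0)    # each (64,)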

FILE: scripts/convert_ldm_original_checkpoint_to_diffusers.py
  function shave_segments (line 25) | def shave_segments(path, n_shave_prefix_segments=1):
  function renew_resnet_paths (line 35) | def renew_resnet_paths(old_list, n_shave_prefix_segments=0):
  function renew_attention_paths (line 57) | def renew_attention_paths(old_list, n_shave_prefix_segments=0):
  function assign_to_checkpoint (line 78) | def assign_to_checkpoint(
  function convert_ldm_checkpoint (line 130) | def convert_ldm_checkpoint(checkpoint, config):

FILE: scripts/convert_models_diffuser_to_diffusers.py
  function unet (line 15) | def unet(hor):
  function value_function (line 59) | def value_function():

FILE: scripts/convert_ncsnpp_original_checkpoint_to_diffusers.py
  function convert_ncsnpp_checkpoint (line 25) | def convert_ncsnpp_checkpoint(checkpoint, config):

FILE: scripts/convert_stable_diffusion_checkpoint_to_onnx.py
  function onnx_export (line 31) | def onnx_export(
  function convert_models (line 71) | def convert_models(model_path: str, output_path: str, opset: int, fp16: ...

FILE: scripts/convert_vae_pt_to_diffusers.py
  function custom_convert_ldm_vae_checkpoint (line 18) | def custom_convert_ldm_vae_checkpoint(checkpoint, config):
  function vae_pt_to_vae_diffuser (line 119) | def vae_pt_to_vae_diffuser(

FILE: scripts/convert_versatile_diffusion_to_diffusers.py
  function shave_segments (line 96) | def shave_segments(path, n_shave_prefix_segments=1):
  function renew_resnet_paths (line 106) | def renew_resnet_paths(old_list, n_shave_prefix_segments=0):
  function renew_vae_resnet_paths (line 128) | def renew_vae_resnet_paths(old_list, n_shave_prefix_segments=0):
  function renew_attention_paths (line 144) | def renew_attention_paths(old_list, n_shave_prefix_segments=0):
  function renew_vae_attention_paths (line 165) | def renew_vae_attention_paths(old_list, n_shave_prefix_segments=0):
  function assign_to_checkpoint (line 195) | def assign_to_checkpoint(
  function conv_attn_to_linear (line 247) | def conv_attn_to_linear(checkpoint):
  function create_image_unet_diffusers_config (line 259) | def create_image_unet_diffusers_config(unet_params):
  function create_text_unet_diffusers_config (line 298) | def create_text_unet_diffusers_config(unet_params):
  function create_vae_diffusers_config (line 337) | def create_vae_diffusers_config(vae_params):
  function create_diffusers_scheduler (line 359) | def create_diffusers_scheduler(original_config):
  function convert_vd_unet_checkpoint (line 369) | def convert_vd_unet_checkpoint(checkpoint, config, unet_key, extract_ema...
  function convert_vd_vae_checkpoint (line 574) | def convert_vd_vae_checkpoint(checkpoint, config):

FILE: scripts/convert_vq_diffusion_to_diffusers.py
  function vqvae_model_from_original_config (line 61) | def vqvae_model_from_original_config(original_config):
  function get_down_block_types (line 110) | def get_down_block_types(original_encoder_config):
  function get_up_block_types (line 131) | def get_up_block_types(original_decoder_config):
  function coerce_attn_resolutions (line 152) | def coerce_attn_resolutions(attn_resolutions):
  function coerce_resolution (line 163) | def coerce_resolution(resolution):
  function vqvae_original_checkpoint_to_diffusers_checkpoint (line 179) | def vqvae_original_checkpoint_to_diffusers_checkpoint(model, checkpoint):
  function vqvae_encoder_to_diffusers_checkpoint (line 210) | def vqvae_encoder_to_diffusers_checkpoint(model, checkpoint):
  function vqvae_decoder_to_diffusers_checkpoint (line 310) | def vqvae_decoder_to_diffusers_checkpoint(model, checkpoint):
  function vqvae_resnet_to_diffusers_checkpoint (line 413) | def vqvae_resnet_to_diffusers_checkpoint(resnet, checkpoint, *, diffuser...
  function vqvae_attention_to_diffusers_checkpoint (line 440) | def vqvae_attention_to_diffusers_checkpoint(checkpoint, *, diffusers_att...
  function transformer_model_from_original_config (line 471) | def transformer_model_from_original_config(
  function transformer_original_checkpoint_to_diffusers_checkpoint (line 537) | def transformer_original_checkpoint_to_diffusers_checkpoint(model, check...
  function transformer_ada_norm_to_diffusers_checkpoint (line 653) | def transformer_ada_norm_to_diffusers_checkpoint(checkpoint, *, diffuser...
  function transformer_attention_to_diffusers_checkpoint (line 661) | def transformer_attention_to_diffusers_checkpoint(checkpoint, *, diffuse...
  function transformer_feedforward_to_diffusers_checkpoint (line 678) | def transformer_feedforward_to_diffusers_checkpoint(checkpoint, *, diffu...
  function read_config_file (line 690) | def read_config_file(filename):

FILE: setup.py
  function deps_list (line 138) | def deps_list(*pkgs):
  class DepsTableUpdateCommand (line 142) | class DepsTableUpdateCommand(Command):
    method initialize_options (line 154) | def initialize_options(self):
    method finalize_options (line 157) | def finalize_options(self):
    method run (line 160) | def run(self):

FILE: src/diffusers/commands/__init__.py
  class BaseDiffusersCLICommand (line 19) | class BaseDiffusersCLICommand(ABC):
    method register_subcommand (line 22) | def register_subcommand(parser: ArgumentParser):
    method run (line 26) | def run(self):

FILE: src/diffusers/commands/diffusers_cli.py
  function main (line 21) | def main():

FILE: src/diffusers/commands/env.py
  function info_command_factory (line 25) | def info_command_factory(_):
  class EnvironmentCommand (line 29) | class EnvironmentCommand(BaseDiffusersCLICommand):
    method register_subcommand (line 31) | def register_subcommand(parser: ArgumentParser):
    method run (line 35) | def run(self):
    method format_dict (line 83) | def format_dict(d):

FILE: src/diffusers/configuration_utils.py
  class FrozenDict (line 42) | class FrozenDict(OrderedDict):
    method __init__ (line 43) | def __init__(self, *args, **kwargs):
    method __delitem__ (line 51) | def __delitem__(self, *args, **kwargs):
    method setdefault (line 54) | def setdefault(self, *args, **kwargs):
    method pop (line 57) | def pop(self, *args, **kwargs):
    method update (line 60) | def update(self, *args, **kwargs):
    method __setattr__ (line 63) | def __setattr__(self, name, value):
    method __setitem__ (line 68) | def __setitem__(self, name, value):
  class ConfigMixin (line 74) | class ConfigMixin:
    method register_to_config (line 97) | def register_to_config(self, **kwargs):
    method save_config (line 120) | def save_config(self, save_directory: Union[str, os.PathLike], push_to...
    method from_config (line 141) | def from_config(cls, config: Union[FrozenDict, Dict[str, Any]] = None,...
    method get_config_dict (line 224) | def get_config_dict(cls, *args, **kwargs):
    method load_config (line 233) | def load_config(
    method _get_init_keys (line 390) | def _get_init_keys(cls):
    method extract_init_dict (line 394) | def extract_init_dict(cls, config_dict, **kwargs):
    method _dict_from_json_file (line 478) | def _dict_from_json_file(cls, json_file: Union[str, os.PathLike]):
    method __repr__ (line 483) | def __repr__(self):
    method config (line 487) | def config(self) -> Dict[str, Any]:
    method to_json_string (line 496) | def to_json_string(self) -> str:
    method to_json_file (line 517) | def to_json_file(self, json_file_path: Union[str, os.PathLike]):
  function register_to_config (line 529) | def register_to_config(init):
  function flax_register_to_config (line 574) | def flax_register_to_config(cls):
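
Note: a minimal sketch of how ConfigMixin and the register_to_config decorator combine (the ToyScheduler class and its config_name are hypothetical, not in the repo):

    from diffusers.configuration_utils import ConfigMixin, register_to_config

    class ToyScheduler(ConfigMixin):
        config_name = "toy_config.json"  # ConfigMixin requires a config_name

        @register_to_config
        def __init__(self, num_train_timesteps: int = 1000, beta_start: float = 1e-4):
            # The decorator records every init argument into self.config
            # (a FrozenDict) before this body runs.
            pass

    sched = ToyScheduler(num_train_timesteps=500)
    print(sched.config.num_train_timesteps)   # 500
    restored = ToyScheduler.from_config(sched.config)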

FILE: src/diffusers/dependency_versions_check.py
  function dep_version_check (line 46) | def dep_version_check(pkg, hint=None):

FILE: src/diffusers/experimental/rl/value_guided_sampling.py
  class ValueGuidedRLPipeline (line 25) | class ValueGuidedRLPipeline(DiffusionPipeline):
    method __init__ (line 42) | def __init__(
    method normalize (line 70) | def normalize(self, x_in, key):
    method de_normalize (line 73) | def de_normalize(self, x_in, key):
    method to_torch (line 76) | def to_torch(self, x_in):
    method reset_x0 (line 83) | def reset_x0(self, x_in, cond, act_dim):
    method run_diffusion (line 88) | def run_diffusion(self, x, conditions, n_guide_steps, scale):
    method __call__ (line 121) | def __call__(self, obs, batch_size=64, planning_horizon=32, n_guide_st...

FILE: src/diffusers/loaders.py
  class AttnProcsLayers (line 36) | class AttnProcsLayers(torch.nn.Module):
    method __init__ (line 37) | def __init__(self, state_dict: Dict[str, torch.Tensor]):
  class UNet2DConditionLoadersMixin (line 66) | class UNet2DConditionLoadersMixin:
    method load_attn_procs (line 67) | def load_attn_procs(self, pretrained_model_name_or_path_or_dict: Union...
    method save_attn_procs (line 224) | def save_attn_procs(
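
Note: a sketch of the round trip these two methods support (the weights path is a placeholder for a directory produced by save_attn_procs):

    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    # UNet2DConditionModel mixes in UNet2DConditionLoadersMixin, so LoRA
    # attention-processor weights load back onto the UNet in one call.
    pipe.unet.load_attn_procs("path/to/lora_weights")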

FILE: src/diffusers/models/attention.py
  class AttentionBlock (line 33) | class AttentionBlock(nn.Module):
    method __init__ (line 51) | def __init__(
    method reshape_heads_to_batch_dim (line 77) | def reshape_heads_to_batch_dim(self, tensor):
    method reshape_batch_dim_to_heads (line 84) | def reshape_batch_dim_to_heads(self, tensor):
    method set_use_memory_efficient_attention_xformers (line 91) | def set_use_memory_efficient_attention_xformers(
    method forward (line 121) | def forward(self, hidden_states):
  class BasicTransformerBlock (line 177) | class BasicTransformerBlock(nn.Module):
    method __init__ (line 194) | def __init__(
    method forward (line 271) | def forward(
  class FeedForward (line 331) | class FeedForward(nn.Module):
    method __init__ (line 344) | def __init__(
    method forward (line 377) | def forward(self, hidden_states):
  class GELU (line 383) | class GELU(nn.Module):
    method __init__ (line 388) | def __init__(self, dim_in: int, dim_out: int, approximate: str = "none"):
    method gelu (line 393) | def gelu(self, gate):
    method forward (line 399) | def forward(self, hidden_states):
  class GEGLU (line 405) | class GEGLU(nn.Module):
    method __init__ (line 414) | def __init__(self, dim_in: int, dim_out: int):
    method gelu (line 418) | def gelu(self, gate):
    method forward (line 424) | def forward(self, hidden_states):
  class ApproximateGELU (line 429) | class ApproximateGELU(nn.Module):
    method __init__ (line 436) | def __init__(self, dim_in: int, dim_out: int):
    method forward (line 440) | def forward(self, x):
  class AdaLayerNorm (line 445) | class AdaLayerNorm(nn.Module):
    method __init__ (line 450) | def __init__(self, embedding_dim, num_embeddings):
    method forward (line 457) | def forward(self, x, timestep):
  class AdaLayerNormZero (line 464) | class AdaLayerNormZero(nn.Module):
    method __init__ (line 469) | def __init__(self, embedding_dim, num_embeddings):
    method forward (line 478) | def forward(self, x, timestep, class_labels, hidden_dtype=None):
  class AdaGroupNorm (line 485) | class AdaGroupNorm(nn.Module):
    method __init__ (line 490) | def __init__(
    method forward (line 508) | def forward(self, x, emb):
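
Note: the feed-forward variants above are small standalone modules; a quick sketch instantiating the GEGLU-activated FeedForward used inside BasicTransformerBlock:

    import torch
    from diffusers.models.attention import FeedForward

    ff = FeedForward(dim=64, activation_fn="geglu")  # project up 4x, GEGLU gate, project back
    out = ff(torch.randn(2, 10, 64))                 # shape preserved: (2, 10, 64)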

FILE: src/diffusers/models/attention_flax.py
  class FlaxCrossAttention (line 19) | class FlaxCrossAttention(nn.Module):
    method setup (line 42) | def setup(self):
    method reshape_heads_to_batch_dim (line 53) | def reshape_heads_to_batch_dim(self, tensor):
    method reshape_batch_dim_to_heads (line 61) | def reshape_batch_dim_to_heads(self, tensor):
    method __call__ (line 69) | def __call__(self, hidden_states, context=None, deterministic=True):
  class FlaxBasicTransformerBlock (line 92) | class FlaxBasicTransformerBlock(nn.Module):
    method setup (line 119) | def setup(self):
    method __call__ (line 129) | def __call__(self, hidden_states, context, deterministic=True):
  class FlaxTransformer2DModel (line 151) | class FlaxTransformer2DModel(nn.Module):
    method setup (line 182) | def setup(self):
    method __call__ (line 220) | def __call__(self, hidden_states, context, deterministic=True):
  class FlaxFeedForward (line 245) | class FlaxFeedForward(nn.Module):
    method setup (line 266) | def setup(self):
    method __call__ (line 272) | def __call__(self, hidden_states, deterministic=True):
  class FlaxGEGLU (line 278) | class FlaxGEGLU(nn.Module):
    method setup (line 295) | def setup(self):
    method __call__ (line 299) | def __call__(self, hidden_states, deterministic=True):

FILE: src/diffusers/models/autoencoder_kl.py
  class AutoencoderKLOutput (line 27) | class AutoencoderKLOutput(BaseOutput):
  class AutoencoderKL (line 40) | class AutoencoderKL(ModelMixin, ConfigMixin):
    method __init__ (line 69) | def __init__(
    method enable_tiling (line 124) | def enable_tiling(self, use_tiling: bool = True):
    method disable_tiling (line 132) | def disable_tiling(self):
    method enable_slicing (line 139) | def enable_slicing(self):
    method disable_slicing (line 146) | def disable_slicing(self):
    method encode (line 154) | def encode(self, x: torch.FloatTensor, return_dict: bool = True) -> Au...
    method _decode (line 167) | def _decode(self, z: torch.FloatTensor, return_dict: bool = True) -> U...
    method decode (line 180) | def decode(self, z: torch.FloatTensor, return_dict: bool = True) -> Un...
    method blend_v (line 192) | def blend_v(self, a, b, blend_extent):
    method blend_h (line 197) | def blend_h(self, a, b, blend_extent):
    method tiled_encode (line 202) | def tiled_encode(self, x: torch.FloatTensor, return_dict: bool = True)...
    method tiled_decode (line 248) | def tiled_decode(self, z: torch.FloatTensor, return_dict: bool = True)...
    method forward (line 294) | def forward(
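
Note: a sketch of the tiled encode/decode path exposed above (model id and subfolder follow the standard Stable Diffusion layout):

    import torch
    from diffusers import AutoencoderKL

    vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
    vae.enable_tiling()   # process large inputs tile-by-tile, blending the seams

    with torch.no_grad():
        image = torch.randn(1, 3, 1024, 1024)
        latents = vae.encode(image).latent_dist.sample()
        recon = vae.decode(latents).sample   # (1, 3, 1024, 1024)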

FILE: src/diffusers/models/controlnet.py
  class ControlNetOutput (line 38) | class ControlNetOutput(BaseOutput):
  class ControlNetConditioningEmbedding (line 43) | class ControlNetConditioningEmbedding(nn.Module):
    method __init__ (line 53) | def __init__(
    method forward (line 75) | def forward(self, conditioning):
  class ControlNetModel (line 88) | class ControlNetModel(ModelMixin, ConfigMixin):
    method __init__ (line 92) | def __init__(
    method attn_processors (line 262) | def attn_processors(self) -> Dict[str, AttnProcessor]:
    method set_attn_processor (line 286) | def set_attn_processor(self, processor: Union[AttnProcessor, Dict[str,...
    method set_attention_slice (line 317) | def set_attention_slice(self, slice_size):
    method _set_gradient_checkpointing (line 382) | def _set_gradient_checkpointing(self, module, value=False):
    method forward (line 386) | def forward(
  function zero_module (line 503) | def zero_module(module):

FILE: src/diffusers/models/cross_attention.py
  class CrossAttention (line 34) | class CrossAttention(nn.Module):
    method __init__ (line 49) | def __init__(
    method set_use_memory_efficient_attention_xformers (line 108) | def set_use_memory_efficient_attention_xformers(
    method set_attention_slice (line 173) | def set_attention_slice(self, slice_size):
    method set_processor (line 188) | def set_processor(self, processor: "AttnProcessor"):
    method forward (line 201) | def forward(self, hidden_states, encoder_hidden_states=None, attention...
    method batch_to_head_dim (line 213) | def batch_to_head_dim(self, tensor):
    method head_to_batch_dim (line 220) | def head_to_batch_dim(self, tensor):
    method get_attention_scores (line 227) | def get_attention_scores(self, query, key, attention_mask=None):
    method prepare_attention_mask (line 258) | def prepare_attention_mask(self, attention_mask, target_length, batch_...
  class CrossAttnProcessor (line 290) | class CrossAttnProcessor:
    method __call__ (line 291) | def __call__(
  class LoRALinearLayer (line 326) | class LoRALinearLayer(nn.Module):
    method __init__ (line 327) | def __init__(self, in_features, out_features, rank=4):
    method forward (line 339) | def forward(self, hidden_states):
  class LoRACrossAttnProcessor (line 349) | class LoRACrossAttnProcessor(nn.Module):
    method __init__ (line 350) | def __init__(self, hidden_size, cross_attention_dim=None, rank=4):
    method __call__ (line 362) | def __call__(
  class CrossAttnAddedKVProcessor (line 391) | class CrossAttnAddedKVProcessor:
    method __call__ (line 392) | def __call__(self, attn: CrossAttention, hidden_states, encoder_hidden...
  class XFormersCrossAttnProcessor (line 433) | class XFormersCrossAttnProcessor:
    method __init__ (line 434) | def __init__(self, attention_op: Optional[Callable] = None):
    method __call__ (line 437) | def __call__(self, attn: CrossAttention, hidden_states, encoder_hidden...
  class AttnProcessor2_0 (line 469) | class AttnProcessor2_0:
    method __init__ (line 470) | def __init__(self):
    method __call__ (line 474) | def __call__(self, attn: CrossAttention, hidden_states, encoder_hidden...
  class LoRAXFormersCrossAttnProcessor (line 513) | class LoRAXFormersCrossAttnProcessor(nn.Module):
    method __init__ (line 514) | def __init__(self, hidden_size, cross_attention_dim, rank=4, attention...
    method __call__ (line 527) | def __call__(
  class SlicedAttnProcessor (line 557) | class SlicedAttnProcessor:
    method __init__ (line 558) | def __init__(self, slice_size):
    method __call__ (line 561) | def __call__(self, attn: CrossAttention, hidden_states, encoder_hidden...
  class SlicedAttnAddedKVProcessor (line 609) | class SlicedAttnAddedKVProcessor:
    method __init__ (line 610) | def __init__(self, slice_size):
    method __call__ (line 613) | def __call__(self, attn: "CrossAttention", hidden_states, encoder_hidd...
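
Note: every CrossAttention layer delegates its math to one of the processor classes above, so backends are swappable in one call; a sketch using the sliced processor (the slice_size value is an arbitrary example):

    from diffusers import UNet2DConditionModel
    from diffusers.models.cross_attention import SlicedAttnProcessor

    unet = UNet2DConditionModel.from_pretrained(
        "runwayml/stable-diffusion-v1-5", subfolder="unet"
    )
    unet.set_attn_processor(SlicedAttnProcessor(slice_size=2))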

FILE: src/diffusers/models/dual_transformer_2d.py
  class DualTransformer2DModel (line 21) | class DualTransformer2DModel(nn.Module):
    method __init__ (line 48) | def __init__(
    method forward (line 97) | def forward(

FILE: src/diffusers/models/embeddings.py
  function get_timestep_embedding (line 22) | def get_timestep_embedding(
  function get_2d_sincos_pos_embed (line 65) | def get_2d_sincos_pos_embed(embed_dim, grid_size, cls_token=False, extra...
  function get_2d_sincos_pos_embed_from_grid (line 82) | def get_2d_sincos_pos_embed_from_grid(embed_dim, grid):
  function get_1d_sincos_pos_embed_from_grid (line 94) | def get_1d_sincos_pos_embed_from_grid(embed_dim, pos):
  class PatchEmbed (line 115) | class PatchEmbed(nn.Module):
    method __init__ (line 118) | def __init__(
    method forward (line 146) | def forward(self, latent):
  class TimestepEmbedding (line 155) | class TimestepEmbedding(nn.Module):
    method __init__ (line 156) | def __init__(
    method forward (line 200) | def forward(self, sample, condition=None):
  class Timesteps (line 215) | class Timesteps(nn.Module):
    method __init__ (line 216) | def __init__(self, num_channels: int, flip_sin_to_cos: bool, downscale...
    method forward (line 222) | def forward(self, timesteps):
  class GaussianFourierProjection (line 232) | class GaussianFourierProjection(nn.Module):
    method __init__ (line 235) | def __init__(
    method forward (line 249) | def forward(self, x):
  class ImagePositionalEmbeddings (line 262) | class ImagePositionalEmbeddings(nn.Module):
    method __init__ (line 286) | def __init__(
    method forward (line 304) | def forward(self, index):
  class LabelEmbedding (line 327) | class LabelEmbedding(nn.Module):
    method __init__ (line 337) | def __init__(self, num_classes, hidden_size, dropout_prob):
    method token_drop (line 344) | def token_drop(self, labels, force_drop_ids=None):
    method forward (line 355) | def forward(self, labels, force_drop_ids=None):
  class CombinedTimestepLabelEmbeddings (line 363) | class CombinedTimestepLabelEmbeddings(nn.Module):
    method __init__ (line 364) | def __init__(self, num_classes, embedding_dim, class_dropout_prob=0.1):
    method forward (line 371) | def forward(self, timestep, class_labels, hidden_dtype=None):
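
Note: get_timestep_embedding is the standard transformer-style sinusoidal embedding; a minimal call:

    import torch
    from diffusers.models.embeddings import get_timestep_embedding

    timesteps = torch.arange(4)   # one diffusion step per sample in the batch
    emb = get_timestep_embedding(timesteps, embedding_dim=128)
    print(emb.shape)              # torch.Size([4, 128])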

FILE: src/diffusers/models/embeddings_flax.py
  function get_sinusoidal_embeddings (line 20) | def get_sinusoidal_embeddings(
  class FlaxTimestepEmbedding (line 58) | class FlaxTimestepEmbedding(nn.Module):
    method __call__ (line 72) | def __call__(self, temb):
  class FlaxTimesteps (line 79) | class FlaxTimesteps(nn.Module):
    method __call__ (line 92) | def __call__(self, timesteps):

FILE: src/diffusers/models/modeling_flax_pytorch_utils.py
  function rename_key (line 28) | def rename_key(key):
  function rename_key_and_reshape_tensor (line 43) | def rename_key_and_reshape_tensor(pt_tuple_key, pt_tensor, random_flax_s...
  function convert_pytorch_state_dict_to_flax (line 90) | def convert_pytorch_state_dict_to_flax(pt_state_dict, flax_model, init_k...

FILE: src/diffusers/models/modeling_flax_utils.py
  class FlaxModelMixin (line 45) | class FlaxModelMixin:
    method _from_config (line 57) | def _from_config(cls, config, **kwargs):
    method _cast_floating_to (line 63) | def _cast_floating_to(self, params: Union[Dict, FrozenDict], dtype: jn...
    method to_bf16 (line 87) | def to_bf16(self, params: Union[Dict, FrozenDict], mask: Any = None):
    method to_fp32 (line 126) | def to_fp32(self, params: Union[Dict, FrozenDict], mask: Any = None):
    method to_fp16 (line 153) | def to_fp16(self, params: Union[Dict, FrozenDict], mask: Any = None):
    method init_weights (line 192) | def init_weights(self, rng: jax.random.KeyArray) -> Dict:
    method from_pretrained (line 196) | def from_pretrained(
    method save_pretrained (line 487) | def save_pretrained(

FILE: src/diffusers/models/modeling_pytorch_flax_utils.py
  function load_flax_checkpoint_in_pytorch_model (line 37) | def load_flax_checkpoint_in_pytorch_model(pt_model, model_file):
  function load_flax_weights_in_pytorch_model (line 58) | def load_flax_weights_in_pytorch_model(pt_model, flax_state):

FILE: src/diffusers/models/modeling_utils.py
  function get_parameter_device (line 65) | def get_parameter_device(parameter: torch.nn.Module):
  function get_parameter_dtype (line 80) | def get_parameter_dtype(parameter: torch.nn.Module):
  function load_state_dict (line 95) | def load_state_dict(checkpoint_file: Union[str, os.PathLike], variant: O...
  function _load_state_dict_into_model (line 126) | def _load_state_dict_into_model(model_to_load, state_dict):
  function _add_variant (line 147) | def _add_variant(weights_name: str, variant: Optional[str] = None) -> str:
  class ModelMixin (line 156) | class ModelMixin(torch.nn.Module):
    method __init__ (line 170) | def __init__(self):
    method is_gradient_checkpointing (line 174) | def is_gradient_checkpointing(self) -> bool:
    method enable_gradient_checkpointing (line 183) | def enable_gradient_checkpointing(self):
    method disable_gradient_checkpointing (line 194) | def disable_gradient_checkpointing(self):
    method set_use_memory_efficient_attention_xformers (line 204) | def set_use_memory_efficient_attention_xformers(
    method enable_xformers_memory_efficient_attention (line 221) | def enable_xformers_memory_efficient_attention(self, attention_op: Opt...
    method disable_xformers_memory_efficient_attention (line 253) | def disable_xformers_memory_efficient_attention(self):
    method save_pretrained (line 259) | def save_pretrained(
    method from_pretrained (line 320) | def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union...
    method _load_pretrained_model (line 646) | def _load_pretrained_model(
    method device (line 750) | def device(self) -> device:
    method dtype (line 758) | def dtype(self) -> torch.dtype:
    method num_parameters (line 764) | def num_parameters(self, only_trainable: bool = False, exclude_embeddi...
  function _get_model_file (line 793) | def _get_model_file(
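
Note: ModelMixin gives every model the same pretrained round trip; a sketch with UNet2DModel (the local directory is a placeholder):

    from diffusers import UNet2DModel

    model = UNet2DModel.from_pretrained("google/ddpm-cat-256")
    model.save_pretrained("./ddpm-cat-local")        # writes config.json + weights
    reloaded = UNet2DModel.from_pretrained("./ddpm-cat-local")
    assert model.num_parameters() == reloaded.num_parameters()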

FILE: src/diffusers/models/prior_transformer.py
  class PriorTransformerOutput (line 16) | class PriorTransformerOutput(BaseOutput):
  class PriorTransformer (line 26) | class PriorTransformer(ModelMixin, ConfigMixin):
    method __init__ (line 52) | def __init__(
    method forward (line 107) | def forward(
    method post_process_latents (line 192) | def post_process_latents(self, prior_latents):

FILE: src/diffusers/models/resnet.py
  class Upsample1D (line 11) | class Upsample1D(nn.Module):
    method __init__ (line 22) | def __init__(self, channels, use_conv=False, use_conv_transpose=False,...
    method forward (line 36) | def forward(self, x):
  class Downsample1D (line 49) | class Downsample1D(nn.Module):
    method __init__ (line 60) | def __init__(self, channels, use_conv=False, out_channels=None, paddin...
    method forward (line 75) | def forward(self, x):
  class Upsample2D (line 80) | class Upsample2D(nn.Module):
    method __init__ (line 91) | def __init__(self, channels, use_conv=False, use_conv_transpose=False,...
    method forward (line 111) | def forward(self, hidden_states, output_size=None):
  class Downsample2D (line 149) | class Downsample2D(nn.Module):
    method __init__ (line 160) | def __init__(self, channels, use_conv=False, out_channels=None, paddin...
    method forward (line 184) | def forward(self, hidden_states):
  class FirUpsample2D (line 196) | class FirUpsample2D(nn.Module):
    method __init__ (line 197) | def __init__(self, channels=None, out_channels=None, use_conv=False, f...
    method _upsample_2d (line 206) | def _upsample_2d(self, hidden_states, weight=None, kernel=None, factor...
    method forward (line 286) | def forward(self, hidden_states):
  class FirDownsample2D (line 296) | class FirDownsample2D(nn.Module):
    method __init__ (line 297) | def __init__(self, channels=None, out_channels=None, use_conv=False, f...
    method _downsample_2d (line 306) | def _downsample_2d(self, hidden_states, weight=None, kernel=None, fact...
    method forward (line 360) | def forward(self, hidden_states):
  class KDownsample2D (line 371) | class KDownsample2D(nn.Module):
    method __init__ (line 372) | def __init__(self, pad_mode="reflect"):
    method forward (line 379) | def forward(self, x):
  class KUpsample2D (line 387) | class KUpsample2D(nn.Module):
    method __init__ (line 388) | def __init__(self, pad_mode="reflect"):
    method forward (line 395) | def forward(self, x):
  class ResnetBlock2D (line 403) | class ResnetBlock2D(nn.Module):
    method __init__ (line 434) | def __init__(
    method forward (line 534) | def forward(self, input_tensor, temb):
  class Mish (line 585) | class Mish(torch.nn.Module):
    method forward (line 586) | def forward(self, hidden_states):
  function rearrange_dims (line 591) | def rearrange_dims(tensor):
  class Conv1dBlock (line 602) | class Conv1dBlock(nn.Module):
    method __init__ (line 607) | def __init__(self, inp_channels, out_channels, kernel_size, n_groups=8):
    method forward (line 614) | def forward(self, x):
  class ResidualTemporalBlock1D (line 624) | class ResidualTemporalBlock1D(nn.Module):
    method __init__ (line 625) | def __init__(self, inp_channels, out_channels, embed_dim, kernel_size=5):
    method forward (line 637) | def forward(self, x, t):
  function upsample_2d (line 653) | def upsample_2d(hidden_states, kernel=None, factor=2, gain=1):
  function downsample_2d (line 690) | def downsample_2d(hidden_states, kernel=None, factor=2, gain=1):
  function upfirdn2d_native (line 725) | def upfirdn2d_native(tensor, kernel, up=1, down=1, pad=(0, 0)):
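
Note: the up/downsample modules above are plain nn.Modules; a quick shape check on Upsample2D:

    import torch
    from diffusers.models.resnet import Upsample2D

    up = Upsample2D(channels=32, use_conv=True)   # nearest 2x upsample + 3x3 conv
    x = torch.randn(1, 32, 16, 16)
    print(up(x).shape)                            # torch.Size([1, 32, 32, 32])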

FILE: src/diffusers/models/resnet_flax.py
  class FlaxUpsample2D (line 19) | class FlaxUpsample2D(nn.Module):
    method setup (line 23) | def setup(self):
    method __call__ (line 32) | def __call__(self, hidden_states):
  class FlaxDownsample2D (line 43) | class FlaxDownsample2D(nn.Module):
    method setup (line 47) | def setup(self):
    method __call__ (line 56) | def __call__(self, hidden_states):
  class FlaxResnetBlock2D (line 63) | class FlaxResnetBlock2D(nn.Module):
    method setup (line 70) | def setup(self):
    method __call__ (line 106) | def __call__(self, hidden_states, temb, deterministic=True):

FILE: src/diffusers/models/transformer_2d.py
  class Transformer2DModelOutput (line 30) | class Transformer2DModelOutput(BaseOutput):
  class Transformer2DModel (line 41) | class Transformer2DModel(ModelMixin, ConfigMixin):
    method __init__ (line 80) | def __init__(
    method forward (line 214) | def forward(

FILE: src/diffusers/models/unet_1d.py
  class UNet1DOutput (line 29) | class UNet1DOutput(BaseOutput):
  class UNet1DModel (line 39) | class UNet1DModel(ModelMixin, ConfigMixin):
    method __init__ (line 70) | def __init__(
    method forward (line 190) | def forward(

FILE: src/diffusers/models/unet_1d_blocks.py
  class DownResnetBlock1D (line 23) | class DownResnetBlock1D(nn.Module):
    method __init__ (line 24) | def __init__(
    method forward (line 71) | def forward(self, hidden_states, temb=None):
  class UpResnetBlock1D (line 89) | class UpResnetBlock1D(nn.Module):
    method __init__ (line 90) | def __init__(
    method forward (line 135) | def forward(self, hidden_states, res_hidden_states_tuple=None, temb=No...
  class ValueFunctionMidBlock1D (line 153) | class ValueFunctionMidBlock1D(nn.Module):
    method __init__ (line 154) | def __init__(self, in_channels, out_channels, embed_dim):
    method forward (line 165) | def forward(self, x, temb=None):
  class MidResTemporalBlock1D (line 173) | class MidResTemporalBlock1D(nn.Module):
    method __init__ (line 174) | def __init__(
    method forward (line 217) | def forward(self, hidden_states, temb):
  class OutConv1DBlock (line 230) | class OutConv1DBlock(nn.Module):
    method __init__ (line 231) | def __init__(self, num_groups_out, out_channels, embed_dim, act_fn):
    method forward (line 241) | def forward(self, hidden_states, temb=None):
  class OutValueFunctionBlock (line 251) | class OutValueFunctionBlock(nn.Module):
    method __init__ (line 252) | def __init__(self, fc_dim, embed_dim):
    method forward (line 262) | def forward(self, hidden_states, temb):
  class Downsample1d (line 291) | class Downsample1d(nn.Module):
    method __init__ (line 292) | def __init__(self, kernel="linear", pad_mode="reflect"):
    method forward (line 299) | def forward(self, hidden_states):
  class Upsample1d (line 307) | class Upsample1d(nn.Module):
    method __init__ (line 308) | def __init__(self, kernel="linear", pad_mode="reflect"):
    method forward (line 315) | def forward(self, hidden_states, temb=None):
  class SelfAttention1d (line 323) | class SelfAttention1d(nn.Module):
    method __init__ (line 324) | def __init__(self, in_channels, n_head=1, dropout_rate=0.0):
    method transpose_for_scores (line 338) | def transpose_for_scores(self, projection: torch.Tensor) -> torch.Tensor:
    method forward (line 344) | def forward(self, hidden_states):
  class ResConvBlock (line 381) | class ResConvBlock(nn.Module):
    method __init__ (line 382) | def __init__(self, in_channels, mid_channels, out_channels, is_last=Fa...
    method forward (line 399) | def forward(self, hidden_states):
  class UNetMidBlock1D (line 415) | class UNetMidBlock1D(nn.Module):
    method __init__ (line 416) | def __init__(self, mid_channels, in_channels, out_channels=None):
    method forward (line 444) | def forward(self, hidden_states, temb=None):
  class AttnDownBlock1D (line 455) | class AttnDownBlock1D(nn.Module):
    method __init__ (line 456) | def __init__(self, out_channels, in_channels, mid_channels=None):
    method forward (line 475) | def forward(self, hidden_states, temb=None):
  class DownBlock1D (line 485) | class DownBlock1D(nn.Module):
    method __init__ (line 486) | def __init__(self, out_channels, in_channels, mid_channels=None):
    method forward (line 499) | def forward(self, hidden_states, temb=None):
  class DownBlock1DNoSkip (line 508) | class DownBlock1DNoSkip(nn.Module):
    method __init__ (line 509) | def __init__(self, out_channels, in_channels, mid_channels=None):
    method forward (line 521) | def forward(self, hidden_states, temb=None):
  class AttnUpBlock1D (line 529) | class AttnUpBlock1D(nn.Module):
    method __init__ (line 530) | def __init__(self, in_channels, out_channels, mid_channels=None):
    method forward (line 549) | def forward(self, hidden_states, res_hidden_states_tuple, temb=None):
  class UpBlock1D (line 562) | class UpBlock1D(nn.Module):
    method __init__ (line 563) | def __init__(self, in_channels, out_channels, mid_channels=None):
    method forward (line 576) | def forward(self, hidden_states, res_hidden_states_tuple, temb=None):
  class UpBlock1DNoSkip (line 588) | class UpBlock1DNoSkip(nn.Module):
    method __init__ (line 589) | def __init__(self, in_channels, out_channels, mid_channels=None):
    method forward (line 601) | def forward(self, hidden_states, res_hidden_states_tuple, temb=None):
  function get_down_block (line 611) | def get_down_block(down_block_type, num_layers, in_channels, out_channel...
  function get_up_block (line 629) | def get_up_block(up_block_type, num_layers, in_channels, out_channels, t...
  function get_mid_block (line 647) | def get_mid_block(mid_block_type, num_layers, in_channels, mid_channels,...
  function get_out_block (line 663) | def get_out_block(*, out_block_type, num_groups_out, embed_dim, out_chan...

FILE: src/diffusers/models/unet_2d.py
  class UNet2DOutput (line 28) | class UNet2DOutput(BaseOutput):
  class UNet2DModel (line 38) | class UNet2DModel(ModelMixin, ConfigMixin):
    method __init__ (line 81) | def __init__(
    method forward (line 217) | def forward(

FILE: src/diffusers/models/unet_2d_blocks.py
  function get_down_block (line 27) | def get_down_block(
  function get_up_block (line 199) | def get_up_block(
  class UNetMidBlock2D (line 371) | class UNetMidBlock2D(nn.Module):
    method __init__ (line 372) | def __init__(
    method forward (line 440) | def forward(self, hidden_states, temb=None):
  class UNetMidBlock2DCrossAttn (line 450) | class UNetMidBlock2DCrossAttn(nn.Module):
    method __init__ (line 451) | def __init__(
    method forward (line 535) | def forward(
  class UNetMidBlock2DSimpleCrossAttn (line 550) | class UNetMidBlock2DSimpleCrossAttn(nn.Module):
    method __init__ (line 551) | def __init__(
    method forward (line 624) | def forward(
  class AttnDownBlock2D (line 644) | class AttnDownBlock2D(nn.Module):
    method __init__ (line 645) | def __init__(
    method forward (line 706) | def forward(self, hidden_states, temb=None):
  class CrossAttnDownBlock2D (line 723) | class CrossAttnDownBlock2D(nn.Module):
    method __init__ (line 724) | def __init__(
    method forward (line 810) | def forward(
  class DownBlock2D (line 854) | class DownBlock2D(nn.Module):
    method __init__ (line 855) | def __init__(
    method forward (line 906) | def forward(self, hidden_states, temb=None):
  class DownEncoderBlock2D (line 933) | class DownEncoderBlock2D(nn.Module):
    method __init__ (line 934) | def __init__(
    method forward (line 982) | def forward(self, hidden_states):
  class AttnDownEncoderBlock2D (line 993) | class AttnDownEncoderBlock2D(nn.Module):
    method __init__ (line 994) | def __init__(
    method forward (line 1054) | def forward(self, hidden_states):
  class AttnSkipDownBlock2D (line 1066) | class AttnSkipDownBlock2D(nn.Module):
    method __init__ (line 1067) | def __init__(
    method forward (line 1136) | def forward(self, hidden_states, temb=None, skip_sample=None):
  class SkipDownBlock2D (line 1156) | class SkipDownBlock2D(nn.Module):
    method __init__ (line 1157) | def __init__(
    method forward (line 1216) | def forward(self, hidden_states, temb=None, skip_sample=None):
  class ResnetDownsampleBlock2D (line 1235) | class ResnetDownsampleBlock2D(nn.Module):
    method __init__ (line 1236) | def __init__(
    method forward (line 1296) | def forward(self, hidden_states, temb=None):
  class SimpleCrossAttnDownBlock2D (line 1323) | class SimpleCrossAttnDownBlock2D(nn.Module):
    method __init__ (line 1324) | def __init__(
    method forward (line 1406) | def forward(
  class KDownBlock2D (line 1435) | class KDownBlock2D(nn.Module):
    method __init__ (line 1436) | def __init__(
    method forward (line 1481) | def forward(self, hidden_states, temb=None):
  class KCrossAttnDownBlock2D (line 1506) | class KCrossAttnDownBlock2D(nn.Module):
    method __init__ (line 1507) | def __init__(
    method forward (line 1571) | def forward(
  class AttnUpBlock2D (line 1618) | class AttnUpBlock2D(nn.Module):
    method __init__ (line 1619) | def __init__(
    method forward (line 1676) | def forward(self, hidden_states, res_hidden_states_tuple, temb=None):
  class CrossAttnUpBlock2D (line 1693) | class CrossAttnUpBlock2D(nn.Module):
    method __init__ (line 1694) | def __init__(
    method forward (line 1776) | def forward(
  class UpBlock2D (line 1826) | class UpBlock2D(nn.Module):
    method __init__ (line 1827) | def __init__(
    method forward (line 1874) | def forward(self, hidden_states, res_hidden_states_tuple, temb=None, u...
  class UpDecoderBlock2D (line 1900) | class UpDecoderBlock2D(nn.Module):
    method __init__ (line 1901) | def __init__(
    method forward (line 1943) | def forward(self, hidden_states):
  class AttnUpDecoderBlock2D (line 1954) | class AttnUpDecoderBlock2D(nn.Module):
    method __init__ (line 1955) | def __init__(
    method forward (line 2009) | def forward(self, hidden_states):
  class AttnSkipUpBlock2D (line 2021) | class AttnSkipUpBlock2D(nn.Module):
    method __init__ (line 2022) | def __init__(
    method forward (line 2101) | def forward(self, hidden_states, res_hidden_states_tuple, temb=None, s...
  class SkipUpBlock2D (line 2129) | class SkipUpBlock2D(nn.Module):
    method __init__ (line 2130) | def __init__(
    method forward (line 2198) | def forward(self, hidden_states, res_hidden_states_tuple, temb=None, s...
  class ResnetUpsampleBlock2D (line 2224) | class ResnetUpsampleBlock2D(nn.Module):
    method __init__ (line 2225) | def __init__(
    method forward (line 2288) | def forward(self, hidden_states, res_hidden_states_tuple, temb=None, u...
  class SimpleCrossAttnUpBlock2D (line 2314) | class SimpleCrossAttnUpBlock2D(nn.Module):
    method __init__ (line 2315) | def __init__(
    method forward (line 2399) | def forward(
  class KUpBlock2D (line 2434) | class KUpBlock2D(nn.Module):
    method __init__ (line 2435) | def __init__(
    method forward (line 2482) | def forward(self, hidden_states, res_hidden_states_tuple, temb=None, u...
  class KCrossAttnUpBlock2D (line 2507) | class KCrossAttnUpBlock2D(nn.Module):
    method __init__ (line 2508) | def __init__(
    method forward (line 2591) | def forward(
  class KAttentionBlock (line 2643) | class KAttentionBlock(nn.Module):
    method __init__ (line 2660) | def __init__(
    method _to_3d (line 2703) | def _to_3d(self, hidden_states, height, weight):
    method _to_4d (line 2706) | def _to_4d(self, hidden_states, height, weight):
    method forward (line 2709) | def forward(

FILE: src/diffusers/models/unet_2d_blocks_flax.py
  class FlaxCrossAttnDownBlock2D (line 22) | class FlaxCrossAttnDownBlock2D(nn.Module):
    method setup (line 53) | def setup(self):
    method __call__ (line 85) | def __call__(self, hidden_states, temb, encoder_hidden_states, determi...
  class FlaxDownBlock2D (line 100) | class FlaxDownBlock2D(nn.Module):
    method setup (line 125) | def setup(self):
    method __call__ (line 143) | def __call__(self, hidden_states, temb, deterministic=True):
  class FlaxCrossAttnUpBlock2D (line 157) | class FlaxCrossAttnUpBlock2D(nn.Module):
    method setup (line 189) | def setup(self):
    method __call__ (line 222) | def __call__(self, hidden_states, res_hidden_states_tuple, temb, encod...
  class FlaxUpBlock2D (line 238) | class FlaxUpBlock2D(nn.Module):
    method setup (line 266) | def setup(self):
    method __call__ (line 286) | def __call__(self, hidden_states, res_hidden_states_tuple, temb, deter...
  class FlaxUNetMidBlock2DCrossAttn (line 301) | class FlaxUNetMidBlock2DCrossAttn(nn.Module):
    method setup (line 324) | def setup(self):
    method __call__ (line 359) | def __call__(self, hidden_states, temb, encoder_hidden_states, determi...

FILE: src/diffusers/models/unet_2d_condition.py
  class UNet2DConditionOutput (line 43) | class UNet2DConditionOutput(BaseOutput):
  class UNet2DConditionModel (line 53) | class UNet2DConditionModel(ModelMixin, ConfigMixin, UNet2DConditionLoade...
    method __init__ (line 113) | def __init__(
    method attn_processors (line 364) | def attn_processors(self) -> Dict[str, AttnProcessor]:
    method set_attn_processor (line 387) | def set_attn_processor(self, processor: Union[AttnProcessor, Dict[str,...
    method set_attention_slice (line 417) | def set_attention_slice(self, slice_size):
    method _set_gradient_checkpointing (line 482) | def _set_gradient_checkpointing(self, module, value=False):
    method forward (line 486) | def forward(

FILE: src/diffusers/models/unet_2d_condition_flax.py
  class FlaxUNet2DConditionOutput (line 36) | class FlaxUNet2DConditionOutput(BaseOutput):
  class FlaxUNet2DConditionModel (line 47) | class FlaxUNet2DConditionModel(nn.Module, FlaxModelMixin, ConfigMixin):
    method init_weights (line 115) | def init_weights(self, rng: jax.random.KeyArray) -> FrozenDict:
    method setup (line 127) | def setup(self):
    method __call__ (line 247) | def __call__(

FILE: src/diffusers/models/vae.py
  class DecoderOutput (line 26) | class DecoderOutput(BaseOutput):
  class Encoder (line 38) | class Encoder(nn.Module):
    method __init__ (line 39) | def __init__(
    method forward (line 99) | def forward(self, x):
  class Decoder (line 118) | class Decoder(nn.Module):
    method __init__ (line 119) | def __init__(
    method forward (line 179) | def forward(self, z):
  class VectorQuantizer (line 198) | class VectorQuantizer(nn.Module):
    method __init__ (line 207) | def __init__(
    method remap_to_used (line 236) | def remap_to_used(self, inds):
    method unmap_to_all (line 250) | def unmap_to_all(self, inds):
    method forward (line 260) | def forward(self, z):
    method get_codebook_entry (line 294) | def get_codebook_entry(self, indices, shape):
  class DiagonalGaussianDistribution (line 312) | class DiagonalGaussianDistribution(object):
    method __init__ (line 313) | def __init__(self, parameters, deterministic=False):
    method sample (line 325) | def sample(self, generator: Optional[torch.Generator] = None) -> torch...
    method kl (line 333) | def kl(self, other=None):
    method nll (line 349) | def nll(self, sample, dims=[1, 2, 3]):
    method mode (line 355) | def mode(self):
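
Note: DiagonalGaussianDistribution wraps the encoder output, which concatenates mean and log-variance on the channel axis; a standalone sketch of the reparameterized sample:

    import torch
    from diffusers.models.vae import DiagonalGaussianDistribution

    params = torch.randn(1, 8, 32, 32)   # channels = 2 * latent_channels: [mean | logvar]
    dist = DiagonalGaussianDistribution(params)
    z = dist.sample()                    # mean + std * noise (reparameterization trick)
    kl = dist.kl()                       # KL to a standard normal, summed per sample
    print(z.shape, kl.shape)             # torch.Size([1, 4, 32, 32]) torch.Size([1])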

FILE: src/diffusers/models/vae_flax.py
  class FlaxDecoderOutput (line 33) | class FlaxDecoderOutput(BaseOutput):
  class FlaxAutoencoderKLOutput (line 48) | class FlaxAutoencoderKLOutput(BaseOutput):
  class FlaxUpsample2D (line 61) | class FlaxUpsample2D(nn.Module):
    method setup (line 75) | def setup(self):
    method __call__ (line 84) | def __call__(self, hidden_states):
  class FlaxDownsample2D (line 95) | class FlaxDownsample2D(nn.Module):
    method setup (line 109) | def setup(self):
    method __call__ (line 118) | def __call__(self, hidden_states):
  class FlaxResnetBlock2D (line 125) | class FlaxResnetBlock2D(nn.Module):
    method setup (line 151) | def setup(self):
    method __call__ (line 185) | def __call__(self, hidden_states, deterministic=True):
  class FlaxAttentionBlock (line 202) | class FlaxAttentionBlock(nn.Module):
    method setup (line 222) | def setup(self):
    method transpose_for_scores (line 231) | def transpose_for_scores(self, projection):
    method __call__ (line 239) | def __call__(self, hidden_states):
  class FlaxDownEncoderBlock2D (line 274) | class FlaxDownEncoderBlock2D(nn.Module):
    method setup (line 302) | def setup(self):
    method __call__ (line 320) | def __call__(self, hidden_states, deterministic=True):
  class FlaxUpDecoderBlock2D (line 330) | class FlaxUpDecoderBlock2D(nn.Module):
    method setup (line 358) | def setup(self):
    method __call__ (line 376) | def __call__(self, hidden_states, deterministic=True):
  class FlaxUNetMidBlock2D (line 386) | class FlaxUNetMidBlock2D(nn.Module):
    method setup (line 411) | def setup(self):
    method __call__ (line 448) | def __call__(self, hidden_states, deterministic=True):
  class FlaxEncoder (line 457) | class FlaxEncoder(nn.Module):
    method setup (line 501) | def setup(self):
    method __call__ (line 550) | def __call__(self, sample, deterministic: bool = True):
  class FlaxDecoder (line 569) | class FlaxDecoder(nn.Module):
    method setup (line 612) | def setup(self):
    method __call__ (line 665) | def __call__(self, sample, deterministic: bool = True):
  class FlaxDiagonalGaussianDistribution (line 683) | class FlaxDiagonalGaussianDistribution(object):
    method __init__ (line 684) | def __init__(self, parameters, deterministic=False):
    method sample (line 694) | def sample(self, key):
    method kl (line 697) | def kl(self, other=None):
    method nll (line 709) | def nll(self, sample, axis=[1, 2, 3]):
    method mode (line 716) | def mode(self):
  class FlaxAutoencoderKL (line 721) | class FlaxAutoencoderKL(nn.Module, FlaxModelMixin, ConfigMixin):
    method setup (line 780) | def setup(self):
    method init_weights (line 817) | def init_weights(self, rng: jax.random.KeyArray) -> FrozenDict:
    method encode (line 827) | def encode(self, sample, deterministic: bool = True, return_dict: bool...
    method decode (line 839) | def decode(self, latents, deterministic: bool = True, return_dict: boo...
    method __call__ (line 853) | def __call__(self, sample, sample_posterior=False, deterministic: bool...

FILE: src/diffusers/models/vq_model.py
  class VQEncoderOutput (line 27) | class VQEncoderOutput(BaseOutput):
  class VQModel (line 39) | class VQModel(ModelMixin, ConfigMixin):
    method __init__ (line 70) | def __init__(
    method encode (line 117) | def encode(self, x: torch.FloatTensor, return_dict: bool = True) -> VQ...
    method decode (line 126) | def decode(
    method forward (line 142) | def forward(self, sample: torch.FloatTensor, return_dict: bool = True)...

FILE: src/diffusers/optimization.py
  class SchedulerType (line 30) | class SchedulerType(Enum):
  function get_constant_schedule (line 39) | def get_constant_schedule(optimizer: Optimizer, last_epoch: int = -1):
  function get_constant_schedule_with_warmup (line 55) | def get_constant_schedule_with_warmup(optimizer: Optimizer, num_warmup_s...
  function get_linear_schedule_with_warmup (line 80) | def get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_tra...
  function get_cosine_schedule_with_warmup (line 109) | def get_cosine_schedule_with_warmup(
  function get_cosine_with_hard_restarts_schedule_with_warmup (line 143) | def get_cosine_with_hard_restarts_schedule_with_warmup(
  function get_polynomial_decay_schedule_with_warmup (line 178) | def get_polynomial_decay_schedule_with_warmup(
  function get_scheduler (line 238) | def get_scheduler(
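
Note: get_scheduler dispatches on the SchedulerType names above; a sketch wiring the linear warmup-then-decay schedule to a toy optimizer:

    import torch
    from diffusers.optimization import get_scheduler

    optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=1e-4)
    lr_scheduler = get_scheduler(
        "linear",
        optimizer=optimizer,
        num_warmup_steps=500,
        num_training_steps=10_000,
    )
    for _ in range(10):        # inside a real training loop:
        optimizer.step()
        lr_scheduler.step()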

FILE: src/diffusers/pipelines/alt_diffusion/__init__.py
  class AltDiffusionPipelineOutput (line 13) | class AltDiffusionPipelineOutput(BaseOutput):

FILE: src/diffusers/pipelines/alt_diffusion/modeling_roberta_series.py
  class TransformationModelOutput (line 11) | class TransformationModelOutput(ModelOutput):
  class RobertaSeriesConfig (line 39) | class RobertaSeriesConfig(XLMRobertaConfig):
    method __init__ (line 40) | def __init__(
  class RobertaSeriesModelWithTransformation (line 58) | class RobertaSeriesModelWithTransformation(RobertaPreTrainedModel):
    method __init__ (line 64) | def __init__(self, config):
    method forward (line 70) | def forward(

FILE: src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py
  class AltDiffusionPipeline (line 52) | class AltDiffusionPipeline(DiffusionPipeline):
    method __init__ (line 81) | def __init__(
    method enable_vae_slicing (line 170) | def enable_vae_slicing(self):
    method disable_vae_slicing (line 179) | def disable_vae_slicing(self):
    method enable_vae_tiling (line 186) | def enable_vae_tiling(self):
    method disable_vae_tiling (line 195) | def disable_vae_tiling(self):
    method enable_sequential_cpu_offload (line 202) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method enable_model_cpu_offload (line 227) | def enable_model_cpu_offload(self, gpu_id=0):
    method _execution_device (line 256) | def _execution_device(self):
    method _encode_prompt (line 273) | def _encode_prompt(
    method run_safety_checker (line 411) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 421) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 429) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 446) | def check_inputs(
    method prepare_latents (line 493) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method __call__ (line 512) | def __call__(

FILE: src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion_img2img.py
  function preprocess (line 68) | def preprocess(image):
  class AltDiffusionImg2ImgPipeline (line 90) | class AltDiffusionImg2ImgPipeline(DiffusionPipeline):
    method __init__ (line 119) | def __init__(
    method enable_sequential_cpu_offload (line 208) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method enable_model_cpu_offload (line 233) | def enable_model_cpu_offload(self, gpu_id=0):
    method _execution_device (line 262) | def _execution_device(self):
    method _encode_prompt (line 279) | def _encode_prompt(
    method run_safety_checker (line 417) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 427) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 435) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 452) | def check_inputs(
    method get_timesteps (line 492) | def get_timesteps(self, num_inference_steps, strength, device):
    method prepare_latents (line 501) | def prepare_latents(self, image, timestep, batch_size, num_images_per_...
    method __call__ (line 555) | def __call__(

FILE: src/diffusers/pipelines/audio_diffusion/mel.py
  class Mel (line 37) | class Mel(ConfigMixin, SchedulerMixin):
    method __init__ (line 52) | def __init__(
    method set_resolution (line 73) | def set_resolution(self, x_res: int, y_res: int):
    method load_audio (line 85) | def load_audio(self, audio_file: str = None, raw_audio: np.ndarray = N...
    method get_number_of_slices (line 101) | def get_number_of_slices(self) -> int:
    method get_audio_slice (line 109) | def get_audio_slice(self, slice: int = 0) -> np.ndarray:
    method get_sample_rate (line 120) | def get_sample_rate(self) -> int:
    method audio_slice_to_image (line 128) | def audio_slice_to_image(self, slice: int) -> Image.Image:
    method image_to_audio (line 145) | def image_to_audio(self, image: Image.Image) -> np.ndarray:

FILE: src/diffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py
  class AudioDiffusionPipeline (line 30) | class AudioDiffusionPipeline(DiffusionPipeline):
    method __init__ (line 44) | def __init__(
    method get_input_dims (line 54) | def get_input_dims(self) -> Tuple:
    method get_default_steps (line 69) | def get_default_steps(self) -> int:
    method __call__ (line 78) | def __call__(
    method encode (line 216) | def encode(self, images: List[Image.Image], steps: int = 50) -> np.nda...
    method slerp (line 253) | def slerp(x0: torch.Tensor, x1: torch.Tensor, alpha: float) -> torch.T...

FILE: src/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py
  class DanceDiffusionPipeline (line 27) | class DanceDiffusionPipeline(DiffusionPipeline):
    method __init__ (line 39) | def __init__(self, unet, scheduler):
    method __call__ (line 44) | def __call__(

FILE: src/diffusers/pipelines/ddim/pipeline_ddim.py
  class DDIMPipeline (line 24) | class DDIMPipeline(DiffusionPipeline):
    method __init__ (line 36) | def __init__(self, unet, scheduler):
    method __call__ (line 45) | def __call__(
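
Note: since the pipeline holds just a UNet and a scheduler, unconditional sampling is a few lines (the DDPM checkpoint id is a public Hub model):

    from diffusers import DDIMPipeline

    pipe = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
    image = pipe(batch_size=1, num_inference_steps=50).images[0]
    image.save("ddim_sample.png")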

FILE: src/diffusers/pipelines/ddpm/pipeline_ddpm.py
  class DDPMPipeline (line 24) | class DDPMPipeline(DiffusionPipeline):
    method __init__ (line 36) | def __init__(self, unet, scheduler):
    method __call__ (line 41) | def __call__(

FILE: src/diffusers/pipelines/dit/pipeline_dit.py
  class DiTPipeline (line 31) | class DiTPipeline(DiffusionPipeline):
    method __init__ (line 45) | def __init__(
    method get_label_ids (line 63) | def get_label_ids(self, label: Union[str, List[str]]) -> List[int]:
    method __call__ (line 87) | def __call__(

FILE: src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py
  class LDMTextToImagePipeline (line 32) | class LDMTextToImagePipeline(DiffusionPipeline):
    method __init__ (line 51) | def __init__(
    method __call__ (line 64) | def __call__(
  class LDMBertConfig (line 220) | class LDMBertConfig(PretrainedConfig):
    method __init__ (line 225) | def __init__(
  function _expand_mask (line 267) | def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Option...
  class LDMBertAttention (line 282) | class LDMBertAttention(nn.Module):
    method __init__ (line 285) | def __init__(
    method _shape (line 309) | def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
    method forward (line 312) | def forward(
  class LDMBertEncoderLayer (line 426) | class LDMBertEncoderLayer(nn.Module):
    method __init__ (line 427) | def __init__(self, config: LDMBertConfig):
    method forward (line 444) | def forward(
  class LDMBertPreTrainedModel (line 496) | class LDMBertPreTrainedModel(PreTrainedModel):
    method _init_weights (line 502) | def _init_weights(self, module):
    method _set_gradient_checkpointing (line 513) | def _set_gradient_checkpointing(self, module, value=False):
    method dummy_inputs (line 518) | def dummy_inputs(self):
  class LDMBertEncoder (line 528) | class LDMBertEncoder(LDMBertPreTrainedModel):
    method __init__ (line 538) | def __init__(self, config: LDMBertConfig):
    method get_input_embeddings (line 556) | def get_input_embeddings(self):
    method set_input_embeddings (line 559) | def set_input_embeddings(self, value):
    method forward (line 562) | def forward(
  class LDMBertModel (line 695) | class LDMBertModel(LDMBertPreTrainedModel):
    method __init__ (line 698) | def __init__(self, config: LDMBertConfig):
    method forward (line 703) | def forward(

FILE: src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py
  function preprocess (line 22) | def preprocess(image):
  class LDMSuperResolutionPipeline (line 32) | class LDMSuperResolutionPipeline(DiffusionPipeline):
    method __init__ (line 49) | def __init__(
    method __call__ (line 66) | def __call__(

FILE: src/diffusers/pipelines/latent_diffusion_uncond/pipeline_latent_diffusion_uncond.py
  class LDMPipeline (line 26) | class LDMPipeline(DiffusionPipeline):
    method __init__ (line 39) | def __init__(self, vqvae: VQModel, unet: UNet2DModel, scheduler: DDIMS...
    method __call__ (line 44) | def __call__(

FILE: src/diffusers/pipelines/onnx_utils.py
  class OnnxRuntimeModel (line 51) | class OnnxRuntimeModel:
    method __init__ (line 52) | def __init__(self, model=None, **kwargs):
    method __call__ (line 58) | def __call__(self, **kwargs):
    method load_model (line 63) | def load_model(path: Union[str, Path], provider=None, sess_options=None):
    method _save_pretrained (line 79) | def _save_pretrained(self, save_directory: Union[str, Path], file_name...
    method save_pretrained (line 110) | def save_pretrained(
    method _from_pretrained (line 133) | def _from_pretrained(
    method from_pretrained (line 193) | def from_pretrained(

FILE: src/diffusers/pipelines/paint_by_example/image_encoder.py
  class PaintByExampleImageEncoder (line 25) | class PaintByExampleImageEncoder(CLIPPreTrainedModel):
    method __init__ (line 26) | def __init__(self, config, proj_size=768):
    method forward (line 38) | def forward(self, pixel_values, return_uncond_vector=False):
  class PaintByExampleMapper (line 50) | class PaintByExampleMapper(nn.Module):
    method __init__ (line 51) | def __init__(self, config):
    method forward (line 63) | def forward(self, hidden_states):

FILE: src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py
  function prepare_mask_and_masked_image (line 37) | def prepare_mask_and_masked_image(image, mask):
  class PaintByExamplePipeline (line 137) | class PaintByExamplePipeline(DiffusionPipeline):
    method __init__ (line 168) | def __init__(
    method enable_sequential_cpu_offload (line 191) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 212) | def _execution_device(self):
    method run_safety_checker (line 230) | def run_safety_checker(self, image, device, dtype):
    method prepare_extra_step_kwargs (line 241) | def prepare_extra_step_kwargs(self, generator, eta):
    method decode_latents (line 259) | def decode_latents(self, latents):
    method check_inputs (line 268) | def check_inputs(self, image, height, width, callback_steps):
    method prepare_latents (line 291) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method prepare_mask_latents (line 309) | def prepare_mask_latents(
    method _encode_image (line 360) | def _encode_image(self, image, device, num_images_per_prompt, do_class...
    method __call__ (line 386) | def __call__(
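
  A minimal usage sketch for PaintByExamplePipeline, assuming the public "Fantasy-Studio/Paint-by-Example" checkpoint and placeholder image files:

    import torch
    from PIL import Image
    from diffusers import PaintByExamplePipeline

    pipe = PaintByExamplePipeline.from_pretrained(
        "Fantasy-Studio/Paint-by-Example", torch_dtype=torch.float16
    ).to("cuda")

    # Placeholder paths: the masked region of `image` is repainted to resemble
    # `example_image` (see prepare_mask_and_masked_image above for preprocessing).
    init_image = Image.open("scene.png").convert("RGB").resize((512, 512))
    mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))
    example_image = Image.open("reference.png").convert("RGB").resize((512, 512))

    image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0]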

FILE: src/diffusers/pipelines/pipeline_flax_utils.py
  function import_flax_or_no_model (line 66) | def import_flax_or_no_model(module, class_name):
  class FlaxImagePipelineOutput (line 80) | class FlaxImagePipelineOutput(BaseOutput):
  class FlaxDiffusionPipeline (line 93) | class FlaxDiffusionPipeline(ConfigMixin):
    method register_modules (line 110) | def register_modules(self, **kwargs):
    method save_pretrained (line 143) | def save_pretrained(self, save_directory: Union[str, os.PathLike], par...
    method from_pretrained (line 194) | def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union...
    method _get_signature_keys (line 477) | def _get_signature_keys(obj):
    method components (line 485) | def components(self) -> Dict[str, Any]:
    method numpy_to_pil (line 522) | def numpy_to_pil(images):
    method progress_bar (line 538) | def progress_bar(self, iterable):
    method set_progress_bar_config (line 548) | def set_progress_bar_config(self, **kwargs):

FILE: src/diffusers/pipelines/pipeline_utils.py
  class ImagePipelineOutput (line 109) | class ImagePipelineOutput(BaseOutput):
  class AudioPipelineOutput (line 123) | class AudioPipelineOutput(BaseOutput):
  function is_safetensors_compatible (line 136) | def is_safetensors_compatible(filenames, variant=None) -> bool:
  function variant_compatible_siblings (line 182) | def variant_compatible_siblings(info, variant=None) -> Union[List[os.Pat...
  class DiffusionPipeline (line 223) | class DiffusionPipeline(ConfigMixin):
    method register_modules (line 243) | def register_modules(self, **kwargs):
    method save_pretrained (line 276) | def save_pretrained(
    method to (line 345) | def to(self, torch_device: Optional[Union[str, torch.device]] = None, ...
    method device (line 400) | def device(self) -> torch.device:
    method from_pretrained (line 413) | def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union...
    method _get_signature_keys (line 972) | def _get_signature_keys(obj):
    method components (line 980) | def components(self) -> Dict[str, Any]:
    method numpy_to_pil (line 1017) | def numpy_to_pil(images):
    method progress_bar (line 1032) | def progress_bar(self, iterable=None, total=None):
    method set_progress_bar_config (line 1047) | def set_progress_bar_config(self, **kwargs):
    method enable_xformers_memory_efficient_attention (line 1050) | def enable_xformers_memory_efficient_attention(self, attention_op: Opt...
    method disable_xformers_memory_efficient_attention (line 1082) | def disable_xformers_memory_efficient_attention(self):
    method set_use_memory_efficient_attention_xformers (line 1088) | def set_use_memory_efficient_attention_xformers(
    method enable_attention_slicing (line 1107) | def enable_attention_slicing(self, slice_size: Optional[Union[str, int...
    method disable_attention_slicing (line 1123) | def disable_attention_slicing(self):
    method set_attention_slice (line 1131) | def set_attention_slice(self, slice_size: Optional[int]):
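
  A minimal usage sketch for the base DiffusionPipeline API; the repo id is an assumption, and from_pretrained resolves the concrete pipeline class from the repo's model_index.json:

    import torch
    from diffusers import DiffusionPipeline, StableDiffusionImg2ImgPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")
    pipe.enable_attention_slicing()  # lower peak memory at a small speed cost

    # .components exposes the loaded modules so a second pipeline can reuse
    # the same weights without downloading or allocating them again.
    img2img = StableDiffusionImg2ImgPipeline(**pipe.components)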

FILE: src/diffusers/pipelines/pndm/pipeline_pndm.py
  class PNDMPipeline (line 26) | class PNDMPipeline(DiffusionPipeline):
    method __init__ (line 40) | def __init__(self, unet: UNet2DModel, scheduler: PNDMScheduler):
    method __call__ (line 48) | def __call__(

FILE: src/diffusers/pipelines/repaint/pipeline_repaint.py
  function _preprocess_image (line 32) | def _preprocess_image(image: Union[List, PIL.Image.Image, torch.Tensor]):
  function _preprocess_mask (line 53) | def _preprocess_mask(mask: Union[List, PIL.Image.Image, torch.Tensor]):
  class RePaintPipeline (line 73) | class RePaintPipeline(DiffusionPipeline):
    method __init__ (line 77) | def __init__(self, unet, scheduler):
    method __call__ (line 82) | def __call__(
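
  A minimal usage sketch for RePaintPipeline, assuming the public "google/ddpm-ema-celebahq-256" checkpoint and placeholder 256x256 images (see _preprocess_mask above for the exact mask convention):

    from PIL import Image
    from diffusers import RePaintPipeline, RePaintScheduler

    scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
    pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler)

    original_image = Image.open("face.png").convert("RGB").resize((256, 256))  # placeholder
    mask_image = Image.open("mask.png").convert("RGB").resize((256, 256))      # placeholder

    # jump_length / jump_n_sample implement the resampling schedule of the RePaint paper.
    image = pipe(
        image=original_image,
        mask_image=mask_image,
        num_inference_steps=250,
        eta=0.0,
        jump_length=10,
        jump_n_sample=10,
    ).images[0]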

FILE: src/diffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py
  class ScoreSdeVePipeline (line 25) | class ScoreSdeVePipeline(DiffusionPipeline):
    method __init__ (line 36) | def __init__(self, unet: UNet2DModel, scheduler: ScoreSdeVeScheduler):
    method __call__ (line 41) | def __call__(

FILE: src/diffusers/pipelines/semantic_stable_diffusion/__init__.py
  class SemanticStableDiffusionPipelineOutput (line 13) | class SemanticStableDiffusionPipelineOutput(BaseOutput):

FILE: src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py
  class SemanticStableDiffusionPipeline (line 61) | class SemanticStableDiffusionPipeline(DiffusionPipeline):
    method __init__ (line 93) | def __init__(
    method decode_latents (line 135) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 144) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 162) | def check_inputs(
    method prepare_latents (line 210) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method __call__ (line 228) | def __call__(

FILE: src/diffusers/pipelines/stable_diffusion/__init__.py
  class StableDiffusionPipelineOutput (line 22) | class StableDiffusionPipelineOutput(BaseOutput):
  class FlaxStableDiffusionPipelineOutput (line 112) | class FlaxStableDiffusionPipelineOutput(BaseOutput):

FILE: src/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py
  function shave_segments (line 67) | def shave_segments(path, n_shave_prefix_segments=1):
  function renew_resnet_paths (line 77) | def renew_resnet_paths(old_list, n_shave_prefix_segments=0):
  function renew_vae_resnet_paths (line 99) | def renew_vae_resnet_paths(old_list, n_shave_prefix_segments=0):
  function renew_attention_paths (line 115) | def renew_attention_paths(old_list, n_shave_prefix_segments=0):
  function renew_vae_attention_paths (line 136) | def renew_vae_attention_paths(old_list, n_shave_prefix_segments=0):
  function assign_to_checkpoint (line 166) | def assign_to_checkpoint(
  function conv_attn_to_linear (line 217) | def conv_attn_to_linear(checkpoint):
  function create_unet_diffusers_config (line 229) | def create_unet_diffusers_config(original_config, image_size: int, contr...
  function create_vae_diffusers_config (line 298) | def create_vae_diffusers_config(original_config, image_size: int):
  function create_diffusers_schedular (line 322) | def create_diffusers_schedular(original_config):
  function create_ldm_bert_config (line 332) | def create_ldm_bert_config(original_config):
  function convert_ldm_unet_checkpoint (line 342) | def convert_ldm_unet_checkpoint(checkpoint, config, path=None, extract_e...
  function convert_ldm_vae_checkpoint (line 573) | def convert_ldm_vae_checkpoint(checkpoint, config):
  function convert_ldm_bert_checkpoint (line 680) | def convert_ldm_bert_checkpoint(checkpoint, config):
  function convert_ldm_clip_checkpoint (line 730) | def convert_ldm_clip_checkpoint(checkpoint):
  function convert_paint_by_example_checkpoint (line 770) | def convert_paint_by_example_checkpoint(checkpoint):
  function convert_open_clip_checkpoint (line 837) | def convert_open_clip_checkpoint(checkpoint):
  function stable_unclip_image_encoder (line 880) | def stable_unclip_image_encoder(original_config):
  function stable_unclip_image_noising_components (line 913) | def stable_unclip_image_noising_components(
  function load_pipeline_from_original_stable_diffusion_ckpt (line 958) | def load_pipeline_from_original_stable_diffusion_ckpt(

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_cycle_diffusion.py
  function preprocess (line 39) | def preprocess(image):
  function posterior_sample (line 60) | def posterior_sample(scheduler, latents, timestep, clean_latents, genera...
  function compute_noise (line 87) | def compute_noise(scheduler, prev_latents, latents, timestep, noise_pred...
  class CycleDiffusionPipeline (line 121) | class CycleDiffusionPipeline(DiffusionPipeline):
    method __init__ (line 150) | def __init__(
    method enable_sequential_cpu_offload (line 225) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method enable_model_cpu_offload (line 251) | def enable_model_cpu_offload(self, gpu_id=0):
    method _execution_device (line 281) | def _execution_device(self):
    method _encode_prompt (line 299) | def _encode_prompt(
    method check_inputs (line 438) | def check_inputs(
    method prepare_extra_step_kwargs (line 479) | def prepare_extra_step_kwargs(self, generator, eta):
    method run_safety_checker (line 497) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 508) | def decode_latents(self, latents):
    method get_timesteps (line 517) | def get_timesteps(self, num_inference_steps, strength, device):
    method prepare_latents (line 526) | def prepare_latents(self, image, timestep, batch_size, num_images_per_...
    method __call__ (line 576) | def __call__(
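
  A minimal usage sketch for CycleDiffusionPipeline, which requires a DDIM scheduler; the repo id, placeholder image path, and guidance values are assumptions:

    import torch
    from PIL import Image
    from diffusers import CycleDiffusionPipeline, DDIMScheduler

    model_id = "CompVis/stable-diffusion-v1-4"
    scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
    pipe = CycleDiffusionPipeline.from_pretrained(
        model_id, scheduler=scheduler, torch_dtype=torch.float16
    ).to("cuda")

    init_image = Image.open("horse.png").convert("RGB").resize((512, 512))  # placeholder

    # source_prompt describes the input image; prompt describes the desired edit.
    image = pipe(
        prompt="An astronaut riding an elephant",
        source_prompt="An astronaut riding a horse",
        image=init_image,
        num_inference_steps=100,
        strength=0.8,
        guidance_scale=2.0,
        source_guidance_scale=1.0,
    ).images[0]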

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion.py
  class FlaxStableDiffusionPipeline (line 48) | class FlaxStableDiffusionPipeline(FlaxDiffusionPipeline):
    method __init__ (line 77) | def __init__(
    method prepare_inputs (line 135) | def prepare_inputs(self, prompt: Union[str, List[str]]):
    method _get_has_nsfw_concepts (line 148) | def _get_has_nsfw_concepts(self, features, params):
    method _run_safety_checker (line 152) | def _run_safety_checker(self, images, safety_model_params, jit=False):
    method _generate (line 182) | def _generate(
    method __call__ (line 275) | def __call__(
  function _p_generate (line 397) | def _p_generate(
  function _p_get_has_nsfw_concepts (line 423) | def _p_get_has_nsfw_concepts(pipe, features, params):
  function unshard (line 427) | def unshard(x: jnp.ndarray):
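
  A minimal usage sketch for FlaxStableDiffusionPipeline on multiple accelerators; the repo id and its "bf16" revision are assumptions:

    import jax
    import jax.numpy as jnp
    from flax.jax_utils import replicate
    from flax.training.common_utils import shard
    from diffusers import FlaxStableDiffusionPipeline

    # Flax keeps weights in an explicit params pytree, returned alongside the pipeline.
    pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jnp.bfloat16
    )

    num_devices = jax.device_count()
    prompt_ids = pipeline.prepare_inputs(["a photo of an astronaut riding a horse"] * num_devices)

    # Replicate weights and shard inputs so _p_generate (above) can pmap across devices.
    params = replicate(params)
    prompt_ids = shard(prompt_ids)
    rng = jax.random.split(jax.random.PRNGKey(0), num_devices)

    images = pipeline(prompt_ids, params, rng, num_inference_steps=50, jit=True).images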

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_img2img.py
  class FlaxStableDiffusionImg2ImgPipeline (line 47) | class FlaxStableDiffusionImg2ImgPipeline(FlaxDiffusionPipeline):
    method __init__ (line 76) | def __init__(
    method prepare_inputs (line 113) | def prepare_inputs(self, prompt: Union[str, List[str]], image: Union[I...
    method _get_has_nsfw_concepts (line 134) | def _get_has_nsfw_concepts(self, features, params):
    method _run_safety_checker (line 138) | def _run_safety_checker(self, images, safety_model_params, jit=False):
    method get_timestep_start (line 168) | def get_timestep_start(self, num_inference_steps, strength):
    method _generate (line 176) | def _generate(
    method __call__ (line 280) | def __call__(
  function _p_generate (line 419) | def _p_generate(
  function _p_get_has_nsfw_concepts (line 449) | def _p_get_has_nsfw_concepts(pipe, features, params):
  function unshard (line 453) | def unshard(x: jnp.ndarray):
  function preprocess (line 460) | def preprocess(image, dtype):

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_inpaint.py
  class FlaxStableDiffusionInpaintPipeline (line 48) | class FlaxStableDiffusionInpaintPipeline(FlaxDiffusionPipeline):
    method __init__ (line 77) | def __init__(
    method prepare_inputs (line 135) | def prepare_inputs(
    method _get_has_nsfw_concepts (line 174) | def _get_has_nsfw_concepts(self, features, params):
    method _run_safety_checker (line 178) | def _run_safety_checker(self, images, safety_model_params, jit=False):
    method _generate (line 208) | def _generate(
    method __call__ (line 335) | def __call__(
  function _p_generate (line 466) | def _p_generate(
  function _p_get_has_nsfw_concepts (line 496) | def _p_get_has_nsfw_concepts(pipe, features, params):
  function unshard (line 500) | def unshard(x: jnp.ndarray):
  function preprocess_image (line 507) | def preprocess_image(image, dtype):
  function preprocess_mask (line 516) | def preprocess_mask(mask, dtype):

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py
  class OnnxStableDiffusionPipeline (line 33) | class OnnxStableDiffusionPipeline(DiffusionPipeline):
    method __init__ (line 45) | def __init__(
    method _encode_prompt (line 114) | def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_...
    method __call__ (line 191) | def __call__(
  class StableDiffusionOnnxPipeline (line 326) | class StableDiffusionOnnxPipeline(OnnxStableDiffusionPipeline):
    method __init__ (line 327) | def __init__(
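
  A minimal usage sketch for OnnxStableDiffusionPipeline; revision="onnx" assumes the repo publishes exported ONNX weights on that branch:

    from diffusers import OnnxStableDiffusionPipeline

    pipe = OnnxStableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        revision="onnx",
        provider="CPUExecutionProvider",  # any installed onnxruntime provider works
    )
    image = pipe("a photo of an astronaut riding a horse", num_inference_steps=25).images[0]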

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py
  function preprocess (line 35) | def preprocess(image):
  class OnnxStableDiffusionImg2ImgPipeline (line 56) | class OnnxStableDiffusionImg2ImgPipeline(DiffusionPipeline):
    method __init__ (line 94) | def __init__(
    method _encode_prompt (line 164) | def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_...
    method __call__ (line 241) | def __call__(

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_inpaint.py
  function prepare_mask_and_masked_image (line 38) | def prepare_mask_and_masked_image(image, mask, latents_shape):
  class OnnxStableDiffusionInpaintPipeline (line 56) | class OnnxStableDiffusionInpaintPipeline(DiffusionPipeline):
    method __init__ (line 94) | def __init__(
    method _encode_prompt (line 165) | def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_...
    method __call__ (line 243) | def __call__(

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_inpaint_legacy.py
  function preprocess (line 20) | def preprocess(image):
  function preprocess_mask (line 29) | def preprocess_mask(mask, scale_factor=8):
  class OnnxStableDiffusionInpaintPipelineLegacy (line 41) | class OnnxStableDiffusionInpaintPipelineLegacy(DiffusionPipeline):
    method __init__ (line 80) | def __init__(
    method _encode_prompt (line 150) | def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_...
    method __call__ (line 227) | def __call__(

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
  class StableDiffusionPipeline (line 55) | class StableDiffusionPipeline(DiffusionPipeline):
    method __init__ (line 84) | def __init__(
    method enable_vae_slicing (line 173) | def enable_vae_slicing(self):
    method disable_vae_slicing (line 182) | def disable_vae_slicing(self):
    method enable_vae_tiling (line 189) | def enable_vae_tiling(self):
    method disable_vae_tiling (line 198) | def disable_vae_tiling(self):
    method enable_sequential_cpu_offload (line 205) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method enable_model_cpu_offload (line 230) | def enable_model_cpu_offload(self, gpu_id=0):
    method _execution_device (line 259) | def _execution_device(self):
    method _encode_prompt (line 276) | def _encode_prompt(
    method run_safety_checker (line 414) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 424) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 432) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 449) | def check_inputs(
    method prepare_latents (line 496) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method __call__ (line 515) | def __call__(
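
  A minimal usage sketch for StableDiffusionPipeline; the repo id is an assumption:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")

    # The helpers above (enable_model_cpu_offload, enable_attention_slicing,
    # enable_vae_tiling) trade speed for lower VRAM when needed.
    image = pipe(
        "a photo of an astronaut riding a horse on mars",
        num_inference_steps=50,
        guidance_scale=7.5,
    ).images[0]
    image.save("astronaut.png")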

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
  class AttentionStore (line 71) | class AttentionStore:
    method get_empty_store (line 73) | def get_empty_store():
    method __call__ (line 76) | def __call__(self, attn, is_cross: bool, place_in_unet: str):
    method between_steps (line 86) | def between_steps(self):
    method get_average_attention (line 90) | def get_average_attention(self):
    method aggregate_attention (line 94) | def aggregate_attention(self, from_where: List[str]) -> torch.Tensor:
    method reset (line 106) | def reset(self):
    method __init__ (line 111) | def __init__(self, attn_res=16):
  class AttendExciteCrossAttnProcessor (line 124) | class AttendExciteCrossAttnProcessor:
    method __init__ (line 125) | def __init__(self, attnstore, place_in_unet):
    method __call__ (line 130) | def __call__(self, attn: CrossAttention, hidden_states, encoder_hidden...
  class StableDiffusionAttendAndExcitePipeline (line 162) | class StableDiffusionAttendAndExcitePipeline(DiffusionPipeline):
    method __init__ (line 191) | def __init__(
    method enable_vae_slicing (line 233) | def enable_vae_slicing(self):
    method disable_vae_slicing (line 243) | def disable_vae_slicing(self):
    method enable_sequential_cpu_offload (line 251) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 278) | def _execution_device(self):
    method _encode_prompt (line 296) | def _encode_prompt(
    method run_safety_checker (line 435) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 446) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 455) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 472) | def check_inputs(
    method prepare_latents (line 546) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method _compute_max_attention_per_index (line 564) | def _compute_max_attention_per_index(
    method _aggregate_and_get_max_attention_per_token (line 586) | def _aggregate_and_get_max_attention_per_token(
    method _compute_loss (line 601) | def _compute_loss(max_attention_per_index: List[torch.Tensor]) -> torc...
    method _update_latent (line 608) | def _update_latent(latents: torch.Tensor, loss: torch.Tensor, step_siz...
    method _perform_iterative_refinement_step (line 614) | def _perform_iterative_refinement_step(
    method register_attention_control (line 668) | def register_attention_control(self):
    method get_indices (line 689) | def get_indices(self, prompt: str) -> Dict[str, int]:
    method __call__ (line 697) | def __call__(
  class GaussianSmoothing (line 978) | class GaussianSmoothing(torch.nn.Module):
    method __init__ (line 991) | def __init__(
    method forward (line 1032) | def forward(self, input):

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
  class StableDiffusionControlNetPipeline (line 88) | class StableDiffusionControlNetPipeline(DiffusionPipeline):
    method __init__ (line 119) | def __init__(
    method enable_vae_slicing (line 163) | def enable_vae_slicing(self):
    method disable_vae_slicing (line 173) | def disable_vae_slicing(self):
    method enable_sequential_cpu_offload (line 180) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method enable_model_cpu_offload (line 201) | def enable_model_cpu_offload(self, gpu_id=0):
    method _execution_device (line 231) | def _execution_device(self):
    method _encode_prompt (line 249) | def _encode_prompt(
    method run_safety_checker (line 388) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 399) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 408) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 425) | def check_inputs(
    method prepare_image (line 504) | def prepare_image(self, image, width, height, batch_size, num_images_p...
    method prepare_latents (line 535) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method _default_height_width (line 552) | def _default_height_width(self, height, width, image):
    method __call__ (line 576) | def __call__(
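
  A minimal usage sketch for StableDiffusionControlNetPipeline, assuming the public canny ControlNet checkpoint and a precomputed edge map as a placeholder file:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    # The conditioning image (here a canny edge map) steers every denoising step;
    # prepare_image above resizes and normalizes it.
    canny_image = Image.open("canny_edges.png")  # placeholder path
    image = pipe("futuristic office, best quality", image=canny_image, num_inference_steps=20).images[0]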

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_depth2img.py
  function preprocess (line 36) | def preprocess(image):
  class StableDiffusionDepth2ImgPipeline (line 57) | class StableDiffusionDepth2ImgPipeline(DiffusionPipeline):
    method __init__ (line 80) | def __init__(
    method enable_sequential_cpu_offload (line 124) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 143) | def _execution_device(self):
    method _encode_prompt (line 161) | def _encode_prompt(
    method run_safety_checker (line 300) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 311) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 320) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 338) | def check_inputs(
    method get_timesteps (line 379) | def get_timesteps(self, num_inference_steps, strength, device):
    method prepare_latents (line 389) | def prepare_latents(self, image, timestep, batch_size, num_images_per_...
    method prepare_depth_map (line 441) | def prepare_depth_map(self, image, depth_map, batch_size, do_classifie...
    method __call__ (line 483) | def __call__(

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_image_variation.py
  class StableDiffusionImageVariationPipeline (line 35) | class StableDiffusionImageVariationPipeline(DiffusionPipeline):
    method __init__ (line 63) | def __init__(
    method enable_sequential_cpu_offload (line 123) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 142) | def _execution_device(self):
    method _encode_image (line 159) | def _encode_image(self, image, device, num_images_per_prompt, do_class...
    method run_safety_checker (line 185) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 196) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 205) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 222) | def check_inputs(self, image, height, width, callback_steps):
    method prepare_latents (line 245) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method __call__ (line 263) | def __call__(

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py
  function preprocess (line 72) | def preprocess(image):
  class StableDiffusionImg2ImgPipeline (line 93) | class StableDiffusionImg2ImgPipeline(DiffusionPipeline):
    method __init__ (line 123) | def __init__(
    method enable_sequential_cpu_offload (line 213) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method enable_model_cpu_offload (line 239) | def enable_model_cpu_offload(self, gpu_id=0):
    method _execution_device (line 269) | def _execution_device(self):
    method _encode_prompt (line 287) | def _encode_prompt(
    method run_safety_checker (line 426) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 437) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 446) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 463) | def check_inputs(
    method get_timesteps (line 503) | def get_timesteps(self, num_inference_steps, strength, device):
    method prepare_latents (line 512) | def prepare_latents(self, image, timestep, batch_size, num_images_per_...
    method __call__ (line 566) | def __call__(
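
  A minimal usage sketch for StableDiffusionImg2ImgPipeline; the repo id and input path are assumptions:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    init_image = Image.open("sketch.png").convert("RGB").resize((768, 512))  # placeholder

    # strength in (0, 1]: get_timesteps above skips the first (1 - strength) part
    # of the schedule, so lower values preserve more of the input image.
    image = pipe(
        prompt="A fantasy landscape, trending on artstation",
        image=init_image,
        strength=0.75,
        guidance_scale=7.5,
    ).images[0]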

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
  function prepare_mask_and_masked_image (line 36) | def prepare_mask_and_masked_image(image, mask):
  class StableDiffusionInpaintPipeline (line 140) | class StableDiffusionInpaintPipeline(DiffusionPipeline):
    method __init__ (line 169) | def __init__(
    method enable_sequential_cpu_offload (line 260) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method enable_model_cpu_offload (line 286) | def enable_model_cpu_offload(self, gpu_id=0):
    method _execution_device (line 316) | def _execution_device(self):
    method _encode_prompt (line 334) | def _encode_prompt(
    method run_safety_checker (line 473) | def run_safety_checker(self, image, device, dtype):
    method prepare_extra_step_kwargs (line 484) | def prepare_extra_step_kwargs(self, generator, eta):
    method decode_latents (line 502) | def decode_latents(self, latents):
    method check_inputs (line 511) | def check_inputs(
    method prepare_latents (line 559) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method prepare_mask_latents (line 576) | def prepare_mask_latents(
    method __call__ (line 628) | def __call__(
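
  A minimal usage sketch for StableDiffusionInpaintPipeline, assuming the dedicated 9-channel inpainting checkpoint (not the base text-to-image weights) and placeholder files:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    init_image = Image.open("dog.png").convert("RGB").resize((512, 512))   # placeholder
    mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))  # placeholder

    # White mask pixels mark the region to repaint (see prepare_mask_and_masked_image above).
    image = pipe(
        prompt="Face of a yellow cat, high resolution, sitting on a park bench",
        image=init_image,
        mask_image=mask_image,
    ).images[0]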

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint_legacy.py
  function preprocess_image (line 43) | def preprocess_image(image):
  function preprocess_mask (line 53) | def preprocess_mask(mask, scale_factor=8):
  class StableDiffusionInpaintPipelineLegacy (line 84) | class StableDiffusionInpaintPipelineLegacy(DiffusionPipeline):
    method __init__ (line 114) | def __init__(
    method enable_sequential_cpu_offload (line 204) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method enable_model_cpu_offload (line 230) | def enable_model_cpu_offload(self, gpu_id=0):
    method _execution_device (line 260) | def _execution_device(self):
    method _encode_prompt (line 278) | def _encode_prompt(
    method run_safety_checker (line 417) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 428) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 437) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 455) | def check_inputs(
    method get_timesteps (line 496) | def get_timesteps(self, num_inference_steps, strength, device):
    method prepare_latents (line 505) | def prepare_latents(self, image, timestep, batch_size, num_images_per_...
    method __call__ (line 522) | def __call__(

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
  function preprocess (line 42) | def preprocess(image):
  class StableDiffusionInstructPix2PixPipeline (line 63) | class StableDiffusionInstructPix2PixPipeline(DiffusionPipeline):
    method __init__ (line 92) | def __init__(
    method __call__ (line 134) | def __call__(
    method enable_sequential_cpu_offload (line 399) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method enable_model_cpu_offload (line 425) | def enable_model_cpu_offload(self, gpu_id=0):
    method _execution_device (line 455) | def _execution_device(self):
    method _encode_prompt (line 472) | def _encode_prompt(
    method run_safety_checker (line 612) | def run_safety_checker(self, image, device, dtype):
    method prepare_extra_step_kwargs (line 623) | def prepare_extra_step_kwargs(self, generator, eta):
    method decode_latents (line 641) | def decode_latents(self, latents):
    method check_inputs (line 649) | def check_inputs(
    method prepare_latents (line 687) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method prepare_image_latents (line 704) | def prepare_image_latents(
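
  A minimal usage sketch for StableDiffusionInstructPix2PixPipeline, assuming the public "timbrooks/instruct-pix2pix" checkpoint and a placeholder input photo:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInstructPix2PixPipeline

    pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
        "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
    ).to("cuda")

    photo = Image.open("portrait.png").convert("RGB")  # placeholder path

    # image_guidance_scale > 1 keeps the result closer to the input photo; the
    # pipeline runs three-way classifier-free guidance over text and image.
    image = pipe(
        "turn him into a cyborg",
        image=photo,
        num_inference_steps=10,
        image_guidance_scale=1.5,
    ).images[0]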

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_k_diffusion.py
  class ModelWrapper (line 30) | class ModelWrapper:
    method __init__ (line 31) | def __init__(self, model, alphas_cumprod):
    method apply_model (line 35) | def apply_model(self, *args, **kwargs):
  class StableDiffusionKDiffusionPipeline (line 44) | class StableDiffusionKDiffusionPipeline(DiffusionPipeline):
    method __init__ (line 79) | def __init__(
    method set_scheduler (line 119) | def set_scheduler(self, scheduler_type: str):
    method enable_sequential_cpu_offload (line 125) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method enable_model_cpu_offload (line 151) | def enable_model_cpu_offload(self, gpu_id=0):
    method _execution_device (line 181) | def _execution_device(self):
    method _encode_prompt (line 199) | def _encode_prompt(
    method run_safety_checker (line 338) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 349) | def decode_latents(self, latents):
    method check_inputs (line 357) | def check_inputs(self, prompt, height, width, callback_steps):
    method prepare_latents (line 372) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method __call__ (line 385) | def __call__(

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
  function preprocess (line 33) | def preprocess(image):
  class StableDiffusionLatentUpscalePipeline (line 54) | class StableDiffusionLatentUpscalePipeline(DiffusionPipeline):
    method __init__ (line 77) | def __init__(
    method enable_sequential_cpu_offload (line 95) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 114) | def _execution_device(self):
    method _encode_prompt (line 131) | def _encode_prompt(self, prompt, device, do_classifier_free_guidance, ...
    method decode_latents (line 222) | def decode_latents(self, latents):
    method check_inputs (line 230) | def check_inputs(self, prompt, image, callback_steps):
    method prepare_latents (line 268) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method __call__ (line 282) | def __call__(

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
  class StableDiffusionPanoramaPipeline (line 50) | class StableDiffusionPanoramaPipeline(DiffusionPipeline):
    method __init__ (line 83) | def __init__(
    method enable_vae_slicing (line 128) | def enable_vae_slicing(self):
    method disable_vae_slicing (line 138) | def disable_vae_slicing(self):
    method enable_sequential_cpu_offload (line 146) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 173) | def _execution_device(self):
    method _encode_prompt (line 191) | def _encode_prompt(
    method run_safety_checker (line 330) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 341) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 350) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 368) | def check_inputs(
    method prepare_latents (line 416) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method get_views (line 433) | def get_views(self, panorama_height, panorama_width, window_size=64, s...
    method __call__ (line 451) | def __call__(

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_pix2pix_zero.py
  class Pix2PixInversionPipelineOutput (line 53) | class Pix2PixInversionPipelineOutput(BaseOutput):
  function preprocess (line 173) | def preprocess(image):
  function prepare_unet (line 194) | def prepare_unet(unet: UNet2DConditionModel):
  class Pix2PixZeroL2Loss (line 211) | class Pix2PixZeroL2Loss:
    method __init__ (line 212) | def __init__(self):
    method compute_loss (line 215) | def compute_loss(self, predictions, targets):
  class Pix2PixZeroCrossAttnProcessor (line 219) | class Pix2PixZeroCrossAttnProcessor:
    method __init__ (line 223) | def __init__(self, is_pix2pix_zero=False):
    method __call__ (line 228) | def __call__(
  class StableDiffusionPix2PixZeroPipeline (line 274) | class StableDiffusionPix2PixZeroPipeline(DiffusionPipeline):
    method __init__ (line 312) | def __init__(
    method enable_sequential_cpu_offload (line 360) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method enable_model_cpu_offload (line 385) | def enable_model_cpu_offload(self, gpu_id=0):
    method _execution_device (line 411) | def _execution_device(self):
    method _encode_prompt (line 429) | def _encode_prompt(
    method run_safety_checker (line 568) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 579) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 588) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 605) | def check_inputs(
    method prepare_latents (line 637) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method generate_caption (line 655) | def generate_caption(self, images):
    method construct_direction (line 674) | def construct_direction(self, embs_source: torch.Tensor, embs_target: ...
    method get_embeds (line 679) | def get_embeds(self, prompt: List[str], batch_size: int = 16) -> torch...
    method prepare_image_latents (line 698) | def prepare_image_latents(self, image, batch_size, dtype, device, gene...
    method auto_corr_loss (line 733) | def auto_corr_loss(self, hidden_states, generator=None):
    method kl_divergence (line 753) | def kl_divergence(self, hidden_states):
    method __call__ (line 760) | def __call__(
    method invert (line 1037) | def invert(

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
  class CrossAttnStoreProcessor (line 51) | class CrossAttnStoreProcessor:
    method __init__ (line 52) | def __init__(self):
    method __call__ (line 55) | def __call__(
  class StableDiffusionSAGPipeline (line 91) | class StableDiffusionSAGPipeline(DiffusionPipeline):
    method __init__ (line 120) | def __init__(
    method enable_vae_slicing (line 146) | def enable_vae_slicing(self):
    method disable_vae_slicing (line 156) | def disable_vae_slicing(self):
    method enable_sequential_cpu_offload (line 164) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 191) | def _execution_device(self):
    method _encode_prompt (line 209) | def _encode_prompt(
    method run_safety_checker (line 348) | def run_safety_checker(self, image, device, dtype):
    method decode_latents (line 359) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 368) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 386) | def check_inputs(
    method prepare_latents (line 434) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method __call__ (line 453) | def __call__(
    method sag_masking (line 683) | def sag_masking(self, original_latents, attn_map, t, eps):
    method pred_x0 (line 711) | def pred_x0(self, sample, model_output, timestep):
    method pred_epsilon (line 731) | def pred_epsilon(self, sample, model_output, timestep):
  function gaussian_blur_2d (line 751) | def gaussian_blur_2d(img, kernel_size, sigma):

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
  function preprocess (line 32) | def preprocess(image):
  class StableDiffusionUpscalePipeline (line 53) | class StableDiffusionUpscalePipeline(DiffusionPipeline):
    method __init__ (line 79) | def __init__(
    method enable_sequential_cpu_offload (line 117) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 136) | def _execution_device(self):
    method _encode_prompt (line 154) | def _encode_prompt(
    method prepare_extra_step_kwargs (line 293) | def prepare_extra_step_kwargs(self, generator, eta):
    method decode_latents (line 311) | def decode_latents(self, latents):
    method check_inputs (line 319) | def check_inputs(self, prompt, image, noise_level, callback_steps):
    method prepare_latents (line 360) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method __call__ (line 374) | def __call__(
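
  A minimal usage sketch for StableDiffusionUpscalePipeline, assuming the public x4 upscaler checkpoint and a placeholder input:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionUpscalePipeline

    pipe = StableDiffusionUpscalePipeline.from_pretrained(
        "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
    ).to("cuda")

    low_res_img = Image.open("low_res.png").convert("RGB").resize((128, 128))  # placeholder
    upscaled = pipe(prompt="a white cat", image=low_res_img).images[0]  # 4x the input size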

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py
  class StableUnCLIPPipeline (line 50) | class StableUnCLIPPipeline(DiffusionPipeline):
    method __init__ (line 103) | def __init__(
    method enable_vae_slicing (line 140) | def enable_vae_slicing(self):
    method disable_vae_slicing (line 150) | def disable_vae_slicing(self):
    method enable_sequential_cpu_offload (line 157) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 183) | def _execution_device(self):
    method _encode_prior_prompt (line 201) | def _encode_prior_prompt(
    method _encode_prompt (line 303) | def _encode_prompt(
    method decode_latents (line 442) | def decode_latents(self, latents):
    method prepare_prior_extra_step_kwargs (line 451) | def prepare_prior_extra_step_kwargs(self, generator, eta):
    method prepare_extra_step_kwargs (line 469) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 486) | def check_inputs(
    method prepare_latents (line 547) | def prepare_latents(self, shape, dtype, device, generator, latents, sc...
    method noise_image_embeddings (line 558) | def noise_image_embeddings(
    method __call__ (line 605) | def __call__(

FILE: src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py
  class StableUnCLIPImg2ImgPipeline (line 63) | class StableUnCLIPImg2ImgPipeline(DiffusionPipeline):
    method __init__ (line 109) | def __init__(
    method enable_vae_slicing (line 142) | def enable_vae_slicing(self):
    method disable_vae_slicing (line 152) | def disable_vae_slicing(self):
    method enable_sequential_cpu_offload (line 159) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 185) | def _execution_device(self):
    method _encode_prompt (line 203) | def _encode_prompt(
    method _encode_image (line 341) | def _encode_image(
    method decode_latents (line 397) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 406) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 423) | def check_inputs(
    method prepare_latents (line 507) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method noise_image_embeddings (line 525) | def noise_image_embeddings(
    method __call__ (line 572) | def __call__(

FILE: src/diffusers/pipelines/stable_diffusion/safety_checker.py
  function cosine_distance (line 26) | def cosine_distance(image_embeds, text_embeds):
  class StableDiffusionSafetyChecker (line 32) | class StableDiffusionSafetyChecker(PreTrainedModel):
    method __init__ (line 37) | def __init__(self, config: CLIPConfig):
    method forward (line 50) | def forward(self, clip_input, images):
    method forward_onnx (line 99) | def forward_onnx(self, clip_input: torch.FloatTensor, images: torch.Fl...

FILE: src/diffusers/pipelines/stable_diffusion/safety_checker_flax.py
  function jax_cosine_distance (line 25) | def jax_cosine_distance(emb_1, emb_2, eps=1e-12):
  class FlaxStableDiffusionSafetyCheckerModule (line 31) | class FlaxStableDiffusionSafetyCheckerModule(nn.Module):
    method setup (line 35) | def setup(self):
    method __call__ (line 47) | def __call__(self, clip_input):
  class FlaxStableDiffusionSafetyChecker (line 71) | class FlaxStableDiffusionSafetyChecker(FlaxPreTrainedModel):
    method __init__ (line 76) | def __init__(
    method init_weights (line 90) | def init_weights(self, rng: jax.random.KeyArray, input_shape: Tuple, p...
    method __call__ (line 101) | def __call__(

FILE: src/diffusers/pipelines/stable_diffusion/stable_unclip_image_normalizer.py
  class StableUnCLIPImageNormalizer (line 22) | class StableUnCLIPImageNormalizer(ModelMixin, ConfigMixin):
    method __init__ (line 31) | def __init__(
    method scale (line 40) | def scale(self, embeds):
    method unscale (line 44) | def unscale(self, embeds):

FILE: src/diffusers/pipelines/stable_diffusion_safe/__init__.py
  class SafetyConfig (line 13) | class SafetyConfig(object):
  class StableDiffusionSafePipelineOutput (line 45) | class StableDiffusionSafePipelineOutput(BaseOutput):

FILE: src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
  class StableDiffusionPipelineSafe (line 22) | class StableDiffusionPipelineSafe(DiffusionPipeline):
    method __init__ (line 54) | def __init__(
    method safety_concept (line 150) | def safety_concept(self):
    method safety_concept (line 160) | def safety_concept(self, concept):
    method enable_sequential_cpu_offload (line 170) | def enable_sequential_cpu_offload(self):
    method _execution_device (line 189) | def _execution_device(self):
    method _encode_prompt (line 206) | def _encode_prompt(
    method run_safety_checker (line 341) | def run_safety_checker(self, image, device, dtype, enable_safety_guida...
    method decode_latents (line 365) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 374) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 392) | def check_inputs(
    method prepare_latents (line 440) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method perform_safety_guidance (line 457) | def perform_safety_guidance(
    method __call__ (line 500) | def __call__(

FILE: src/diffusers/pipelines/stable_diffusion_safe/safety_checker.py
  function cosine_distance (line 25) | def cosine_distance(image_embeds, text_embeds):
  class SafeStableDiffusionSafetyChecker (line 31) | class SafeStableDiffusionSafetyChecker(PreTrainedModel):
    method __init__ (line 36) | def __init__(self, config: CLIPConfig):
    method forward (line 49) | def forward(self, clip_input, images):
    method forward_onnx (line 88) | def forward_onnx(self, clip_input: torch.FloatTensor, images: torch.Fl...

FILE: src/diffusers/pipelines/stochastic_karras_ve/pipeline_stochastic_karras_ve.py
  class KarrasVePipeline (line 25) | class KarrasVePipeline(DiffusionPipeline):
    method __init__ (line 44) | def __init__(self, unet: UNet2DModel, scheduler: KarrasVeScheduler):
    method __call__ (line 49) | def __call__(

FILE: src/diffusers/pipelines/unclip/pipeline_unclip.py
  class UnCLIPPipeline (line 34) | class UnCLIPPipeline(DiffusionPipeline):
    method __init__ (line 78) | def __init__(
    method prepare_latents (line 106) | def prepare_latents(self, shape, dtype, device, generator, latents, sc...
    method _encode_prompt (line 117) | def _encode_prompt(
    method enable_sequential_cpu_offload (line 208) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 234) | def _execution_device(self):
    method __call__ (line 252) | def __call__(

FILE: src/diffusers/pipelines/unclip/pipeline_unclip_image_variation.py
  class UnCLIPImageVariationPipeline (line 38) | class UnCLIPImageVariationPipeline(DiffusionPipeline):
    method __init__ (line 84) | def __init__(
    method prepare_latents (line 113) | def prepare_latents(self, shape, dtype, device, generator, latents, sc...
    method _encode_prompt (line 124) | def _encode_prompt(self, prompt, device, num_images_per_prompt, do_cla...
    method _encode_image (line 187) | def _encode_image(self, image, device, num_images_per_prompt, image_em...
    method enable_sequential_cpu_offload (line 201) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 227) | def _execution_device(self):
    method __call__ (line 245) | def __call__(

FILE: src/diffusers/pipelines/unclip/text_proj.py
  class UnCLIPTextProjModel (line 22) | class UnCLIPTextProjModel(ModelMixin, ConfigMixin):
    method __init__ (line 31) | def __init__(
    method forward (line 55) | def forward(self, *, image_embeddings, prompt_embeds, text_encoder_hid...

FILE: src/diffusers/pipelines/versatile_diffusion/modeling_text_unet.py
  function get_down_block (line 21) | def get_down_block(
  function get_up_block (line 77) | def get_up_block(
  class UNetFlatConditionModel (line 134) | class UNetFlatConditionModel(ModelMixin, ConfigMixin):
    method __init__ (line 194) | def __init__(
    method attn_processors (line 454) | def attn_processors(self) -> Dict[str, AttnProcessor]:
    method set_attn_processor (line 477) | def set_attn_processor(self, processor: Union[AttnProcessor, Dict[str,...
    method set_attention_slice (line 507) | def set_attention_slice(self, slice_size):
    method _set_gradient_checkpointing (line 572) | def _set_gradient_checkpointing(self, module, value=False):
    method forward (line 576) | def forward(
  class LinearMultiDim (line 747) | class LinearMultiDim(nn.Linear):
    method __init__ (line 748) | def __init__(self, in_features, out_features=None, second_dim=4, *args...
    method forward (line 757) | def forward(self, input_tensor, *args, **kwargs):
  class ResnetBlockFlat (line 766) | class ResnetBlockFlat(nn.Module):
    method __init__ (line 767) | def __init__(
    method forward (line 827) | def forward(self, input_tensor, temb):
  class DownBlockFlat (line 861) | class DownBlockFlat(nn.Module):
    method __init__ (line 862) | def __init__(
    method forward (line 913) | def forward(self, hidden_states, temb=None):
  class CrossAttnDownBlockFlat (line 941) | class CrossAttnDownBlockFlat(nn.Module):
    method __init__ (line 942) | def __init__(
    method forward (line 1028) | def forward(
  class UpBlockFlat (line 1073) | class UpBlockFlat(nn.Module):
    method __init__ (line 1074) | def __init__(
    method forward (line 1121) | def forward(self, hidden_states, res_hidden_states_tuple, temb=None, u...
  class CrossAttnUpBlockFlat (line 1148) | class CrossAttnUpBlockFlat(nn.Module):
    method __init__ (line 1149) | def __init__(
    method forward (line 1231) | def forward(
  class UNetMidBlockFlatCrossAttn (line 1282) | class UNetMidBlockFlatCrossAttn(nn.Module):
    method __init__ (line 1283) | def __init__(
    method forward (line 1367) | def forward(
  class UNetMidBlockFlatSimpleCrossAttn (line 1383) | class UNetMidBlockFlatSimpleCrossAttn(nn.Module):
    method __init__ (line 1384) | def __init__(
    method forward (line 1457) | def forward(

FILE: src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion.py
  class VersatileDiffusionPipeline (line 20) | class VersatileDiffusionPipeline(DiffusionPipeline):
    method __init__ (line 57) | def __init__(
    method image_variation (line 83) | def image_variation(
    method text_to_image (line 199) | def text_to_image(
    method dual_guided (line 311) | def dual_guided(

FILE: src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_dual_guided.py
  class VersatileDiffusionDualGuidedPipeline (line 39) | class VersatileDiffusionDualGuidedPipeline(DiffusionPipeline):
    method __init__ (line 68) | def __init__(
    method remove_unused_weights (line 98) | def remove_unused_weights(self):
    method _convert_to_dual_attention (line 101) | def _convert_to_dual_attention(self):
    method _revert_dual_attention (line 135) | def _revert_dual_attention(self):
    method enable_sequential_cpu_offload (line 148) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 167) | def _execution_device(self):
    method _encode_text_prompt (line 184) | def _encode_text_prompt(self, prompt, device, num_images_per_prompt, d...
    method _encode_image_prompt (line 275) | def _encode_image_prompt(self, prompt, device, num_images_per_prompt, ...
    method decode_latents (line 331) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 340) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 357) | def check_inputs(self, prompt, image, height, width, callback_steps):
    method prepare_latents (line 375) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method set_transformer_params (line 392) | def set_transformer_params(self, mix_ratio: float = 0.5, condition_typ...
    method __call__ (line 406) | def __call__(

FILE: src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_image_variation.py
  class VersatileDiffusionImageVariationPipeline (line 33) | class VersatileDiffusionImageVariationPipeline(DiffusionPipeline):
    method __init__ (line 57) | def __init__(
    method enable_sequential_cpu_offload (line 75) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 94) | def _execution_device(self):
    method _encode_prompt (line 111) | def _encode_prompt(self, prompt, device, num_images_per_prompt, do_cla...
    method decode_latents (line 191) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 200) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 218) | def check_inputs(self, image, height, width, callback_steps):
    method prepare_latents (line 241) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method __call__ (line 259) | def __call__(

FILE: src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_text_to_image.py
  class VersatileDiffusionTextToImagePipeline (line 32) | class VersatileDiffusionTextToImagePipeline(DiffusionPipeline):
    method __init__ (line 60) | def __init__(
    method _swap_unet_attention_blocks (line 83) | def _swap_unet_attention_blocks(self):
    method remove_unused_weights (line 96) | def remove_unused_weights(self):
    method enable_sequential_cpu_offload (line 99) | def enable_sequential_cpu_offload(self, gpu_id=0):
    method _execution_device (line 118) | def _execution_device(self):
    method _encode_prompt (line 135) | def _encode_prompt(self, prompt, device, num_images_per_prompt, do_cla...
    method decode_latents (line 248) | def decode_latents(self, latents):
    method prepare_extra_step_kwargs (line 257) | def prepare_extra_step_kwargs(self, generator, eta):
    method check_inputs (line 275) | def check_inputs(
    method prepare_latents (line 323) | def prepare_latents(self, batch_size, num_channels_latents, height, wi...
    method __call__ (line 341) | def __call__(

FILE: src/diffusers/pipelines/vq_diffusion/pipeline_vq_diffusion.py
  class LearnedClassifierFreeSamplingEmbeddings (line 30) | class LearnedClassifierFreeSamplingEmbeddings(ModelMixin, ConfigMixin):
    method __init__ (line 36) | def __init__(self, learnable: bool, hidden_size: Optional[int] = None,...
  class VQDiffusionPipeline (line 52) | class VQDiffusionPipeline(DiffusionPipeline):
    method __init__ (line 83) | def __init__(
    method _encode_prompt (line 103) | def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_...
    method __call__ (line 167) | def __call__(
    method truncate (line 310) | def truncate(self, log_p_x_0: torch.FloatTensor, truncation_rate: floa...

FILE: src/diffusers/schedulers/scheduling_ddim.py
  class DDIMSchedulerOutput (line 32) | class DDIMSchedulerOutput(BaseOutput):
  function betas_for_alpha_bar (line 50) | def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999) -> torc...
  class DDIMScheduler (line 79) | class DDIMScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 120) | def __init__(
    method scale_model_input (line 163) | def scale_model_input(self, sample: torch.FloatTensor, timestep: Optio...
    method _get_variance (line 177) | def _get_variance(self, timestep, prev_timestep):
    method set_timesteps (line 187) | def set_timesteps(self, num_inference_steps: int, device: Union[str, t...
    method step (line 211) | def step(
    method add_noise (line 329) | def add_noise(
    method get_velocity (line 352) | def get_velocity(
    method __len__ (line 372) | def __len__(self):
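
  A minimal sketch of swapping a loaded pipeline's sampler for DDIMScheduler; from_config reuses the checkpoint's trained beta schedule, which is what keeps schedulers interchangeable:

    from diffusers import DiffusionPipeline, DDIMScheduler

    pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")  # repo id assumed
    pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

    # eta=0.0 gives deterministic DDIM sampling; eta=1.0 recovers DDPM-like noise.
    image = pipe("an astronaut riding a horse", num_inference_steps=50, eta=0.0).images[0]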

FILE: src/diffusers/schedulers/scheduling_ddim_flax.py
  class DDIMSchedulerState (line 36) | class DDIMSchedulerState:
    method create (line 46) | def create(
  class FlaxDDIMSchedulerOutput (line 62) | class FlaxDDIMSchedulerOutput(FlaxSchedulerOutput):
  class FlaxDDIMScheduler (line 66) | class FlaxDDIMScheduler(FlaxSchedulerMixin, ConfigMixin):
    method has_state (line 109) | def has_state(self):
    method __init__ (line 113) | def __init__(
    method create_state (line 127) | def create_state(self, common: Optional[CommonSchedulerState] = None) ...
    method scale_model_input (line 151) | def scale_model_input(
    method set_timesteps (line 165) | def set_timesteps(
    method _get_variance (line 187) | def _get_variance(self, state: DDIMSchedulerState, timestep, prev_time...
    method step (line 199) | def step(
    method add_noise (line 285) | def add_noise(
    method get_velocity (line 294) | def get_velocity(
    method __len__ (line 303) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_ddim_inverse.py
  class DDIMSchedulerOutput (line 31) | class DDIMSchedulerOutput(BaseOutput):
  function betas_for_alpha_bar (line 49) | def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999) -> torc...
  class DDIMInverseScheduler (line 78) | class DDIMInverseScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 117) | def __init__(
    method scale_model_input (line 160) | def scale_model_input(self, sample: torch.FloatTensor, timestep: Optio...
    method set_timesteps (line 174) | def set_timesteps(self, num_inference_steps: int, device: Union[str, t...
    method step (line 198) | def step(
    method __len__ (line 226) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_ddpm.py
  class DDPMSchedulerOutput (line 30) | class DDPMSchedulerOutput(BaseOutput):
  function betas_for_alpha_bar (line 47) | def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999):
  class DDPMScheduler (line 76) | class DDPMScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 112) | def __init__(
    method scale_model_input (line 156) | def scale_model_input(self, sample: torch.FloatTensor, timestep: Optio...
    method set_timesteps (line 170) | def set_timesteps(self, num_inference_steps: int, device: Union[str, t...
    method _get_variance (line 192) | def _get_variance(self, t, predicted_variance=None, variance_type=None):
    method step (line 229) | def step(
    method add_noise (line 323) | def add_noise(
    method get_velocity (line 346) | def get_velocity(
    method __len__ (line 366) | def __len__(self):
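
  A minimal sketch of the raw denoising loop DDPMScheduler drives, assuming an unconditional UNet2DModel checkpoint ("google/ddpm-cat-256" here):

    import torch
    from diffusers import DDPMScheduler, UNet2DModel

    scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
    model = UNet2DModel.from_pretrained("google/ddpm-cat-256")

    scheduler.set_timesteps(50)
    sample = torch.randn(
        1, model.config.in_channels, model.config.sample_size, model.config.sample_size
    )

    for t in scheduler.timesteps:
        with torch.no_grad():
            noise_pred = model(sample, t).sample
        # step() computes the posterior mean plus _get_variance noise and
        # returns x_{t-1} as prev_sample.
        sample = scheduler.step(noise_pred, t, sample).prev_sample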

FILE: src/diffusers/schedulers/scheduling_ddpm_flax.py
  class DDPMSchedulerState (line 36) | class DDPMSchedulerState:
    method create (line 45) | def create(cls, common: CommonSchedulerState, init_noise_sigma: jnp.nd...
  class FlaxDDPMSchedulerOutput (line 50) | class FlaxDDPMSchedulerOutput(FlaxSchedulerOutput):
  class FlaxDDPMScheduler (line 54) | class FlaxDDPMScheduler(FlaxSchedulerMixin, ConfigMixin):
    method has_state (line 92) | def has_state(self):
    method __init__ (line 96) | def __init__(
    method create_state (line 110) | def create_state(self, common: Optional[CommonSchedulerState] = None) ...
    method scale_model_input (line 125) | def scale_model_input(
    method set_timesteps (line 139) | def set_timesteps(
    method _get_variance (line 162) | def _get_variance(self, state: DDPMSchedulerState, t, predicted_varian...
    method step (line 195) | def step(
    method add_noise (line 280) | def add_noise(
    method get_velocity (line 289) | def get_velocity(
    method __len__ (line 298) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_deis_multistep.py
  function betas_for_alpha_bar (line 29) | def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999):
  class DEISMultistepScheduler (line 58) | class DEISMultistepScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 113) | def __init__(
    method set_timesteps (line 174) | def set_timesteps(self, num_inference_steps: int, device: Union[str, t...
    method convert_model_output (line 197) | def convert_model_output(
    method deis_first_order_update (line 247) | def deis_first_order_update(
    method multistep_deis_second_order_update (line 277) | def multistep_deis_second_order_update(
    method multistep_deis_third_order_update (line 319) | def multistep_deis_third_order_update(
    method step (line 376) | def step(
    method scale_model_input (line 444) | def scale_model_input(self, sample: torch.FloatTensor, *args, **kwargs...
    method add_noise (line 457) | def add_noise(
    method __len__ (line 480) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
  function betas_for_alpha_bar (line 28) | def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999):
  class DPMSolverMultistepScheduler (line 57) | class DPMSolverMultistepScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 124) | def __init__(
    method set_timesteps (line 184) | def set_timesteps(self, num_inference_steps: int, device: Union[str, t...
    method convert_model_output (line 207) | def convert_model_output(
    method dpm_solver_first_order_update (line 278) | def dpm_solver_first_order_update(
    method multistep_dpm_solver_second_order_update (line 310) | def multistep_dpm_solver_second_order_update(
    method multistep_dpm_solver_third_order_update (line 369) | def multistep_dpm_solver_third_order_update(
    method step (line 424) | def step(
    method scale_model_input (line 492) | def scale_model_input(self, sample: torch.FloatTensor, *args, **kwargs...
    method add_noise (line 505) | def add_noise(
    method __len__ (line 528) | def __len__(self):
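
NOTE: Because these schedulers all share SchedulerMixin/ConfigMixin, a pipeline's scheduler can be swapped for DPMSolverMultistepScheduler (or any compatible class) via from_config. A sketch; the model id is illustrative:

    from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

    pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    # Rebuild a compatible scheduler from the existing scheduler's config
    pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
    # Multistep solvers typically converge in ~20 steps instead of 50
    image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]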

FILE: src/diffusers/schedulers/scheduling_dpmsolver_multistep_flax.py
  class DPMSolverMultistepSchedulerState (line 35) | class DPMSolverMultistepSchedulerState:
    method create (line 53) | def create(
  class FlaxDPMSolverMultistepSchedulerOutput (line 73) | class FlaxDPMSolverMultistepSchedulerOutput(FlaxSchedulerOutput):
  class FlaxDPMSolverMultistepScheduler (line 77) | class FlaxDPMSolverMultistepScheduler(FlaxSchedulerMixin, ConfigMixin):
    method has_state (line 147) | def has_state(self):
    method __init__ (line 151) | def __init__(
    method create_state (line 170) | def create_state(self, common: Optional[CommonSchedulerState] = None) ...
    method set_timesteps (line 199) | def set_timesteps(
    method convert_model_output (line 236) | def convert_model_output(
    method dpm_solver_first_order_update (line 306) | def dpm_solver_first_order_update(
    method multistep_dpm_solver_second_order_update (line 341) | def multistep_dpm_solver_second_order_update(
    method multistep_dpm_solver_third_order_update (line 401) | def multistep_dpm_solver_third_order_update(
    method step (line 457) | def step(
    method scale_model_input (line 594) | def scale_model_input(
    method add_noise (line 612) | def add_noise(
    method __len__ (line 621) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_dpmsolver_singlestep.py
  function betas_for_alpha_bar (line 28) | def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999):
  class DPMSolverSinglestepScheduler (line 57) | class DPMSolverSinglestepScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 123) | def __init__(
    method get_order_list (line 184) | def get_order_list(self, num_inference_steps: int) -> List[int]:
    method set_timesteps (line 218) | def set_timesteps(self, num_inference_steps: int, device: Union[str, t...
    method convert_model_output (line 240) | def convert_model_output(
    method dpm_solver_first_order_update (line 311) | def dpm_solver_first_order_update(
    method singlestep_dpm_solver_second_order_update (line 343) | def singlestep_dpm_solver_second_order_update(
    method singlestep_dpm_solver_third_order_update (line 404) | def singlestep_dpm_solver_third_order_update(
    method singlestep_dpm_solver_update (line 475) | def singlestep_dpm_solver_update(
    method step (line 512) | def step(
    method scale_model_input (line 568) | def scale_model_input(self, sample: torch.FloatTensor, *args, **kwargs...
    method add_noise (line 581) | def add_noise(
    method __len__ (line 604) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py
  class EulerAncestralDiscreteSchedulerOutput (line 32) | class EulerAncestralDiscreteSchedulerOutput(BaseOutput):
  function betas_for_alpha_bar (line 50) | def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999) -> torc...
  class EulerAncestralDiscreteScheduler (line 79) | class EulerAncestralDiscreteScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 109) | def __init__(
    method scale_model_input (line 149) | def scale_model_input(
    method set_timesteps (line 170) | def set_timesteps(self, num_inference_steps: int, device: Union[str, t...
    method step (line 193) | def step(
    method add_noise (line 282) | def add_noise(
    method __len__ (line 308) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_euler_discrete.py
  class EulerDiscreteSchedulerOutput (line 32) | class EulerDiscreteSchedulerOutput(BaseOutput):
  function betas_for_alpha_bar (line 50) | def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999):
  class EulerDiscreteScheduler (line 79) | class EulerDiscreteScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 112) | def __init__(
    method scale_model_input (line 153) | def scale_model_input(
    method set_timesteps (line 176) | def set_timesteps(self, num_inference_steps: int, device: Union[str, t...
    method step (line 209) | def step(
    method add_noise (line 308) | def add_noise(
    method __len__ (line 334) | def __len__(self):
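
NOTE: For the sigma-based k-diffusion schedulers (Euler, Euler-ancestral, Heun, LMS, KDPM2), scale_model_input is not a no-op: both the initial latents and each model input must be scaled. A sketch assuming a UNet-like `model`:

    import torch
    from diffusers import EulerDiscreteScheduler

    scheduler = EulerDiscreteScheduler(beta_schedule="scaled_linear")
    scheduler.set_timesteps(30)

    # Initial noise must be scaled by the largest sigma
    latents = torch.randn(1, 4, 64, 64) * scheduler.init_noise_sigma

    for t in scheduler.timesteps:
        latent_in = scheduler.scale_model_input(latents, t)  # divides by sqrt(sigma^2 + 1)
        noise_pred = model(latent_in, t).sample              # `model` is assumed
        latents = scheduler.step(noise_pred, t, latents).prev_sample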

FILE: src/diffusers/schedulers/scheduling_heun_discrete.py
  function betas_for_alpha_bar (line 26) | def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999) -> torc...
  class HeunDiscreteScheduler (line 55) | class HeunDiscreteScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 85) | def __init__(
    method index_for_timestep (line 115) | def index_for_timestep(self, timestep):
    method scale_model_input (line 123) | def scale_model_input(
    method set_timesteps (line 142) | def set_timesteps(
    method state_in_first_order (line 186) | def state_in_first_order(self):
    method step (line 189) | def step(
    method add_noise (line 273) | def add_noise(
    method __len__ (line 298) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_ipndm.py
  class IPNDMScheduler (line 25) | class IPNDMScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 44) | def __init__(
    method set_timesteps (line 61) | def set_timesteps(self, num_inference_steps: int, device: Union[str, t...
    method step (line 85) | def step(
    method scale_model_input (line 135) | def scale_model_input(self, sample: torch.FloatTensor, *args, **kwargs...
    method _get_prev_sample (line 148) | def _get_prev_sample(self, sample, timestep_index, prev_timestep_index...
    method __len__ (line 160) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py
  function betas_for_alpha_bar (line 27) | def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999) -> torc...
  class KDPM2AncestralDiscreteScheduler (line 56) | class KDPM2AncestralDiscreteScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 87) | def __init__(
    method index_for_timestep (line 117) | def index_for_timestep(self, timestep):
    method scale_model_input (line 125) | def scale_model_input(
    method set_timesteps (line 148) | def set_timesteps(
    method sigma_to_t (line 211) | def sigma_to_t(self, sigma):
    method state_in_first_order (line 235) | def state_in_first_order(self):
    method step (line 238) | def step(
    method add_noise (line 326) | def add_noise(
    method __len__ (line 351) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_k_dpm_2_discrete.py
  function betas_for_alpha_bar (line 26) | def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999) -> torc...
  class KDPM2DiscreteScheduler (line 55) | class KDPM2DiscreteScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 86) | def __init__(
    method index_for_timestep (line 116) | def index_for_timestep(self, timestep):
    method scale_model_input (line 124) | def scale_model_input(
    method set_timesteps (line 147) | def set_timesteps(
    method sigma_to_t (line 200) | def sigma_to_t(self, sigma):
    method state_in_first_order (line 224) | def state_in_first_order(self):
    method step (line 227) | def step(
    method add_noise (line 307) | def add_noise(
    method __len__ (line 332) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_karras_ve.py
  class KarrasVeOutput (line 28) | class KarrasVeOutput(BaseOutput):
  class KarrasVeScheduler (line 48) | class KarrasVeScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 83) | def __init__(
    method scale_model_input (line 100) | def scale_model_input(self, sample: torch.FloatTensor, timestep: Optio...
    method set_timesteps (line 114) | def set_timesteps(self, num_inference_steps: int, device: Union[str, t...
    method add_noise_to_input (line 135) | def add_noise_to_input(
    method step (line 156) | def step(
    method step_correct (line 194) | def step_correct(
    method add_noise (line 231) | def add_noise(self, original_samples, noise, timesteps):

FILE: src/diffusers/schedulers/scheduling_karras_ve_flax.py
  class KarrasVeSchedulerState (line 29) | class KarrasVeSchedulerState:
    method create (line 36) | def create(cls):
  class FlaxKarrasVeOutput (line 41) | class FlaxKarrasVeOutput(BaseOutput):
  class FlaxKarrasVeScheduler (line 59) | class FlaxKarrasVeScheduler(FlaxSchedulerMixin, ConfigMixin):
    method has_state (line 91) | def has_state(self):
    method __init__ (line 95) | def __init__(
    method create_state (line 106) | def create_state(self):
    method set_timesteps (line 109) | def set_timesteps(
    method add_noise_to_input (line 137) | def add_noise_to_input(
    method step (line 163) | def step(
    method step_correct (line 199) | def step_correct(
    method add_noise (line 236) | def add_noise(self, state: KarrasVeSchedulerState, original_samples, n...

FILE: src/diffusers/schedulers/scheduling_lms_discrete.py
  class LMSDiscreteSchedulerOutput (line 30) | class LMSDiscreteSchedulerOutput(BaseOutput):
  function betas_for_alpha_bar (line 48) | def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999):
  class LMSDiscreteScheduler (line 77) | class LMSDiscreteScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 107) | def __init__(
    method scale_model_input (line 148) | def scale_model_input(
    method get_lms_coefficient (line 169) | def get_lms_coefficient(self, order, t, current_order):
    method set_timesteps (line 191) | def set_timesteps(self, num_inference_steps: int, device: Union[str, t...
    method step (line 217) | def step(
    method add_noise (line 287) | def add_noise(
    method __len__ (line 312) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_lms_discrete_flax.py
  class LMSDiscreteSchedulerState (line 33) | class LMSDiscreteSchedulerState:
    method create (line 46) | def create(
  class FlaxLMSSchedulerOutput (line 53) | class FlaxLMSSchedulerOutput(FlaxSchedulerOutput):
  class FlaxLMSDiscreteScheduler (line 57) | class FlaxLMSDiscreteScheduler(FlaxSchedulerMixin, ConfigMixin):
    method has_state (line 90) | def has_state(self):
    method __init__ (line 94) | def __init__(
    method create_state (line 106) | def create_state(self, common: Optional[CommonSchedulerState] = None) ...
    method scale_model_input (line 123) | def scale_model_input(self, state: LMSDiscreteSchedulerState, sample: ...
    method get_lms_coefficient (line 145) | def get_lms_coefficient(self, state: LMSDiscreteSchedulerState, order,...
    method set_timesteps (line 167) | def set_timesteps(
    method step (line 203) | def step(
    method add_noise (line 268) | def add_noise(
    method __len__ (line 282) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_pndm.py
  function betas_for_alpha_bar (line 28) | def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999):
  class PNDMScheduler (line 57) | class PNDMScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 99) | def __init__(
    method set_timesteps (line 152) | def set_timesteps(self, num_inference_steps: int, device: Union[str, t...
    method step (line 192) | def step(
    method step_prk (line 223) | def step_prk(
    method step_plms (line 278) | def step_plms(
    method scale_model_input (line 345) | def scale_model_input(self, sample: torch.FloatTensor, *args, **kwargs...
    method _get_prev_sample (line 358) | def _get_prev_sample(self, sample, timestep, prev_timestep, model_outp...
    method add_noise (line 401) | def add_noise(
    method __len__ (line 424) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_pndm_flax.py
  class PNDMSchedulerState (line 35) | class PNDMSchedulerState:
    method create (line 53) | def create(
  class FlaxPNDMSchedulerOutput (line 69) | class FlaxPNDMSchedulerOutput(FlaxSchedulerOutput):
  class FlaxPNDMScheduler (line 73) | class FlaxPNDMScheduler(FlaxSchedulerMixin, ConfigMixin):
    method has_state (line 119) | def has_state(self):
    method __init__ (line 123) | def __init__(
    method create_state (line 143) | def create_state(self, common: Optional[CommonSchedulerState] = None) ...
    method set_timesteps (line 167) | def set_timesteps(self, state: PNDMSchedulerState, num_inference_steps...
    method scale_model_input (line 222) | def scale_model_input(
    method step (line 239) | def step(
    method step_prk (line 294) | def step_prk(
    method step_plms (line 362) | def step_plms(
    method _get_prev_sample (line 456) | def _get_prev_sample(self, state: PNDMSchedulerState, sample, timestep...
    method add_noise (line 501) | def add_noise(
    method __len__ (line 510) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_repaint.py
  class RePaintSchedulerOutput (line 28) | class RePaintSchedulerOutput(BaseOutput):
  function betas_for_alpha_bar (line 46) | def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999):
  class RePaintScheduler (line 75) | class RePaintScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 109) | def __init__(
    method scale_model_input (line 153) | def scale_model_input(self, sample: torch.FloatTensor, timestep: Optio...
    method set_timesteps (line 167) | def set_timesteps(
    method _get_variance (line 197) | def _get_variance(self, t):
    method step (line 216) | def step(
    method undo_step (line 303) | def undo_step(self, sample, timestep, generator=None):
    method add_noise (line 320) | def add_noise(
    method __len__ (line 328) | def __len__(self):
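
NOTE: A sketch of the resampling loop this scheduler is built for (mirroring RePaintPipeline): step() conditions on the known pixels via original_image and mask, while undo_step() re-noises the sample whenever the timestep schedule jumps back. `model`, `original_image`, and `mask` are assumed inputs.

    import torch
    from diffusers import RePaintScheduler

    scheduler = RePaintScheduler()
    scheduler.set_timesteps(num_inference_steps=250, jump_length=10, jump_n_sample=10)

    sample = torch.randn(original_image.shape)
    t_last = scheduler.timesteps[0] + 1
    for t in scheduler.timesteps:
        if t < t_last:
            # ordinary denoising step, masked to the known region
            noise_pred = model(sample, t).sample
            sample = scheduler.step(noise_pred, t, sample, original_image, mask).prev_sample
        else:
            # schedule jumped back in time: re-noise the current sample
            sample = scheduler.undo_step(sample, t_last)
        t_last = t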

FILE: src/diffusers/schedulers/scheduling_sde_ve.py
  class SdeVeOutput (line 29) | class SdeVeOutput(BaseOutput):
  class ScoreSdeVeScheduler (line 45) | class ScoreSdeVeScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 72) | def __init__(
    method scale_model_input (line 89) | def scale_model_input(self, sample: torch.FloatTensor, timestep: Optio...
    method set_timesteps (line 103) | def set_timesteps(
    method set_sigmas (line 119) | def set_sigmas(
    method get_adjacent_sigma (line 146) | def get_adjacent_sigma(self, timesteps, t):
    method step_pred (line 153) | def step_pred(
    method step_correct (line 216) | def step_correct(
    method add_noise (line 267) | def add_noise(
    method __len__ (line 280) | def __len__(self):
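
NOTE: This scheduler exposes a predictor-corrector interface instead of a single step(): step_correct() runs score-based (Langevin) corrector updates at a fixed noise level, step_pred() takes one reverse-SDE predictor step. A sketch with an assumed score `model`:

    import torch
    from diffusers import ScoreSdeVeScheduler

    scheduler = ScoreSdeVeScheduler()
    scheduler.set_timesteps(num_inference_steps=1000)
    scheduler.set_sigmas(num_inference_steps=1000)

    sample = torch.randn(1, 3, 256, 256) * scheduler.init_noise_sigma

    for i, t in enumerate(scheduler.timesteps):
        sigma_t = scheduler.sigmas[i] * torch.ones(sample.shape[0])
        # corrector: Langevin step(s) at the current noise level
        model_output = model(sample, sigma_t).sample
        sample = scheduler.step_correct(model_output, sample).prev_sample
        # predictor: reverse-SDE step down to the next noise level
        model_output = model(sample, sigma_t).sample
        out = scheduler.step_pred(model_output, t, sample)
        sample, sample_mean = out.prev_sample, out.prev_sample_mean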

FILE: src/diffusers/schedulers/scheduling_sde_ve_flax.py
  class ScoreSdeVeSchedulerState (line 29) | class ScoreSdeVeSchedulerState:
    method create (line 36) | def create(cls):
  class FlaxSdeVeOutput (line 41) | class FlaxSdeVeOutput(FlaxSchedulerOutput):
  class FlaxScoreSdeVeScheduler (line 59) | class FlaxScoreSdeVeScheduler(FlaxSchedulerMixin, ConfigMixin):
    method has_state (line 84) | def has_state(self):
    method __init__ (line 88) | def __init__(
    method create_state (line 99) | def create_state(self):
    method set_timesteps (line 109) | def set_timesteps(
    method set_sigmas (line 127) | def set_sigmas(
    method get_adjacent_sigma (line 160) | def get_adjacent_sigma(self, state, timesteps, t):
    method step_pred (line 163) | def step_pred(
    method step_correct (line 223) | def step_correct(
    method __len__ (line 275) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_sde_vp.py
  class ScoreSdeVpScheduler (line 27) | class ScoreSdeVpScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 45) | def __init__(self, num_train_timesteps=2000, beta_min=0.1, beta_max=20...
    method set_timesteps (line 50) | def set_timesteps(self, num_inference_steps, device: Union[str, torch....
    method step_pred (line 53) | def step_pred(self, score, x, t, generator=None):
    method __len__ (line 89) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_unclip.py
  class UnCLIPSchedulerOutput (line 29) | class UnCLIPSchedulerOutput(BaseOutput):
  function betas_for_alpha_bar (line 47) | def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999):
  class UnCLIPScheduler (line 76) | class UnCLIPScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 103) | def __init__(
    method scale_model_input (line 130) | def scale_model_input(self, sample: torch.FloatTensor, timestep: Optio...
    method set_timesteps (line 144) | def set_timesteps(self, num_inference_steps: int, device: Union[str, t...
    method _get_variance (line 161) | def _get_variance(self, t, prev_timestep=None, predicted_variance=None...
    method step (line 197) | def step(

FILE: src/diffusers/schedulers/scheduling_unipc_multistep.py
  function betas_for_alpha_bar (line 28) | def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999):
  class UniPCMultistepScheduler (line 57) | class UniPCMultistepScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 126) | def __init__(
    method set_timesteps (line 187) | def set_timesteps(self, num_inference_steps: int, device: Union[str, t...
    method convert_model_output (line 213) | def convert_model_output(
    method multistep_uni_p_bh_update (line 275) | def multistep_uni_p_bh_update(
    method multistep_uni_c_bh_update (line 380) | def multistep_uni_c_bh_update(
    method step (line 486) | def step(
    method scale_model_input (line 570) | def scale_model_input(self, sample: torch.FloatTensor, *args, **kwargs...
    method add_noise (line 583) | def add_noise(
    method __len__ (line 606) | def __len__(self):

FILE: src/diffusers/schedulers/scheduling_utils.py
  class KarrasDiffusionSchedulers (line 32) | class KarrasDiffusionSchedulers(Enum):
  class SchedulerOutput (line 49) | class SchedulerOutput(BaseOutput):
  class SchedulerMixin (line 62) | class SchedulerMixin:
    method from_pretrained (line 77) | def from_pretrained(
    method save_pretrained (line 147) | def save_pretrained(self, save_directory: Union[str, os.PathLike], pus...
    method compatibles (line 159) | def compatibles(self):
    method _get_compatibles (line 169) | def _get_compatibles(cls):
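
NOTE: A sketch of the SchedulerMixin loading surface defined here; the model id is illustrative:

    from diffusers import DDIMScheduler

    scheduler = DDIMScheduler.from_pretrained(
        "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
    )
    # Scheduler classes that accept the same config, i.e. valid drop-in replacements
    print(scheduler.compatibles)
    scheduler.save_pretrained("./my-scheduler")  # writes scheduler_config.json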

FILE: src/diffusers/schedulers/scheduling_utils_flax.py
  class FlaxKarrasDiffusionSchedulers (line 34) | class FlaxKarrasDiffusionSchedulers(Enum):
  class FlaxSchedulerOutput (line 43) | class FlaxSchedulerOutput(BaseOutput):
  class FlaxSchedulerMixin (line 56) | class FlaxSchedulerMixin:
    method from_pretrained (line 72) | def from_pretrained(
    method save_pretrained (line 151) | def save_pretrained(self, save_directory: Union[str, os.PathLike], pus...
    method compatibles (line 163) | def compatibles(self):
    method _get_compatibles (line 173) | def _get_compatibles(cls):
  function broadcast_to_shape_from_left (line 182) | def broadcast_to_shape_from_left(x: jnp.ndarray, shape: Tuple[int]) -> j...
  function betas_for_alpha_bar (line 187) | def betas_for_alpha_bar(num_diffusion_timesteps: int, max_beta=0.999, dt...
  class CommonSchedulerState (line 217) | class CommonSchedulerState:
    method create (line 223) | def create(cls, scheduler):
  function get_sqrt_alpha_prod (line 257) | def get_sqrt_alpha_prod(
  function add_noise_common (line 273) | def add_noise_common(
  function get_velocity_common (line 281) | def get_velocity_common(state: CommonSchedulerState, sample: jnp.ndarray...
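
NOTE: The Flax schedulers are stateless: the scheduler object holds only config, and all mutable buffers (timesteps, sigmas, multistep history) live in the separate *SchedulerState pytree threaded through every call, which keeps sampling jit/scan-friendly. A sketch with the model call stubbed out:

    import jax
    import jax.numpy as jnp
    from diffusers import FlaxDDPMScheduler

    scheduler = FlaxDDPMScheduler(num_train_timesteps=1000)
    state = scheduler.create_state()
    state = scheduler.set_timesteps(state, num_inference_steps=50, shape=(1, 3, 32, 32))

    key = jax.random.PRNGKey(0)
    sample = jax.random.normal(key, (1, 3, 32, 32))
    for t in state.timesteps:
        noise_pred = jnp.zeros_like(sample)  # stand-in for a real model call
        # step() returns the new sample together with the updated state
        out = scheduler.step(state, noise_pred, t, sample, key)
        sample, state = out.prev_sample, out.state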

FILE: src/diffusers/schedulers/scheduling_vq_diffusion.py
  class VQDiffusionSchedulerOutput (line 28) | class VQDiffusionSchedulerOutput(BaseOutput):
  function index_to_log_onehot (line 41) | def index_to_log_onehot(x: torch.LongTensor, num_classes: int) -> torch....
  function gumbel_noised (line 62) | def gumbel_noised(logits: torch.FloatTensor, generator: Optional[torch.G...
  function alpha_schedules (line 72) | def alpha_schedules(num_diffusion_timesteps: int, alpha_cum_start=0.9999...
  function gamma_schedules (line 88) | def gamma_schedules(num_diffusion_timesteps: int, gamma_cum_start=0.0000...
  class VQDiffusionScheduler (line 106) | class VQDiffusionScheduler(SchedulerMixin, ConfigMixin):
    method __init__ (line 144) | def __init__(
    method set_timesteps (line 190) | def set_timesteps(self, num_inference_steps: int, device: Union[str, t...
    method step (line 212) | def step(
    method q_posterior (line 260) | def q_posterior(self, log_p_x_0, x_t, t):
    method log_Q_t_transitioning_to_known_class (line 379) | def log_Q_t_transitioning_to_known_class(
    method apply_cumulative_transitions (line 484) | def apply_cumulative_transitions(self, q, t):

FILE: src/diffusers/training_utils.py
  function enable_full_determinism (line 12) | def enable_full_determinism(seed: int):
  function set_seed (line 32) | def set_seed(seed: int):
  class EMAModel (line 46) | class EMAModel:
    method __init__ (line 51) | def __init__(
    method from_pretrained (line 133) | def from_pretrained(cls, path, model_cls) -> "EMAModel":
    method save_pretrained (line 142) | def save_pretrained(self, path):
    method get_decay (line 157) | def get_decay(self, optimization_step: int) -> float:
    method step (line 177) | def step(self, parameters: Iterable[torch.nn.Parameter]):
    method copy_to (line 208) | def copy_to(self, parameters: Iterable[torch.nn.Parameter]) -> None:
    method to (line 221) | def to(self, device=None, dtype=None) -> None:
    method state_dict (line 233) | def state_dict(self) -> dict:
    method store (line 252) | def store(self, parameters: Iterable[torch.nn.Parameter]) -> None:
    method restore (line 261) | def restore(self, parameters: Iterable[torch.nn.Parameter]) -> None:
    method load_state_dict (line 279) | def load_state_dict(self, state_dict: dict) -> None:
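
NOTE: A sketch of the EMAModel helpers above inside a training loop; `unet`, `optimizer`, `train_step`, `validate`, and `batches` are assumed:

    from diffusers.training_utils import EMAModel

    ema = EMAModel(unet.parameters(), decay=0.9999)

    for batch in batches:
        loss = train_step(unet, batch)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        ema.step(unet.parameters())   # update the shadow (EMA) weights

    # Evaluate with EMA weights, then restore the raw training weights
    ema.store(unet.parameters())
    ema.copy_to(unet.parameters())
    validate(unet)
    ema.restore(unet.parameters())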

FILE: src/diffusers/utils/__init__.py
  function check_min_version (line 95) | def check_min_version(min_version):

FILE: src/diffusers/utils/accelerate_utils.py
  function apply_forward_hook (line 27) | def apply_forward_hook(method):

FILE: src/diffusers/utils/deprecation_utils.py
  function deprecate (line 8) | def deprecate(*args, take_from: Optional[Union[Dict, Any]] = None, stand...
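
NOTE: deprecate() consumes (name, removal_version, message) triples via *args; with take_from it also pops and returns the deprecated kwarg. A sketch of the call pattern used throughout the schedulers:

    from diffusers.utils import deprecate

    def step(self, *args, **kwargs):
        # Warn about, and retrieve, a renamed keyword argument
        predict_epsilon = deprecate(
            "predict_epsilon", "0.13.0",
            "Please use `prediction_type='epsilon'` instead.",
            take_from=kwargs,
        )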

FILE: src/diffusers/utils/doc_utils.py
  function replace_example_docstring (line 20) | def replace_example_docstring(example_docstring):

FILE: src/diffusers/utils/dummy_flax_and_transformers_objects.py
  class FlaxStableDiffusionImg2ImgPipeline (line 5) | class FlaxStableDiffusionImg2ImgPipeline(metaclass=DummyObject):
    method __init__ (line 8) | def __init__(self, *args, **kwargs):
    method from_config (line 12) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 16) | def from_pretrained(cls, *args, **kwargs):
  class FlaxStableDiffusionInpaintPipeline (line 20) | class FlaxStableDiffusionInpaintPipeline(metaclass=DummyObject):
    method __init__ (line 23) | def __init__(self, *args, **kwargs):
    method from_config (line 27) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 31) | def from_pretrained(cls, *args, **kwargs):
  class FlaxStableDiffusionPipeline (line 35) | class FlaxStableDiffusionPipeline(metaclass=DummyObject):
    method __init__ (line 38) | def __init__(self, *args, **kwargs):
    method from_config (line 42) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 46) | def from_pretrained(cls, *args, **kwargs):
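
NOTE: All dummy_* modules below follow one pattern: placeholder classes whose construction, from_config, and from_pretrained raise a missing-backend error, so `import diffusers` succeeds even without the optional dependencies named in the file. A simplified sketch of the mechanism (the real code routes through utils.requires_backends):

    class DummyObject(type):
        # Class-level attribute access on a dummy also fails loudly
        def __getattr__(cls, key):
            raise ImportError(f"{cls.__name__} requires the backends {cls._backends}")

    class FlaxStableDiffusionPipeline(metaclass=DummyObject):
        _backends = ["flax", "transformers"]

        def __init__(self, *args, **kwargs):
            raise ImportError(f"{type(self).__name__} requires the backends {self._backends}")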

FILE: src/diffusers/utils/dummy_flax_objects.py
  class FlaxModelMixin (line 5) | class FlaxModelMixin(metaclass=DummyObject):
    method __init__ (line 8) | def __init__(self, *args, **kwargs):
    method from_config (line 12) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 16) | def from_pretrained(cls, *args, **kwargs):
  class FlaxUNet2DConditionModel (line 20) | class FlaxUNet2DConditionModel(metaclass=DummyObject):
    method __init__ (line 23) | def __init__(self, *args, **kwargs):
    method from_config (line 27) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 31) | def from_pretrained(cls, *args, **kwargs):
  class FlaxAutoencoderKL (line 35) | class FlaxAutoencoderKL(metaclass=DummyObject):
    method __init__ (line 38) | def __init__(self, *args, **kwargs):
    method from_config (line 42) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 46) | def from_pretrained(cls, *args, **kwargs):
  class FlaxDiffusionPipeline (line 50) | class FlaxDiffusionPipeline(metaclass=DummyObject):
    method __init__ (line 53) | def __init__(self, *args, **kwargs):
    method from_config (line 57) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 61) | def from_pretrained(cls, *args, **kwargs):
  class FlaxDDIMScheduler (line 65) | class FlaxDDIMScheduler(metaclass=DummyObject):
    method __init__ (line 68) | def __init__(self, *args, **kwargs):
    method from_config (line 72) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 76) | def from_pretrained(cls, *args, **kwargs):
  class FlaxDDPMScheduler (line 80) | class FlaxDDPMScheduler(metaclass=DummyObject):
    method __init__ (line 83) | def __init__(self, *args, **kwargs):
    method from_config (line 87) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 91) | def from_pretrained(cls, *args, **kwargs):
  class FlaxDPMSolverMultistepScheduler (line 95) | class FlaxDPMSolverMultistepScheduler(metaclass=DummyObject):
    method __init__ (line 98) | def __init__(self, *args, **kwargs):
    method from_config (line 102) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 106) | def from_pretrained(cls, *args, **kwargs):
  class FlaxKarrasVeScheduler (line 110) | class FlaxKarrasVeScheduler(metaclass=DummyObject):
    method __init__ (line 113) | def __init__(self, *args, **kwargs):
    method from_config (line 117) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 121) | def from_pretrained(cls, *args, **kwargs):
  class FlaxLMSDiscreteScheduler (line 125) | class FlaxLMSDiscreteScheduler(metaclass=DummyObject):
    method __init__ (line 128) | def __init__(self, *args, **kwargs):
    method from_config (line 132) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 136) | def from_pretrained(cls, *args, **kwargs):
  class FlaxPNDMScheduler (line 140) | class FlaxPNDMScheduler(metaclass=DummyObject):
    method __init__ (line 143) | def __init__(self, *args, **kwargs):
    method from_config (line 147) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 151) | def from_pretrained(cls, *args, **kwargs):
  class FlaxSchedulerMixin (line 155) | class FlaxSchedulerMixin(metaclass=DummyObject):
    method __init__ (line 158) | def __init__(self, *args, **kwargs):
    method from_config (line 162) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 166) | def from_pretrained(cls, *args, **kwargs):
  class FlaxScoreSdeVeScheduler (line 170) | class FlaxScoreSdeVeScheduler(metaclass=DummyObject):
    method __init__ (line 173) | def __init__(self, *args, **kwargs):
    method from_config (line 177) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 181) | def from_pretrained(cls, *args, **kwargs):

FILE: src/diffusers/utils/dummy_onnx_objects.py
  class OnnxRuntimeModel (line 5) | class OnnxRuntimeModel(metaclass=DummyObject):
    method __init__ (line 8) | def __init__(self, *args, **kwargs):
    method from_config (line 12) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 16) | def from_pretrained(cls, *args, **kwargs):

FILE: src/diffusers/utils/dummy_pt_objects.py
  class AutoencoderKL (line 5) | class AutoencoderKL(metaclass=DummyObject):
    method __init__ (line 8) | def __init__(self, *args, **kwargs):
    method from_config (line 12) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 16) | def from_pretrained(cls, *args, **kwargs):
  class ControlNetModel (line 20) | class ControlNetModel(metaclass=DummyObject):
    method __init__ (line 23) | def __init__(self, *args, **kwargs):
    method from_config (line 27) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 31) | def from_pretrained(cls, *args, **kwargs):
  class ModelMixin (line 35) | class ModelMixin(metaclass=DummyObject):
    method __init__ (line 38) | def __init__(self, *args, **kwargs):
    method from_config (line 42) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 46) | def from_pretrained(cls, *args, **kwargs):
  class PriorTransformer (line 50) | class PriorTransformer(metaclass=DummyObject):
    method __init__ (line 53) | def __init__(self, *args, **kwargs):
    method from_config (line 57) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 61) | def from_pretrained(cls, *args, **kwargs):
  class Transformer2DModel (line 65) | class Transformer2DModel(metaclass=DummyObject):
    method __init__ (line 68) | def __init__(self, *args, **kwargs):
    method from_config (line 72) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 76) | def from_pretrained(cls, *args, **kwargs):
  class UNet1DModel (line 80) | class UNet1DModel(metaclass=DummyObject):
    method __init__ (line 83) | def __init__(self, *args, **kwargs):
    method from_config (line 87) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 91) | def from_pretrained(cls, *args, **kwargs):
  class UNet2DConditionModel (line 95) | class UNet2DConditionModel(metaclass=DummyObject):
    method __init__ (line 98) | def __init__(self, *args, **kwargs):
    method from_config (line 102) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 106) | def from_pretrained(cls, *args, **kwargs):
  class UNet2DModel (line 110) | class UNet2DModel(metaclass=DummyObject):
    method __init__ (line 113) | def __init__(self, *args, **kwargs):
    method from_config (line 117) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 121) | def from_pretrained(cls, *args, **kwargs):
  class VQModel (line 125) | class VQModel(metaclass=DummyObject):
    method __init__ (line 128) | def __init__(self, *args, **kwargs):
    method from_config (line 132) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 136) | def from_pretrained(cls, *args, **kwargs):
  function get_constant_schedule (line 140) | def get_constant_schedule(*args, **kwargs):
  function get_constant_schedule_with_warmup (line 144) | def get_constant_schedule_with_warmup(*args, **kwargs):
  function get_cosine_schedule_with_warmup (line 148) | def get_cosine_schedule_with_warmup(*args, **kwargs):
  function get_cosine_with_hard_restarts_schedule_with_warmup (line 152) | def get_cosine_with_hard_restarts_schedule_with_warmup(*args, **kwargs):
  function get_linear_schedule_with_warmup (line 156) | def get_linear_schedule_with_warmup(*args, **kwargs):
  function get_polynomial_decay_schedule_with_warmup (line 160) | def get_polynomial_decay_schedule_with_warmup(*args, **kwargs):
  function get_scheduler (line 164) | def get_scheduler(*args, **kwargs):
  class AudioPipelineOutput (line 168) | class AudioPipelineOutput(metaclass=DummyObject):
    method __init__ (line 171) | def __init__(self, *args, **kwargs):
    method from_config (line 175) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 179) | def from_pretrained(cls, *args, **kwargs):
  class DanceDiffusionPipeline (line 183) | class DanceDiffusionPipeline(metaclass=DummyObject):
    method __init__ (line 186) | def __init__(self, *args, **kwargs):
    method from_config (line 190) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 194) | def from_pretrained(cls, *args, **kwargs):
  class DDIMPipeline (line 198) | class DDIMPipeline(metaclass=DummyObject):
    method __init__ (line 201) | def __init__(self, *args, **kwargs):
    method from_config (line 205) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 209) | def from_pretrained(cls, *args, **kwargs):
  class DDPMPipeline (line 213) | class DDPMPipeline(metaclass=DummyObject):
    method __init__ (line 216) | def __init__(self, *args, **kwargs):
    method from_config (line 220) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 224) | def from_pretrained(cls, *args, **kwargs):
  class DiffusionPipeline (line 228) | class DiffusionPipeline(metaclass=DummyObject):
    method __init__ (line 231) | def __init__(self, *args, **kwargs):
    method from_config (line 235) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 239) | def from_pretrained(cls, *args, **kwargs):
  class DiTPipeline (line 243) | class DiTPipeline(metaclass=DummyObject):
    method __init__ (line 246) | def __init__(self, *args, **kwargs):
    method from_config (line 250) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 254) | def from_pretrained(cls, *args, **kwargs):
  class ImagePipelineOutput (line 258) | class ImagePipelineOutput(metaclass=DummyObject):
    method __init__ (line 261) | def __init__(self, *args, **kwargs):
    method from_config (line 265) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 269) | def from_pretrained(cls, *args, **kwargs):
  class KarrasVePipeline (line 273) | class KarrasVePipeline(metaclass=DummyObject):
    method __init__ (line 276) | def __init__(self, *args, **kwargs):
    method from_config (line 280) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 284) | def from_pretrained(cls, *args, **kwargs):
  class LDMPipeline (line 288) | class LDMPipeline(metaclass=DummyObject):
    method __init__ (line 291) | def __init__(self, *args, **kwargs):
    method from_config (line 295) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 299) | def from_pretrained(cls, *args, **kwargs):
  class LDMSuperResolutionPipeline (line 303) | class LDMSuperResolutionPipeline(metaclass=DummyObject):
    method __init__ (line 306) | def __init__(self, *args, **kwargs):
    method from_config (line 310) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 314) | def from_pretrained(cls, *args, **kwargs):
  class PNDMPipeline (line 318) | class PNDMPipeline(metaclass=DummyObject):
    method __init__ (line 321) | def __init__(self, *args, **kwargs):
    method from_config (line 325) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 329) | def from_pretrained(cls, *args, **kwargs):
  class RePaintPipeline (line 333) | class RePaintPipeline(metaclass=DummyObject):
    method __init__ (line 336) | def __init__(self, *args, **kwargs):
    method from_config (line 340) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 344) | def from_pretrained(cls, *args, **kwargs):
  class ScoreSdeVePipeline (line 348) | class ScoreSdeVePipeline(metaclass=DummyObject):
    method __init__ (line 351) | def __init__(self, *args, **kwargs):
    method from_config (line 355) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 359) | def from_pretrained(cls, *args, **kwargs):
  class DDIMInverseScheduler (line 363) | class DDIMInverseScheduler(metaclass=DummyObject):
    method __init__ (line 366) | def __init__(self, *args, **kwargs):
    method from_config (line 370) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 374) | def from_pretrained(cls, *args, **kwargs):
  class DDIMScheduler (line 378) | class DDIMScheduler(metaclass=DummyObject):
    method __init__ (line 381) | def __init__(self, *args, **kwargs):
    method from_config (line 385) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 389) | def from_pretrained(cls, *args, **kwargs):
  class DDPMScheduler (line 393) | class DDPMScheduler(metaclass=DummyObject):
    method __init__ (line 396) | def __init__(self, *args, **kwargs):
    method from_config (line 400) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 404) | def from_pretrained(cls, *args, **kwargs):
  class DEISMultistepScheduler (line 408) | class DEISMultistepScheduler(metaclass=DummyObject):
    method __init__ (line 411) | def __init__(self, *args, **kwargs):
    method from_config (line 415) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 419) | def from_pretrained(cls, *args, **kwargs):
  class DPMSolverMultistepScheduler (line 423) | class DPMSolverMultistepScheduler(metaclass=DummyObject):
    method __init__ (line 426) | def __init__(self, *args, **kwargs):
    method from_config (line 430) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 434) | def from_pretrained(cls, *args, **kwargs):
  class DPMSolverSinglestepScheduler (line 438) | class DPMSolverSinglestepScheduler(metaclass=DummyObject):
    method __init__ (line 441) | def __init__(self, *args, **kwargs):
    method from_config (line 445) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 449) | def from_pretrained(cls, *args, **kwargs):
  class EulerAncestralDiscreteScheduler (line 453) | class EulerAncestralDiscreteScheduler(metaclass=DummyObject):
    method __init__ (line 456) | def __init__(self, *args, **kwargs):
    method from_config (line 460) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 464) | def from_pretrained(cls, *args, **kwargs):
  class EulerDiscreteScheduler (line 468) | class EulerDiscreteScheduler(metaclass=DummyObject):
    method __init__ (line 471) | def __init__(self, *args, **kwargs):
    method from_config (line 475) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 479) | def from_pretrained(cls, *args, **kwargs):
  class HeunDiscreteScheduler (line 483) | class HeunDiscreteScheduler(metaclass=DummyObject):
    method __init__ (line 486) | def __init__(self, *args, **kwargs):
    method from_config (line 490) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 494) | def from_pretrained(cls, *args, **kwargs):
  class IPNDMScheduler (line 498) | class IPNDMScheduler(metaclass=DummyObject):
    method __init__ (line 501) | def __init__(self, *args, **kwargs):
    method from_config (line 505) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 509) | def from_pretrained(cls, *args, **kwargs):
  class KarrasVeScheduler (line 513) | class KarrasVeScheduler(metaclass=DummyObject):
    method __init__ (line 516) | def __init__(self, *args, **kwargs):
    method from_config (line 520) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 524) | def from_pretrained(cls, *args, **kwargs):
  class KDPM2AncestralDiscreteScheduler (line 528) | class KDPM2AncestralDiscreteScheduler(metaclass=DummyObject):
    method __init__ (line 531) | def __init__(self, *args, **kwargs):
    method from_config (line 535) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 539) | def from_pretrained(cls, *args, **kwargs):
  class KDPM2DiscreteScheduler (line 543) | class KDPM2DiscreteScheduler(metaclass=DummyObject):
    method __init__ (line 546) | def __init__(self, *args, **kwargs):
    method from_config (line 550) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 554) | def from_pretrained(cls, *args, **kwargs):
  class PNDMScheduler (line 558) | class PNDMScheduler(metaclass=DummyObject):
    method __init__ (line 561) | def __init__(self, *args, **kwargs):
    method from_config (line 565) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 569) | def from_pretrained(cls, *args, **kwargs):
  class RePaintScheduler (line 573) | class RePaintScheduler(metaclass=DummyObject):
    method __init__ (line 576) | def __init__(self, *args, **kwargs):
    method from_config (line 580) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 584) | def from_pretrained(cls, *args, **kwargs):
  class SchedulerMixin (line 588) | class SchedulerMixin(metaclass=DummyObject):
    method __init__ (line 591) | def __init__(self, *args, **kwargs):
    method from_config (line 595) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 599) | def from_pretrained(cls, *args, **kwargs):
  class ScoreSdeVeScheduler (line 603) | class ScoreSdeVeScheduler(metaclass=DummyObject):
    method __init__ (line 606) | def __init__(self, *args, **kwargs):
    method from_config (line 610) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 614) | def from_pretrained(cls, *args, **kwargs):
  class UnCLIPScheduler (line 618) | class UnCLIPScheduler(metaclass=DummyObject):
    method __init__ (line 621) | def __init__(self, *args, **kwargs):
    method from_config (line 625) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 629) | def from_pretrained(cls, *args, **kwargs):
  class UniPCMultistepScheduler (line 633) | class UniPCMultistepScheduler(metaclass=DummyObject):
    method __init__ (line 636) | def __init__(self, *args, **kwargs):
    method from_config (line 640) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 644) | def from_pretrained(cls, *args, **kwargs):
  class VQDiffusionScheduler (line 648) | class VQDiffusionScheduler(metaclass=DummyObject):
    method __init__ (line 651) | def __init__(self, *args, **kwargs):
    method from_config (line 655) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 659) | def from_pretrained(cls, *args, **kwargs):
  class EMAModel (line 663) | class EMAModel(metaclass=DummyObject):
    method __init__ (line 666) | def __init__(self, *args, **kwargs):
    method from_config (line 670) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 674) | def from_pretrained(cls, *args, **kwargs):

FILE: src/diffusers/utils/dummy_torch_and_librosa_objects.py
  class AudioDiffusionPipeline (line 5) | class AudioDiffusionPipeline(metaclass=DummyObject):
    method __init__ (line 8) | def __init__(self, *args, **kwargs):
    method from_config (line 12) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 16) | def from_pretrained(cls, *args, **kwargs):
  class Mel (line 20) | class Mel(metaclass=DummyObject):
    method __init__ (line 23) | def __init__(self, *args, **kwargs):
    method from_config (line 27) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 31) | def from_pretrained(cls, *args, **kwargs):

FILE: src/diffusers/utils/dummy_torch_and_scipy_objects.py
  class LMSDiscreteScheduler (line 5) | class LMSDiscreteScheduler(metaclass=DummyObject):
    method __init__ (line 8) | def __init__(self, *args, **kwargs):
    method from_config (line 12) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 16) | def from_pretrained(cls, *args, **kwargs):

FILE: src/diffusers/utils/dummy_torch_and_transformers_and_k_diffusion_objects.py
  class StableDiffusionKDiffusionPipeline (line 5) | class StableDiffusionKDiffusionPipeline(metaclass=DummyObject):
    method __init__ (line 8) | def __init__(self, *args, **kwargs):
    method from_config (line 12) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 16) | def from_pretrained(cls, *args, **kwargs):

FILE: src/diffusers/utils/dummy_torch_and_transformers_and_onnx_objects.py
  class OnnxStableDiffusionImg2ImgPipeline (line 5) | class OnnxStableDiffusionImg2ImgPipeline(metaclass=DummyObject):
    method __init__ (line 8) | def __init__(self, *args, **kwargs):
    method from_config (line 12) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 16) | def from_pretrained(cls, *args, **kwargs):
  class OnnxStableDiffusionInpaintPipeline (line 20) | class OnnxStableDiffusionInpaintPipeline(metaclass=DummyObject):
    method __init__ (line 23) | def __init__(self, *args, **kwargs):
    method from_config (line 27) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 31) | def from_pretrained(cls, *args, **kwargs):
  class OnnxStableDiffusionInpaintPipelineLegacy (line 35) | class OnnxStableDiffusionInpaintPipelineLegacy(metaclass=DummyObject):
    method __init__ (line 38) | def __init__(self, *args, **kwargs):
    method from_config (line 42) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 46) | def from_pretrained(cls, *args, **kwargs):
  class OnnxStableDiffusionPipeline (line 50) | class OnnxStableDiffusionPipeline(metaclass=DummyObject):
    method __init__ (line 53) | def __init__(self, *args, **kwargs):
    method from_config (line 57) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 61) | def from_pretrained(cls, *args, **kwargs):
  class StableDiffusionOnnxPipeline (line 65) | class StableDiffusionOnnxPipeline(metaclass=DummyObject):
    method __init__ (line 68) | def __init__(self, *args, **kwargs):
    method from_config (line 72) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 76) | def from_pretrained(cls, *args, **kwargs):

FILE: src/diffusers/utils/dummy_torch_and_transformers_objects.py
  class AltDiffusionImg2ImgPipeline (line 5) | class AltDiffusionImg2ImgPipeline(metaclass=DummyObject):
    method __init__ (line 8) | def __init__(self, *args, **kwargs):
    method from_config (line 12) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 16) | def from_pretrained(cls, *args, **kwargs):
  class AltDiffusionPipeline (line 20) | class AltDiffusionPipeline(metaclass=DummyObject):
    method __init__ (line 23) | def __init__(self, *args, **kwargs):
    method from_config (line 27) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 31) | def from_pretrained(cls, *args, **kwargs):
  class CycleDiffusionPipeline (line 35) | class CycleDiffusionPipeline(metaclass=DummyObject):
    method __init__ (line 38) | def __init__(self, *args, **kwargs):
    method from_config (line 42) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 46) | def from_pretrained(cls, *args, **kwargs):
  class LDMTextToImagePipeline (line 50) | class LDMTextToImagePipeline(metaclass=DummyObject):
    method __init__ (line 53) | def __init__(self, *args, **kwargs):
    method from_config (line 57) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 61) | def from_pretrained(cls, *args, **kwargs):
  class PaintByExamplePipeline (line 65) | class PaintByExamplePipeline(metaclass=DummyObject):
    method __init__ (line 68) | def __init__(self, *args, **kwargs):
    method from_config (line 72) | def from_config(cls, *args, **kwargs):
    method from_pretrained (line 76) | def from_pretrained(cls, *args, **kwargs):
  class SemanticStableDiffusionPipeline (line 80) | class SemanticStableDiffusionPipeline(metaclass=DummyObject):
    method __init__ (line 83) | def __init__(self, *args, **kwargs):
    method from_config (
Condensed preview — 545 files, each showing path, character count, and a content snippet.
[
  {
    "path": ".github/ISSUE_TEMPLATE/bug-report.yml",
    "chars": 3026,
    "preview": "name: \"\\U0001F41B Bug Report\"\ndescription: Report a bug on diffusers\nlabels: [ \"bug\" ]\nbody:\n  - type: markdown\n    attr"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/config.yml",
    "chars": 232,
    "preview": "contact_links:\n  - name: Blank issue\n    url: https://github.com/huggingface/diffusers/issues/new\n    about: Other\n  - n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "chars": 608,
    "preview": "---\nname: \"\\U0001F680 Feature request\"\nabout: Suggest an idea for this project\ntitle: ''\nlabels: ''\nassignees: ''\n\n---\n\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feedback.md",
    "chars": 299,
    "preview": "---\nname: \"💬 Feedback about API Design\"\nabout: Give feedback about the current API design\ntitle: ''\nlabels: ''\nassignees"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/new-model-addition.yml",
    "chars": 1231,
    "preview": "name: \"\\U0001F31F New model/pipeline/scheduler addition\"\ndescription: Submit a proposal/request to implement a new diffu"
  },
  {
    "path": ".github/actions/setup-miniconda/action.yml",
    "chars": 6721,
    "preview": "name: Set up conda environment for testing\n\ndescription: Sets up miniconda in your ${RUNNER_TEMP} environment and gives "
  },
  {
    "path": ".github/workflows/build_docker_images.yml",
    "chars": 1129,
    "preview": "name: Build Docker images (nightly)\n\non:\n  workflow_dispatch:\n  schedule:\n    - cron: \"0 0 * * *\" # every day at midnigh"
  },
  {
    "path": ".github/workflows/build_documentation.yml",
    "chars": 361,
    "preview": "name: Build documentation\n\non:\n  push:\n    branches:\n      - main\n      - doc-builder*\n      - v*-release\n\njobs:\n   buil"
  },
  {
    "path": ".github/workflows/build_pr_documentation.yml",
    "chars": 425,
    "preview": "name: Build PR Documentation\n\non:\n  pull_request:\n\nconcurrency:\n  group: ${{ github.workflow }}-${{ github.head_ref || g"
  },
  {
    "path": ".github/workflows/delete_doc_comment.yml",
    "chars": 251,
    "preview": "name: Delete dev documentation\n\non:\n  pull_request:\n    types: [ closed ]\n\n\njobs:\n  delete:\n    uses: huggingface/doc-bu"
  },
  {
    "path": ".github/workflows/nightly_tests.yml",
    "chars": 5017,
    "preview": "name: Nightly tests on main\n\non:\n  schedule:\n    - cron: \"0 0 * * *\" # every day at midnight\n\nenv:\n  DIFFUSERS_IS_CI: ye"
  },
  {
    "path": ".github/workflows/pr_quality.yml",
    "chars": 1292,
    "preview": "name: Run code quality checks\n\non:\n  pull_request:\n    branches:\n      - main\n  push:\n    branches:\n      - main\n\nconcur"
  },
  {
    "path": ".github/workflows/pr_tests.yml",
    "chars": 5098,
    "preview": "name: Fast tests for PRs\n\non:\n  pull_request:\n    branches:\n      - main\n\nconcurrency:\n  group: ${{ github.workflow }}-$"
  },
  {
    "path": ".github/workflows/push_tests.yml",
    "chars": 4456,
    "preview": "name: Slow tests on main\n\non:\n  push:\n    branches:\n      - main\n\nenv:\n  DIFFUSERS_IS_CI: yes\n  HF_HOME: /mnt/cache\n  OM"
  },
  {
    "path": ".github/workflows/push_tests_fast.yml",
    "chars": 5015,
    "preview": "name: Slow tests on main\n\non:\n  push:\n    branches:\n      - main\n\nenv:\n  DIFFUSERS_IS_CI: yes\n  HF_HOME: /mnt/cache\n  OM"
  },
  {
    "path": ".github/workflows/stale.yml",
    "chars": 548,
    "preview": "name: Stale Bot\n\non:\n  schedule:\n    - cron: \"0 15 * * *\"\n\njobs:\n  close_stale_issues:\n    name: Close Stale Issues\n    "
  },
  {
    "path": ".github/workflows/typos.yml",
    "chars": 198,
    "preview": "name: Check typos\n\non:\n  workflow_dispatch:\n\njobs:\n  build:\n    runs-on: ubuntu-latest\n\n    steps:\n      - uses: actions"
  },
  {
    "path": ".gitignore",
    "chars": 1876,
    "preview": "# Initially taken from Github's Python gitignore file\n\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$"
  },
  {
    "path": "CITATION.cff",
    "chars": 1045,
    "preview": "cff-version: 1.2.0\ntitle: 'Diffusers: State-of-the-art diffusion models'\nmessage: >-\n  If you use this software, please "
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "chars": 5226,
    "preview": "\n# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nWe as members, contributors, and leaders pledge to make particip"
  },
  {
    "path": "CONTRIBUTING.md",
    "chars": 12381,
    "preview": "<!---\nCopyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Li"
  },
  {
    "path": "LICENSE",
    "chars": 11357,
    "preview": "                                 Apache License\n                           Version 2.0, January 2004\n                   "
  },
  {
    "path": "MANIFEST.in",
    "chars": 67,
    "preview": "include LICENSE\ninclude src/diffusers/utils/model_card_template.md\n"
  },
  {
    "path": "Makefile",
    "chars": 2756,
    "preview": ".PHONY: deps_table_update modified_only_fixup extra_style_checks quality style fixup fix-copies test test-examples\n\n# ma"
  },
  {
    "path": "README.md",
    "chars": 28990,
    "preview": "<p align=\"center\">\n    <br>\n    <img src=\"./docs/source/en/imgs/diffusers_library.jpg\" width=\"400\"/>\n    <br>\n<p>\n<p ali"
  },
  {
    "path": "_typos.toml",
    "chars": 407,
    "preview": "# Files for typos\n# Instruction:  https://github.com/marketplace/actions/typos-action#getting-started\n\n[default.extend-i"
  },
  {
    "path": "docker/diffusers-flax-cpu/Dockerfile",
    "chars": 1274,
    "preview": "FROM ubuntu:20.04\nLABEL maintainer=\"Hugging Face\"\nLABEL repository=\"diffusers\"\n\nENV DEBIAN_FRONTEND=noninteractive\n\nRUN "
  },
  {
    "path": "docker/diffusers-flax-tpu/Dockerfile",
    "chars": 1407,
    "preview": "FROM ubuntu:20.04\nLABEL maintainer=\"Hugging Face\"\nLABEL repository=\"diffusers\"\n\nENV DEBIAN_FRONTEND=noninteractive\n\nRUN "
  },
  {
    "path": "docker/diffusers-onnxruntime-cpu/Dockerfile",
    "chars": 1185,
    "preview": "FROM ubuntu:20.04\nLABEL maintainer=\"Hugging Face\"\nLABEL repository=\"diffusers\"\n\nENV DEBIAN_FRONTEND=noninteractive\n\nRUN "
  },
  {
    "path": "docker/diffusers-onnxruntime-cuda/Dockerfile",
    "chars": 1232,
    "preview": "FROM nvidia/cuda:11.6.2-cudnn8-devel-ubuntu20.04\nLABEL maintainer=\"Hugging Face\"\nLABEL repository=\"diffusers\"\n\nENV DEBIA"
  },
  {
    "path": "docker/diffusers-pytorch-cpu/Dockerfile",
    "chars": 1163,
    "preview": "FROM ubuntu:20.04\nLABEL maintainer=\"Hugging Face\"\nLABEL repository=\"diffusers\"\n\nENV DEBIAN_FRONTEND=noninteractive\n\nRUN "
  },
  {
    "path": "docker/diffusers-pytorch-cuda/Dockerfile",
    "chars": 1198,
    "preview": "FROM nvidia/cuda:11.7.1-cudnn8-runtime-ubuntu20.04\nLABEL maintainer=\"Hugging Face\"\nLABEL repository=\"diffusers\"\n\nENV DEB"
  },
  {
    "path": "docs/README.md",
    "chars": 11085,
    "preview": "<!---\nCopyright 2023- The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"L"
  },
  {
    "path": "docs/TRANSLATING.md",
    "chars": 3267,
    "preview": "### Translating the Diffusers documentation into your language\n\nAs part of our mission to democratize machine learning, "
  },
  {
    "path": "docs/source/en/_toctree.yml",
    "chars": 8027,
    "preview": "- sections:\n  - local: index\n    title: 🧨 Diffusers\n  - local: quicktour\n    title: Quicktour\n  - local: stable_diffusio"
  },
  {
    "path": "docs/source/en/api/configuration.mdx",
    "chars": 975,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/diffusion_pipeline.mdx",
    "chars": 1814,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/experimental/rl.mdx",
    "chars": 608,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/loaders.mdx",
    "chars": 1361,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/logging.mdx",
    "chars": 3402,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/models.mdx",
    "chars": 2515,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/outputs.mdx",
    "chars": 1694,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/alt_diffusion.mdx",
    "chars": 4626,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/audio_diffusion.mdx",
    "chars": 3049,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/cycle_diffusion.mdx",
    "chars": 4548,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/dance_diffusion.mdx",
    "chars": 1430,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/ddim.mdx",
    "chars": 2191,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/ddpm.mdx",
    "chars": 2085,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/dit.mdx",
    "chars": 2638,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/latent_diffusion.mdx",
    "chars": 3138,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/latent_diffusion_uncond.mdx",
    "chars": 2867,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/overview.mdx",
    "chars": 19940,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/paint_by_example.mdx",
    "chars": 3923,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/pndm.mdx",
    "chars": 2700,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/repaint.mdx",
    "chars": 3946,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/score_sde_ve.mdx",
    "chars": 3119,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/semantic_stable_diffusion.mdx",
    "chars": 4269,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_diffusion/attend_and_excite.mdx",
    "chars": 3310,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx",
    "chars": 13191,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_diffusion/depth2img.mdx",
    "chars": 1716,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_diffusion/image_variation.mdx",
    "chars": 1484,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_diffusion/img2img.mdx",
    "chars": 1802,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_diffusion/inpaint.mdx",
    "chars": 1901,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx",
    "chars": 1573,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_diffusion/overview.mdx",
    "chars": 6783,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_diffusion/panorama.mdx",
    "chars": 3139,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_diffusion/pix2pix.mdx",
    "chars": 3263,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_diffusion/pix2pix_zero.mdx",
    "chars": 12326,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_diffusion/self_attention_guidance.mdx",
    "chars": 3496,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_diffusion/text2img.mdx",
    "chars": 2404,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_diffusion/upscale.mdx",
    "chars": 1568,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_diffusion_2.mdx",
    "chars": 8604,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_diffusion_safe.mdx",
    "chars": 5920,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stable_unclip.mdx",
    "chars": 3055,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/stochastic_karras_ve.mdx",
    "chars": 2151,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/unclip.mdx",
    "chars": 2658,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\nLicensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "docs/source/en/api/pipelines/versatile_diffusion.mdx",
    "chars": 4553,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/pipelines/vq_diffusion.mdx",
    "chars": 2808,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/ddim.mdx",
    "chars": 1981,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/ddim_inverse.mdx",
    "chars": 1077,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/ddpm.mdx",
    "chars": 1854,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/deis.mdx",
    "chars": 886,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/dpm_discrete.mdx",
    "chars": 970,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/dpm_discrete_ancestral.mdx",
    "chars": 1012,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/euler.mdx",
    "chars": 1116,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/euler_ancestral.mdx",
    "chars": 1016,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/heun.mdx",
    "chars": 963,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/ipndm.mdx",
    "chars": 880,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/lms_discrete.mdx",
    "chars": 795,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/multistep_dpm_solver.mdx",
    "chars": 918,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/overview.mdx",
    "chars": 6881,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/pndm.mdx",
    "chars": 862,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/repaint.mdx",
    "chars": 1015,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/score_sde_ve.mdx",
    "chars": 800,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/score_sde_vp.mdx",
    "chars": 897,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/singlestep_dpm_solver.mdx",
    "chars": 921,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/stochastic_karras_ve.mdx",
    "chars": 784,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/unipc.mdx",
    "chars": 1104,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/api/schedulers/vq_diffusion.mdx",
    "chars": 751,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/conceptual/contribution.mdx",
    "chars": 13369,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/conceptual/ethical_guidelines.mdx",
    "chars": 4249,
    "preview": "# 🧨 Diffusers’ Ethical Guidelines\r\n\r\n## Preamble\r\n\r\n[Diffusers](https://huggingface.co/docs/diffusers/index) provides pr"
  },
  {
    "path": "docs/source/en/conceptual/philosophy.mdx",
    "chars": 15324,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/index.mdx",
    "chars": 10339,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/installation.mdx",
    "chars": 4750,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/optimization/fp16.mdx",
    "chars": 16899,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/optimization/habana.mdx",
    "chars": 3338,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/optimization/mps.mdx",
    "chars": 3462,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/optimization/onnx.mdx",
    "chars": 1727,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/optimization/open_vino.mdx",
    "chars": 621,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/optimization/torch2.0.mdx",
    "chars": 10977,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/optimization/xformers.mdx",
    "chars": 1849,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/quicktour.mdx",
    "chars": 6925,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/stable_diffusion.mdx",
    "chars": 69739,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.                                                           "
  },
  {
    "path": "docs/source/en/training/dreambooth.mdx",
    "chars": 13407,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/training/lora.mdx",
    "chars": 8756,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/training/overview.mdx",
    "chars": 6142,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/training/text2image.mdx",
    "chars": 5768,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/training/text_inversion.mdx",
    "chars": 6722,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/training/unconditional_training.mdx",
    "chars": 5356,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/tutorials/basic_training.mdx",
    "chars": 17710,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/audio.mdx",
    "chars": 731,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/conditional_image_generation.mdx",
    "chars": 2006,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/configuration.mdx",
    "chars": 829,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/contribute_pipeline.mdx",
    "chars": 7947,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/controlling_generation.mdx",
    "chars": 10646,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/custom_pipeline_examples.mdx",
    "chars": 17676,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/custom_pipeline_overview.mdx",
    "chars": 5381,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/depth2img.mdx",
    "chars": 1501,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/img2img.mdx",
    "chars": 1789,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/inpaint.mdx",
    "chars": 3008,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/kerascv.mdx",
    "chars": 9300,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/loading.mdx",
    "chars": 32327,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/other-modalities.mdx",
    "chars": 1202,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/reproducibility.mdx",
    "chars": 6948,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/reusing_seeds.mdx",
    "chars": 2911,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/rl.mdx",
    "chars": 1354,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/schedulers.mdx",
    "chars": 10642,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/unconditional_image_generation.mdx",
    "chars": 1931,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/en/using-diffusers/using_safetensors",
    "chars": 836,
    "preview": "# What is safetensors ? \n\n[safetensors](https://github.com/huggingface/safetensors) is a different format\nfrom the class"
  },
  {
    "path": "docs/source/en/using-diffusers/using_safetensors.mdx",
    "chars": 3732,
    "preview": "# What is safetensors ? \n\n[safetensors](https://github.com/huggingface/safetensors) is a different format\nfrom the class"
  },
  {
    "path": "docs/source/ko/_toctree.yml",
    "chars": 5447,
    "preview": "- sections:\n  - local: index\n    title: \"🧨 Diffusers\"\n  - local: quicktour\n    title: \"훑어보기\"\n  - local: installation\n   "
  },
  {
    "path": "docs/source/ko/in_translation.mdx",
    "chars": 629,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/ko/index.mdx",
    "chars": 7364,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/ko/installation.mdx",
    "chars": 3420,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "docs/source/ko/quicktour.mdx",
    "chars": 5397,
    "preview": "<!--Copyright 2023 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"Lice"
  },
  {
    "path": "examples/README.md",
    "chars": 6401,
    "preview": "<!---\nCopyright 2023 The HuggingFace Team. All rights reserved.\nLicensed under the Apache License, Version 2.0 (the \"Lic"
  },
  {
    "path": "examples/community/README.md",
    "chars": 53164,
    "preview": "# Community Examples\n\n> **For more information about community pipelines, please have a look at [this issue](https://git"
  },
  {
    "path": "examples/community/bit_diffusion.py",
    "chars": 10824,
    "preview": "from typing import Optional, Tuple, Union\n\nimport torch\nfrom einops import rearrange, reduce\n\nfrom diffusers import DDIM"
  },
  {
    "path": "examples/community/checkpoint_merger.py",
    "chars": 12956,
    "preview": "import glob\nimport os\nfrom typing import Dict, List, Union\n\nimport torch\n\nfrom diffusers.utils import is_safetensors_ava"
  },
  {
    "path": "examples/community/clip_guided_stable_diffusion.py",
    "chars": 14763,
    "preview": "import inspect\nfrom typing import List, Optional, Union\n\nimport torch\nfrom torch import nn\nfrom torch.nn import function"
  },
  {
    "path": "examples/community/composable_stable_diffusion.py",
    "chars": 29779,
    "preview": "# Copyright 2023 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"Lic"
  },
  {
    "path": "examples/community/imagic_stable_diffusion.py",
    "chars": 24622,
    "preview": "\"\"\"\n    modeled after the textual_inversion.py / train_dreambooth.py and the work\n    of justinpinkney here: https://git"
  },
  {
    "path": "examples/community/img2img_inpainting.py",
    "chars": 24318,
    "preview": "import inspect\nfrom typing import Callable, List, Optional, Tuple, Union\n\nimport numpy as np\nimport PIL\nimport torch\nfro"
  },
  {
    "path": "examples/community/interpolate_stable_diffusion.py",
    "chars": 26595,
    "preview": "import inspect\nimport time\nfrom pathlib import Path\nfrom typing import Callable, List, Optional, Union\n\nimport numpy as "
  },
  {
    "path": "examples/community/lpw_stable_diffusion.py",
    "chars": 57203,
    "preview": "import inspect\nimport re\nfrom typing import Callable, List, Optional, Union\n\nimport numpy as np\nimport PIL\nimport torch\n"
  },
  {
    "path": "examples/community/lpw_stable_diffusion_onnx.py",
    "chars": 54543,
    "preview": "import inspect\nimport re\nfrom typing import Callable, List, Optional, Union\n\nimport numpy as np\nimport PIL\nimport torch\n"
  },
  {
    "path": "examples/community/magic_mix.py",
    "chars": 4856,
    "preview": "from typing import Union\n\nimport torch\nfrom PIL import Image\nfrom torchvision import transforms as tfms\nfrom tqdm.auto i"
  },
  {
    "path": "examples/community/multilingual_stable_diffusion.py",
    "chars": 22788,
    "preview": "import inspect\nfrom typing import Callable, List, Optional, Union\n\nimport torch\nfrom transformers import (\n    CLIPFeatu"
  },
  {
    "path": "examples/community/one_step_unet.py",
    "chars": 700,
    "preview": "#!/usr/bin/env python3\nimport torch\n\nfrom diffusers import DiffusionPipeline\n\n\nclass UnetSchedulerOneForwardPipeline(Dif"
  },
  {
    "path": "examples/community/sd_text2img_k_diffusion.py",
    "chars": 22901,
    "preview": "# Copyright 2023 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"Lic"
  },
  {
    "path": "examples/community/seed_resize_stable_diffusion.py",
    "chars": 19064,
    "preview": "\"\"\"\n    modified based on diffusion library from Huggingface: https://github.com/huggingface/diffusers/blob/main/src/dif"
  },
  {
    "path": "examples/community/speech_to_image_diffusion.py",
    "chars": 11626,
    "preview": "import inspect\nfrom typing import Callable, List, Optional, Union\n\nimport torch\nfrom transformers import (\n    CLIPFeatu"
  },
  {
    "path": "examples/community/stable_diffusion_comparison.py",
    "chars": 17277,
    "preview": "from typing import Any, Callable, Dict, List, Optional, Union\n\nimport torch\nfrom transformers import CLIPFeatureExtracto"
  },
  {
    "path": "examples/community/stable_diffusion_mega.py",
    "chars": 10207,
    "preview": "from typing import Any, Callable, Dict, List, Optional, Union\n\nimport PIL.Image\nimport torch\nfrom transformers import CL"
  },
  {
    "path": "examples/community/stable_unclip.py",
    "chars": 12066,
    "preview": "import types\nfrom typing import List, Optional, Tuple, Union\n\nimport torch\nfrom transformers import CLIPTextModelWithPro"
  },
  {
    "path": "examples/community/text_inpainting.py",
    "chars": 16300,
    "preview": "from typing import Callable, List, Optional, Union\n\nimport PIL\nimport torch\nfrom transformers import (\n    CLIPFeatureEx"
  },
  {
    "path": "examples/community/tiled_upscaling.py",
    "chars": 13548,
    "preview": "# Copyright 2023 Peter Willemsen <peter@codebuffet.co>. All rights reserved.\n#\n# Licensed under the Apache License, Vers"
  },
  {
    "path": "examples/community/unclip_image_interpolation.py",
    "chars": 22832,
    "preview": "import inspect\nfrom typing import List, Optional, Union\n\nimport PIL\nimport torch\nfrom torch.nn import functional as F\nfr"
  },
  {
    "path": "examples/community/unclip_text_interpolation.py",
    "chars": 25574,
    "preview": "import inspect\nfrom typing import List, Optional, Tuple, Union\n\nimport torch\nfrom torch.nn import functional as F\nfrom t"
  },
  {
    "path": "examples/community/wildcard_stable_diffusion.py",
    "chars": 21126,
    "preview": "import inspect\nimport os\nimport random\nimport re\nfrom dataclasses import dataclass\nfrom typing import Callable, Dict, Li"
  },
  {
    "path": "examples/conftest.py",
    "chars": 1720,
    "preview": "# Copyright 2023 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"Lic"
  },
  {
    "path": "examples/dreambooth/README.md",
    "chars": 18809,
    "preview": "# DreamBooth training example\n\n[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text2image mode"
  },
  {
    "path": "examples/dreambooth/requirements.txt",
    "chars": 68,
    "preview": "accelerate\ntorchvision\ntransformers>=4.25.1\nftfy\ntensorboard\nJinja2\n"
  },
  {
    "path": "examples/dreambooth/requirements_flax.txt",
    "chars": 74,
    "preview": "transformers>=4.25.1\nflax\noptax\ntorch\ntorchvision\nftfy\ntensorboard\nJinja2\n"
  },
  {
    "path": "examples/dreambooth/train_dreambooth.py",
    "chars": 39695,
    "preview": "#!/usr/bin/env python\n# coding=utf-8\n# Copyright 2023 The HuggingFace Inc. team. All rights reserved.\n#\n# Licensed under"
  },
  {
    "path": "examples/dreambooth/train_dreambooth_flax.py",
    "chars": 27725,
    "preview": "import argparse\nimport hashlib\nimport logging\nimport math\nimport os\nfrom pathlib import Path\nfrom typing import Optional"
  },
  {
    "path": "examples/dreambooth/train_dreambooth_lora.py",
    "chars": 42690,
    "preview": "#!/usr/bin/env python\n# coding=utf-8\n# Copyright 2023 The HuggingFace Inc. team. All rights reserved.\n#\n# Licensed under"
  },
  {
    "path": "examples/inference/README.md",
    "chars": 854,
    "preview": "# Inference Examples\n\n**The inference examples folder is deprecated and will be removed in a future version**.\n**Officia"
  },
  {
    "path": "examples/inference/image_to_image.py",
    "chars": 243,
    "preview": "import warnings\n\nfrom diffusers import StableDiffusionImg2ImgPipeline  # noqa F401\n\n\nwarnings.warn(\n    \"The `image_to_i"
  },
  {
    "path": "examples/inference/inpainting.py",
    "chars": 273,
    "preview": "import warnings\n\nfrom diffusers import StableDiffusionInpaintPipeline as StableDiffusionInpaintPipeline  # noqa F401\n\n\nw"
  },
  {
    "path": "examples/research_projects/README.md",
    "chars": 617,
    "preview": "# Research projects\n\nThis folder contains various research projects using 🧨 Diffusers. \nThey are not really maintained b"
  },
  {
    "path": "examples/research_projects/colossalai/README.md",
    "chars": 5230,
    "preview": "# [DreamBooth](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) by [colossalai](https://github.co"
  },
  {
    "path": "examples/research_projects/colossalai/inference.py",
    "chars": 345,
    "preview": "import torch\n\nfrom diffusers import StableDiffusionPipeline\n\n\nmodel_id = \"path-to-your-trained-model\"\npipe = StableDiffu"
  },
  {
    "path": "examples/research_projects/colossalai/requirement.txt",
    "chars": 64,
    "preview": "diffusers\ntorch\ntorchvision\nftfy\ntensorboard\nJinja2\ntransformers"
  },
  {
    "path": "examples/research_projects/colossalai/train_dreambooth_colossalai.py",
    "chars": 26836,
    "preview": "import argparse\nimport hashlib\nimport math\nimport os\nfrom pathlib import Path\nfrom typing import Optional\n\nimport coloss"
  },
  {
    "path": "examples/research_projects/dreambooth_inpaint/README.md",
    "chars": 4281,
    "preview": "# Dreambooth for the inpainting model\n\nThis script was added by @thedarkzeno .\n\nPlease note that this script is not acti"
  },
  {
    "path": "examples/research_projects/dreambooth_inpaint/requirements.txt",
    "chars": 85,
    "preview": "diffusers==0.9.0\naccelerate\ntorchvision\ntransformers>=4.21.0\nftfy\ntensorboard\nJinja2\n"
  },
  {
    "path": "examples/research_projects/dreambooth_inpaint/train_dreambooth_inpaint.py",
    "chars": 33985,
    "preview": "import argparse\nimport hashlib\nimport itertools\nimport math\nimport os\nimport random\nfrom pathlib import Path\nfrom typing"
  },
  {
    "path": "examples/research_projects/dreambooth_inpaint/train_dreambooth_inpaint_lora.py",
    "chars": 35469,
    "preview": "import argparse\nimport hashlib\nimport math\nimport os\nimport random\nfrom pathlib import Path\nfrom typing import Optional\n"
  },
  {
    "path": "examples/research_projects/intel_opts/README.md",
    "chars": 1030,
    "preview": "## Diffusers examples with Intel optimizations\n\n**This research project is not actively maintained by the diffusers team"
  },
  {
    "path": "examples/research_projects/intel_opts/inference_bf16.py",
    "chars": 1741,
    "preview": "import intel_extension_for_pytorch as ipex\nimport torch\nfrom PIL import Image\n\nfrom diffusers import StableDiffusionPipe"
  },
  {
    "path": "examples/research_projects/intel_opts/textual_inversion/README.md",
    "chars": 3150,
    "preview": "## Textual Inversion fine-tuning example\n\n[Textual inversion](https://arxiv.org/abs/2208.01618) is a method to personali"
  },
  {
    "path": "examples/research_projects/intel_opts/textual_inversion/requirements.txt",
    "chars": 102,
    "preview": "accelerate\ntorchvision\ntransformers>=4.21.0\nftfy\ntensorboard\nJinja2\nintel_extension_for_pytorch>=1.13\n"
  },
  {
    "path": "examples/research_projects/intel_opts/textual_inversion/textual_inversion_bf16.py",
    "chars": 25701,
    "preview": "import argparse\nimport itertools\nimport math\nimport os\nimport random\nfrom pathlib import Path\nfrom typing import Optiona"
  },
  {
    "path": "examples/research_projects/multi_subject_dreambooth/README.md",
    "chars": 12876,
    "preview": "# Multi Subject DreamBooth training\n\n[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text2imag"
  },
  {
    "path": "examples/research_projects/multi_subject_dreambooth/requirements.txt",
    "chars": 67,
    "preview": "accelerate\ntorchvision\ntransformers>=4.25.1\nftfy\ntensorboard\nJinja2"
  },
  {
    "path": "examples/research_projects/multi_subject_dreambooth/train_multi_subject_dreambooth.py",
    "chars": 37235,
    "preview": "import argparse\nimport hashlib\nimport itertools\nimport logging\nimport math\nimport os\nimport warnings\nfrom pathlib import"
  },
  {
    "path": "examples/research_projects/onnxruntime/README.md",
    "chars": 566,
    "preview": "## Diffusers examples with ONNXRuntime optimizations\n\n**This research project is not actively maintained by the diffuser"
  },
  {
    "path": "examples/research_projects/onnxruntime/text_to_image/README.md",
    "chars": 2893,
    "preview": "# Stable Diffusion text-to-image fine-tuning\n\nThe `train_text_to_image.py` script shows how to fine-tune stable diffusio"
  },
  {
    "path": "examples/research_projects/onnxruntime/text_to_image/requirements.txt",
    "chars": 81,
    "preview": "accelerate\ntorchvision\ntransformers>=4.25.1\ndatasets\nftfy\ntensorboard\nmodelcards\n"
  }
]

// ... and 345 more files (download for full content)

About this extraction

This page contains the full source code of the Stability-AI/diffusers GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 545 files (5.6 MB, approximately 1.5M tokens) and includes a symbol index of 4032 functions, classes, methods, constants, and types. Each entry in the file index above records the file's path, its size in characters (chars), and a short preview of its contents. Use the output with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input; you can copy it to your clipboard or download it as a .txt file.
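
As a minimal sketch of how a script might consume this index, assuming the JSON array above has been saved locally as index.json (a hypothetical filename, not part of the extraction itself):

import json

# Load the file index: a JSON array of {"path", "chars", "preview"} objects,
# assumed to have been saved locally as index.json (hypothetical filename).
with open("index.json", encoding="utf-8") as f:
    entries = json.load(f)

# Aggregate size across all indexed files.
total_chars = sum(e["chars"] for e in entries)
print(f"{len(entries)} files, {total_chars} characters indexed")

# Example: the five largest documentation pages, by character count.
docs = [e for e in entries if e["path"].startswith("docs/")]
for e in sorted(docs, key=lambda d: d["chars"], reverse=True)[:5]:
    print(f"{e['chars']:>7}  {e['path']}")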

Extracted by GitExtract, a free GitHub-repo-to-text converter for AI. Built by Nikandr Surkov.