Repository: yzhang2016/video-generation-survey
Branch: main
Commit: dbfe68b103a1
Files: 6
Total size: 805.3 KB

Directory structure:
gitextract_ur3l9zjp/

├── Editing-in-Diffusion.md
├── Multi-modality Generation.md
├── README.md
├── Text-to-Image.md
├── video-generation.md
└── virtual_human.md

================================================
FILE CONTENTS
================================================

================================================
FILE: Editing-in-Diffusion.md
================================================
# Image Editing In Diffusion 

## Text-to-Image
[arxiv 2025.03] Lumina-Image 2.0: A Unified and Efficient Image Generative Framework  [[PDF](https://arxiv.org/abs/2503.21758)]

[arxiv 2025.04] Seedream 3.0 Technical Report  [[PDF](https://arxiv.org/abs/2504.11346),[Page](https://team.doubao.com/zh/tech/seedream3_0)] 

[arxiv 2025.09] SD3.5-Flash: Distribution-Guided Distillation of Generative Flows [[PDF](https://arxiv.org/pdf/2509.21318)]

[arxiv 2025.09]  Seedream 4.0: Toward Next-generation Multimodal Image Generation [[PDF](https://arxiv.org/abs/2509.20427), [Page](https://seed.bytedance.com/zh/seedream4_0)]

[arxiv 2025.12] Ovis-Image Technical Report  [[PDF](https://arxiv.org/abs/2511.22982),[Page](https://github.com/AIDC-AI/Ovis-Image)] ![Code](https://img.shields.io/github/stars/AIDC-AI/Ovis-Image?style=social&label=Star)

[arxiv 2025.12] Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer  [[PDF](https://arxiv.org/abs/2511.22699),[Page](https://tongyi-mai.github.io/Z-Image-blog/)] ![Code](https://img.shields.io/github/stars/Tongyi-MAI/Z-Image?style=social&label=Star)

[arxiv 2026.01] GLM-Image  [[PDF](https://z.ai/blog/glm-image),[Page](https://github.com/zai-org/GLM-Image)] ![Code](https://img.shields.io/github/stars/zai-org/GLM-Image?style=social&label=Star)

[arxiv 2026.02] FireRed-Image-Edit-1.0 Technical Report  [[PDF](https://arxiv.org/abs/2602.13344),[Page](https://github.com/FireRedTeam/FireRed-Image-Edit)] ![Code](https://img.shields.io/github/stars/FireRedTeam/FireRed-Image-Edit?style=social&label=Star)


## Pixel
[arxiv 2025.11]  DeCo: Frequency-Decoupled Pixel Diffusion for End-to-End Image Generation [[PDF](https://arxiv.org/pdf/2511.19365),[Page](https://zehong-ma.github.io/DeCo/)] ![Code](https://img.shields.io/github/stars/Zehong-Ma/DeCo?style=social&label=Star)

[arxiv 2025.11] Back to Basics: Let Denoising Generative Models Denoise  [[PDF](https://arxiv.org/abs/2511.13720),[Page](https://github.com/LTH14/JiT)] ![Code](https://img.shields.io/github/stars/LTH14/JiT?style=social&label=Star)

[arxiv 2025.11]  DiP: Taming Diffusion Models in Pixel Space [[PDF](https://arxiv.org/pdf/2511.18822)]

[arxiv 2025.11] PixelDiT: Pixel Diffusion Transformers for Image Generation  [[PDF](https://arxiv.org/pdf/2511.20645)]



## Editing 

[ICLR 2022; Stanford & CMU] ***SDEdit:*** Guided Image Synthesis and Editing with Stochastic Differential Equations [[PDF](https://arxiv.org/pdf/2108.01073.pdf), [Page](https://sde-image-editing.github.io/)]

[arxiv 22.08; Google] ***Prompt-to-Prompt*** Image Editing with Cross Attention Control [[PDF](https://arxiv.org/abs/2208.01626)]

[arxiv 22.08; Scale AI] ***Direct Inversion***: Optimization-Free Text-Driven Real Image Editing with Diffusion Models [[PDF](https://arxiv.org/pdf/2211.07825)]

[arxiv 22.11; UC Berkeley] ***InstructPix2Pix***: Learning to Follow Image Editing Instructions [[PDF](https://arxiv.org/pdf/2211.09800.pdf), [Page](https://www.timothybrooks.com/instruct-pix2pix)]

[arxiv 2022; Nvidia ] eDiffi: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers \[[PDF](https://arxiv.org/pdf/2211.01324.pdf), Code\]

[arxiv 2022; Google] Imagic: Text-Based Real Image Editing with Diffusion Models \[[PDF](https://arxiv.org/pdf/2210.09276.pdf), Code\]

[arxiv 2022] ***DiffEdit***: Diffusion-based semantic image editing with mask guidance [[Paper](https://openreview.net/forum?id=3lge0p5o-M-)]

[arxiv 2022] ***DiffIT***: Diffusion-based Image Translation Using Disentangled Style and Content Representation [[Paper]](https://openreview.net/pdf?id=Nayau9fwXU)

[arxiv 2022] Dual Diffusion Implicit Bridges for Image-to-image Translation [[Paper]](https://openreview.net/pdf?id=5HLoTvVGDe)

[ICLR 23, Google] Classifier-free Diffusion Guidance [[Paper]](https://arxiv.org/pdf/2207.12598.pdf)

[arxiv 2022] ***EDICT***: Exact Diffusion Inversion via Coupled Transformations \[[PDF](https://arxiv.org/abs/2211.12446)\]  

[arxiv 22.11] ***Paint by Example***: Exemplar-based Image Editing with Diffusion Models [[PDF]](https://arxiv.org/abs/2211.13227)  

[arxiv 2022.10; ByteDance]MagicMix: Semantic Mixing with Diffusion Models [[PDF]](https://arxiv.org/abs/2210.16056)  

[arxiv 2022.12; Microsoft]X-Paste: Revisit Copy-Paste at Scale with CLIP and StableDiffusion\[[PDF](https://arxiv.org/pdf/2212.03863.pdf)\]

[arxiv 2022.12]SINE: SINgle Image Editing with Text-to-Image Diffusion Models \[[PDF](https://arxiv.org/pdf/2212.04489.pdf)\]

[arxiv 2022.12]Uncovering the Disentanglement Capability in Text-to-Image Diffusion Models[[PDF](https://arxiv.org/pdf/2212.08698.pdf)]

[arxiv 2022.12]Optimizing Prompts for Text-to-Image Generation [[PDF](https://arxiv.org/pdf/2212.09611.pdf)]

[arxiv 2023.01]Guiding Text-to-Image Diffusion Model Towards Grounded Generation [[PDF](https://arxiv.org/pdf/2301.05221.pdf), [Page](https://lipurple.github.io/Grounded_Diffusion/)]

[arxiv 2023.02, Adobe]Controlled and Conditional Text to Image Generation with Diffusion Prior [[PDF](https://arxiv.org/abs/2302.11710)]

[arxiv 2023.02]Learning Input-agnostic Manipulation Directions in StyleGAN with Text Guidance [[PDF](https://arxiv.org/abs/2302.13331)]

[arxiv 2023.02]Towards Enhanced Controllability of Diffusion Models[[PDF](https://arxiv.org/pdf/2302.14368.pdf)]

[arxiv 2023.03]X&Fuse: Fusing Visual Information in Text-to-Image Generation [[PDF](https://arxiv.org/abs/2303.01000)]

[arxiv 2023.03]Lformer: Text-to-Image Generation with L-shape Block Parallel Decoding [[PDF](https://arxiv.org/abs/2303.03800)]

[arxiv 2023.03]CoralStyleCLIP: Co-optimized Region and Layer Selection for Image Editing [[PDF](https://arxiv.org/abs/2303.05031)]

[arxiv 2023.03]Erasing Concepts from Diffusion Models [[PDF](https://arxiv.org/abs/2303.07345), [Code](https://github.com/rohitgandikota/erasing)]

[arxiv 2023.03]Editing Implicit Assumptions in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2303.08084), [Page](https://time-diffusion.github.io/)]

[arxiv 2023.03]Localizing Object-level Shape Variations with Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2303.11306), [Page](https://orpatashnik.github.io/local-prompt-mixing/)]

[arxiv 2023.03]SVDiff: Compact Parameter Space for Diffusion Fine-Tuning[[PDF](https://arxiv.org/abs/2303.11305)]

[arxiv 2023.03]Ablating Concepts in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2303.13516), [Page](https://www.cs.cmu.edu/~concept-ablation/)]

[arxiv 2023.03]ReVersion: Diffusion-Based Relation Inversion from Images[[PDF](https://arxiv.org/abs/2303.13495), [Page](https://ziqihuangg.github.io/projects/reversion.html)]

[arxiv 2023.03]MagicFusion: Boosting Text-to-Image Generation Performance by Fusing Diffusion Models [[PDF](https://arxiv.org/abs/2303.13126)]

[arxiv 2023.04]One-shot Unsupervised Domain Adaptation with Personalized Diffusion Models [[PDF](https://arxiv.org/abs/2303.18080)]

[arxiv 2023.04]3D-aware Image Generation using 2D Diffusion Models [[PDF](https://arxiv.org/abs/2303.17905)]

[arxiv 2023.04]Inst-Inpaint: Instructing to Remove Objects with Diffusion Models[[PDF](https://arxiv.org/abs/2304.03246)]

[arxiv 2023.04]Harnessing the Spatial-Temporal Attention of Diffusion Models for High-Fidelity Text-to-Image Synthesis [[PDF](https://t.co/GJNrYFA8wS)]

->[arxiv 2023.04]Expressive Text-to-Image Generation with Rich Text [[PDF](https://arxiv.org/abs/2304.06720), [Page](https://rich-text-to-image.github.io/)]

[arxiv 2023.04]DiffusionRig: Learning Personalized Priors for Facial Appearance Editing [[PDF](https://arxiv.org/abs/2304.06711)]

[arxiv 2023.04]An Edit Friendly DDPM Noise Space: Inversion and Manipulations [[PDF](https://arxiv.org/abs/2304.06140)]

[arxiv 2023.04]Gradient-Free Textual Inversion [[PDF](https://arxiv.org/abs/2304.05818)]

[arxiv 2023.04]Improving Diffusion Models for Scene Text Editing with Dual Encoders [[PDF](https://arxiv.org/pdf/2304.05568.pdf)]

[arxiv 2023.04]Delta Denoising Score [[PDF](https://arxiv.org/abs/2304.07090), [Page](https://delta-denoising-score.github.io/)]

[arxiv 2023.04]MasaCtrl: Tuning-Free Mutual Self-Attention Control for Consistent Image Synthesis and Editing [[PDF](https://arxiv.org/abs/2304.08465), [Page](https://ljzycmd.github.io/projects/MasaCtrl)]

[arxiv 2023.04]Edit Everything: A Text-Guided Generative System for Images Editing [[PDF](https://arxiv.org/pdf/2304.14006.pdf)]

[arxiv 2023.05]In-Context Learning Unlocked for Diffusion Models [[PDF](https://arxiv.org/pdf/2305.01115.pdf)]

[arxiv 2023.05]ReGeneration Learning of Diffusion Models with Rich Prompts for Zero-Shot Image Translation [[PDF](https://arxiv.org/abs/2305.04651)]

[arxiv 2023.05]RAPHAEL: Text-to-Image Generation via Large Mixture of Diffusion Paths [[PDF](https://arxiv.org/abs/2305.18295)]

[arxiv 2023.05]Controllable Text-to-Image Generation with GPT-4 [[PDF](https://arxiv.org/abs/2305.18583)]

[arxiv 2023.06]Diffusion Self-Guidance for Controllable Image Generation [[PDF](https://arxiv.org/abs/2306.00986), [Page](https://dave.ml/selfguidance/)]

[arxiv 2023.06]SyncDiffusion: Coherent Montage via Synchronized Joint Diffusions [[PDF](https://arxiv.org/abs/2306.05178), [Page](https://syncdiffusion.github.io/)]

[arxiv 2023.06]MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing[[PDF](https://arxiv.org/abs/2306.10012), [Page](https://osu-nlp-group.github.io/MagicBrush/)]

[arxiv 2023.06]Diffusion in Diffusion: Cyclic One-Way Diffusion for Text-Vision-Conditioned Generation [[PDF](https://arxiv.org/abs/2306.08247)]

->[arxiv 2023.06]Paste, Inpaint and Harmonize via Denoising: Subject-Driven Image Editing with Pre-Trained Diffusion Model [[PDF](https://arxiv.org/abs/2306.07596)]

[arxiv 2023.06]Controlling Text-to-Image Diffusion by Orthogonal Finetuning [[PDF](https://arxiv.org/abs/2306.07280)]

[arxiv 2023.06]Localized Text-to-Image Generation for Free via Cross Attention Control[[PDF](https://arxiv.org/abs/2306.14636)]

[arxiv 2023.06]Filtered-Guided Diffusion: Fast Filter Guidance for Black-Box Diffusion Models [[PDF](https://arxiv.org/abs/2306.17141)]

[arxiv 2023.06]PFB-Diff: Progressive Feature Blending Diffusion for Text-driven Image Editing [[PDF](https://arxiv.org/abs/2306.16894)]

[arxiv 2023.06]DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing[[PDF](https://yujun-shi.github.io/projects/dragdiffusion.html)]

[arxiv 2023.07]Counting Guidance for High Fidelity Text-to-Image Synthesis [[PDF](https://arxiv.org/pdf/2306.17567.pdf)]

[arxiv 2023.07]LEDITS: Real Image Editing with DDPM Inversion and Semantic Guidance [[PDF](https://arxiv.org/abs/2307.00522)]

[arxiv 2023.07]DragonDiffusion: Enabling Drag-style Manipulation on Diffusion Models [[PDF](https://arxiv.org/pdf/2307.02421.pdf)]

[arxiv 2023.07]Taming Encoder for Zero Fine-tuning Image Customization with Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2304.02642)]

[arxiv 2023.07]Not All Steps are Created Equal: Selective Diffusion Distillation for Image Manipulation [[PDF](https://arxiv.org/abs/2307.08448)]

[arxiv 2023.07]FABRIC: Personalizing Diffusion Models with Iterative Feedback [[PDF](https://arxiv.org/pdf/2307.10159.pdf)]

[arxiv 2023.07]Understanding the Latent Space of Diffusion Models through the Lens of Riemannian Geometry [[PDF](https://arxiv.org/pdf/2307.12868.pdf)]

[arxiv 2023.07]Interpolating between Images with Diffusion Models [[PDF](https://arxiv.org/abs/2307.12560)]

[arxiv 2023.07]TF-ICON: Diffusion-Based Training-Free Cross-Domain Image Composition [[PDF](https://arxiv.org/abs/2307.12493)]

[arxiv 2023.08]ImageBrush: Learning Visual In-Context Instructions for Exemplar-Based Image Manipulation [[PDF](https://arxiv.org/abs/2308.00906)]

[arxiv 2023.09]Iterative Multi-granular Image Editing using Diffusion Models [[PDF](https://arxiv.org/pdf/2309.00613.pdf)]

[arxiv 2023.09]InstructDiffusion: A Generalist Modeling Interface for Vision Tasks [[PDF](https://arxiv.org/abs/2309.03895)]

[arxiv 2023.09]InstaFlow: One Step is Enough for High-Quality Diffusion-Based Text-to-Image Generation [[PDF](https://arxiv.org/abs/2309.06380),[Page](https://github.com/gnobitab/InstaFlow)]

[arxiv 2023.09]ITI-GEN: Inclusive Text-to-Image Generation [[PDF](https://arxiv.org/pdf/2309.05569.pdf), [Page](https://czhang0528.github.io/iti-gen)]

[arxiv 2023.09]MaskDiffusion: Boosting Text-to-Image Consistency with Conditional Mask [[PDF](https://arxiv.org/pdf/2309.04399.pdf)]

[arxiv 2023.09]FreeU: Free Lunch in Diffusion U-Net [[PDF](https://arxiv.org/abs/2309.11497),[Page](https://chenyangsi.top/FreeU/)]

[arxiv 2023.09]Dream the Impossible: Outlier Imagination with Diffusion Models [[PDF](https://arxiv.org/abs/2309.13415)]

[arxiv 2023.09]Dynamic Prompt Learning: Addressing Cross-Attention Leakage for Text-Based Image Editing [[PDF](https://arxiv.org/abs/2309.15664), [Page](https://github.com/wangkai930418/DPL)]

[arxiv 2023.09]RealFill: Reference-Driven Generation for Authentic Image Completion [[PDF](https://arxiv.org/pdf/2309.16668.pdf), [Page](https://realfill.github.io/)]

[arxiv 2023.10]Aligning Text-to-Image Diffusion Models with Reward Backpropagation [[PDF](https://arxiv.org/abs/2310.03739),[Page](https://align-prop.github.io/)]

[arxiv 2023.10]InstructCV: Instruction-Tuned Text-to-Image Diffusion Models as Vision Generalists [[PDF](https://arxiv.org/abs/2310.00390)]

[arxiv 2023.10]Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis [[PDF](https://arxiv.org/abs/2310.00224)]

[arxiv 2023.10]Guiding Instruction-based Image Editing via Multimodal Large Language Models [[PDF](https://arxiv.org/abs/2309.17102),[Page](https://mllm-ie.github.io/)]

[arxiv 2023.10]Kandinsky: an Improved Text-to-Image Synthesis with Image Prior and Latent Diffusion [[PDF](https://arxiv.org/abs/2310.03502)]

[arxiv 2023.10]JointNet: Extending Text-to-Image Diffusion for Dense Distribution Modeling [[PDF](https://arxiv.org/pdf/2310.06347.pdf)]

[arxiv 2023.10]Uni-paint: A Unified Framework for Multimodal Image Inpainting with Pretrained Diffusion Model [[PDF](https://arxiv.org/abs/2310.07222)]

[arxiv 2023.10]Unsupervised Discovery of Interpretable Directions in h-space of Pre-trained Diffusion Models [[PDF](https://arxiv.org/abs/2310.09912)]

[arxiv 2023.10]Iterative Self-Refinement with GPT-4V(ision) for Automatic Image Design and Generation [[PDF](https://arxiv.org/abs/2310.08541),[Page](https://idea2img.github.io/)]

[arxiv 2023.10]SingleInsert: Inserting New Concepts from a Single Image into Text-to-Image Models for Flexible Editing [[PDF](https://arxiv.org/abs/2310.08094),[Page](https://jarrentwu1031.github.io/SingleInsert-web/)]

[arxiv 2023.10]CycleNet: Rethinking Cycle Consistency in Text-Guided Diffusion for Image Manipulation [[PDF](https://arxiv.org/abs/2310.13165) ]

[arxiv 2023.10]CustomNet: Zero-shot Object Customization with Variable-Viewpoints in Text-to-Image Diffusion Models[[PDF](https://arxiv.org/abs/2310.19784),[Page](https://jiangyzy.github.io/CustomNet/)]

[arxiv 2023.11]LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing [[PDF](https://arxiv.org/abs/2311.00571),[Page](https://llava-vl.github.io/llava-interactive/)]

[arxiv 2023.11]The Blessing of Randomness: SDE Beats ODE in General Diffusion-based Image Editing[[PDF](https://arxiv.org/abs/2311.01410)]

[arxiv 2023.11]FaceComposer: A Unified Model for Versatile Facial Content Creation [[PDF](https://openreview.net/pdf?id=xrK3QA9mLo)]

[arxiv 2023.11]Emu Edit: Precise Image Editing via Recognition and Generation Tasks[[PDF](https://arxiv.org/abs/2311.10089), [Page](https://emu-edit.metademolab.com/)]

[arxiv 2023.11]Fine-grained Appearance Transfer with Diffusion Models [[PDF](https://arxiv.org/abs/2311.16513), [Page](https://github.com/babahui/Fine-grained-Appearance-Transfer)]

[arxiv 2023.11]Text-Driven Image Editing via Learnable Regions [[PDF](https://arxiv.org/abs/2311.16432), [Page](https://yuanze-lin.me/LearnableRegions_page)]

[arxiv 2023.12]Contrastive Denoising Score for Text-guided Latent Diffusion Image Editing [[PDF](https://arxiv.org/abs/2311.18608),[Page](https://hyelinnam.github.io/CDS/)]

[arxiv 2023.12]Smooth Diffusion: Crafting Smooth Latent Spaces in Diffusion Models [[PDF](https://arxiv.org/abs/2312.04410),[Page](https://github.com/SHI-Labs/Smooth-Diffusion)]

[arxiv 2023.12]ControlNet-XS: Designing an Efficient and Effective Architecture for Controlling Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2312.06573)]

[arxiv 2023.12]DiffMorpher: Unleashing the Capability of Diffusion Models for Image Morphing [[PDF](https://arxiv.org/abs/2312.07409)]

[arxiv 2023.12]AdapEdit: Spatio-Temporal Guided Adaptive Editing Algorithm for Text-Based Continuity-Sensitive Image Editing [[PDF](https://arxiv.org/abs/2312.08019)]

[arxiv 2023.12]LIME: Localized Image Editing via Attention Regularization in Diffusion Models [[PDF](https://arxiv.org/abs/2312.09256)]

[arxiv 2023.12]Diffusion Cocktail: Fused Generation from Diffusion Models [[PDF](https://arxiv.org/abs/2312.08873)]

[arxiv 2023.12]Prompting Hard or Hardly Prompting: Prompt Inversion for Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2312.12416)]

[arxiv 2023.12]Fixed-point Inversion for Text-to-image diffusion models [[PDF](https://arxiv.org/abs/2312.12540)]

[arxiv 2023.12]StreamDiffusion: A Pipeline-level Solution for Real-time Interactive Generation [[PDF](https://arxiv.org/abs/2312.12491)]

[arxiv 2023.12]MAG-Edit: Localized Image Editing in Complex Scenarios via Mask-Based Attention-Adjusted Guidance [[PDF](https://arxiv.org/abs/2312.11396),[Page](https://mag-edit.github.io/)]

[arxiv 2023.12]Tuning-Free Inversion-Enhanced Control for Consistent Image Editing [[PDF](https://arxiv.org/abs/2312.14611)]

[arxiv 2023.12]High-Fidelity Diffusion-based Image Editing [[PDF](https://arxiv.org/abs/2312.15707)]

[arxiv 2023.12]ZONE: Zero-Shot Instruction-Guided Local Editing [[PDF](https://arxiv.org/abs/2312.16794)]

[arxiv 2024.1]PIXART-δ: Fast and Controllable Image Generation with Latent Consistency Models [[PDF](https://arxiv.org/abs/2401.05252)]

[arxiv 2024.1]Wavelet-Guided Acceleration of Text Inversion in Diffusion-Based Image Editing [[PDF](https://arxiv.org/abs/2401.09794)]

[arxiv 2024.1]Edit One for All: Interactive Batch Image Editing [[PDF](https://arxiv.org/abs/2401.10219),[Page](https://thaoshibe.github.io/edit-one-for-all/)]

[arxiv 2024.01]UNIMO-G: Unified Image Generation through Multimodal Conditional Diffusion [[PDF](https://arxiv.org/abs/2401.13388)]

[arxiv 2024.01]Text Image Inpainting via Global Structure-Guided Diffusion Models [[PDF](https://arxiv.org/abs/2401.14832)]

[arxiv 2024.01]Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators [[PDF](https://arxiv.org/abs/2401.18085)]

[arxiv 2024.02]Latent Inversion with Timestep-aware Sampling for Training-free Non-rigid Editing [[PDF](https://arxiv.org/abs/2402.08601)]

[arxiv 2024.02]DiLightNet: Fine-grained Lighting Control for Diffusion-based Image Generation[[PDF](https://arxiv.org/abs/2402.11929)]

[arxiv 2024.02]CustomSketching: Sketch Concept Extraction for Sketch-based Image Synthesis and Editing [[PDF](https://arxiv.org/abs/2402.17624)]

[arxiv 2024.03]Diff-Plugin: Revitalizing Details for Diffusion-based Low-level Tasks [[PDF](https://arxiv.org/abs/2403.00644)]

[arxiv 2024.03]LoMOE: Localized Multi-Object Editing via Multi-Diffusion [[PDF](https://arxiv.org/abs/2403.00437)]

[arxiv 2024.03]Towards Understanding Cross and Self-Attention in Stable Diffusion for Text-Guided Image Editing [[PDF](https://arxiv.org/abs/2403.03431)]

[arxiv 2024.03]StableDrag: Stable Dragging for Point-based Image Editing[[PDF](https://arxiv.org/abs/2403.04437)]

[arxiv 2024.03]InstructGIE: Towards Generalizable Image Editing [[PDF](https://arxiv.org/abs/2403.05018)]

[arxiv 2024.03]An Item is Worth a Prompt: Versatile Image Editing with Disentangled Control [[PDF](https://arxiv.org/abs/2403.04880)]

[arxiv 2024.03]Holo-Relighting: Controllable Volumetric Portrait Relighting from a Single Image [[PDF](https://arxiv.org/abs/2403.09632)]

[arxiv 2024.03]Editing Massive Concepts in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2403.13807),[Page](https://silentview.github.io/EMCID/)]

[arxiv 2024.03]Ground-A-Score: Scaling Up the Score Distillation for Multi-Attribute Editing [[PDF](https://arxiv.org/abs/2403.13551)]

[arxiv 2024.03]Magic Fixup: Streamlining Photo Editing by Watching Dynamic Videos [[PDF](https://magic-fixup.github.io/magic_fixup.pdf),[Page](https://magic-fixup.github.io/)]

[arxiv 2024.03]LASPA: Latent Spatial Alignment for Fast Training-free Single Image Editing [[PDF](https://arxiv.org/abs/2403.12585)]

[arxiv 2024.03]ReNoise: Real Image Inversion Through Iterative Noising[[PDF](https://arxiv.org/abs/2403.14602),[Page](https://garibida.github.io/ReNoise-Inversion/)]

[arxiv 2024.03]AID: Attention Interpolation of Text-to-Image Diffusion [[PDF](https://arxiv.org/abs/2403.17924),[Page](https://github.com/QY-H00/attention-interpolation-diffusion)]

[arxiv 2024.03]InstructBrush: Learning Attention-based Instruction Optimization for Image Editing [[PDF](https://arxiv.org/abs/2403.18660)]

[arxiv 2024.03]TextCraftor: Your Text Encoder Can be Image Quality Controller [[PDF](https://arxiv.org/abs/2403.18978)]

[arxiv 2024.04]Drag Your Noise: Interactive Point-based Editing via Diffusion Semantic Propagation [[PDF](https://arxiv.org/abs/2404.01050),[Page](https://github.com/haofengl/DragNoise)]

[arxiv 2024.04]Cross-Attention Makes Inference Cumbersome in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2404.02747)]

[arxiv 2024.04]SwapAnything: Enabling Arbitrary Object Swapping in Personalized Visual Editing [[PDF](https://arxiv.org/abs/2404.05717)]

[arxiv 2024.04]Responsible Visual Editing [[PDF](https://arxiv.org/abs/2404.05580)]

[arxiv 2024.04]ByteEdit: Boost, Comply and Accelerate Generative Image Editing [[PDF](https://arxiv.org/abs/2404.04860)]

[arxiv 2024.04]ShoeModel: Learning to Wear on the User-specified Shoes via Diffusion Model [[PDF](https://arxiv.org/abs/2404.04833)]

[arxiv 2024.04]GoodDrag: Towards Good Practices for Drag Editing with Diffusion Models [[PDF](https://arxiv.org/abs/2404.07206)]

[arxiv 2024.04]HQ-Edit: A High-Quality Dataset for Instruction-based Image Editing [[PDF](https://arxiv.org/abs/2404.09990)]

[arxiv 2024.04]MaxFusion: Plug&Play Multi-Modal Generation in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2404.09977)]

[arxiv 2024.04]Magic Clothing: Controllable Garment-Driven Image Synthesis [[PDF](https://arxiv.org/abs/2404.09512)]

[arxiv 2024.04]Factorized Diffusion: Perceptual Illusions by Noise Decomposition [[PDF](https://arxiv.org/abs/2404.11615),[Page](https://dangeng.github.io/factorized_diffusion/)]

[arxiv 2024.04]TiNO-Edit: Timestep and Noise Optimization for Robust Diffusion-Based Image Editing [[PDF](https://arxiv.org/abs/2404.11120)]

[arxiv 2024.04]Lazy Diffusion Transformer for Interactive Image Editing [[PDF](https://arxiv.org/abs/2404.12382)]

[arxiv 2024.04]FreeDiff: Progressive Frequency Truncation for Image Editing with Diffusion Models [[PDF](https://arxiv.org/abs/2404.11895)]

[arxiv 2024.04]GeoDiffuser: Geometry-Based Image Editing with Diffusion Models [[PDF](https://arxiv.org/abs/2404.14403)]

[arxiv 2024.04]LocInv: Localization-aware Inversion for Text-Guided Image Editing [[PDF](https://arxiv.org/abs/2405.01496)]

[arxiv 2024.05]SonicDiffusion: Audio-Driven Image Generation and Editing with Pretrained Diffusion Models[[PDF](https://arxiv.org/abs/2405.00878)]

[arxiv 2024.05]MMTryon: Multi-Modal Multi-Reference Control for High-Quality Fashion Generation [[PDF](https://arxiv.org/abs/2405.00448)]

[arxiv 2024.05]Streamlining Image Editing with Layered Diffusion Brushes [[PDF](https://arxiv.org/abs/2405.00313)]

[arxiv 2024.05]SOEDiff: Efficient Distillation for Small Object Editing [[PDF](https://arxiv.org/abs/2405.09114)]

[arxiv 2024.05]Analogist: Out-of-the-box Visual In-Context Learning with Image Diffusion Model [[PDF](https://arxiv.org/abs/2405.10316),[Page](https://analogist2d.github.io/)]

[arxiv 2024.05]Face Adapter for Pre-Trained Diffusion Models with Fine-Grained ID and Attribute Control [[PDF](https://arxiv.org/abs/2405.12970),[Page](https://faceadapter.github.io/face-adapter.github.io/)]

[arxiv 2024.05] EmoEdit: Evoking Emotions through Image Manipulation  [[PDF](https://arxiv.org/abs/2405.12661)]

[arxiv 2024.05] ReasonPix2Pix: Instruction Reasoning Dataset for Advanced Image Editing [[PDF](https://arxiv.org/abs/2405.11190)]

[arxiv 2024.05] EditWorld: Simulating World Dynamics for Instruction-Following Image Editing [[PDF](https://arxiv.org/abs/2405.14785),[Page](https://github.com/YangLing0818/EditWorld)]

[arxiv 2024.05]InstaDrag: Lightning Fast and Accurate Drag-based Image Editing Emerging from Videos [[PDF](https://arxiv.org/abs/2405.13722),[Page](https://instadrag.github.io/)]

[arxiv 2024.05] FastDrag: Manipulate Anything in One Step [[PDF](https://arxiv.org/abs/2405.15769)]

[arxiv 2024.05] Enhancing Text-to-Image Editing via Hybrid Mask-Informed Fusion  [[PDF](https://arxiv.org/abs/2405.15313)]


[arxiv 2024.06] DiffUHaul: A Training-Free Method for Object Dragging in Images  [[PDF](https://arxiv.org/abs/2406.01594),[Page](https://omriavrahami.com/diffuhaul/)]

[arxiv 2024.06]  MultiEdits: Simultaneous Multi-Aspect Editing with Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2406.00985),[Page](https://mingzhenhuang.com/projects/MultiEdits.html)]

[arxiv 2024.06] Dreamguider: Improved Training free Diffusion-based Conditional Generation [[PDF](https://arxiv.org/abs/2406.02549),[Page](https://nithin-gk.github.io/dreamguider.github.io/)]

[arxiv 2024.06]Zero-shot Image Editing with Reference Imitation [[PDF](https://arxiv.org/abs/2406.07547),[Page](https://xavierchen34.github.io/MimicBrush-Page)]

[arxiv 2024.07] Image Inpainting Models are Effective Tools for Instruction-guided Image Editing[[PDF](https://arxiv.org/abs/2407.13139)]

[arxiv 2024.07]Text2Place: Affordance-aware Text Guided Human Placement  [[PDF](https://arxiv.org/abs/2407.15446),[Page](https://rishubhpar.github.io/Text2Place/)]

[arxiv 2024.07] RegionDrag: Fast Region-Based Image Editing with Diffusion Models[[PDF](https://arxiv.org/abs/2407.18247),[Page](https://visual-ai.github.io/regiondrag)]

[arxiv 2024.07] FlexiEdit: Frequency-Aware Latent Refinement for Enhanced Non-Rigid Editing [[PDF](https://arxiv.org/abs/2407.17850)]

[arxiv 2024.07] DragText: Rethinking Text Embedding in Point-based Image Editing  [[PDF](https://arxiv.org/abs/2407.17843),[Page](https://micv-yonsei.github.io/dragtext2025/)]

[arxiv 2024.08] MagicFace: Training-free Universal-Style Human Image Customized Synthesis  [[PDF](https://arxiv.org/abs/2408.07433),[Page](https://codegoat24.github.io/MagicFace)]

[arxiv 2024.08] TurboEdit: Instant text-based image editing[[PDF](https://arxiv.org/abs/2408.08332),[Page](https://betterze.github.io/TurboEdit/)]

[arxiv 2024.08] FlexEdit: Marrying Free-Shape Masks to VLLM for Flexible Image Editing  [[PDF](https://arxiv.org/abs/2408.12429),[Page](https://github.com/A-new-b/flex)]

[arxiv 2024.08]  CODE: Confident Ordinary Differential Editing [[PDF](https://arxiv.org/abs/2408.12418),[Page](https://github.com/vita-epfl/CODE/)]

[arxiv 2024.08]  Prompt-Softbox-Prompt: A free-text Embedding Control for Image Editing [[PDF](https://arxiv.org/abs/2408.13623)]

[arxiv 2024.08]  Task-Oriented Diffusion Inversion for High-Fidelity Text-based Editing [[PDF](https://arxiv.org/abs/2408.13395)]

[arxiv 2024.08] DiffAge3D: Diffusion-based 3D-aware Face Aging [[PDF](https://arxiv.org/abs/2408.15922)]

[arxiv 2024.09] Guide-and-Rescale: Self-Guidance Mechanism for Effective Tuning-Free Real Image Editing  [[PDF](https://arxiv.org/abs/2409.01322),[Page](https://github.com/FusionBrainLab/Guide-and-Rescale)]

[arxiv 2024.09] InstantDrag: Improving Interactivity in Drag-based Image Editing  [[PDF](https://arxiv.org/abs/2409.08857),[Page](https://joonghyuk.com/instantdrag-web/)]

[arxiv 2024.09] SimInversion: A Simple Framework for Inversion-Based Text-to-Image Editing  [[PDF](https://arxiv.org/abs/2409.10476)]


[arxiv 2024.09]FreeEdit: Mask-free Reference-based Image Editing with Multi-modal Instruction [[PDF](https://arxiv.org/abs/2409.18071),[Page](https://freeedit.github.io/)]


[arxiv 2024.09] GroupDiff: Diffusion-based Group Portrait Editing  [[PDF](https://arxiv.org/abs/2409.14379)]

[arxiv 2024.10]  Combing Text-based and Drag-based Editing for Precise and Flexible Image Editing [[PDF](https://arxiv.org/abs/2410.03097)]

[arxiv 2024.10] PostEdit: Posterior Sampling for Efficient Zero-Shot Image Editing  [[PDF](https://arxiv.org/abs/2410.04844),[Page](https://github.com/TFNTF/PostEdit)]

[arxiv 2024.10] Context-Aware Full Body Anonymization using Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2410.08551)]

[arxiv 2024.10] BELM: Bidirectional Explicit Linear Multi-step Sampler for Exact Inversion in Diffusion Models  [[PDF](https://arxiv.org/abs/2410.07273)]

[arxiv 2024.10]  Vision-guided and Mask-enhanced Adaptive Denoising for Prompt-based Image Editing [[PDF](https://arxiv.org/html/2410.10496v1)]

[arxiv 2024.10]MagicEraser: Erasing Any Objects via Semantics-Aware Control[[PDF](https://arxiv.org/abs/2410.10207)]

[arxiv 2024.10]  SGEdit: Bridging LLM with Text2Image Generative Model for Scene Graph-based Image Editing [[PDF](https://arxiv.org/abs/2410.11815),[Page](https://bestzzhang.github.io/SGEdit/)]

[arxiv 2024.10] AdaptiveDrag: Semantic-Driven Dragging on Diffusion-Based Image Editing [[PDF](https://arxiv.org/abs/2410.12696),[Page](https://github.com/Calvin11311/AdaptiveDrag)]

[arxiv 2024.10] MambaPainter: Neural Stroke-Based Rendering in a Single Step[[PDF](https://arxiv.org/abs/2410.12524)]

[arxiv 2024.10] ERDDCI: Exact Reversible Diffusion via Dual-Chain Inversion for High-Quality Image Editing  [[PDF](https://arxiv.org/abs/2410.14247)]

[arxiv 2024.10] Schedule Your Edit: A Simple yet Effective Diffusion Noise Schedule for Image Editing  [[PDF](https://arxiv.org/abs/2410.18756)]

[arxiv 2024.11]  DiT4Edit: Diffusion Transformer for Image Editing [[PDF](https://arxiv.org/abs/2411.03286),[Page](https://github.com/fkyyyy/DiT4Edit)]

[arxiv 2024.11] ReEdit: Multimodal Exemplar-Based Image Editing with Diffusion Models  [[PDF](https://arxiv.org/abs/2411.03982)]

[arxiv 2024.11] ProEdit: Simple Progression is All You Need for High-Quality 3D Scene Editing  [[PDF](https://arxiv.org/abs/2411.05006),[Page](https://immortalco.github.io/ProEdit/)]

[arxiv 2024.11] Taming Rectified Flow for Inversion and Editing  [[PDF](https://arxiv.org/abs/2411.04746),[Page](https://github.com/wangjiangshan0725/RF-Solver-Edit)]

[arxiv 2024.11] Multi-Reward as Condition for Instruction-based Image Editing  [[PDF](https://arxiv.org/abs/2411.04713)]

[arxiv 2024.11] ColorEdit: Training-free Image-Guided Color editing with diffusion model  [[PDF](https://arxiv.org/abs/2411.10232)]

[arxiv 2024.11]  Test-time Conditional Text-to-Image Synthesis Using Diffusion Models [[PDF](https://arxiv.org/abs/2411.10800)]

[arxiv 2024.11] HeadRouter: A Training-free Image Editing Framework for MM-DiTs by Adaptively Routing Attention Heads  [[PDF](https://arxiv.org/abs/2411.15034),[Page](https://yuci-gpt.github.io/headrouter/)] ![Code](https://img.shields.io/github/stars/ICTMCG/HeadRouter?style=social&label=Star)

[arxiv 2024.11] Pathways on the Image Manifold: Image Editing via Video Generation  [[PDF](https://arxiv.org/abs/2411.16819)] 

[arxiv 2024.12] SOWing Information: Cultivating Contextual Coherence with MLLMs in Image Generation [[PDF](https://arxiv.org/abs/2411.19182),[Page](https://pyh-129.github.io/SOW/)] ![Code](https://img.shields.io/github/stars/wangruoyu02/COW?style=social&label=Star)

[arxiv 2024.12]  PainterNet: Adaptive Image Inpainting with Actual-Token Attention and Diverse Mask Control [[PDF](https://arxiv.org/abs/2412.01223)]

[arxiv 2024.12] InstantSwap: Fast Customized Concept Swapping across Sharp Shape Differences  [[PDF](https://arxiv.org/abs/2412.01197),[Page](https://instantswap.github.io/)] ![Code](https://img.shields.io/github/stars/chenyangzhu1/InstantSwap?style=social&label=Star)

[arxiv 2024.12] FreeCond: Free Lunch in the Input Conditions of Text-Guided Inpainting  [[PDF](https://arxiv.org/abs/2412.00427),[Page](https://github.com/basiclab/FreeCond)] ![Code](https://img.shields.io/github/stars/basiclab/FreeCond?style=social&label=Star)

[arxiv 2024.12]  Steering Rectified Flow Models in the Vector Field for Controlled Image Generation [[PDF](https://arxiv.org/abs/2412.00100),[Page](https://github.com/FlowChef/flowchef)] ![Code](https://img.shields.io/github/stars/FlowChef/flowchef?style=social&label=Star)

[arxiv 2024.12]  BrushEdit: All-In-One Image Inpainting and Editing [[PDF](https://liyaowei-stu.github.io/project/BrushEdit/),[Page](https://liyaowei-stu.github.io/project/BrushEdit/)]

[arxiv 2024.12]  Dual-Schedule Inversion: Training- and Tuning-Free Inversion for Real Image Editing [[PDF](https://arxiv.org/pdf/2412.11152)]

[arxiv 2024.12]  Prompt Augmentation for Self-supervised Text-guided Image Manipulation [[PDF](https://arxiv.org/pdf/2412.13081)]

[arxiv 2024.12] PixelMan: Consistent Object Editing with Diffusion Models via Pixel Manipulation and Generation  [[PDF](https://arxiv.org/abs/2412.14283),[Page](https://github.com/Ascend-Research/PixelMan)] 

[arxiv 2024.12]  Explaining in Diffusion: Explaining a Classifier Through Hierarchical Semantics with Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2412.18604),[Page](https://explain-in-diffusion.github.io/)] 

[arxiv 2025.01] Edicho: Consistent Image Editing in the Wild  [[PDF](https://arxiv.org/abs/2412.21079),[Page](https://ezioby.github.io/edicho/)] ![Code](https://img.shields.io/github/stars/EzioBy/edicho?style=social&label=Star)

[arxiv 2025.01]  Exploring Optimal Latent Trajectory for Zero-shot Image Editing [[PDF](https://arxiv.org/pdf/2501.03631)]

[arxiv 2025.01] FramePainter: Endowing Interactive Image Editing with Video Diffusion Priors  [[PDF](https://arxiv.org/abs/2501.08225),[Page](https://github.com/YBYBZhang/FramePainter)] ![Code](https://img.shields.io/github/stars/YBYBZhang/FramePainter?style=social&label=Star)

[arxiv 2025.02] PartEdit: Fine-Grained Image Editing using Pre-Trained Diffusion Models  [[PDF](https://arxiv.org/abs/2502.04050),[Page](https://partedit.github.io/PartEdit/)]

[arxiv 2025.02] PhotoDoodle: Learning Artistic Image Editing from Few-Shot Pairwise Data  [[PDF](https://arxiv.org/abs/2502.14397),[Page](https://github.com/showlab/PhotoDoodle)] ![Code](https://img.shields.io/github/stars/showlab/PhotoDoodle?style=social&label=Star)

[arxiv 2025.02] KV-Edit: Training-Free Image Editing for Precise Background Preservation  [[PDF](https://arxiv.org/abs/2502.17363),[Page](https://xilluill.github.io/projectpages/KV-Edit/)] ![Code](https://img.shields.io/github/stars/Xilluill/KV-Edit?style=social&label=Star)

[arxiv 2025.02]  Tight Inversion: Image-Conditioned Inversion for Real Image Editing [[PDF](https://arxiv.org/pdf/2502.20376)]

[arxiv 2025.03] DiffBrush: Just Painting the Art by Your Hands  [[PDF](https://arxiv.org/abs/2502.20904)]

[arxiv 2025.03]  h-Edit: Effective and Flexible Diffusion-Based Editing via Doob’s h-Transform [[PDF](https://arxiv.org/pdf/2503.02187),[Page](https://github.com/nktoan/h-edit)] ![Code](https://img.shields.io/github/stars/nktoan/h-edit?style=social&label=Star)

[arxiv 2025.03] InteractEdit: Zero-Shot Editing of Human-Object Interactions in Images  [[PDF](https://arxiv.org/pdf/2503.09130),[Page](https://jiuntian.github.io/interactedit/)] ![Code](https://img.shields.io/github/stars/jiuntian/interactedit?style=social&label=Star)

[arxiv 2025.03] MoEdit: On Learning Quantity Perception for Multi-object Image Editing  [[PDF](https://arxiv.org/abs/2503.10112),[Page](https://github.com/Tear-kitty/MoEdit)] ![Code](https://img.shields.io/github/stars/Tear-kitty/MoEdit?style=social&label=Star)

[arxiv 2025.03]  Edit Transfer: Learning Image Editing via Vision In-Context Relations [[PDF](https://arxiv.org/abs/2503.13327),[Page](https://cuc-mipg.github.io/EditTransfer.github.io/)] ![Code](https://img.shields.io/github/stars/CUC-MIPG/Edit-Transfer?style=social&label=Star)

[arxiv 2025.03] Single Image Iterative Subject-driven Generation and Editing  [[PDF](https://arxiv.org/pdf/2503.16025)]

[arxiv 2025.03] Adams Bashforth Moulton Solver for Inversion and Editing in Rectified Flow  [[PDF](https://arxiv.org/pdf/2503.16522)]

[arxiv 2025.03] EditCLIP: Representation Learning for Image Editing  [[PDF](https://arxiv.org/abs/2503.20318),[Page](https://qianwangx.github.io/EditCLIP/)] ![Code](https://img.shields.io/github/stars/QianWangX/EditCLIP?style=social&label=Star)

[arxiv 2025.04]  FreeInv: Free Lunch for Improving DDIM Inversion [[PDF](https://arxiv.org/pdf/2503.23035),[Page](https://yuxiangbao.github.io/FreeInv/)] 

[arxiv 2025.04] TurboFill: Adapting Few-step Text-to-image Model for Fast Image Inpainting  [[PDF](https://arxiv.org/abs/2504.00996),[Page](https://liangbinxie.github.io/projects/TurboFill/)]

[arxiv 2025.04]  UNIEDIT-FLOW: Unleashing Inversion and Editing in the Era of Flow Models [[PDF](https://arxiv.org/pdf/2504.13109),[Page](https://uniedit-flow.github.io/)] 

[arxiv 2025.04]  SPICE: A Synergistic, Precise, Iterative, and Customizable Image Editing Workflow [[PDF](https://arxiv.org/abs/2504.09697),[Page](https://kenantang.github.io/spice/)] ![Code](https://img.shields.io/github/stars/kenantang/spice?style=social&label=Star)

[arxiv 2025.05] InstructAttribute: Fine-grained Object Attributes editing with Instruction  [[PDF](https://arxiv.org/html/2505.00751v1)]

[arxiv 2025.05] Towards Scalable Human-aligned Benchmark for Text-guided Image Editing  [[PDF](https://arxiv.org/abs/2505.00502),[Page](https://github.com/SuhoRyu/HATIE)] ![Code](https://img.shields.io/github/stars/SuhoRyu/HATIE?style=social&label=Star)

[arxiv 2025.05]  PixelHacker: Image Inpainting with Structural and Semantic Consistency [[PDF](https://arxiv.org/abs/2504.20438),[Page](https://hustvl.github.io/PixelHacker/)] ![Code](https://img.shields.io/github/stars/hustvl/PixelHacker?style=social&label=Star)

[arxiv 2025.05]  CompleteMe: Reference-based Human Image Completion [[PDF](https://arxiv.org/pdf/2504.20042)]

[arxiv 2025.05] SuperEdit: Rectifying and Facilitating Supervision for Instruction-Based Image Editing  [[PDF](https://arxiv.org/abs/2505.02370),[Page](https://liming-ai.github.io/SuperEdit)] ![Code](https://img.shields.io/github/stars/bytedance/SuperEdit?style=social&label=Star)

[arxiv 2025.05] Multi-turn Consistent Image Editing  [[PDF](https://arxiv.org/pdf/2505.04320)]

[arxiv 2025.05] MDE-Edit: Masked Dual-Editing for Multi-Object Image Editing via Diffusion Models  [[PDF](https://arxiv.org/pdf/2505.05101)]

[arxiv 2025.05]  R-Genie: Reasoning-Guided Generative Image Editing [[PDF](https://arxiv.org/abs/2505.17768),[Page](https://dongzhang89.github.io/RGenie.github.io/)] ![Code](https://img.shields.io/github/stars/HE-Lingfeng/R-Genie-public?style=social&label=Star)

[arxiv 2025.06]  EDITOR: Effective and Interpretable Prompt Inversion for Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2506.03067)]

[arxiv 2025.06] DCI: Dual-Conditional Inversion for Boosting Diffusion-Based Image Editing  [[PDF](https://arxiv.org/abs/2506.02560)]

[arxiv 2025.06]  Image Editing As Programs with Diffusion Models [[PDF](https://arxiv.org/abs/2506.04158),[Page](https://github.com/YujiaHu1109/IEAP)] ![Code](https://img.shields.io/github/stars/YujiaHu1109/IEAP?style=social&label=Star)

[arxiv 2025.06]  PairEdit: Learning Semantic Variations for Exemplar-based Image Editing [[PDF](https://arxiv.org/pdf/2506.07992),[Page](https://github.com/xudonmao/PairEdit)] ![Code](https://img.shields.io/github/stars/xudonmao/PairEdit?style=social&label=Star)

[arxiv 2025.06] DragNeXt: Rethinking Drag-Based Image Editing  [[PDF](https://arxiv.org/abs/2506.07611)] 

[arxiv 2025.06]  AttentionDrag: Exploiting Latent Correlation Knowledge in Pre-trained Diffusion Models for Image Editing [[PDF](https://arxiv.org/abs/2506.13301),[Page](https://github.com/GPlaying/AttentionDrag)] ![Code](https://img.shields.io/github/stars/GPlaying/AttentionDrag?style=social&label=Star)

[arxiv 2025.06] CPAM: Context-Preserving Adaptive Manipulation for Zero-Shot Real Image Editing  [[PDF](https://arxiv.org/pdf/2506.18438)]

[arxiv 2025.07] Beyond Simple Edits: X-Planner for Complex Instruction-Based Image Editing  [[PDF](http://arxiv.org/abs/2507.05259),[Page](https://danielchyeh.github.io/x-planner/)] 

[arxiv 2025.07] Stable Score Distillation  [[PDF](http://arxiv.org/abs/2507.09168),[Page](https://github.com/Alex-Zhu1/SSD)] ![Code](https://img.shields.io/github/stars/Alex-Zhu1/SSD?style=social&label=Star)

[arxiv 2025.08] FLUX-Makeup: High-Fidelity, Identity-Consistent, and Robust Makeup Transfer via Diffusion Transformer [[PDF](https://arxiv.org/pdf/2508.05069)]

[arxiv 2025.08]  Follow-Your-Shape: Shape-Aware Image Editing via Trajectory-Guided Region Control [[PDF](https://arxiv.org/abs/2508.08134),[Page](https://follow-your-shape.github.io/)] ![Code](https://img.shields.io/github/stars/mayuelala/FollowYourShape?style=social&label=Star)

[arxiv 2025.08] Exploring Multimodal Diffusion Transformers for Enhanced Prompt-based Image Editing  [[PDF](https://arxiv.org/pdf/2508.07519)]

[arxiv 2025.08] Training-Free Text-Guided Color Editing with Multi-Modal Diffusion Transformer  [[PDF](https://arxiv.org/pdf/2508.09131),[Page](https://zxyin.github.io/ColorCtrl/)] 

[arxiv 2025.08]  TweezeEdit: Consistent and Efficient Image Editing with Path Regularization [[PDF](https://arxiv.org/abs/2508.10498)]

[arxiv 2025.09]  Inpaint4Drag: Repurposing Inpainting Models for Drag-Based Image Editing via Bidirectional Warping [[PDF](https://arxiv.org/abs/2509.04582),[Page](https://visual-ai.github.io/inpaint4drag/)] ![Code](https://img.shields.io/github/stars/Visual-AI/Inpaint4Drag?style=social&label=Star)

[arxiv 2025.09] LazyDrag: Enabling Stable Drag-Based Editing on Multi-Modal Diffusion Transformers via Explicit Correspondence  [[PDF](https://arxiv.org/abs/2509.12203),[Page](https://zxyin.github.io/LazyDrag)] 

[arxiv 2025.09]  AutoEdit: Automatic Hyperparameter Tuning for Image Editing [[PDF](https://arxiv.org/abs/2509.15031)]

[arxiv 2025.09] FlashEdit: Decoupling Speed, Structure, and Semantics for Precise Image Editing  [[PDF](https://arxiv.org/abs/2509.22244),[Page](https://github.com/JunyiWuCode/FlashEdit)] ![Code](https://img.shields.io/github/stars/JunyiWuCode/FlashEdit?style=social&label=Star)

[arxiv 2025.09]  TDEdit: A Unified Diffusion Framework for Text-Drag Guided Image Manipulation [[PDF](https://arxiv.org/abs/2509.21905)]

[arxiv 2025.10]  IMAGEdit: Let Any Subject Transform [[PDF](https://arxiv.org/abs/2510.01186),[Page](https://muzishen.github.io/IMAGEdit/)] ![Code](https://img.shields.io/github/stars/XWH-A/IMAGEdit?style=social&label=Star)

[arxiv 2025.10] EditTrack: Detecting and Attributing AI-assisted Image Editing  [[PDF](https://arxiv.org/abs/2510.01173)]

[arxiv 2025.10]  Object-AVEdit: An Object-level Audio-Visual Editing Model [[PDF](https://arxiv.org/abs/2510.00050)]

[arxiv 2025.10] Optimal Control Meets Flow Matching: A Principled Route to Multi-Subject Fidelity  [[PDF](https://arxiv.org/abs/2510.02315),[Page](https://github.com/ericbill21/FOCUS/)] ![Code](https://img.shields.io/github/stars/ericbill21/FOCUS/?style=social&label=Star)

[arxiv 2025.10]  DragFlow: Unleashing DiT Priors with Region Based Supervision for Drag Editing [[PDF](https://arxiv.org/abs/2510.02253)]

[arxiv 2025.10]  TBStar-Edit: From Image Editing Pattern Shifting to Consistency Enhancement [[PDF](https://arxiv.org/abs/2510.04483)]

[arxiv 2025.10] SAEdit: Token-level control for continuous image editing via Sparse AutoEncoder  [[PDF](https://arxiv.org/abs/2510.05081),[Page](https://ronen94.github.io/SAEdit/)] 

[arxiv 2025.10]  Fine-grained Defocus Blur Control for Generative Image Models [[PDF](https://arxiv.org/abs/2510.06215),[Page](https://www.ayshrv.com/defocus-blur-gen)] 

[arxiv 2025.10]  Kontinuous Kontext: Continuous Strength Control for Instruction-based Image Editing [[PDF](https://arxiv.org/abs/2510.08532),[Page](https://snap-research.github.io/kontinuouskontext/#)]

[arxiv 2025.10] One Stone with Two Birds: A Null-Text-Null Frequency-Aware Diffusion Models for Text-Guided Image Inpainting [[PDF](https://arxiv.org/abs/2510.08273),[Page](https://github.com/htyjers/NTN-Diff)] ![Code](https://img.shields.io/github/stars/htyjers/NTN-Diff?style=social&label=Star)

[arxiv 2025.10] Learning an Image Editing Model without Image Editing Pairs  [[PDF](https://arxiv.org/abs/2510.14978),[Page](https://nupurkmr9.github.io/npedit/)] 

[arxiv 2025.10] ConsistEdit: Highly Consistent and Precise Training-free Visual Editing  [[PDF](https://arxiv.org/abs/2510.17803),[Page](https://zxyin.github.io/ConsistEdit/)] ![Code](https://img.shields.io/github/stars/zxYin/ConsistEdit_Code?style=social&label=Star)

[arxiv 2025.10] FlowCycle: Pursuing Cycle-Consistent Flows for Text-based Editing  [[PDF](https://arxiv.org/abs/2510.20212),[Page](https://github.com/HKUST-LongGroup/FlowCycle)] ![Code](https://img.shields.io/github/stars/HKUST-LongGroup/FlowCycle?style=social&label=Star)

[arxiv 2025.10]  Group-Relative Attention Guidance for Image Editing [[PDF](https://arxiv.org/abs/2510.24657),[Page](https://github.com/little-misfit/GRAG-Image-Editing)] ![Code](https://img.shields.io/github/stars/little-misfit/GRAG-Image-Editing?style=social&label=Star)

[arxiv 2025.10]  RegionE: Adaptive Region-Aware Generation for Efficient Image Editing [[PDF](https://arxiv.org/abs/2510.25590),[Page](https://github.com/Peyton-Chen/RegionE)] ![Code](https://img.shields.io/github/stars/Peyton-Chen/RegionE?style=social&label=Star)

[arxiv 2025.10]  SplitFlow: Flow Decomposition for Inversion-Free Text-to-Image Editing [[PDF](https://arxiv.org/abs/2510.25970)]

[arxiv 2025.11]  Personalized Image Editing in Text-to-Image Diffusion Models via Collaborative Direct Preference Optimization [[PDF](https://personalized-editing.github.io/),[Page](https://personalized-editing.github.io/)] 

[arxiv 2025.11] Are Image-to-Video Models Good Zero-Shot Image Editors?  [[PDF](https://arxiv.org/pdf/2511.19435)]

[arxiv 2025.11]  Video4Edit: Viewing Image Editing as a Degenerate Temporal Process [[PDF](https://arxiv.org/pdf/2511.18131)]

[arxiv 2025.12] FreqEdit: Preserving High-Frequency Features for Robust Multi-Turn Image Editing  [[PDF](https://arxiv.org/abs/2512.01755),[Page](https://freqedit.github.io/)] ![Code](https://img.shields.io/github/stars/FreqEdit/FreqEdit?style=social&label=Star)

[arxiv 2025.12]  ProEdit: Inversion-based Editing From Prompts Done Right [[PDF](https://arxiv.org/abs/2512.22118),[Page](https://isee-laboratory.github.io/ProEdit/)] ![Code](https://img.shields.io/github/stars/iSEE-Laboratory/ProEdit?style=social&label=Star)

[arxiv 2025.12] On Exact Editing of Flow-Based Diffusion Models  [[PDF](https://arxiv.org/pdf/2512.24015)]

[arxiv 2026.01] Talk2Move: Reinforcement Learning for Text-Instructed Object-Level Geometric Transformation in Scenes  [[PDF](https://arxiv.org/abs/2601.02356),[Page](https://sparkstj.github.io/talk2move/)] ![Code](https://img.shields.io/github/stars/sparkstj/Talk2Move?style=social&label=Star)

[arxiv 2026.01]  Unraveling MMDiT Blocks: Training-free Analysis and Enhancement of Text-conditioned Diffusion [[PDF](https://arxiv.org/pdf/2601.02211)]

[arxiv 2026.01]  TalkPhoto: A Versatile Training-Free Conversational Assistant for Intelligent Image Editing [[PDF](https://arxiv.org/abs/2601.01915)]

[arxiv 2026.02]  Controlling Your Image via Simplified Vector Graphics [[PDF](https://arxiv.org/abs/2602.14443),[Page](https://guolanqing.github.io/Vec2Pix/)] 

[arxiv 2026.02] Instruction-based Image Editing with Planning, Reasoning, and Generation  [[PDF](https://arxiv.org/pdf/2602.22624)]

[arxiv 2026.03] CARE-Edit: Condition-Aware Routing of Experts for Contextual Image Editing  [[PDF](https://arxiv.org/abs/2603.08589),[Page](https://care-edit.github.io/)] ![Code](https://img.shields.io/github/stars/CARE-Edit/Code?style=social&label=Star)

[arxiv 2026.03] VeloEdit: Training-Free Consistent and Continuous Instruction-Based Image Editing via Velocity Field Decomposition  [[PDF](https://arxiv.org/abs/2603.13388),[Page](https://github.com/xmulzq/VeloEdit)] ![Code](https://img.shields.io/github/stars/xmulzq/VeloEdit?style=social&label=Star)

[arxiv 2026.03] MSRAMIE: Multimodal Structured Reasoning Agent for Multi-instruction Image Editing  [[PDF](https://arxiv.org/abs/2603.16967)]

[arxiv 2026.03] Diffusion-Based Makeup Transfer with Facial Region-Aware Makeup Features [[PDF](https://arxiv.org/abs/2603.20012)]

[arxiv 2026.03] AdaEdit: Adaptive Temporal and Channel Modulation for Flow-Based Image Editing  [[PDF](https://arxiv.org/abs/2603.21615)] ![Code](https://img.shields.io/github/stars/leeguandong/AdaEdit?style=social&label=Star)

[arxiv 2026.03] Group Editing: Edit Multiple Images in One Go  [[PDF](https://arxiv.org/abs/2603.22883),[Page](https://group-editing.github.io/)] ![Code](https://img.shields.io/github/stars/mayuelala/GroupEditing?style=social&label=Star)


## end of editing 


## Analysis
[arxiv 2025.02]  SliderSpace: Decomposing the Visual Capabilities of Diffusion Models [[PDF](https://arxiv.org/pdf/2502.01639),[Page](https://sliderspace.baulab.info/)] 



## reason
[arxiv 2025.06]  MMMG: A Massive, Multidisciplinary, Multi-Tier Generation Benchmark for Text-to-Image Reasoning [[PDF](https://arxiv.org/abs/2506.10963),[Page](https://mmmgbench.github.io/)] ![Code](https://img.shields.io/github/stars/MMMGBench/MMMG/?style=social&label=Star)

[arxiv 2025.07]  Reasoning to Edit: Hypothetical Instruction-Based Image Editing with Visual Reasoning [[PDF](https://arxiv.org/abs/2507.01908),[Page](https://github.com/hithqd/ReasonBrain)] ![Code](https://img.shields.io/github/stars/hithqd/ReasonBrain?style=social&label=Star)

[arxiv 2025.09] FLUX-Reason-6M & PRISM-Bench: A Million-Scale Text-to-Image Reasoning Dataset and Comprehensive Benchmark  [[PDF](https://arxiv.org/abs/2509.09680),[Page](https://flux-reason-6m.github.io/)] ![Code](https://img.shields.io/github/stars/rongyaofang/prism-bench?style=social&label=Star)

[arxiv 2025.12] MIRA: Multimodal Iterative Reasoning Agent for Image Editing  [[PDF](https://arxiv.org/abs/2511.21087),[Page](https://zzzmyyzeng.github.io/MIRA/)] 

[arxiv 2025.12]  ThinkGen: Generalized Thinking for Visual Generation [[PDF](https://arxiv.org/pdf/2512.23568),[Page](https://github.com/jiaosiyuu/ThinkGen)] ![Code](https://img.shields.io/github/stars/jiaosiyuu/ThinkGen?style=social&label=Star)

[arxiv 2025.12] DiffThinker: Towards Generative Multimodal Reasoning with Diffusion Models  [[PDF](https://arxiv.org/abs/2512.24165),[Page](https://diffthinker-project.github.io/)] ![Code](https://img.shields.io/github/stars/lcqysl/DiffThinker?style=social&label=Star)

[arxiv 2026.01] Unified Thinker: A General Reasoning Modular Core for Image Generation  [[PDF](https://arxiv.org/pdf/2601.03127),[Page](https://github.com/alibaba/UnifiedThinker)] ![Code](https://img.shields.io/github/stars/alibaba/UnifiedThinker?style=social&label=Star)

[arxiv 2026.01]  ThinkRL-Edit: Thinking in Reinforcement Learning for Reasoning-Centric Image Editing [[PDF](https://arxiv.org/abs/2601.03467)]

[arxiv 2026.01] Think-Then-Generate: Reasoning-Aware Text-to-Image Diffusion with LLM Encoders  [[PDF](https://arxiv.org/abs/2601.10332),[Page](https://github.com/SJTU-DENG-Lab/Think-Then-Generate)] ![Code](https://img.shields.io/github/stars/SJTU-DENG-Lab/Think-Then-Generate?style=social&label=Star)

[arxiv 2026.01] UniReason 1.0: A Unified Reasoning Framework for World Knowledge Aligned Image Generation and Editing  [[PDF](https://arxiv.org/abs/2602.02437),[Page](https://github.com/AlenjandroWang/UniReason)] ![Code](https://img.shields.io/github/stars/AlenjandroWang/UniReason?style=social&label=Star)

[arxiv 2026.02] Uni-Animator: Towards Unified Visual Colorization  [[PDF](https://arxiv.org/pdf/2602.23191)]

[arxiv 2026.03]  Generative Visual Chain-of-Thought for Image Editing [[PDF](https://arxiv.org/abs/2603.01893),[Page](https://pris-cv.github.io/GVCoT/)] ![Code](https://img.shields.io/github/stars/PRIS-CV/GVCoT?style=social&label=Star)

[arxiv 2026.03] GRADE: Benchmarking Discipline-Informed Reasoning in Image Editing  [[PDF](https://arxiv.org/abs/2603.12264),[Page](https://grade-bench.github.io/)] ![Code](https://img.shields.io/github/stars/VisionXLab/GRADE?style=social&label=Star)



## Unified Generation

[arxiv 2024.10] A Simple Approach to Unifying Diffusion-based Conditional Generation [[PDF](https://arxiv.org/abs/2410.11439),[Page](https://lixirui142.github.io/unicon-diffusion/)] ![Code](https://img.shields.io/github/stars/lixirui142/UniCon?style=social&label=Star)

[arxiv 2024.11] One Diffusion to Generate Them All  [[PDF](https://arxiv.org/abs/2411.16318),[Page](https://github.com/lehduong/OneDiffusion)] ![Code](https://img.shields.io/github/stars/lehduong/OneDiffusion?style=social&label=Star)

[arxiv 2024.11] OminiControl: Minimal and Universal Control for Diffusion Transformer  [[PDF](https://arxiv.org/abs/2411.15098),[Page](https://github.com/Yuanshi9815/OminiControl)] ![Code](https://img.shields.io/github/stars/Yuanshi9815/OminiControl?style=social&label=Star)

[arxiv 2024.12] Adaptive Blind All-in-One Image Restoration  [[PDF](https://arxiv.org/abs/2411.18412),[Page](https://aba-ir.github.io/)] ![Code](https://img.shields.io/github/stars/davidserra9/abair/?style=social&label=Star)

[arxiv 2024.12] OmniFlow: Any-to-Any Generation with Multi-Modal Rectified Flows  [[PDF](https://arxiv.org/abs/2412.01169)] ![Code](https://img.shields.io/github/stars/jacklishufan/OmniFlows?style=social&label=Star)

[arxiv 2024.12] UniReal: Universal Image Generation and Editing via Learning Real-world Dynamics  [[PDF](https://arxiv.org/abs/2412.07774),[Page](https://xavierchen34.github.io/UniReal-Page/)]

[arxiv 2024.12]  BrushEdit: All-In-One Image Inpainting and Editing [[PDF](https://liyaowei-stu.github.io/project/BrushEdit/),[Page](https://liyaowei-stu.github.io/project/BrushEdit/)]

[arxiv 2024.12] Multimodal Understanding and Generation via Instruction Tuning  [[PDF](https://arxiv.org/abs/2412.14164v1),[Page](https://tsb0601.github.io/metamorph/)] 

[arxiv 2024.12]  DreamOmni: Unified Image Generation and Editing [[PDF](https://arxiv.org/pdf/2412.17098),[Page](https://zj-binxia.github.io/DreamOmni-ProjectPage/)] 

[arxiv 2025.01] ACE++: Instruction-Based Image Creation and Editing via Context-Aware Content Filling  [[PDF](https://arxiv.org/abs/2501.02487),[Page](https://ali-vilab.github.io/ACE_plus_page/)] ![Code](https://img.shields.io/github/stars/ali-vilab/ACE_plus?style=social&label=Star)

[arxiv 2025.01] EditAR: Unified Conditional Generation with Autoregressive Models  [[PDF](https://arxiv.org/abs/2501.04699),[Page](https://jitengmu.github.io/EditAR/)] ![Code](https://img.shields.io/github/stars/JitengMu/EditAR?style=social&label=Star)

[arxiv 2025.03] MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing  [[PDF](https://arxiv.org/abs/2502.21291),[Page](https://github.com/Eureka-Maggie/MIGE)] ![Code](https://img.shields.io/github/stars/Eureka-Maggie/MIGE?style=social&label=Star)

[arxiv 2025.03]  WeGen: A Unified Model for Interactive Multimodal Generation as We Chat [[PDF](https://arxiv.org/pdf/2503.01115),[Page](https://github.com/hzphzp/WeGen)] ![Code](https://img.shields.io/github/stars/hzphzp/WeGen?style=social&label=Star) 

[arxiv 2025.03] UniCombine: Unified Multi-Conditional Combination with Diffusion Transformer  [[PDF](https://arxiv.org/pdf/2503.09277),[Page](https://github.com/Xuan-World/UniCombine)] ![Code](https://img.shields.io/github/stars/Xuan-World/UniCombine?style=social&label=Star)

[arxiv 2025.03] RealGeneral: Unifying Visual Generation via Temporal In-Context Learning with Video Models  [[PDF](https://arxiv.org/pdf/2503.10406),[Page](https://lyne1.github.io/RealGeneral/)]

[arxiv 2025.03]  BlobCtrl: A Unified and Flexible Framework for Element-level Image Generation and Editing [[PDF](https://arxiv.org/pdf/2503.13434),[Page](https://liyaowei-stu.github.io/project/BlobCtrl/)] ![Code](https://img.shields.io/github/stars/TencentARC/BlobCtrl?style=social&label=Star)

[arxiv 2025.04] VisualCloze: A Universal Image Generation Framework via Visual In-Context Learning  [[PDF](https://arxiv.org/abs/2504.07960),[Page](https://visualcloze.github.io/)] ![Code](https://img.shields.io/github/stars/lzyhha/VisualCloze?style=social&label=Star)

[arxiv 2025.04]  Step1X-Edit: A Practical Framework for General Image Editing [[PDF](https://arxiv.org/abs/2504.17761),[Page](https://github.com/stepfun-ai/Step1X-Edit)] ![Code](https://img.shields.io/github/stars/stepfun-ai/Step1X-Edit?style=social&label=Star)

[arxiv 2025.05]  In-Context Edit: Enabling Instructional Image Editing with In-Context Generation in Large Scale Diffusion Transformer [[PDF](https://arxiv.org/pdf/2504.20690),[Page](https://river-zhang.github.io/ICEdit-gh-pages/)] ![Code](https://img.shields.io/github/stars/River-Zhang/ICEdit?style=social&label=Star)

[arxiv 2025.06]  SeedEdit 3.0: Fast and High-Quality Generative Image Editing [[PDF](https://arxiv.org/abs/2506.05083),[Page](https://seed.bytedance.com/zh/tech/seededit)] 

[arxiv 2025.06]  VINCIE: Unlocking In-context Image Editing from Video [[PDF](https://arxiv.org/pdf/2506.10941),[Page](https://vincie2025.github.io/)] 

[arxiv 2025.06] FLUX.1 Kontext: Flow Matching for In-Context Image Generation and Editing in Latent Space  [[PDF](https://arxiv.org/abs/2506.15742),[Page](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev)] ![Code](https://img.shields.io/github/stars/black-forest-labs/flux?style=social&label=Star)

[arxiv 2025.07]  UniLDiff: Unlocking the Power of Diffusion Priors for All-in-One Image Restoration [[PDF](https://arxiv.org/abs/2507.23685)]

[arxiv 2025.08] UniEdit-I: Training-free Image Editing for Unified VLM via Iterative Understanding, Editing and Verifying  [[PDF](https://arxiv.org/pdf/2508.03142)]

[arxiv 2025.08]  EvoMakeup: High-Fidelity and Controllable Makeup Editing with MakeupQuad [[PDF](https://arxiv.org/pdf/2508.05994)]

[arxiv 2025.09]  Lego-Edit: A General Image Editing Framework with Model-Level Bricks and MLLM Builder [[PDF](https://arxiv.org/abs/2509.12883),[Page](https://xiaomi-research.github.io/lego-edit/)] ![Code](https://img.shields.io/github/stars/xiaomi-research/lego-edit?style=social&label=Star)

[arxiv 2025.09] MultiEdit: Advancing Instruction-based Image Editing on Diverse and Challenging Tasks  [[PDF](https://arxiv.org/abs/2509.14638),[Page](https://huggingface.co/datasets/inclusionAI/MultiEdit)] 

[arxiv 2025.09] UniVid: Unifying Vision Tasks with Pre-trained Video Generation Models  [[PDF](https://arxiv.org/abs/2509.21760),[Page](https://github.com/CUC-MIPG/UniVid)] ![Code](https://img.shields.io/github/stars/CUC-MIPG/UniVid?style=social&label=Star)

[arxiv 2025.10] Query-Kontext: A Unified Multimodal Model for Image Generation and Editing  [[PDF](https://arxiv.org/abs/2509.26641)]

[arxiv 2025.10]  ChronoEdit: Towards Temporal Reasoning for Image Editing and World Simulation [[PDF](https://arxiv.org/abs/2510.04290),[Page](https://research.nvidia.com/labs/toronto-ai/chronoedit)] ![Code](https://img.shields.io/github/stars/nv-tlabs/ChronoEdit?style=social&label=Star)

[arxiv 2025.10] DreamOmni2: Multimodal Instruction-based Editing and Generation  [[PDF](https://arxiv.org/abs/2510.06679),[Page](https://github.com/dvlab-research/DreamOmni2)] ![Code](https://img.shields.io/github/stars/dvlab-research/DreamOmni2?style=social&label=Star)

[arxiv 2025.10]  Ming-UniVision: Joint Image Understanding and Generation with a Unified Continuous Tokenizer [[PDF](https://arxiv.org/abs/2510.06590),[Page](https://github.com/inclusionAI/Ming-UniVision)] ![Code](https://img.shields.io/github/stars/inclusionAI/Ming-UniVision?style=social&label=Star) 

[arxiv 2025.10]  UniFusion: Vision-Language Model as Unified Encoder in Image Generation [[PDF](https://arxiv.org/abs/2510.12789),[Page](https://thekevinli.github.io/unifusion/)] 

[arxiv 2025.10]  Uniworld-V2: Reinforce Image Editing with Diffusion Negative-Aware Finetuning and MLLM Implicit Feedback [[PDF](https://arxiv.org/abs/2510.16888),[Page](https://github.com/PKU-YuanGroup/UniWorld-V2)] ![Code](https://img.shields.io/github/stars/PKU-YuanGroup/UniWorld-V2?style=social&label=Star)

[arxiv 2025.11] iMontage: Unified, Versatile, Highly Dynamic Many-to-many Image Generation  [[PDF](https://arxiv.org/abs/2511.20635),[Page](https://github.com/Kr1sJFU/iMontage)] ![Code](https://img.shields.io/github/stars/Kr1sJFU/iMontage?style=social&label=Star)

[arxiv 2025.12] ReasonEdit: Towards Reasoning-Enhanced Image Editing Models  [[PDF](https://arxiv.org/abs/2511.22625),[Page](https://github.com/stepfun-ai/Step1X-Edit)] ![Code](https://img.shields.io/github/stars/stepfun-ai/Step1X-Edit?style=social&label=Star)

[arxiv 2025.12] ClusIR: Towards Cluster-Guided All-in-One Image Restoration  [[PDF](https://arxiv.org/abs/2512.10948)]

[arxiv 2025.12]  DreamOmni3: Scribble-based Editing and Generation [[PDF](https://arxiv.org/abs/2512.22525),[Page](https://github.com/dvlab-research/DreamOmni3)] ![Code](https://img.shields.io/github/stars/dvlab-research/DreamOmni3?style=social&label=Star)

[arxiv 2026.02]  CoLoGen: Progressive Learning of Concept–Localization Duality for Unified Image Generation [[PDF](https://arxiv.org/abs/2602.22150)]



## Generation and Understanding in a Unified Framework 
[arxiv 2024.11] Diff-2-in-1: Bridging Generation and Dense Perception with Diffusion Models  [[PDF](https://arxiv.org/abs/2411.05005)]

[arxiv 2024.11] Scaling Properties of Diffusion Models for Perceptual Tasks  [[PDF](https://arxiv.org/abs/2411.08034),[Page](https://scaling-diffusion-perception.github.io/)]

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)




## Architecture

[arxiv 2024.03] Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts [[PDF](https://arxiv.org/abs/2403.09176),[Page](https://byeongjun-park.github.io/Switch-DiT/)]

[arxiv 2024.05] TerDiT: Ternary Diffusion Models with Transformers [[PDF](https://arxiv.org/abs/2405.14854),[Page](https://github.com/Lucky-Lance/TerDiT)]

[arxiv 2024.05] DiG: Scalable and Efficient Diffusion Models with Gated Linear Attention  [[PDF](https://arxiv.org/abs/2405.18428),[Page](https://github.com/hustvl/DiG)]

[arxiv 2024.05]  ViG: Linear-complexity Visual Sequence Learning with Gated Linear Attention [[PDF](https://arxiv.org/abs/2405.18425),[Page](https://github.com/hustvl/ViG)]

[arxiv 2024.06] Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models [[PDF](https://arxiv.org/abs/2406.09416),[Page](https://qihao067.github.io/projects/DiMR)]

[arxiv 2024.07] UltraEdit: Instruction-based Fine-Grained Image Editing at Scale [[PDF](https://arxiv.org/abs/2407.05282),[Page](https://ultra-editing.github.io/)]

[arxiv 2024.07] Add-SD: Rational Generation without Manual Reference  [[PDF](https://arxiv.org/abs/2407.21016),[Page](https://github.com/ylingfeng/Add-SD)]

[arxiv 2024.07] Specify and Edit: Overcoming Ambiguity in Text-Based Image Editing  [[PDF](https://arxiv.org/abs/2407.20232),[Page](https://github.com/fabvio/SANE)]

[arxiv 2024.08] FastEdit: Fast Text-Guided Single-Image Editing via Semantic-Aware Diffusion Fine-Tuning  [[PDF](https://arxiv.org/abs/2408.03355),[Page](https://fastedit-sd.github.io/)]

[arxiv 2024.08] EasyInv: Toward Fast and Better DDIM Inversion [[PDF](https://arxiv.org/abs/2408.05159)]

[arxiv 2024.08] AnyDesign: Versatile Area Fashion Editing via Mask-Free Diffusion [[PDF](https://arxiv.org/abs/2408.11553)]

[arxiv 2024.10] On Inductive Biases That Enable Generalization of Diffusion Transformers  [[PDF](https://arxiv.org/abs/2410.21273),[Page](https://dit-generalization.github.io/)]

[arxiv 2024.12] Causal Diffusion Transformer for Generative Modeling  [[PDF](https://arxiv.org/abs/2412.12095),[Page](https://github.com/causalfusion/causalfusion)] ![Code](https://img.shields.io/github/stars/causalfusion/causalfusion?style=social&label=Star)

[arxiv 2024.12] E-CAR: Efficient Continuous Autoregressive Image Generation via Multistage Modeling  [[PDF](https://arxiv.org/abs/2412.14170)]

[arxiv 2025.02]  Fractal Generative Models [[PDF](https://arxiv.org/pdf/2502.17437),[Page](https://github.com/LTH14/fractalgen)] ![Code](https://img.shields.io/github/stars/LTH14/fractalgen?style=social&label=Star)

[arxiv 2025.03] Spiking Transformer: Introducing Accurate Addition-Only Spiking Self-Attention for Transformer  [[PDF](https://arxiv.org/pdf/2503.00226)]

[arxiv 2025.03] DiT-Air: Revisiting the Efficiency of Diffusion Model Architecture Design in Text to Image Generation  [[PDF](https://arxiv.org/abs/2503.10618)]

[arxiv 2025.06] M4V: Multi-Modal Mamba for Text-to-Video Generation  [[PDF](https://arxiv.org/abs/xxx),[Page](https://huangjch526.github.io/M4V_project/)] 

[arxiv 2025.12] Visual Generation Tuning  [[PDF](https://arxiv.org/pdf/2511.23469),[Page](https://github.com/hustvl/VGT)] ![Code](https://img.shields.io/github/stars/hustvl/VGT?style=social&label=Star)


[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)



## Distribution 

[arxiv 2024.10] Rectified Diffusion: Straightness Is Not Your Need  [[PDF](https://arxiv.org/abs/2410.07303),[Page](https://github.com/G-U-N/Rectified-Diffusion)]

[arxiv 2024.10] Simple ReFlow: Improved Techniques for Fast Flow Models  [[PDF](https://arxiv.org/abs/2410.07815),[Page]()]

[arxiv 2024.10] Consistency Diffusion Bridge Models  [[PDF](https://arxiv.org/abs/2410.22637)]

[arxiv 2024.12]  Orthus: Autoregressive Interleaved Image-Text Generation with Modality-Specific Heads [[PDF](https://arxiv.org/pdf/2412.00127)]

[arxiv 2024.12]  [MASK] is All You Need [[PDF](https://arxiv.org/abs/2412.06787),[Page](https://compvis.github.io/mask/)] ![Code](https://img.shields.io/github/stars/CompVis/mask?style=social&label=Star)

[arxiv 2024.12] See Further When Clear: Curriculum Consistency Model  [[PDF](https://arxiv.org/abs/2412.06295)]

[arxiv 2024.12]  Analyzing and Improving Model Collapse in Rectified Flow Models [[PDF](https://arxiv.org/abs/2412.08175)]

[arxiv 2024.12] Multimodal Latent Language Modeling with Next-Token Diffusion  [[PDF](https://arxiv.org/abs/2412.08635),[Page](https://aka.ms/GeneralAI)] 

[arxiv 2024.12] Exploring Diffusion and Flow Matching Under Generator Matching  [[PDF](https://arxiv.org/abs/2412.11024)]

[arxiv 2025.02] Variational Rectified Flow Matching  [[PDF](https://arxiv.org/pdf/2502.09616)]

[arxiv 2025.02]  Designing a Conditional Prior Distribution for Flow-Based Generative Models [[PDF](https://arxiv.org/pdf/2502.09611)]

[arxiv 2025.02]  Bidirectional Diffusion Bridge Models [[PDF](https://arxiv.org/abs/2502.09655),[Page](https://github.com/kvmduc/BDBM)] ![Code](https://img.shields.io/github/stars/kvmduc/BDBM?style=social&label=Star)

[arxiv 2025.02] Is Noise Conditioning Necessary for Denoising Generative Models?  [[PDF](https://arxiv.org/pdf/2502.13129)]

[arxiv 2025.03] The Curse of Conditions: Analyzing and Improving Optimal Transport for Conditional Flow-Based Generation  [[PDF](https://arxiv.org/abs/2503.10636),[Page](https://hkchengrex.github.io/C2OT)] ![Code](https://img.shields.io/github/stars/hkchengrex/C2OT?style=social&label=Star)

[arxiv 2025.03] Deeply Supervised Flow-Based Models  [[PDF](https://arxiv.org/abs/2503.14494),[Page](https://deepflow-project.github.io/)]

[arxiv 2025.04] PixelFlow: Pixel-Space Generative Models with Flow  [[PDF](https://arxiv.org/abs/2504.07963),[Page](https://github.com/ShoufaChen/PixelFlow)] ![Code](https://img.shields.io/github/stars/ShoufaChen/PixelFlow?style=social&label=Star)

[arxiv 2025.06]  Contrastive Flow Matching [[PDF](https://arxiv.org/pdf/2506.05350),[Page](https://github.com/gstoica27/DeltaFM)] ![Code](https://img.shields.io/github/stars/gstoica27/DeltaFM?style=social&label=Star)

[arxiv 2025.06] STARFlow: Scaling Latent Normalizing Flows for High-resolution Image Synthesis  [[PDF](https://arxiv.org/abs/2506.06276)]

[arxiv 2025.06] Improving Progressive Generation with Decomposable Flow Matching  [[PDF](https://arxiv.org/abs/2506.19839),[Page](https://snap-research.github.io/dfm/)] 

[arxiv 2025.07] Pyramidal Patchification Flow for Visual Generation  [[PDF](https://arxiv.org/pdf/2506.23543),[Page](https://github.com/fudan-generative-vision/PPFlow)] ![Code](https://img.shields.io/github/stars/fudan-generative-vision/PPFlow?style=social&label=Star)

[arxiv 2025.07]  FACM: Flow-Anchored Consistency Models [[PDF](https://arxiv.org/abs/2507.03738),[Page](https://github.com/ali-vilab/FACM)] ![Code](https://img.shields.io/github/stars/ali-vilab/FACM?style=social&label=Star)

[arxiv 2025.07] FashionPose: Text to Pose to Relight Image Generation for Personalized Fashion Visualization  [[PDF](https://arxiv.org/pdf/2507.13311)]

[arxiv 2025.08]  Next Visual Granularity Generation [[PDF](https://arxiv.org/abs/2508.12811),[Page](https://yikai-wang.github.io/nvg/)] ![Code](https://img.shields.io/github/stars/Yikai-Wang/nvg?style=social&label=Star)

[arxiv 2025.09]  Delta Velocity Rectified Flow for Text-to-Image Editing [[PDF](https://arxiv.org/pdf/2509.05342),[Page](https://github.com/gaspardbd/DeltaVelocityRectifiedFlow)] ![Code](https://img.shields.io/github/stars/gaspardbd/DeltaVelocityRectifiedFlow?style=social&label=Star)

[arxiv 2025.09] CAR-Flow: Condition-Aware Reparameterization Aligns Source and Target for Better Flow Matching  [[PDF](https://arxiv.org/abs/2509.19300)]

[arxiv 2025.10]  Authentic Discrete Diffusion Model [[PDF](https://arxiv.org/abs/2510.01047)]

[arxiv 2025.10]  Blockwise Flow Matching: Improving Flow Matching Models For Efficient High-Quality Generation [[PDF](https://arxiv.org/abs/2510.21167)]

[arxiv 2025.12] StreamFlow: Theory, Algorithm, and Implementation for High-Efficiency Rectified Flow Generation  [[PDF](https://arxiv.org/abs/2511.22009),[Page](https://github.com/World-Snapshot/StreamFlow)] ![Code](https://img.shields.io/github/stars/World-Snapshot/StreamFlow?style=social&label=Star)

[arxiv 2025.12] SimFlow: Simplified and End-to-End Training of Latent Normalizing Flows  [[PDF](https://arxiv.org/abs/2512.04084),[Page](https://qinyu-allen-zhao.github.io/SimFlow/)] 

[arxiv 2026.01]  FlowConsist: Make Your Flow Consistent with Real Trajectory [[PDF](https://arxiv.org/pdf/2602.06346)]

[arxiv 2026.03] MPDiT: Multi-Patch Global-to-Local Transformer Architecture For Efficient Flow Matching and Diffusion Model  [[PDF](https://arxiv.org/abs/2603.26357)]

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)
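Most entries in this section build on the flow-matching / rectified-flow formulation, in which training regresses a velocity field along a straight interpolation path between noise and data. A minimal plain-Python sketch of that path and its regression target (function and variable names are illustrative, not any listed paper's implementation):

```python
def rectified_flow_target(x0, x1, t):
    """Rectified-flow interpolation: x_t = (1 - t) * x0 + t * x1.

    x0: a sample from the noise distribution, x1: a data sample,
    t in [0, 1]. The regression target for a learned velocity
    field v(x_t, t) is the constant displacement x1 - x0.
    """
    xt = [(1.0 - t) * a + t * b for a, b in zip(x0, x1)]
    v_target = [b - a for a, b in zip(x0, x1)]
    return xt, v_target
```

Straightening or reparameterizing this path is exactly what works like Rectified Diffusion, Simple ReFlow, and the consistency-model papers above target to enable few-step sampling.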


## CFG
[arxiv 2025.01] Visual Generation Without Guidance  [[PDF](https://arxiv.org/abs/2501.15420),[Page](https://github.com/thu-ml/GFT)] ![Code](https://img.shields.io/github/stars/thu-ml/GFT?style=social&label=Star)

[arxiv 2025.02] REG: Rectified Gradient Guidance for Conditional Diffusion Models  [[PDF](https://arxiv.org/pdf/2501.18865)]

[arxiv 2025.02]  DICE: Distilling Classifier-Free Guidance into Text Embeddings [[PDF](https://arxiv.org/pdf/2502.03726)]

[arxiv 2025.02] Variational Control for Guidance in Diffusion Models  [[PDF](https://arxiv.org/pdf/2502.03686)]

[arxiv 2025.02] Diffusion Models without Classifier-free Guidance  [[PDF](https://arxiv.org/pdf/2502.12154),[Page](https://github.com/tzco/Diffusion-wo-CFG)] ![Code](https://img.shields.io/github/stars/tzco/Diffusion-wo-CFG?style=social&label=Star)

[arxiv 2025.02] Classifier-free Guidance with Adaptive Scaling  [[PDF](https://arxiv.org/pdf/2502.10574)]

[arxiv 2025.10]  Rectified-CFG++ for Flow Based Models [[PDF](https://arxiv.org/abs/2510.07631),[Page](https://rectified-cfgpp.github.io/)] ![Code](https://img.shields.io/github/stars/shreshthsaini/Rectified-CFGpp?style=social&label=Star)

[arxiv 2026.03] CFG-Ctrl: Control-Based Classifier-Free Diffusion Guidance  [[PDF](https://arxiv.org/abs/2603.03281),[Page](https://hanyang-21.github.io/CFG-Ctrl/)] ![Code](https://img.shields.io/github/stars/hanyang-21/CFG-Ctrl?style=social&label=Star)

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)
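For reference, the papers above all start from the standard classifier-free guidance update, which extrapolates from the unconditional prediction toward the conditional one. A minimal sketch in plain Python (names illustrative; real samplers apply this per denoising step to model outputs):

```python
def cfg_combine(eps_uncond, eps_cond, w):
    """Classifier-free guidance: eps = eps_u + w * (eps_c - eps_u).

    w = 0 returns the unconditional prediction, w = 1 the
    conditional one, and w > 1 extrapolates past it (the usual
    setting for text-to-image sampling).
    """
    return [u + w * (c - u) for u, c in zip(eps_uncond, eps_cond)]
```

The guidance-free and adaptive-scaling methods listed above remove, distill, or reschedule the scalar `w` in this update.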



## ROPE

[arxiv 2025.02] VideoRoPE: What Makes for Good Video Rotary Position Embedding?  [[PDF](https://arxiv.org/abs/2502.05173),[Page](https://github.com/Wiselnn570/VideoRoPE)] ![Code](https://img.shields.io/github/stars/Wiselnn570/VideoRoPE?style=social&label=Star)

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)
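As background for the RoPE variants above: rotary position embedding rotates consecutive feature pairs by position-dependent angles, so dot products between rotated queries and keys depend only on relative position. A minimal 1-D sketch in plain Python (names illustrative; video variants extend this to separate temporal and spatial axes):

```python
import math

def rope_rotate(x, pos, base=10000.0):
    """Rotate each feature pair (x[2k], x[2k+1]) by angle pos * base^(-2k/d)."""
    d = len(x)
    assert d % 2 == 0, "feature dimension must be even"
    out = [0.0] * d
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)
        c, s = math.cos(theta), math.sin(theta)
        out[i] = x[i] * c - x[i + 1] * s
        out[i + 1] = x[i] * s + x[i + 1] * c
    return out
```

Because each pair undergoes a pure rotation, the embedding preserves vector norms, and position 0 is the identity.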

## VLM guided Generation
[arxiv 2025.06] Dual-Process Image Generation  [[PDF](https://arxiv.org/abs/2506.01955),[Page](https://dual-process.github.io/)] ![Code](https://img.shields.io/github/stars/g-luo/dual_process?style=social&label=Star)

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)


## Chat for editing 
[arxiv 2024.12] ChatDiT: A Training-Free Baseline for Task-Agnostic Free-Form Chatting with Diffusion Transformers  [[PDF](https://arxiv.org/abs/2412.12571),[Page](https://github.com/ali-vilab/ChatDiT)] ![Code](https://img.shields.io/github/stars/ali-vilab/ChatDiT?style=social&label=Star)


## Instruct for editing 
[arxiv 2024.07] GenArtist: Multimodal LLM as an Agent for Unified Image Generation and Editing  [[PDF](https://arxiv.org/abs/2407.05600),[Page](https://zhenyuw16.github.io/GenArtist_page/)]

[arxiv 2024.07] UltraEdit: Instruction-based Fine-Grained Image Editing at Scale  [[PDF](https://arxiv.org/abs/2407.05282),[Page](https://ultra-editing.github.io/)]

[arxiv 2024.11] SeedEdit: Align Image Re-Generation to Image Editing  [[PDF](https://arxiv.org/abs/2411.06686),[Page](https://team.doubao.com/en/special/seededit)]

[arxiv 2024.11] Add-it: Training-Free Object Insertion in Images With Pretrained Diffusion Models [[PDF](https://arxiv.org/abs/2411.07232),[Page](https://research.nvidia.com/labs/par/addit/)]

[arxiv 2024.11] OmniEdit: Building Image Editing Generalist Models Through Specialist Supervision [[PDF](https://arxiv.org/abs/2411.07199),[Page](https://tiger-ai-lab.github.io/OmniEdit/)]

[arxiv 2024.11] InsightEdit: Towards Better Instruction Following for Image Editing  [[PDF](https://poppyxu.github.io/InsightEdit_web/),[Page](https://poppyxu.github.io/InsightEdit_web/)] ![Code](https://img.shields.io/github/stars/poppyxu/InsightEdit?style=social&label=Star)

[arxiv 2024.12] HumanEdit : A High-Quality Human-Rewarded Dataset for Instruction-based Image Editing  [[PDF](https://arxiv.org/abs/2412.04280),[Page](https://viiika.github.io/HumanEdit/)] ![Code](https://img.shields.io/github/stars/viiika/HumanEdit/?style=social&label=Star)

[arxiv 2024.12] FireFlow: Fast Inversion of Rectified Flow for Image Semantic Editing  [[PDF](https://arxiv.org/abs/2412.07517),[Page](https://github.com/HolmesShuan/FireFlow-Fast-Inversion-of-Rectified-Flow-for-Image-Semantic-Editing)] ![Code](https://img.shields.io/github/stars/HolmesShuan/FireFlow-Fast-Inversion-of-Rectified-Flow-for-Image-Semantic-Editing?style=social&label=Star)

[arxiv 2024.12] FluxSpace: Disentangled Semantic Editing in Rectified Flow Transformers  [[PDF](https://arxiv.org/abs/2312.05390),[Page](https://fluxspace.github.io/)] 

[arxiv 2024.12] Instruction-based Image Manipulation by Watching How Things Move  [[PDF](https://arxiv.org/abs/2412.12087),[Page](https://ljzycmd.github.io/projects/InstructMove/)] 

[arxiv 2024.12] UIP2P: Unsupervised Instruction-based Image Editing via Cycle Edit Consistency  [[PDF](https://arxiv.org/abs/2412.15216),[Page](https://enis.dev/uip2p/)]

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)



## Improve T2I base modules
[arxiv 2023.10] LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts [[PDF](https://arxiv.org/abs/2310.10640),[Page](https://github.com/hananshafi/llmblueprint)]

[arxiv 2023.11] Self-correcting LLM-controlled Diffusion Models [[PDF](https://arxiv.org/abs/2311.16090)]

[arxiv 2023.11] Enhancing Diffusion Models with Text-Encoder Reinforcement Learning [[PDF](https://arxiv.org/abs/2311.15657)]

[arxiv 2023.11] Ranni: Taming Text-to-Image Diffusion for Accurate Instruction Following [[PDF](https://arxiv.org/abs/2311.17002)]

[arxiv 2023.12] Unlocking Spatial Comprehension in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2311.17937)]

[arxiv 2023.12] Fair Text-to-Image Diffusion via Fair Mapping [[PDF](https://arxiv.org/abs/2311.17695)]

[arxiv 2023.12] CONFORM: Contrast is All You Need For High-Fidelity Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2312.06059)]

[arxiv 2023.12] DreamDistribution: Prompt Distribution Learning for Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2312.14216),[Page](https://briannlongzhao.github.io/DreamDistribution)]

[arxiv 2023.12] Prompt Expansion for Adaptive Text-to-Image Generation [[PDF](https://arxiv.org/abs/2312.16720)]

[arxiv 2023.12] Diffusion Model with Perceptual Loss [[PDF](https://arxiv.org/abs/2401.00110)]

[arxiv 2024.01] EmoGen: Emotional Image Content Generation with Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2401.04608)]

[arxiv 2024.01] DiffusionGPT: LLM-Driven Text-to-Image Generation System [[PDF](https://arxiv.org/abs/2401.10061)]

[arxiv 2024.01] Divide and Conquer: Language Models can Plan and Self-Correct for Compositional Text-to-Image Generation [[PDF](https://arxiv.org/abs/2401.15688)]

[arxiv 2024.02] MIGC: Multi-Instance Generation Controller for Text-to-Image Synthesis [[PDF](https://arxiv.org/pdf/2402.05408.pdf),[Page](https://migcproject.github.io/)]

[arxiv 2024.02] Get What You Want, Not What You Don't: Image Content Suppression for Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2402.05375),[Page](https://github.com/sen-mao/SuppressEOT)]

[arxiv 2024.02] InstanceDiffusion: Instance-level Control for Image Generation [[PDF](https://arxiv.org/abs/2402.03290),[Page](https://people.eecs.berkeley.edu/~xdwang/projects/InstDiff/)]

[arxiv 2024.02] Learning Continuous 3D Words for Text-to-Image Generation [[PDF](https://ttchengab.github.io/continuous_3d_words/c3d_words.pdf),[Page](https://ttchengab.github.io/continuous_3d_words/)]

[arxiv 2024.02] Self-Play Fine-Tuning of Diffusion Models for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2402.10210)]

[arxiv 2024.02] RealCompo: Dynamic Equilibrium between Realism and Compositionality Improves Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2402.12908),[Page](https://github.com/YangLing0818/RealCompo)]

[arxiv 2024.02] A User-Friendly Framework for Generating Model-Preferred Prompts in Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2402.12760)]

[arxiv 2024.02] Contrastive Prompts Improve Disentanglement in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2402.13490)]

[arxiv 2024.02] Structure-Guided Adversarial Training of Diffusion Models [[PDF](https://arxiv.org/abs/2402.17563)]

[arxiv 2024.03] SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data [[PDF](https://arxiv.org/abs/2403.06952),[Page](https://selma-t2i.github.io/)]

[arxiv 2024.03] ELLA: Equip Diffusion Models with LLM for Enhanced Semantic Alignment [[PDF](https://arxiv.org/abs/2403.05135),[Page](https://ella-diffusion.github.io/)]

[arxiv 2024.03] Bridging Different Language Models and Generative Vision Models for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2403.07860),[Page](https://github.com/ShihaoZhaoZSH/LaVi-Bridge)]

[arxiv 2024.03] Optimizing Negative Prompts for Enhanced Aesthetics and Fidelity in Text-To-Image Generation [[PDF](https://arxiv.org/abs/2403.07605)]

[arxiv 2024.03] FouriScale: A Frequency Perspective on Training-Free High-Resolution Image Synthesis [[PDF](https://arxiv.org/abs/2403.12963)]

[arxiv 2024.04] Getting it Right: Improving Spatial Consistency in Text-to-Image Models [[PDF](https://arxiv.org/abs/2404.01197),[Page](https://spright-t2i.github.io/)]

[arxiv 2024.04] Dynamic Prompt Optimizing for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2404.04095)]

[arxiv 2024.04] Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching [[PDF](https://arxiv.org/abs/2404.03653),[Page](https://caraj7.github.io/comat/)]

[arxiv 2024.04] Align Your Steps: Optimizing Sampling Schedules in Diffusion Models [[PDF](https://arxiv.org/abs/2404.14507),[Page](https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/)]

[arxiv 2024.04] Stylus: Automatic Adapter Selection for Diffusion Models [[PDF](https://arxiv.org/abs/2404.18928),[Page](https://stylus-diffusion.github.io/)]

[arxiv 2024.05] Deep Reward Supervisions for Tuning Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2405.00760)]

[arxiv 2024.05] Simple Drop-in LoRA Conditioning on Attention Layers Will Improve Your Diffusion Model [[PDF](https://arxiv.org/abs/2405.03958)]

[arxiv 2024.05] Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models [[PDF](https://arxiv.org/abs/2405.05252)]

[arxiv 2024.05] An Empirical Study and Analysis of Text-to-Image Generation Using Large Language Model-Powered Textual Representation [[PDF](https://arxiv.org/abs/2405.12914)]

[arxiv 2024.05] Learning Multi-dimensional Human Preference for Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2405.14705)]

[arxiv 2024.05] Class-Conditional self-reward mechanism for improved Text-to-Image models  [[PDF](https://arxiv.org/abs/2405.13473)]

[arxiv 2024.05]  LiteVAE: Lightweight and Efficient Variational Autoencoders for Latent Diffusion Models [[PDF](https://arxiv.org/abs/2405.14477)]

[arxiv 2024.05] SG-Adapter: Enhancing Text-to-Image Generation with Scene Graph Guidance  [[PDF](https://arxiv.org/abs/2405.15321)]

[arxiv 2024.05] Training-free Editioning of Text-to-Image Models [[PDF](https://arxiv.org/abs/2405.17069)]

[arxiv 2024.05] PromptFix: You Prompt and We Fix the Photo [[PDF](https://arxiv.org/abs/2405.16785),[Page](https://github.com/yeates/PromptFix)]

[arxiv 2024.06] Kaleido Diffusion: Improving Conditional Diffusion Models with Autoregressive Latent Modeling [[PDF](https://arxiv.org/abs/2405.21048)]

[arxiv 2024.06] Improving GFlowNets for Text-to-Image Diffusion Alignment [[PDF](https://arxiv.org/abs/2406.00633)]

[arxiv 2024.06] Diffusion Soup: Model Merging for Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2406.08431),[Page]()]

[arxiv 2024.06] CFG++: Manifold-constrained Classifier Free Guidance for Diffusion Models [[PDF](https://arxiv.org/abs/2406.08070),[Page](https://github.com/CFGpp-diffusion/CFGpp)]

[arxiv 2024.06] Understanding and Mitigating Compositional Issues in Text-to-Image Generative Models [[PDF](https://arxiv.org/abs/2406.07844),[Page](https://github.com/ArmanZarei/Mitigating-T2I-Comp-Issues)]

[arxiv 2024.06] Make It Count: Text-to-Image Generation with an Accurate Number of Objects [[PDF](https://arxiv.org/abs/2406.10210),[Page](https://make-it-count-paper.github.io/)]

[arxiv 2024.06]  AITTI: Learning Adaptive Inclusive Token for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2406.12805),[Page](https://github.com/itsmag11/AITTI)]

[arxiv 2024.06] Exploring the Role of Large Language Models in Prompt Encoding for Diffusion Models  [[PDF](https://arxiv.org/abs/2406.11831)]

[arxiv 2024.06] Neural Residual Diffusion Models for Deep Scalable Vision Generation [[PDF](https://arxiv.org/abs/2406.13215)]

[arxiv 2024.06] ARTIST: Improving the Generation of Text-rich Images by Disentanglement  [[PDF](https://arxiv.org/abs/2406.12044),[Page]()]

[arxiv 2024.06] Not All Prompts Are Made Equal: Prompt-based Pruning of Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2406.12042)]

[arxiv 2024.06] Fine-tuning Diffusion Models for Enhancing Face Quality in Text-to-image Generation [[PDF](https://arxiv.org/abs/2406.17100)]

[arxiv 2024.07] PopAlign: Population-Level Alignment for Fair Text-to-Image Generation [[PDF](https://arxiv.org/abs/2406.19668)]

[arxiv 2024.07]  LLM4GEN: Leveraging Semantic Representation of LLMs for Text-to-Image Generation  [[PDF](https://arxiv.org/pdf/2407.00737),[Page](https://xiaobul.github.io/LLM4GEN/)]

[arxiv 2024.07]  Prompt Refinement with Image Pivot for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2407.00247),[Page]()]

[arxiv 2024.07] Improved Noise Schedule for Diffusion Training  [[PDF](https://arxiv.org/abs/2407.03297)]

[arxiv 2024.07]  No Training, No Problem: Rethinking Classifier-Free Guidance for Diffusion Models [[PDF](https://arxiv.org/abs/2407.02687)]

[arxiv 2024.07] Not All Noises Are Created Equally:Diffusion Noise Selection and Optimization  [[PDF](https://arxiv.org/abs/2407.14041)]

[arxiv 2024.07] GeoGuide: Geometric guidance of diffusion models [[PDF](https://arxiv.org/abs/2407.12889)]

[arxiv 2024.08]  Understanding the Local Geometry of Generative Model Manifolds [[PDF](https://arxiv.org/pdf/2408.08307)]

[arxiv 2024.08] Iterative Object Count Optimization for Text-to-image Diffusion Models [[PDF](https://arxiv.org/abs/2408.11721)]

[arxiv 2024.08] FRAP: Faithful and Realistic Text-to-Image Generation with Adaptive Prompt Weighting [[PDF](https://arxiv.org/abs/2408.11706)]

[arxiv 2024.08] Compress Guidance in Conditional Diffusion Sampling  [[PDF](https://arxiv.org/abs/2408.11194)]

[arxiv 2024.09]  Elucidating Optimal Reward-Diversity Tradeoffs in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2409.06493)]

[arxiv 2024.09] Generalizing Alignment Paradigm of Text-to-Image Generation with Preferences through f-divergence Minimization  [[PDF](https://arxiv.org/abs/2409.09774)]

[arxiv 2024.09]  Pixel-Space Post-Training of Latent Diffusion Models [[PDF](https://arxiv.org/pdf/2409.17565),[Page]()]

[arxiv 2024.09] Improvements to SDXL in NovelAI Diffusion V3 [[PDF](https://arxiv.org/abs/2409.15997)]

[arxiv 2024.10] Removing Distributional Discrepancies in Captions Improves Image-Text Alignment [[PDF](https://github.com/adobe-research/llava-score),[Page](https://yuheng-li.github.io/LLaVA-score/)]

[arxiv 2024.10] ComfyGen: Prompt-Adaptive Workflows for Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2410.01731),[Page](https://comfygen-paper.github.io/)]

[arxiv 2024.10] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think  [[PDF](https://arxiv.org/abs/2410.06940)]

[arxiv 2024.10] Decouple-Then-Merge: Towards Better Training for Diffusion Models  [[PDF](https://arxiv.org/abs/2410.06664)]

[arxiv 2024.10] Sparse Repellency for Shielded Generation in Text-to-image Diffusion Models  [[PDF](https://arxiv.org/abs/2410.06025),[Page]()]


[arxiv 2024.10] Training-free Diffusion Model Alignment with Sampling Demons  [[PDF](https://arxiv.org/abs/2410.05760)]

[arxiv 2024.10]  Diffusion Models Need Visual Priors for Image Generation [[PDF](https://arxiv.org/abs/2410.08531)]

[arxiv 2024.10] Improving Long-Text Alignment for Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2410.11817),[Page](https://github.com/luping-liu/LongAlign)]

[arxiv 2024.10] Dynamic Negative Guidance of Diffusion Models [[PDF](https://arxiv.org/abs/2410.14398)]

[arxiv 2024.10] GraspDiffusion: Synthesizing Realistic Whole-body Hand-Object Interaction [[PDF](https://arxiv.org/abs/2410.13911),[Page]()]

[arxiv 2024.10]  Progressive Compositionality In Text-to-Image Generative Models [[PDF](https://arxiv.org/abs/2410.16719),[Page](https://github.com/evansh666/EvoGen)]

[arxiv 2024.11]  HandCraft: Anatomically Correct Restoration of Malformed Hands in Diffusion Generated Images [[PDF](https://arxiv.org/abs/2411.04332),[Page](https://kfzyqin.github.io/handcraft/)]

[arxiv 2024.11] Improving image synthesis with diffusion-negative sampling  [[PDF](https://arxiv.org/abs/2411.05473)]

[arxiv 2024.11]  Token Merging for Training-Free Semantic Binding in Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2411.07132),[Page](https://github.com/hutaiHang/ToMe)]

[arxiv 2024.11] Noise Diffusion for Enhancing Semantic Faithfulness in Text-to-Image Synthesis  [[PDF](https://arxiv.org/abs/2411.16503),[Page](https://github.com/Bomingmiao/NoiseDiffusion)] ![Code](https://img.shields.io/github/stars/Bomingmiao/NoiseDiffusion?style=social&label=Star)

[arxiv 2024.11] Text Embedding is Not All You Need: Attention Control for Text-to-Image Semantic Alignment with Text Self-Attention Maps  [[PDF](https://arxiv.org/abs/2411.15236)]

[arxiv 2024.11] Relations, Negations, and Numbers: Looking for Logic in Generative Text-to-Image Models  [[PDF](https://arxiv.org/abs/2411.17066),[Page](https://github.com/ColinConwell/T2I-Probology)] ![Code](https://img.shields.io/github/stars/ColinConwell/T2I-Probology?style=social&label=Star)

[arxiv 2024.11] Contrastive CFG: Improving CFG in Diffusion Models by Contrasting Positive and Negative Concepts  [[PDF](https://arxiv.org/abs/2411.17077)] 

[arxiv 2024.12] Self-Cross Diffusion Guidance for Text-to-Image Synthesis of Similar Subjects  [[PDF](https://arxiv.org/pdf/2411.18936)]

[arxiv 2024.12] Enhancing Compositional Text-to-Image Generation with Reliable Random Seeds  [[PDF](https://arxiv.org/abs/2411.18810)]

[arxiv 2024.12]  Enhancing MMDiT-Based Text-to-Image Models for Similar Subject Generation [[PDF](https://arxiv.org/pdf/2411.18301),[Page](https://github.com/wtybest/EnMMDiT)] ![Code](https://img.shields.io/github/stars/wtybest/EnMMDiT?style=social&label=Star)

[arxiv 2024.12] Addressing Attribute Leakages in Diffusion-based Image Editing without Training  [[PDF](https://arxiv.org/abs/2412.04715)]

[arxiv 2024.12]  Learning Visual Generative Priors without Text [[PDF](https://arxiv.org/abs/2412.07767),[Page](https://xiaomabufei.github.io/lumos/)] ![Code](https://img.shields.io/github/stars/xiaomabufei/lumos?style=social&label=Star)

[arxiv 2024.12] FiVA: Fine-grained Visual Attribute Dataset for Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2412.07674),[Page](https://fiva-dataset.github.io/)] ![Code](https://img.shields.io/github/stars/wutong16/FiVA?style=social&label=Star)

[arxiv 2024.12]  Fast Prompt Alignment for Text-to-Image Generation [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)

[arxiv 2024.12] Context Canvas: Enhancing Text-to-Image Diffusion Models with Knowledge Graph-Based RAG  [[PDF](https://arxiv.org/abs/2412.09614),[Page](https://context-canvas.github.io/)] 

[arxiv 2024.12] CoMPaSS: Enhancing Spatial Understanding in Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2412.13195),[Page](https://github.com/blurgyy/CoMPaSS)] ![Code](https://img.shields.io/github/stars/blurgyy/CoMPaSS?style=social&label=Star)

[arxiv 2025.01] E2EDiff: Direct Mapping from Noise to Data for Enhanced Diffusion Models  [[PDF](https://arxiv.org/abs/2412.21044)] 

[arxiv 2025.01] Focus-N-Fix: Region-Aware Fine-Tuning for Text-to-Image Generation  [[PDF](https://arxiv.org/pdf/2501.06481)]

[arxiv 2025.03] T2ICount: Enhancing Cross-modal Understanding for Zero-Shot Counting  [[PDF](https://arxiv.org/abs/2502.20625),[Page](https://github.com/cha15yq/T2ICount)] ![Code](https://img.shields.io/github/stars/cha15yq/T2ICount?style=social&label=Star)

[arxiv 2025.03]  Investigating and Improving Counter-Stereotypical Action Relation in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/pdf/2503.10037)]

[arxiv 2025.04] ESPLoRA: Enhanced Spatial Precision with Low-Rank Adaption in Text-to-Image Diffusion Models for High-Definition Synthesis  [[PDF](https://arxiv.org/abs/2504.13745)]

[arxiv 2025.05]  VSC: Visual Search Compositional Text-to-Image Diffusion Model [[PDF](https://arxiv.org/abs/2505.01104)]

[arxiv 2025.05] DetailMaster: Can Your Text-to-Image Model Handle Long Prompts?  [[PDF](https://arxiv.org/abs/2505.16915)]

[arxiv 2025.05]  Harnessing Caption Detailness for Data-Efficient Text-to-Image Generation[[PDF](https://arxiv.org/abs/2505.15172)]

[arxiv 2025.06]  IMAGHarmony: Controllable Image Editing with Consistent Object Quantity and Layout [[PDF](https://arxiv.org/abs/2506.01949),[Page](https://github.com/muzishen/IMAGHarmony)] ![Code](https://img.shields.io/github/stars/muzishen/IMAGHarmony?style=social&label=Star)

[arxiv 2025.06]  TACA: Rethinking Cross-Modal Interaction in Multimodal Diffusion Transformers [[PDF](https://arxiv.org/pdf/2506.07986),[Page](https://github.com/Vchitect/TACA)] ![Code](https://img.shields.io/github/stars/Vchitect/TACA?style=social&label=Star)

[arxiv 2025.08] CountLoop: Iterative Agent Guided High Instance Image Generation  [[PDF](https://openreview.net/pdf?id=NZ0H1XtcZG),[Page](https://mondalanindya.github.io/CountLoop/)] ![Code](https://img.shields.io/github/stars/mondalanindya/CountLoop?style=social&label=Star)

[arxiv 2025.09]  Maestro: Self-Improving Text-to-Image Generation via Agent Orchestration [[PDF](https://arxiv.org/abs/2509.10704)]

[arxiv 2025.09]  Understand Before You Generate: Self-Guided Training for Autoregressive Image Generation [[PDF](https://arxiv.org/abs/2509.15185)]

[arxiv 2025.10]  Asynchronous Denoising Diffusion Models for Aligning Text-to-Image Generation [[PDF](https://arxiv.org/abs/2510.04504),[Page](https://github.com/hu-zijing/AsynDM)] ![Code](https://img.shields.io/github/stars/hu-zijing/AsynDM?style=social&label=Star)

[arxiv 2025.10]  Head-wise Adaptive Rotary Positional Encoding for Fine-Grained Image Generation [[PDF](https://arxiv.org/abs/2510.10489)]

[arxiv 2026.01] Agentic Retoucher for Text-To-Image Generation  [[PDF](https://arxiv.org/pdf/2601.02046)]

[arxiv 2026.03] Early Failure Detection and Intervention in Video Diffusion Models  [[PDF](https://arxiv.org/abs/2603.14320)]

[arxiv 2026.03] Semantic-Aware Prefix Learning for Token-Efficient Image Generation  [[PDF](https://arxiv.org/abs/2603.25249)]




## data augmentation
[arxiv 2025.03] How far can we go with ImageNet for Text-to-Image generation?  [[PDF](https://arxiv.org/abs/2502.21318),[Page](https://lucasdegeorge.github.io/projects/t2i_imagenet/)] ![Code](https://img.shields.io/github/stars/lucasdegeorge/T2I-ImageNet?style=social&label=Star)




## VAE /  tokenizer

[arxiv 2024.06] Scaling the Codebook Size of VQGAN to 100,000 with a Utilization Rate of 99%  [[PDF](https://arxiv.org/abs/2406.11837)]

[arxiv 2024.11]  Adaptive Length Image Tokenization via Recurrent Allocation [[PDF](https://arxiv.org/abs/2411.02393),[Page](https://github.com/ShivamDuggal4/adaptive-length-tokenizer)]

[arxiv 2025.01] CAT: Content-Adaptive Image Tokenization  [[PDF](https://arxiv.org/pdf/2501.03120)]

[arxiv 2025.01] One-D-Piece: Image Tokenizer Meets Quality-Controllable Compression  [[PDF](https://arxiv.org/abs/2501.10064),[Page](https://turingmotors.github.io/one-d-piece-tokenizer)] 

[arxiv 2025.02] Diffusion Autoencoders are Scalable Image Tokenizers  [[PDF](https://arxiv.org/abs/2501.18593),[Page](https://yinboc.github.io/dito/)] ![Code](https://img.shields.io/github/stars/yinboc/dito?style=social&label=Star)

[arxiv 2025.02]  Masked Autoencoders Are Effective Tokenizers for Diffusion Models [[PDF](https://arxiv.org/abs/2502.03444)]

[arxiv 2025.02] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation  [[PDF](https://arxiv.org/abs/2502.05178),[Page](https://nvlabs.github.io/QLIP/)] ![Code](https://img.shields.io/github/stars/NVlabs/QLIP?style=social&label=Star)

[arxiv 2025.03] DLF: Extreme Image Compression with Dual-generative Latent Fusion  [[PDF](https://arxiv.org/pdf/2503.01428)]

[arxiv 2025.03] FlowTok: Flowing Seamlessly Across Text and Image Tokens  [[PDF](https://arxiv.org/pdf/2503.10772),[Page](https://tacju.github.io/projects/flowtok.html)] ![Code](https://img.shields.io/github/stars/bytedance/1d-tokenizer?style=social&label=Star)

[arxiv 2025.03] Tokenize Image as a Set  [[PDF](https://arxiv.org/pdf/2503.16425),[Page](https://github.com/Gengzigang/TokenSet)] ![Code](https://img.shields.io/github/stars/Gengzigang/TokenSet?style=social&label=Star)

[arxiv 2025.04] GigaTok: Scaling Visual Tokenizers to 3 Billion Parameters for Autoregressive Image Generation  [[PDF](https://arxiv.org/abs/2504.08736),[Page](https://silentview.github.io/GigaTok/)] ![Code](https://img.shields.io/github/stars/SilentView/GigaTok?style=social&label=Star)

[arxiv 2025.05]  TokBench: Evaluating Your Visual Tokenizer before Visual Generation [[PDF](https://arxiv.org/pdf/2505.18142),[Page](https://wjf5203.github.io/TokBench/)] ![Code](https://img.shields.io/github/stars/wjf5203/TokBench?style=social&label=Star)

[arxiv 2025.06]  AliTok: Towards Sequence Modeling Alignment between Tokenizer and Autoregressive Model [[PDF](https://arxiv.org/pdf/2506.05289),[Page](https://github.com/ali-vilab/alitok)] ![Code](https://img.shields.io/github/stars/ali-vilab/alitok?style=social&label=Star)

[arxiv 2025.06] Highly Compressed Tokenizer Can Generate Without Training  [[PDF](https://arxiv.org/html/2506.08257v1),[Page](https://github.com/lukaslaobeyer/token-opt)] ![Code](https://img.shields.io/github/stars/lukaslaobeyer/token-opt?style=social&label=Star)

[arxiv 2025.06]  FlexTok: Resampling Images into 1D Token Sequences of Flexible Length [[PDF](https://arxiv.org/abs/2502.13967),[Page](https://github.com/apple/ml-flextok)] ![Code](https://img.shields.io/github/stars/apple/ml-flextok?style=social&label=Star)

[arxiv 2025.07]  Holistic Tokenizer for Autoregressive Image Generation [[PDF](https://arxiv.org/pdf/2507.02358),[Page](https://github.com/CVMI-Lab/Hita)] ![Code](https://img.shields.io/github/stars/CVMI-Lab/Hita?style=social&label=Star)

[arxiv 2025.07]  MGVQ: Could VQ-VAE Beat VAE? A Generalizable Tokenizer with Multi-group Quantization [[PDF](https://arxiv.org/abs/2507.07997),[Page](https://github.com/MKJia/MGVQ)] ![Code](https://img.shields.io/github/stars/MKJia/MGVQ?style=social&label=Star)

[arxiv 2025.07] Quantize-then-Rectify: Efficient VQ-VAE Training  [[PDF](https://arxiv.org/abs/2507.10547)]

[arxiv 2025.07] Latent Denoising Makes Good Visual Tokenizers  [[PDF](https://arxiv.org/abs/2507.15856),[Page](https://github.com/Jiawei-Yang/DeTok)] ![Code](https://img.shields.io/github/stars/Jiawei-Yang/DeTok?style=social&label=Star)

[arxiv 2025.07]  DC-Gen: Accelerating Diffusion Models with Compressed Latent Space [[PDF](https://arxiv.org/abs/2508.00413),[Page](https://github.com/dc-ai-projects/DC-Gen)] ![Code](https://img.shields.io/github/stars/dc-ai-projects/DC-Gen?style=social&label=Star)

[arxiv 2025.09]  Image Tokenizer Needs Post-Training [[PDF](https://arxiv.org/abs/2509.12474),[Page](https://qiuk2.github.io/works/RobusTok/index.html)] ![Code](https://img.shields.io/github/stars/qiuk2/RobusTok?style=social&label=Star)

[arxiv 2025.10] SSDD: Single-Step Diffusion Decoder for Efficient Image Tokenization  [[PDF](https://arxiv.org/abs/2510.04961),[Page](https://github.com/facebookresearch/SSDD)] ![Code](https://img.shields.io/github/stars/facebookresearch/SSDD?style=social&label=Star)

[arxiv 2025.12] Both Semantics and Reconstruction Matter: Making Representation Encoders Ready for Text-to-Image Generation and Editing  [[PDF](https://jshilong.github.io/PS-VAE-PAGE/),[Page](https://jshilong.github.io/PS-VAE-PAGE/)] 

[arxiv 2026.01] NativeTok: Native Visual Tokenization for Improved Image Generation  [[PDF](https://arxiv.org/abs/2601.22837),[Page](https://github.com/wangbei1/Nativetok)] ![Code](https://img.shields.io/github/stars/wangbei1/Nativetok?style=social&label=Star)

[arxiv 2026.03] CaTok: Taming Mean Flows for One-Dimensional Causal Image Tokenization  [[PDF](https://arxiv.org/abs/2603.06449),[Page](https://sharelab-sii.github.io/catok-web/)] ![Code](https://img.shields.io/github/stars/ShareLab-SII/CaTok?style=social&label=Star)

[arxiv 2026.03] RPiAE: A Representation-Pivoted Autoencoder Enhancing Both Image Generation and Editing  [[PDF](https://arxiv.org/abs/2603.19206),[Page](https://arthuring.github.io/RPiAE-page/)]

[arxiv 2026.03] End-to-End Training for Unified Tokenization and Latent Denoising  [[PDF](https://arxiv.org/abs/2603.22283)]

[arxiv 2026.03] DA-VAE: Plug-in Latent Compression for Diffusion via Detail Alignment  [[PDF](https://arxiv.org/abs/2603.22125),[Page](https://caixin98.github.io/davae/#)]




## autoregressive

[arxiv 2024.11] Autoregressive Models in Vision: A Survey  [[PDF](https://arxiv.org/abs/2411.05902),[Page](https://github.com/ChaofanTao/Autoregressive-Models-in-Vision-Survey)]

[arxiv 2024.10] ControlAR: Controllable Image Generation with Autoregressive Models  [[PDF](https://arxiv.org/abs/2410.02705),[Page](https://github.com/hustvl/ControlAR)]

[arxiv 2024.10] LANTERN: Accelerating Visual Autoregressive Models with Relaxed Speculative Decoding  [[PDF](https://arxiv.org/abs/2410.03355)]

[arxiv 2024.10] CAR: Controllable Autoregressive Modeling for Visual Generation  [[PDF](https://arxiv.org/abs/2410.04671),[Page](https://github.com/MiracleDance/CAR)]

[arxiv 2024.10]  Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective [[PDF](https://arxiv.org/abs/2410.12490),[Page](https://github.com/DAMO-NLP-SG/DiGIT)]

[arxiv 2024.10] LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior  [[PDF](https://arxiv.org/abs/2410.21264),[Page](https://hywang66.github.io/larp/)]

[arxiv 2024.11] Randomized Autoregressive Visual Generation  [[PDF](https://arxiv.org/abs/2411.00776),[Page](https://yucornetto.github.io/projects/rar.html)]

[arxiv 2024.11] A Survey on Vision Autoregressive Model  [[PDF](https://arxiv.org/abs/2411.08666)]

[arxiv 2024.11] M-VAR: Decoupled Scale-wise Autoregressive Modeling for High-Quality Image Generation  [[PDF](https://arxiv.org/abs/2411.10433),[Page](https://github.com/OliverRensu/MVAR)]

[arxiv 2024.11] LaVin-DiT: Large Vision Diffusion Transformer  [[PDF](https://arxiv.org/abs/2411.11505)]

[arxiv 2024.11] Scalable Autoregressive Monocular Depth Estimation  [[PDF](https://arxiv.org/abs/2411.11361)]

[arxiv 2024.11] Continuous Speculative Decoding for Autoregressive Image Generation [[PDF](https://arxiv.org/abs/2411.11925),[Page](https://github.com/MarkXCloud/CSpD)]

[arxiv 2024.11]  Sample- and Parameter-Efficient Auto-Regressive Image Models [[PDF](https://arxiv.org/abs/2411.15648),[Page](https://github.com/elad-amrani/xtra)] ![Code](https://img.shields.io/github/stars/elad-amrani/xtra?style=social&label=Star)

[arxiv 2024.11] LiteVAR: Compressing Visual Autoregressive Modelling with Efficient Attention and Quantization  [[PDF](https://arxiv.org/abs/2411.17178)] 

[arxiv 2024.12]  CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient [[PDF](https://arxiv.org/abs/2411.17787),[Page](https://czg1225.github.io/CoDe_page/)] ![Code](https://img.shields.io/github/stars/czg1225/CoDe?style=social&label=Star)

[arxiv 2024.12] RandAR: Decoder-only Autoregressive Visual Generation in Random Orders  [[PDF](https://arxiv.org/abs/2412.01827),[Page](https://rand-ar.github.io/)] ![Code](https://img.shields.io/github/stars/ziqipang/RandAR?style=social&label=Star)

[arxiv 2024.12] Switti: Designing Scale-Wise Transformers for Text-to-Image Synthesis  [[PDF](https://arxiv.org/pdf/2412.01819),[Page](https://yandex-research.github.io/switti/)] ![Code](https://img.shields.io/github/stars/yandex-research/switti?style=social&label=Star)

[arxiv 2024.12] Taming Scalable Visual Tokenizer for Autoregressive Image Generation  [[PDF](https://arxiv.org/abs/2412.02692),[Page](https://github.com/TencentARC/SEED-Voken)] 

[arxiv 2024.12] XQ-GAN: An Open-source Image Tokenization Framework for Autoregressive Generation  [[PDF](https://arxiv.org/abs/2412.01762),[Page](https://github.com/lxa9867/ImageFolder)] ![Code](https://img.shields.io/github/stars/lxa9867/ImageFolder?style=social&label=Star)

[arxiv 2024.12]  TinyFusion: Diffusion Transformers Learned Shallow [[PDF](https://arxiv.org/abs/2412.01199),[Page](https://github.com/VainF/TinyFusion)] ![Code](https://img.shields.io/github/stars/VainF/TinyFusion?style=social&label=Star)

[arxiv 2024.12] Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis  [[PDF](https://arxiv.org/abs/2412.04431),[Page](https://github.com/FoundationVision/Infinity)] ![Code](https://img.shields.io/github/stars/FoundationVision/Infinity?style=social&label=Star)

[arxiv 2024.12] ZipAR: Accelerating Autoregressive Image Generation through Spatial Locality  [[PDF](https://arxiv.org/abs/2412.04062)]

[arxiv 2024.12] ACDiT: Interpolating Autoregressive Conditional Modeling and Diffusion Transformer  [[PDF](https://arxiv.org/abs/2412.07720)]

[arxiv 2024.12]  FlowAR: Scale-wise Autoregressive Image Generation Meets Flow Matching [[PDF](https://arxiv.org/abs/2412.15205),[Page](https://github.com/OliverRensu/FlowAR)] ![Code](https://img.shields.io/github/stars/OliverRensu/FlowAR?style=social&label=Star)

[arxiv 2024.12]  Parallelized Autoregressive Visual Generation [[PDF](https://epiphqny.github.io/PAR-project/#),[Page](https://epiphqny.github.io/PAR-project/)] ![Code](https://img.shields.io/github/stars/Epiphqny/PAR?style=social&label=Star)

[arxiv 2024.12] RDPM: Solve Diffusion Probabilistic Models via Recurrent Token Prediction  [[PDF](https://arxiv.org/pdf/2412.18390)]

[arxiv 2025.02] Beyond Next-Token: Next-X Prediction for Autoregressive Visual Generation  [[PDF](https://arxiv.org/abs/2502.20388),[Page](https://oliverrensu.github.io/project/xAR/)] ![Code](https://img.shields.io/github/stars/OliverRensu/xAR?style=social&label=Star)

[arxiv 2025.03]  NFIG: Autoregressive Image Generation with Next-Frequency Prediction [[PDF](https://arxiv.org/abs/2503.07076)]

[arxiv 2025.03] Autoregressive Image Generation with Randomized Parallel Decoding  [[PDF](https://arxiv.org/abs/2503.10568),[Page](https://github.com/hp-l33/ARPG)] ![Code](https://img.shields.io/github/stars/hp-l33/ARPG?style=social&label=Star)

[arxiv 2025.03]  Direction-Aware Diagonal Autoregressive Image Generation [[PDF](https://arxiv.org/pdf/2503.11129)]

[arxiv 2025.03] TokenBridge: Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation  [[PDF](https://arxiv.org/abs/2503.16430),[Page](https://yuqingwang1029.github.io/TokenBridge/)] ![Code](https://img.shields.io/github/stars/yuqingwang1029/TokenBridge?style=social&label=Star)

[arxiv 2025.03]  Improving Autoregressive Image Generation through Coarse-to-Fine Token Prediction [[PDF](https://arxiv.org/abs/2503.16194)]

[arxiv 2025.04]  FastVAR: Linear Visual Autoregressive Modeling via Cached Token Pruning [[PDF](https://arxiv.org/abs/2503.23367)]

[arxiv 2025.04] Fine-Tuning Visual Autoregressive Models for Subject-Driven Generation  [[PDF](https://arxiv.org/abs/2504.02612)]

[arxiv 2025.04]  SimpleAR: Pushing the Frontier of Autoregressive Visual Generation through Pretraining, SFT, and RL [[PDF](https://arxiv.org/abs/2504.11455),[Page](https://github.com/wdrink/SimpleAR)] ![Code](https://img.shields.io/github/stars/wdrink/SimpleAR?style=social&label=Star)

[arxiv 2025.04] Head-Aware KV Cache Compression for Efficient Visual Autoregressive Modeling  [[PDF](https://arxiv.org/abs/2504.09261)]

[arxiv 2025.04] Token-Shuffle: Towards High-Resolution Image Generation with Autoregressive Models  [[PDF](https://arxiv.org/abs/2504.17789)]

[arxiv 2025.05]  TensorAR: Refinement is All You Need in Autoregressive Image Generation [[PDF](https://arxiv.org/pdf/2505.16324)]

[arxiv 2025.06] HMAR: Efficient Hierarchical Masked Auto-Regressive Image Generation  [[PDF](https://arxiv.org/abs/2506.04421),[Page](https://research.nvidia.com/labs/dir/hmar/)] 

[arxiv 2025.06] SpectralAR: Spectral Autoregressive Visual Generation  [[PDF](https://arxiv.org/abs/2506.10962),[Page](https://huang-yh.github.io/spectralar/)] ![Code](https://img.shields.io/github/stars/huang-yh/SpectralAR?style=social&label=Star)

[arxiv 2025.08]  NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale [[PDF](https://arxiv.org/abs/2508.10711),[Page](https://github.com/stepfun-ai/NextStep-1)] ![Code](https://img.shields.io/github/stars/stepfun-ai/NextStep-1?style=social&label=Star)

[arxiv 2025.10] Go with Your Gut: Scaling Confidence for Autoregressive Image Generation  [[PDF](https://arxiv.org/abs/2509.26376),[Page](https://github.com/EnVision-Research/ScalingAR)] ![Code](https://img.shields.io/github/stars/EnVision-Research/ScalingAR?style=social&label=Star)

[arxiv 2025.10] FARMER: Flow AutoRegressive Transformer over Pixels  [[PDF](https://arxiv.org/abs/2510.23588)]

[arxiv 2025.11] Diversity Has Always Been There in Your Visual Autoregressive Models  [[PDF](https://arxiv.org/abs/2511.17074),[Page](https://github.com/wangtong627/DiverseVAR)] ![Code](https://img.shields.io/github/stars/wangtong627/DiverseVAR?style=social&label=Star)

[arxiv 2025.12] Progress by Pieces: Test-Time Scaling for Autoregressive Image Generation  [[PDF](https://arxiv.org/abs/2511.21185),[Page](https://grid-ar.github.io/)] 

[arxiv 2026.02] Autoregressive Image Generation with Masked Bit Modeling  [[PDF](https://arxiv.org/abs/2602.09024),[Page](https://bar-gen.github.io/)] ![Code](https://img.shields.io/github/stars/amazon-far/BAR?style=social&label=Star)

[arxiv 2026.02]  BitDance: Scaling Autoregressive Generative Models with Binary Tokens [[PDF](https://arxiv.org/abs/2602.14041),[Page](https://bitdance.csuhan.com/)] ![Code](https://img.shields.io/github/stars/shallowdream204/BitDance?style=social&label=Star)



## autoregressive improvement
[arxiv 2025.10] REAR: Rethinking Visual Autoregressive Models via Generator-Tokenizer Consistency Regularization  [[PDF](https://arxiv.org/abs/2510.04450)]

[arxiv 2025.12]  DiverseVAR: Balancing Diversity and Quality of Next-Scale Visual Autoregressive Models [[PDF](https://arxiv.org/pdf/2511.21415)]




## autoregressive Editing 
[arxiv 2025.04] Training-Free Text-Guided Image Editing with Visual Autoregressive Model  [[PDF](https://arxiv.org/abs/2503.23897),[Page](https://github.com/wyf0912/AREdit)] ![Code](https://img.shields.io/github/stars/wyf0912/AREdit?style=social&label=Star)

[arxiv 2025.04] Anchor Token Matching: Implicit Structure Locking for Training-free AR Image Editing  [[PDF](https://arxiv.org/abs/2504.10434),[Page](https://github.com/hutaiHang/ATM)] ![Code](https://img.shields.io/github/stars/hutaiHang/ATM?style=social&label=Star)

[arxiv 2025.05] Context-Aware Autoregressive Models for Multi-Conditional Image Generation  [[PDF](https://arxiv.org/abs/2505.12274)]

[arxiv 2025.08]  Visual Autoregressive Modeling for Instruction-Guided Image Editing [[PDF](https://arxiv.org/abs/2508.15772),[Page](https://github.com/HiDream-ai/VAREdit)] ![Code](https://img.shields.io/github/stars/HiDream-ai/VAREdit?style=social&label=Star)

[arxiv 2025.09]  Discrete Noise Inversion for Next-scale Autoregressive Text-based Image Editing [[PDF](https://arxiv.org/abs/2509.01984)]



## autoregressive concept
[arxiv 2025.04] Personalized Text-to-Image Generation with Auto-Regressive Models  [[PDF](https://arxiv.org/abs/2504.13162),[Page](https://github.com/KaiyueSun98/T2I-Personalization-with-AR)] ![Code](https://img.shields.io/github/stars/KaiyueSun98/T2I-Personalization-with-AR?style=social&label=Star)

[arxiv 2025.08]  CoAR: Concept Injection into Autoregressive Models for Personalized Text-to-Image Generation [[PDF](https://arxiv.org/pdf/2508.07341),[Page](https://github.com/KZFkzf/CoAR)] ![Code](https://img.shields.io/github/stars/KZFkzf/CoAR?style=social&label=Star)

[arxiv 2025.10]  EchoGen: Generating Visual Echoes in Any Scene via Feed-Forward Subject-Driven Auto-Regressive Model [[PDF](https://arxiv.org/abs/2509.26127)]

[arxiv 2025.10] TokenAR: Multiple Subject Generation via Autoregressive Token-level enhancement  [[PDF](https://arxiv.org/abs/2510.16332),[Page](https://github.com/lyrig/TokenAR)] ![Code](https://img.shields.io/github/stars/lyrig/TokenAR?style=social&label=Star)

[arxiv 2026.01]  DreamVAR: Taming Reinforced Visual Autoregressive Model for High-Fidelity Subject-Driven Image Generation [[PDF](https://arxiv.org/abs/2601.22507)]





## autoregressive speed
[arxiv 2025.04]  Fast Autoregressive Models for Continuous Latent Generation [[PDF](https://arxiv.org/pdf/2504.18391)]

[arxiv 2025.06] SkipVAR: Accelerating Visual Autoregressive Modeling via Adaptive Frequency-Aware Skipping  [[PDF](https://arxiv.org/abs/2506.08908),[Page](https://github.com/fakerone-li/SkipVAR)] ![Code](https://img.shields.io/github/stars/fakerone-li/SkipVAR?style=social&label=Star)

[arxiv 2025.07] Locality-aware Parallel Decoding for Efficient Autoregressive Image Generation  [[PDF](https://arxiv.org/abs/2507.01957),[Page](https://github.com/mit-han-lab/lpd)] ![Code](https://img.shields.io/github/stars/mit-han-lab/lpd?style=social&label=Star)


## autoregressive continuous
[arxiv 2025.05]  Continuous Visual Autoregressive Generation via Score Maximization [[PDF](https://arxiv.org/pdf/2505.07812)]




## autoregressive apps
[arxiv 2025.07]  A Training-Free Style-Personalization via Scale-wise Autoregressive Model [[PDF](https://arxiv.org/pdf/2507.04482)]

[arxiv 2025.07]  CSD-VAR: Content-Style Decomposition in Visual Autoregressive Models [[PDF](https://arxiv.org/pdf/2507.13984)]



## autoregressive cot
[arxiv 2025.10] Improving Chain-of-Thought Efficiency for Autoregressive Image Generation  [[PDF](https://arxiv.org/abs/2510.05593)]




## autoregressive feedback
[arxiv 2025.08] AR-GRPO: Training Autoregressive Image Generation Models via Reinforcement Learning [[PDF](https://arxiv.org/pdf/2508.06924),[Page](https://github.com/Kwai-Klear/AR-GRPO)] ![Code](https://img.shields.io/github/stars/Kwai-Klear/AR-GRPO?style=social&label=Star)




## Distill Diffusion Model 
[arxiv 2024.05] Distilling Diffusion Models into Conditional GANs [[PDF](https://arxiv.org/abs/2405.05967),[Page](https://mingukkang.github.io/Diffusion2GAN/)]

[arxiv 2024.06] Plug-and-Play Diffusion Distillation [[PDF](https://arxiv.org/abs/2406.01954)]

[arxiv 2024.10] EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models  [[PDF](https://arxiv.org/abs/2410.07133)]

[arxiv 2024.10] DDIL: Improved Diffusion Distillation With Imitation Learning [[PDF](https://arxiv.org/abs/2410.11971)]


[arxiv 2025.03] Scale-wise Distillation of Diffusion Models  [[PDF](https://arxiv.org/abs/2503.16397),[Page](https://yandex-research.github.io/swd/)] ![Code](https://img.shields.io/github/stars/yandex-research/swd?style=social&label=Star)

[arxiv 2025.04]  Autoregressive Distillation of Diffusion Transformers [[PDF](https://arxiv.org/abs/2504.11295),[Page](https://github.com/alsdudrla10/ARD)] ![Code](https://img.shields.io/github/stars/alsdudrla10/ARD?style=social&label=Star)

[arxiv 2025.08] Echo-4o: Harnessing the Power of GPT-4o Synthetic Images for Improved Image Generation  [[PDF](https://arxiv.org/abs/2508.09987),[Page](https://github.com/yejy53/Echo-4o)] ![Code](https://img.shields.io/github/stars/yejy53/Echo-4o?style=social&label=Star)





## Try-on 
[arxiv 2024.03] Time-Efficient and Identity-Consistent Virtual Try-On Using A Variant of Altered Diffusion Models [[PDF](https://arxiv.org/abs/2403.07371)]

[arxiv 2024.03] Wear-Any-Way: Manipulable Virtual Try-on via Sparse Correspondence Alignment [[PDF](https://arxiv.org/abs/2403.12965),[Page](https://mengtingchen.github.io/wear-any-way-page/)]

[arxiv 2024.04] Texture-Preserving Diffusion Models for High-Fidelity Virtual Try-On [[PDF](https://arxiv.org/abs/2404.01089)]

[arxiv 2024.04] TryOn-Adapter: Efficient Fine-Grained Clothing Identity Adaptation for High-Fidelity Virtual Try-On [[PDF](https://arxiv.org/abs/2404.00878),[Page](https://github.com/jiazheng-xing/TryOn-Adapter)]

[arxiv 2024.04] FLDM-VTON: Faithful Latent Diffusion Model for Virtual Try-on [[PDF](https://arxiv.org/abs/2404.14162)]

[arxiv 2024.03] Improving Diffusion Models for Authentic Virtual Try-on in the Wild [[PDF](https://arxiv.org/abs/2403.05139),[Page](https://idm-vton.github.io/)]

[arxiv 2024.04] MV-VTON: Multi-View Virtual Try-On with Diffusion Models [[PDF](https://arxiv.org/abs/2404.17364)]

[arxiv 2024.05] AnyFit: Controllable Virtual Try-on for Any Combination of Attire Across Any Scenario [[PDF](https://arxiv.org/abs/2405.18172),[Page](https://colorful-liyu.github.io/anyfit-page/)]

[arxiv 2024.06]  GraVITON: Graph based garment warping with attention guided inversion for Virtual-tryon [[PDF](https://arxiv.org/abs/2406.02184)]

[arxiv 2024.06] M&M VTO: Multi-Garment Virtual Try-On and Editing [[PDF](https://arxiv.org/abs/2406.04542),[Page](https://mmvto.github.io/)]

[arxiv 2024.06]Self-Supervised Vision Transformer for Enhanced Virtual Clothes Try-On [[PDF](https://arxiv.org/abs/2406.10539)]

[arxiv 2024.06] MaX4Zero: Masked Extended Attention for Zero-Shot Virtual Try-On In The Wild  [[PDF](https://nadavorzech.github.io/max4zero.github.io/),[Page](https://nadavorzech.github.io/max4zero.github.io/)]

[arxiv 2024.07]  D4-VTON: Dynamic Semantics Disentangling for Differential Diffusion based Virtual Try-On [[PDF](https://arxiv.org/abs/2407.15111),[Page](https://github.com/Jerome-Young/D4-VTON)]

[arxiv 2024.07] DreamVTON: Customizing 3D Virtual Try-on with Personalized Diffusion Models  [[PDF](https://arxiv.org/abs/2407.16511)]

[arxiv 2024.07] OutfitAnyone: Ultra-high Quality Virtual Try-On for Any Clothing and Any Person [[PDF](https://arxiv.org/abs/2407.16224),[Page](https://humanaigc.github.io/outfit-anyone/)]

[arxiv 2024.07] CatVTON: Concatenation Is All You Need for Virtual Try-On with Diffusion Models  [[PDF](https://arxiv.org/abs/2407.15886),[Page](https://github.com/Zheng-Chong/CatVTON)]

[arxiv 2024.08] BooW-VTON: Boosting In-the-Wild Virtual Try-On via Mask-Free Pseudo Data Training  [[PDF](https://arxiv.org/abs/2408.06047),[Page](https://github.com/little-misfit/BooW-VTON)]

[arxiv 2024.09] Improving Virtual Try-On with Garment-focused Diffusion Models  [[PDF](https://arxiv.org/abs/2409.08258),[Page](https://github.com/siqi0905/GarDiff/tree/master)]

[arxiv 2024.09] AnyLogo: Symbiotic Subject-Driven Diffusion System with Gemini Status  [[PDF](https://arxiv.org/abs/2409.17740)]

[arxiv 2024.10] GS-VTON: Controllable 3D Virtual Try-on with Gaussian Splatting [[PDF](https://arxiv.org/abs/2410.05259),[Page](https://yukangcao.github.io/GS-VTON/)]

[arxiv 2024.11]  Try-On-Adapter: A Simple and Flexible Try-On Paradigm [[PDF](https://arxiv.org/abs/2411.10187)]

[arxiv 2024.11]  FitDiT: Advancing the Authentic Garment Details for High-fidelity Virtual Try-on [[PDF](https://arxiv.org/abs/2411.10499),[Page](https://byjiang.com/FitDiT/)]

[arxiv 2024.11] TED-VITON: Transformer-Empowered Diffusion Models for Virtual Try-On  [[PDF](https://arxiv.org/abs/2411.17017)]

[arxiv 2024.12]  TryOffDiff: Virtual-Try-Off via High-Fidelity Garment Reconstruction using Diffusion Models [[PDF](https://arxiv.org/abs/2411.18350),[Page](https://rizavelioglu.github.io/tryoffdiff/)] ![Code](https://img.shields.io/github/stars/rizavelioglu/tryoffdiff?style=social&label=Star)

[arxiv 2024.12]  AnyDressing: Customizable Multi-Garment Virtual Dressing via Latent Diffusion Models [[PDF](https://arxiv.org/abs/2412.04146),[Page](https://crayon-shinchan.github.io/AnyDressing/)] ![Code](https://img.shields.io/github/stars/Crayon-Shinchan/AnyDressing?style=social&label=Star)

[arxiv 2024.12] PEMF-VVTO: Point-Enhanced Video Virtual Try-on via Mask-free Paradigm  [[PDF](https://arxiv.org/pdf/2412.03021)]

[arxiv 2024.12]  Leffa: Learning Flow Fields in Attention for Controllable Person Image Generation [[PDF](https://arxiv.org/abs/2412.08486),[Page](https://github.com/franciszzj/Leffa)] ![Code](https://img.shields.io/github/stars/franciszzj/Leffa?style=social&label=Star)

[arxiv 2024.12]  SwiftTry: Fast and Consistent Video Virtual Try-On with Diffusion Models [[PDF](https://arxiv.org/abs/2412.10178)]

[arxiv 2024.12] Dynamic Try-On: Taming Video Virtual Try-on with Dynamic Attention Mechanism  [[PDF](https://arxiv.org/abs/2412.09822),[Page](https://zhengjun-ai.github.io/dynamic-tryon-page/)] 

[arxiv 2024.12] Learning Implicit Features with Flow Infused Attention for Realistic Virtual Try-On  [[PDF](https://arxiv.org/abs/2412.11435)]

[arxiv 2024.12] FashionComposer: Compositional Fashion Image Generation  [[PDF](https://arxiv.org/abs/2412.14168),[Page](https://sihuiji.github.io/FashionComposer-Page/)] ![Code](https://img.shields.io/github/stars/SihuiJi/FashionComposer?style=social&label=Star)

[arxiv 2024.12] DiffusionTrend: A Minimalist Approach to Virtual Fashion Try-On  [[PDF](https://arxiv.org/abs/2412.14465)]

[arxiv 2024.12] PromptDresser: Improving the Quality and Controllability of Virtual Try-On via Generative Textual Prompt and Prompt-aware Mask  [[PDF](https://arxiv.org/abs/2412.16978),[Page](https://github.com/rlawjdghek/PromptDresser)] ![Code](https://img.shields.io/github/stars/rlawjdghek/PromptDresser?style=social&label=Star)

[arxiv 2024.12] Fashionability-Enhancing Outfit Image Editing with Conditional Diffusion Model  [[PDF](https://arxiv.org/pdf/2412.18421)]

[arxiv 2025.01] MC-VTON: Minimal Control Virtual Try-On Diffusion Transformer [[PDF](https://arxiv.org/abs/2501.03630)]

[arxiv 2025.01] Enhancing Virtual Try-On with Synthetic Pairs and Error-Aware Noise Scheduling  [[PDF](https://arxiv.org/abs/2501.04666)]

[arxiv 2025.01]  1-2-1: Renaissance of Single-Network Paradigm for Virtual Try-On [[PDF](https://arxiv.org/abs/2501.05369),[Page](https://ningshuliang.github.io/2023/Arxiv/index.html)] ![Code](https://img.shields.io/github/stars/ningshuliang/1-2-1-MNVTON?style=social&label=Star)

[arxiv 2025.02] MFP-VTON: Enhancing Mask-Free Person-to-Person Virtual Try-On via Diffusion Transformer  [[PDF](https://arxiv.org/pdf/2502.01626)]

[arxiv 2025.02] TRUEPOSE: Human-Parsing-guided Attention Diffusion for Full-ID Preserving Pose Transfer  [[PDF](https://arxiv.org/pdf/2502.03426)]

[arxiv 2025.02] CrossVTON: Mimicking the Logic Reasoning on Cross-category Virtual Try-on guided by Tri-zone Priors  [[PDF](https://arxiv.org/pdf/2502.14373)]

[arxiv 2025.03] MF-VITON: High-Fidelity Mask-Free Virtual Try-On with Minimal Input  [[PDF](https://arxiv.org/pdf/2503.08650),[Page](https://zhenchenwan.github.io/MF-VITON/)] ![Code](https://img.shields.io/github/stars/ZhenchenWan/MF-VITON-High-Fidelity-Mask-Free-Virtual-Try-On-with-Minimal-Input?style=social&label=Star)

[arxiv 2025.03] Shining Yourself: High-Fidelity Ornaments Virtual Try-on with Diffusion Model  [[PDF](https://arxiv.org/pdf/2503.16065),[Page](https://shiningyourself.github.io/)] 

[arxiv 2025.03] Multi-focal Conditioned Latent Diffusion for Person Image Synthesis  [[PDF](https://arxiv.org/pdf/2503.15686),[Page](https://github.com/jqliu09/mcld)] ![Code](https://img.shields.io/github/stars/jqliu09/mcld?style=social&label=Star)


[arxiv 2025.04] 3DV-TON: Textured 3D-Guided Consistent Video Try-on via Diffusion Models  [[PDF](https://arxiv.org/abs/2504.17414),[Page](https://2y7c3.github.io/3DV-TON/)]

[arxiv 2025.05]  Pursuing Temporal-Consistent Video Virtual Try-On via Dynamic Pose Interaction [[PDF](https://arxiv.org/abs/2505.16980)]

[arxiv 2025.06] ChronoTailor: Harnessing Attention Guidance for Fine-Grained Video Virtual Try-On  [[PDF](https://arxiv.org/abs/2506.05858)]

[arxiv 2025.07] OmniVTON: Training-Free Universal Virtual Try-On  [[PDF](https://arxiv.org/abs/2507.15037),[Page](https://github.com/Jerome-Young/OmniVTON)] 

[arxiv 2025.07]  FW-VTON: Flattening-and-Warping for Person-to-Person Virtual Try-on [[PDF](https://arxiv.org/pdf/2507.16010)]

[arxiv 2025.08] One Model For All: Partial Diffusion for Unified Try-On and Try-Off in Any Pose  [[PDF](https://arxiv.org/abs/2508.04559),[Page](https://onemodelforall.github.io/)]

[arxiv 2025.08] MuGa-VTON: Multi-Garment Virtual Try-On via Diffusion Transformers with Prompt Customization  [[PDF](https://arxiv.org/pdf/2508.08488)]

[arxiv 2025.08] DualFit: A Two-Stage Virtual Try-On via Warping and Synthesis  [[PDF](https://arxiv.org/abs/2508.12131)]

[arxiv 2025.08]  OmniTry: Virtual Try-On Anything without Masks [[PDF](https://arxiv.org/abs/2508.13632),[Page](https://omnitry.github.io/)] ![Code](https://img.shields.io/github/stars/Kunbyte-AI/OmniTry?style=social&label=Star)

[arxiv 2025.08] JCo-MVTON: Jointly Controllable Multi-Modal Diffusion Transformer for Mask-Free Virtual Try-on  [[PDF](https://arxiv.org/abs/2508.17614),[Page](https://github.com/damo-cv/JCo-MVTON)] ![Code](https://img.shields.io/github/stars/damo-cv/JCo-MVTON?style=social&label=Star)

[arxiv 2025.08] FastFit: Accelerating Multi-Reference Virtual Try-On via Cacheable Diffusion Models  [[PDF](https://arxiv.org/abs/2508.20586),[Page](https://github.com/Zheng-Chong/FastFit)] ![Code](https://img.shields.io/github/stars/Zheng-Chong/FastFit?style=social&label=Star)

[arxiv 2025.08]  Dress&Dance: Dress up and Dance as You Like It [[PDF](https://arxiv.org/abs/2508.21070),[Page](https://immortalco.github.io/DressAndDance/)] 

[arxiv 2025.09] Virtual Fitting Room: Generating Arbitrarily Long Videos of Virtual Try-On from a Single Image -- Technical Preview  [[PDF](https://arxiv.org/abs/2509.04450),[Page](https://immortalco.github.io/VirtualFittingRoom/)] 

[arxiv 2025.09] HoloGarment: 360° Novel View Synthesis of In-the-Wild Garments [[PDF](https://arxiv.org/abs/2509.12187),[Page](https://johannakarras.github.io/HoloGarment/)]

[arxiv 2025.09] Efficient Encoder-Free Pose Conditioning and Pose Control for Virtual Try-On  [[PDF](https://arxiv.org/abs/2509.20343),[Page](https://pose-vton.github.io/vto-pose-conditioning/)] 

[arxiv 2025.10]  AvatarVTON: 4D Virtual Try-On for Animatable Avatars [[PDF](https://arxiv.org/abs/2510.04822)]

[arxiv 2025.10]  DiT-VTON: Diffusion Transformer Framework for Unified Multi-Category Virtual Try-On and Virtual Try-All with Integrated Image Editing [[PDF](https://arxiv.org/abs/2510.04797)]

[arxiv 2025.12] FitControler: Toward Fit-Aware Virtual Try-On  [[PDF](https://arxiv.org/pdf/2512.24016)]

[arxiv 2026.03] MOBILE-VTON: High-Fidelity On-Device Virtual Try-On  [[PDF](https://arxiv.org/pdf/2603.00947)]

[arxiv 2026.03] Garments2Look: A Multi-Reference Dataset for High-Fidelity Outfit-Level Virtual Try-On with Clothing and Accessories  [[PDF](https://arxiv.org/abs/2603.14153),[Page](https://artmesciencelab.github.io/Garments2Look)]

[arxiv 2026.03] PROMO: Promptable Outfitting for Efficient High-Fidelity Virtual Try-On  [[PDF](https://arxiv.org/abs/2603.11675)]

[arxiv 2026.03] VTEdit-Bench: A Comprehensive Benchmark for Multi-Reference Image Editing Models in Virtual Try-On  [[PDF](https://arxiv.org/abs/2603.11734)]

[arxiv 2026.03] OmniDiT: Extending Diffusion Transformer to Omni-VTON Framework [[PDF](https://arxiv.org/abs/2603.19643)]

[arxiv 2026.03] Dress-ED: Instruction-Guided Editing for Virtual Try-On and Try-Off  [[PDF](https://arxiv.org/abs/2603.22607)]



## Model Adaptation / Merging
[arxiv 2023.12]X-Adapter: Adding Universal Compatibility of Plugins for Upgraded Diffusion Model [[PDF](https://arxiv.org/abs/2312.02238),[Page](https://showlab.github.io/X-Adapter/)]

[arxiv 2024.10]  Model merging with SVD to tie the Knots [[PDF](https://arxiv.org/abs/2410.19735),[Page](https://github.com/gstoica27/KnOTS)]






## Text 
[arxiv 2023.12]UDiffText: A Unified Framework for High-quality Text Synthesis in Arbitrary Images via Character-aware Diffusion Models [[PDF](https://arxiv.org/abs/2312.04884)]

[arxiv 2023.12]Brush Your Text: Synthesize Any Scene Text on Images via Diffusion Model [[PDF](https://arxiv.org/abs/2312.12232)]

[arxiv 2024.04]Glyph-ByT5: A Customized Text Encoder for Accurate Visual Text Rendering [[PDF](https://arxiv.org/abs/2403.09622),[Page](https://glyph-byt5.github.io/)]

[arxiv 2024.05] CustomText: Customized Textual Image Generation using Diffusion Models [[PDF](https://arxiv.org/abs/2405.12531)]

[arxiv 2024.06] SceneTextGen: Layout-Agnostic Scene Text Image Synthesis with Diffusion Models [[PDF](https://arxiv.org/abs/2406.01062)]

[arxiv 2024.06] FontStudio: Shape-Adaptive Diffusion Model for Coherent and Consistent Font Effect Generation [[PDF](https://arxiv.org/abs/2406.08392),[Page](https://font-studio.github.io/)]

[arxiv 2024.09] DiffusionPen: Towards Controlling the Style of Handwritten Text Generation  [[PDF](https://arxiv.org/abs/2409.06065),[Page](https://github.com/koninik/DiffusionPen)]

[arxiv 2024.10] TextCtrl: Diffusion-based Scene Text Editing with Prior Guidance Control  [[PDF](https://arxiv.org/abs/2410.10133)]

[arxiv 2024.10] TextMaster: Universal Controllable Text Edit [[PDF](https://arxiv.org/abs/2410.09879)]

[arxiv 2024.11] AnyText2: Visual Text Generation and Editing With Customizable Attributes  [[PDF](https://arxiv.org/abs/2411.15245),[Page](https://github.com/tyxsspa/AnyText2)] ![Code](https://img.shields.io/github/stars/tyxsspa/AnyText2?style=social&label=Star)

[arxiv 2024.11] Conditional Text-to-Image Generation with Reference Guidance  [[PDF](https://arxiv.org/abs/2411.16713)] 

[arxiv 2024.12] Type-R: Automatically Retouching Typos for Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2411.18159)] 

[arxiv 2024.12] FonTS: Text Rendering with Typography and Style Controls  [[PDF](https://arxiv.org/pdf/2412.00136)]

[arxiv 2024.12]  FlowEdit: Inversion-Free Text-Based Editing Using Pre-Trained Flow Models [[PDF](https://arxiv.org/abs/2412.08629),[Page](https://matankleiner.github.io/flowedit/)] ![Code](https://img.shields.io/github/stars/fallenshock/FlowEdit?style=social&label=Star)

[arxiv 2025.02]  Precise Parameter Localization for Textual Generation in Diffusion Models [[PDF](https://arxiv.org/abs/2502.09935),[Page](https://t2i-text-loc.github.io/)] 

[arxiv 2025.02]  ControlText: Unlocking Controllable Fonts in Multilingual Text Rendering without Font Annotations [[PDF](https://arxiv.org/pdf/2502.10999),[Page](https://github.com/bowen-upenn/ControlText)] ![Code](https://img.shields.io/github/stars/bowen-upenn/ControlText?style=social&label=Star)

[arxiv 2025.03]  Recognition-Synergistic Scene Text Editing [[PDF](https://arxiv.org/pdf/2503.08387),[Page](https://github.com/ZhengyaoFang/RS-STE)] ![Code](https://img.shields.io/github/stars/ZhengyaoFang/RS-STE?style=social&label=Star)

[arxiv 2025.03] Beyond Words: Advancing Long-Text Image Generation via Multimodal Autoregressive Models  [[PDF](https://arxiv.org/html/2503.20198v1),[Page](https://fingerrec.github.io/longtextar/)] 

[arxiv 2025.04]  Point-Driven Interactive Text and Image Layer Editing Using Diffusion Models [[PDF](https://arxiv.org/abs/2504.14108)]

[arxiv 2025.05]  FLUX-Text: A Simple and Advanced Diffusion Transformer Baseline for Scene Text Editing [[PDF](https://arxiv.org/abs/2505.03329)]

[arxiv 2025.05]  TextFlux: An OCR-Free DiT Model for High-Fidelity Multilingual Scene Text Synthesis [[PDF](https://arxiv.org/abs/2505.17778),[Page](https://yyyyyxie.github.io/textflux-site/)] ![Code](https://img.shields.io/github/stars/yyyyyxie/textflux?style=social&label=Star)

[arxiv 2025.06] EasyText: Controllable Diffusion Transformer for Multilingual Text Rendering [[PDF]()]

[arxiv 2025.06] FontAdapter: Instant Font Adaptation in Visual Text Generation  [[PDF](https://arxiv.org/abs/2506.05843),[Page](https://fontadapter.github.io/)]

[arxiv 2025.06]  Calligrapher: Freestyle Text Image Customization [[PDF](https://arxiv.org/abs/2506.24123),[Page](https://calligrapher2025.github.io/Calligrapher)] ![Code](https://img.shields.io/github/stars/Calligrapher2025/Calligrapher?style=social&label=Star)

[arxiv 2025.07] UniGlyph: Unified Segmentation-Conditioned Diffusion for Precise Visual Text Synthesis  [[PDF](https://arxiv.org/pdf/2507.00992)]

[arxiv 2025.07] WordCraft: Interactive Artistic Typography with Attention Awareness and Noise Blending  [[PDF](https://arxiv.org/pdf/2507.09573)]

[arxiv 2025.10]  SceneTextStylizer: A Training-Free Scene Text Style Transfer Framework with Diffusion Model [[PDF](https://arxiv.org/abs/2510.10910)]

[arxiv 2025.10] OmniText: A Training-Free Generalist for Controllable Text-Image Manipulation  [[PDF](https://arxiv.org/abs/2510.24093)]

[arxiv 2026.03] GlyphPrinter: Region-Grouped Direct Preference Optimization for Glyph-Accurate Visual Text Rendering  [[PDF](https://arxiv.org/abs/2603.15616),[Page](https://henghuiding.com/GlyphPrinter/)] ![Code](https://img.shields.io/github/stars/FudanCVL/GlyphPrinter?style=social&label=Star)

[arxiv 2026.03] WeEdit: A Dataset, Benchmark and Glyph-Guided Framework for Text-centric Image Editing  [[PDF](https://arxiv.org/abs/2603.11593)]






## Caption 

[arxiv 2024.10] CtrlSynth: Controllable Image Text Synthesis for Data-Efficient Multimodal Learning  [[PDF](https://arxiv.org/abs/2410.11963)]

[arxiv 2024.10] Altogether: Image Captioning via Re-aligning Alt-text  [[PDF](https://arxiv.org/abs/2410.17251)]

[arxiv 2024.11] Precision or Recall? An Analysis of Image Captions for Training Text-to-Image Generation Model  [[PDF](https://arxiv.org/abs/2411.05079),[Page](https://github.com/shengcheng/Captions4T2I)]

[arxiv 2025.02]  Decoder-Only LLMs are Better Controllers for Diffusion Models [[PDF](https://arxiv.org/pdf/2502.04412)]

[arxiv 2025.02]  LDGen: Enhancing Text-to-Image Synthesis via Large Language Model-Driven Language Representation [[PDF](https://arxiv.org/abs/2502.18302),[Page](https://zrealli.github.io/LDGen/)] ![Code](https://img.shields.io/github/stars/zrealli/LDGen?style=social&label=Star)

[arxiv 2025.04] Generating Fine Details of Entity Interactions [[PDF](https://arxiv.org/abs/2504.08714),[Page](https://concepts-ai.com/p/detailscribe/)] ![Code](https://img.shields.io/github/stars/gxy000/DetailScribe?style=social&label=Star)

[arxiv 2025.04] Describe Anything: Detailed Localized Image and Video Captioning  [[PDF](https://arxiv.org/abs/2504.16072),[Page](https://describe-anything.github.io/)] ![Code](https://img.shields.io/github/stars/NVlabs/describe-anything?style=social&label=Star)

[arxiv 2025.10] GenPilot: A Multi-Agent System for Test-Time Prompt Optimization in Image Generation  [[PDF](https://arxiv.org/abs/2510.07217),[Page](https://github.com/27yw/GenPilot)] ![Code](https://img.shields.io/github/stars/27yw/GenPilot?style=social&label=Star)

[arxiv 2025.10] Grasp Any Region: Towards Precise, Contextual Pixel Understanding for Multimodal LLMs  [[PDF](https://arxiv.org/abs/2510.18876),[Page](https://github.com/Haochen-Wang409/Grasp-Any-Region)] ![Code](https://img.shields.io/github/stars/Haochen-Wang409/Grasp-Any-Region?style=social&label=Star)

[arxiv 2026.03] FineViT: Progressively Unlocking Fine-Grained Perception with Dense Recaptions  [[PDF](https://arxiv.org/abs/2603.17326)]




## face swapping 
[arxiv 2024.03]Infinite-ID: Identity-preserved Personalization via ID-semantics Decoupling Paradigm [[PDF](https://arxiv.org/abs/2403.11781),[Page](https://infinite-id.github.io/)]

[github] [Reactor](https://github.com/Gourieff/sd-webui-reactor)

[arxiv 2024.11] MegaPortrait: Revisiting Diffusion Control for High-fidelity Portrait Generation  [[PDF](https://arxiv.org/abs/2411.04357)]

 
## Concept / personalization
*[arxiv 2022.08; NVIDIA] ***An Image is Worth One Word:*** Personalizing Text-to-Image Generation using Textual Inversion [[PDF](https://arxiv.org/abs/2208.01618), [Page](https://github.com/rinongal/textual_inversion)] ![Code](https://img.shields.io/github/stars/rinongal/textual_inversion?style=social&label=Star)

[NIPS 22; google] ***DreamBooth***: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation [[PDF](https://arxiv.org/abs/2208.12242), [Page](https://dreambooth.github.io/), [Code](https://github.com/XavierXiao/Dreambooth-Stable-Diffusion)] ![Code](https://img.shields.io/github/stars/XavierXiao/Dreambooth-Stable-Diffusion?style=social&label=Star)

[arxiv 2022.12; UT] Multiresolution Textual Inversion [[PDF](https://arxiv.org/abs/2210.16056)]

*[arxiv 2022.12]Multi-Concept Customization of Text-to-Image Diffusion [[PDF](https://arxiv.org/abs/2212.04488), [Page](https://www.cs.cmu.edu/~custom-diffusion/), [Code](https://github.com/adobe-research/custom-diffusion)] ![Code](https://img.shields.io/github/stars/adobe-research/custom-diffusion?style=social&label=Star)

[arxiv 2023.02]ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation [[PDF](https://arxiv.org/abs/2302.13848)]

[arxiv 2023.02, tel]Designing an Encoder for Fast Personalization of Text-to-Image Models [[PDF](https://arxiv.org/abs/2302.12228), [Page](https://tuning-encoder.github.io/)]

[arxiv 2023.03]Cones: Concept Neurons in Diffusion Models for Customized Generation [[PDF](https://arxiv.org/abs/2303.05125)]

[arxiv 2023.03]P+: Extended Textual Conditioning in Text-to-Image Generation [[PDF](https://prompt-plus.github.io/files/PromptPlus.pdf)]

[arxiv 2023.03]Highly Personalized Text Embedding for Image Manipulation by Stable Diffusion [[PDF](https://arxiv.org/abs/2303.08767)]

->[arxiv 2023.04]Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA[[PDF](https://arxiv.org/abs/2304.06027), [Page](https://jamessealesmith.github.io/continual-diffusion/)]

[arxiv 2023.04]Controllable Textual Inversion for Personalized Text-to-Image Generation [[PDF](https://arxiv.org/abs/2304.05265)]

*[arxiv 2023.04]InstantBooth: Personalized Text-to-Image Generation without Test-Time Finetuning [[PDF](https://arxiv.org/abs/2304.03411)]

[arxiv 2023.05]Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models [[PDF](https://arxiv.org/abs/2305.18292),[Page](https://showlab.github.io/Mix-of-Show/)] ![Code](https://img.shields.io/github/stars/TencentARC/Mix-of-Show?style=social&label=Star)

[arxiv 2023.05]Custom-Edit: Text-Guided Image Editing with Customized Diffusion Models [[PDF](https://arxiv.org/abs/2305.15779)]

[arxiv 2023.05]DisenBooth: Disentangled Parameter-Efficient Tuning for Subject-Driven Text-to-Image Generation [[PDF](https://arxiv.org/abs/2305.03374)]

[arxiv 2023.05]PHOTOSWAP:Personalized Subject Swapping in Images [[PDF](https://arxiv.org/abs/2305.18286)]

[Siggraph 2023.05]Key-Locked Rank One Editing for Text-to-Image Personalization [[PDF](https://arxiv.org/abs/2305.01644), [Page](https://research.nvidia.com/labs/par/Perfusion/)]

[arxiv 2023.05]A Neural Space-Time Representation for Text-to-Image Personalization [[PDF](https://arxiv.org/abs/2305.15391),[Page](https://neuraltextualinversion.github.io/NeTI/)] ![Code](https://img.shields.io/github/stars/NeuralTextualInversion/NeTI?style=social&label=Star)

->[arxiv 2023.05]BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing [[PDF](https://arxiv.org/abs/2305.14720), [Page](https://github.com/salesforce/LAVIS/tree/main/projects/blip-diffusion)]

[arxiv 2023.05]Concept Decomposition for Visual Exploration and Inspiration[[PDF](https://arxiv.org/abs/2305.18203),[Page](https://inspirationtree.github.io/inspirationtree/)] ![Code](https://img.shields.io/github/stars/google/inspiration_tree?style=social&label=Star)

[arxiv 2023.05]FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention[[PDF](https://arxiv.org/abs/2305.10431),[Page](https://github.com/mit-han-lab/fastcomposer)] ![Code](https://img.shields.io/github/stars/mit-han-lab/fastcomposer?style=social&label=Star)

[arxiv 2023.06]Cones 2: Customizable Image Synthesis with Multiple Subjects [[PDF](https://arxiv.org/abs/2305.19327)]

[arxiv 2023.06]Inserting Anybody in Diffusion Models via Celeb Basis [[PDF](https://arxiv.org/abs/2306.00926), [Page](https://celeb-basis.github.io/)] ![Code](https://img.shields.io/github/stars/ygtxr1997/CelebBasis?style=social&label=Star)

->[arxiv 2023.06]A-STAR: Test-time Attention Segregation and Retention for Text-to-image Synthesis [[PDF](https://arxiv.org/pdf/2306.14544.pdf)]

[arxiv 2023.06]Generate Anything Anywhere in Any Scene [[PDF](https://arxiv.org/abs/2306.17154),[Page](https://yuheng-li.github.io/PACGen/)] ![Code](https://img.shields.io/github/stars/Yuheng-Li/PACGen?style=social&label=Star)

[arxiv 2023.07]HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models [[PDF](https://arxiv.org/abs/2307.06949),[Page](https://hyperdreambooth.github.io/)]

[arxiv 2023.07]Domain-Agnostic Tuning-Encoder for Fast Personalization of Text-To-Image Models [[PDF](https://arxiv.org/abs/2307.06925), [Page](https://datencoder.github.io/)]

[arxiv 2023.07]ReVersion: Diffusion-Based Relation Inversion from Images [[PDF](https://arxiv.org/abs/2303.13495),[Page](https://ziqihuangg.github.io/projects/reversion.html)] ![Code](https://img.shields.io/github/stars/ziqihuangg/ReVersion?style=social&label=Star)

[arxiv 2023.07]AnyDoor: Zero-shot Object-level Image Customization [[PDF](https://arxiv.org/abs/2307.09481),[Page](https://github.com/ali-vilab/AnyDoor)] ![Code](https://img.shields.io/github/stars/ali-vilab/AnyDoor?style=social&label=Star)

[arxiv 2023.07]Subject-Diffusion:Open Domain Personalized Text-to-Image Generation without Test-time Fine-tuning [[PDF](https://arxiv.org/abs/2307.11410), [Page](https://oppo-mente-lab.github.io/subject_diffusion/)] ![Code](https://img.shields.io/github/stars/OPPO-Mente-Lab/Subject-Diffusion?style=social&label=Star)

[arxiv 2023.08]ConceptLab: Creative Generation using Diffusion Prior Constraints [[PDF](https://arxiv.org/abs/2308.02669),[Page](https://kfirgoldberg.github.io/ConceptLab/)] ![Code](https://img.shields.io/github/stars/kfirgoldberg/ConceptLab?style=social&label=Star)

[arxiv 2023.08]Unified Concept Editing in Diffusion Models [[PDF](https://arxiv.org/pdf/2308.14761.pdf), [Page](https://unified.baulab.info/)] ![Code](https://img.shields.io/github/stars/rohitgandikota/unified-concept-editing?style=social&label=Star)

[arxiv 2023.09]Create Your World: Lifelong Text-to-Image Diffusion[[PDF](https://arxiv.org/abs/2309.04430)]

[arxiv 2023.09]MagiCapture: High-Resolution Multi-Concept Portrait Customization [[PDF](https://arxiv.org/abs/2309.06895)]

[arxiv 2023.10]Multi-Concept T2I-Zero: Tweaking Only The Text Embeddings and Nothing Else [[PDF](https://arxiv.org/abs/2310.07419)]

[arxiv 2023.11]A Data Perspective on Enhanced Identity Preservation for Diffusion Personalization [[PDF](https://arxiv.org/abs/2311.04315)]

[arxiv 2023.11]The Chosen One: Consistent Characters in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2311.10093), [Page](https://omriavrahami.com/the-chosen-one/)] ![Code](https://img.shields.io/github/stars/ZichengDuan/TheChosenOne?style=social&label=Star)

[arxiv 2023.11]High-fidelity Person-centric Subject-to-Image Synthesis[[PDF](https://arxiv.org/abs/2311.10329)]

[arxiv 2023.11]An Image is Worth Multiple Words: Multi-attribute Inversion for Constrained Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2311.11919)]

[arxiv 2023.11]CatVersion: Concatenating Embeddings for Diffusion-Based Text-to-Image Personalization [[PDF](https://arxiv.org/abs/2311.14631),[Page](https://royzhao926.github.io/CatVersion-page/)] ![Code](https://img.shields.io/github/stars/RoyZhao926/CatVersion?style=social&label=Star)

[arxiv 2023.12]PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding [[PDF](https://arxiv.org/abs/2312.04461),[Page](https://photo-maker.github.io/)] ![Code](https://img.shields.io/github/stars/TencentARC/PhotoMaker?style=social&label=Star)

[arxiv 2023.12]Context Diffusion: In-Context Aware Image Generation [[PDF](https://arxiv.org/abs/2312.03584)]

[arxiv 2023.12]Customization Assistant for Text-to-image Generation [[PDF](https://arxiv.org/abs/2312.03045)]

[arxiv 2023.12]InstructBooth: Instruction-following Personalized Text-to-Image Generation [[PDF](https://arxiv.org/abs/2312.03011)]

[arxiv 2023.12]FaceStudio: Put Your Face Everywhere in Seconds [[PDF](https://arxiv.org/abs/2312.02663),[Page](https://icoz69.github.io/facestudio/)] ![Code](https://img.shields.io/github/stars/TencentQQGYLab/FaceStudio?style=social&label=Star)

[arxiv 2023.12]Orthogonal Adaptation for Modular Customization of Diffusion Models [[PDF](https://arxiv.org/abs/2312.02432),[Page](https://ryanpo.com/ortha/)]

[arxiv 2023.12]Separate-and-Enhance: Compositional Finetuning for Text2Image Diffusion Models [[PDF](https://arxiv.org/abs/2312.06712), [Page](https://zpbao.github.io/projects/SepEn/)] ![Code](https://img.shields.io/github/stars/adobe/SeperateAndEnhance?style=social&label=Star)

[arxiv 2023.12]Compositional Inversion for Stable Diffusion Models [[PDF](https://arxiv.org/abs/2312.08048),[Page](https://github.com/zhangxulu1996/Compositional-Inversion)] ![Code](https://img.shields.io/github/stars/zhangxulu1996/Compositional-Inversion?style=social&label=Star)

[arxiv 2023.12]SimAC: A Simple Anti-Customization Method against Text-to-Image Synthesis of Diffusion Models [[PDF](https://arxiv.org/abs/2312.07865)]

[arxiv 2023.12]InstantID: Zero-shot Identity-Preserving Generation in Seconds [[PDF](),[Page](https://instantid.github.io/)] ![Code](https://img.shields.io/github/stars/instantX-research/InstantID?style=social&label=Star)

[arxiv 2023.12]All but One: Surgical Concept Erasing with Model Preservation in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2312.12807)]

[arxiv 2023.12]Cross Initialization for Personalized Text-to-Image Generation [[PDF](https://arxiv.org/abs/2312.15905)]

[arxiv 2023.12]PALP: Prompt Aligned Personalization of Text-to-Image Models[[PDF](https://arxiv.org/abs/2401.06105), [Page](https://prompt-aligned.github.io/)]

[arxiv 2024.02]Pick-and-Draw: Training-free Semantic Guidance for Text-to-Image Personalization [[PDF](https://arxiv.org/abs/2401.16762)]

[arxiv 2024.02]Separable Multi-Concept Erasure from Diffusion Models[[PDF](https://arxiv.org/abs/2402.05947)]

[arxiv 2024.02]λ-ECLIPSE: Multi-Concept Personalized Text-to-Image Diffusion Models by Leveraging CLIP Latent Space[[PDF](https://arxiv.org/abs/2402.05195),[Page](https://eclipse-t2i.github.io/Lambda-ECLIPSE/)] ![Code](https://img.shields.io/github/stars/eclipse-t2i/lambda-eclipse-inference?style=social&label=Star)

[arxiv 2024.02]Training-Free Consistent Text-to-Image Generation [[PDF](https://arxiv.org/abs/2402.03286),[Page](https://consistory-paper.github.io/)] ![Code](https://img.shields.io/github/stars/NVlabs/consistory?style=social&label=Star)

[arxiv 2024.02]Textual Localization: Decomposing Multi-concept Images for Subject-Driven Text-to-Image Generation [[PDF](https://arxiv.org/abs/2402.09966),[Page](https://github.com/junjie-shentu/Textual-Localization)] ![Code](https://img.shields.io/github/stars/junjie-shentu/Textual-Localization?style=social&label=Star)

[arxiv 2024.02]DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization [[PDF](https://arxiv.org/abs/2402.09812), [Page](https://ku-cvlab.github.io/DreamMatcher/)]

[arxiv 2024.02]Direct Consistency Optimization for Compositional Text-to-Image Personalization [[PDF](https://arxiv.org/abs/2402.12004),[Page](https://dco-t2i.github.io/)] ![Code](https://img.shields.io/github/stars/kyungmnlee/dco?style=social&label=Star)

[arxiv 2024.02]ComFusion: Personalized Subject Generation in Multiple Specific Scenes From Single Image [[PDF](https://arxiv.org/abs/2402.11849)]

[arxiv 2024.02]Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition[[PDF](https://arxiv.org/abs/2402.15504), [Page](https://danielchyeh.github.io/Gen4Gen/)] ![Code](https://img.shields.io/github/stars/louisYen/Gen4Gen?style=social&label=Star)

[arxiv 2024.02]DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Model [[PDF](https://arxiv.org/abs/2402.17412),[Page](https://diffusekrona.github.io/)] ![Code](https://img.shields.io/github/stars/IBM/DiffuseKronA?style=social&label=Star)

[arxiv 2024.03]RealCustom: Narrowing Real Text Word for Real-Time Open-Domain Text-to-Image Customization [[PDF](https://arxiv.org/abs/2403.00483),[Page](https://corleone-huang.github.io/realcustom/)] ![Code](https://img.shields.io/github/stars/Corleone-Huang/RealCustomProject?style=social&label=Star)

[arxiv 2024.03]Face2Diffusion for Fast and Editable Face Personalization [[PDF](https://arxiv.org/abs/2403.05094),[Page](https://mapooon.github.io/Face2DiffusionPage/)] ![Code](https://img.shields.io/github/stars/mapooon/Face2Diffusion?style=social&label=Star)

[arxiv 2024.03]FaceChain-SuDe: Building Derived Class to Inherit Category Attributes for One-shot Subject-Driven Generation [[PDF](https://arxiv.org/abs/2403.06775),[Page](https://github.com/modelscope/facechain)] ![Code](https://img.shields.io/github/stars/modelscope/facechain?style=social&label=Star)

[arxiv 2024.03]Block-wise LoRA: Revisiting Fine-grained LoRA for Effective Personalization and Stylization in Text-to-Image Generation [[PDF](https://arxiv.org/abs/2403.07500)]

[arxiv 2024.03]LoRA-Composer: Leveraging Low-Rank Adaptation for Multi-Concept Customization in Training-Free Diffusion Models [[PDF](https://arxiv.org/abs/2403.11627),[Page](https://github.com/Young98CN/LoRA_Composer)] ![Code](https://img.shields.io/github/stars/Young98CN/LoRA_Composer?style=social&label=Star)

[arxiv 2024.03]OSTAF: A One-Shot Tuning Method for Improved Attribute-Focused T2I Personalization [[PDF](https://arxiv.org/abs/2403.11053)]

[arxiv 2024.03]OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models [[PDF](https://arxiv.org/abs/2403.10983), [Page](https://kongzhecn.github.io/omg-project/)] ![Code](https://img.shields.io/github/stars/kongzhecn/OMG?style=social&label=Star)

[arxiv 2024.03]IDAdapter: Learning Mixed Features for Tuning-Free Personalization of Text-to-Image Models [[PDF](https://arxiv.org/abs/2403.13535)]

[arxiv 2024.03]Tuning-Free Image Customization with Image and Text Guidance [[PDF](https://arxiv.org/abs/2403.12658)]

[arxiv 2024.03]Harmonizing Visual and Textual Embeddings for Zero-Shot Text-to-Image Customization [[PDF](https://arxiv.org/abs/2403.14155),[Page](https://ldynx.github.io/harmony-zero-t2i/)] ![Code](https://img.shields.io/github/stars/ldynx/harmony-zero-t2i?style=social&label=Star)

[arxiv 2024.03]FlashFace: Human Image Personalization with High-fidelity Identity Preservation [[PDF](https://arxiv.org/abs/2403.17008),[Page](https://jshilong.github.io/flashface-page)] ![Code](https://img.shields.io/github/stars/ali-vilab/FlashFace?style=social&label=Star)

[arxiv 2024.03]Be Yourself: Bounded Attention for Multi-Subject Text-to-Image Generation [[PDF](https://arxiv.org/abs/2403.16990),[Page](https://omer11a.github.io/bounded-attention/)] ![Code](https://img.shields.io/github/stars/omer11a/bounded-attention?style=social&label=Star)

[arxiv 2024.03]Isolated Diffusion: Optimizing Multi-Concept Text-to-Image Generation Training-Freely with Isolated Diffusion Guidance [[PDF](https://arxiv.org/abs/2403.16954)]

[arxiv 2024.03]Improving Text-to-Image Consistency via Automatic Prompt Optimization [[PDF](https://arxiv.org/abs/2403.17804)]

[arxiv 2024.03]Attention Calibration for Disentangled Text-to-Image Personalization [[PDF](https://arxiv.org/pdf/2403.18551.pdf),[Page](https://github.com/Monalissaa/DisenDiff)] ![Code](https://img.shields.io/github/stars/Monalissaa/DisenDiff?style=social&label=Star)

[arxiv 2024.04]CLoRA: A Contrastive Approach to Compose Multiple LoRA Models [[PDF](https://arxiv.org/abs/2403.19776)]

[arxiv 2024.04]MuDI: Identity Decoupling for Multi-Subject Personalization of Text-to-Image Models [[PDF](https://arxiv.org/abs/2404.04243),[Page](https://mudi-t2i.github.io/)] ![Code](https://img.shields.io/github/stars/agwmon/MuDI?style=social&label=Star)

[arxiv 2024.04]Concept Weaver: Enabling Multi-Concept Fusion in Text-to-Image Models [[PDF](https://arxiv.org/abs/2404.03913)]

[arxiv 2024.04]LCM-Lookahead for Encoder-based Text-to-Image Personalization [[PDF](https://arxiv.org/abs/2404.03620)]

[arxiv 2024.04]MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation [[PDF](https://arxiv.org/abs/2404.05674),[Page](https://moma-adapter.github.io/)] ![Code](https://img.shields.io/github/stars/bytedance/MoMA/tree/main?style=social&label=Star)

[arxiv 2024.04]MC2: Multi-concept Guidance for Customized Multi-concept Generation [[PDF](https://arxiv.org/abs/2404.05268)]

[arxiv 2024.04]Strictly-ID-Preserved and Controllable Accessory Advertising Image Generation [[PDF](https://arxiv.org/abs/2404.04828)]

[arxiv 2024.04]OneActor: Consistent Character Generation via Cluster-Conditioned Guidance [[PDF](https://arxiv.org/abs/2404.10267)]

[arxiv 2024.04] MoA: Mixture-of-Attention for Subject-Context Disentanglement in Personalized Image Generation [[PDF](https://arxiv.org/abs/2404.11565),[Page](https://snap-research.github.io/mixture-of-attention)]

[arxiv 2024.04]MultiBooth: Towards Generating All Your Concepts in an Image from Text[[PDF](https://arxiv.org/abs/2404.14239),[Page](https://multibooth.github.io/)] ![Code](https://img.shields.io/github/stars/chenyangzhu1/MultiBooth?style=social&label=Star)

[arxiv 2024.04]Infusion: Preventing Customized Text-to-Image Diffusion from Overfitting [[PDF](https://arxiv.org/abs/2404.14007)]

[arxiv 2024.04]UVMap-ID: A Controllable and Personalized UV Map Generative Model [[PDF](https://arxiv.org/abs/2404.14568)]

[arxiv 2024.04]ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving [[PDF](https://arxiv.org/abs/2404.16771),[Page](https://ssugarwh.github.io/consistentid.github.io/)] ![Code](https://img.shields.io/github/stars/JackAILab/ConsistentID?style=social&label=Star)

[arxiv 2024.04]PuLID: Pure and Lightning ID Customization via Contrastive Alignment [[PDF](https://arxiv.org/abs/2404.16022), [Page](https://github.com/ToTheBeginning/PuLID)] ![Code](https://img.shields.io/github/stars/ToTheBeginning/PuLID?style=social&label=Star)

[arxiv 2024.04] Customizing Text-to-Image Diffusion with Object Viewpoint Control  [[PDF](http://arxiv.org/abs/2404.12333),[Page](https://customdiffusion360.github.io/)] ![Code](https://img.shields.io/github/stars/customdiffusion360/custom-diffusion360?style=social&label=Star)

[arxiv 2024.04] CharacterFactory: Sampling Consistent Characters with GANs for Diffusion Models [[PDF](https://arxiv.org/abs/2404.15677), [Page](https://github.com/qinghew/CharacterFactory)] ![Code](https://img.shields.io/github/stars/qinghew/CharacterFactory?style=social&label=Star)

[arxiv 2024.04] TheaterGen: Character Management with LLM for Consistent Multi-turn Image Generation [[PDF](https://arxiv.org/abs/2404.18919),[Page](https://howe140.github.io/theatergen.io/)] ![Code](https://img.shields.io/github/stars/donahowe/Theatergen?style=social&label=Star)

[arxiv 2024.05] Customizing Text-to-Image Models with a Single Image Pair [[PDF](https://arxiv.org/abs/2405.01536),[Page](https://paircustomization.github.io/)] ![Code](https://img.shields.io/github/stars/PairCustomization/PairCustomization?style=social&label=Star)

[arxiv 2024.05] InstantFamily: Masked Attention for Zero-shot Multi-ID Image Generation [[PDF](https://arxiv.org/abs/2404.19427)]

[arxiv 2024.05] MasterWeaver: Taming Editability and Identity for Personalized Text-to-Image Generation [[PDF](https://arxiv.org/abs/2405.05806),[Page](https://github.com/csyxwei/MasterWeaver)] ![Code](https://img.shields.io/github/stars/csyxwei/MasterWeaver?style=social&label=Star)

[arxiv 2024.05] Training-free Subject-Enhanced Attention Guidance for Compositional Text-to-image Generation [[PDF](https://arxiv.org/abs/2405.06948)]

[arxiv 2024.05] Non-confusing Generation of Customized Concepts in Diffusion Models [[PDF](https://arxiv.org/abs/2405.06914),[Page](https://clif-official.github.io/clif/)] ![Code](https://img.shields.io/github/stars/clif-official/clif_code?style=social&label=Star)

[arxiv 2024.05] Personalized Residuals for Concept-Driven Text-to-Image Generation [[PDF](https://arxiv.org/abs/2405.12978),[Page](https://cusuh.github.io/personalized-residuals/)]

[arxiv 2024.05] FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition [[PDF](https://arxiv.org/abs/2405.13870),[Page](https://github.com/aim-uofa/FreeCustom)] ![Code](https://img.shields.io/github/stars/aim-uofa/FreeCustom?style=social&label=Star)

[arxiv 2024.05] AttenCraft: Attention-guided Disentanglement of Multiple Concepts for Text-to-Image Customization [[PDF](https://arxiv.org/abs/2405.17965),[Page](https://github.com/junjie-shentu/AttenCraft)] ![Code](https://img.shields.io/github/stars/junjie-shentu/AttenCraft?style=social&label=Star)

[arxiv 2024.05] RectifID: Personalizing Rectified Flow with Anchored Classifier Guidance [[PDF](https://arxiv.org/abs/2405.14677),[Page](https://github.com/feifeiobama/RectifID)] ![Code](https://img.shields.io/github/stars/feifeiobama/RectifID?style=social&label=Star)

[arxiv 2024.06] HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Model [[PDF](https://arxiv.org/abs/2307.06949),[Page](https://hyperdreambooth.github.io/)]

[arxiv 2024.06] AutoStudio: Crafting Consistent Subjects in Multi-turn Interactive Image Generation [[PDF](https://arxiv.org/abs/2406.01388),[Page](https://howe183.github.io/AutoStudio.io/)] ![Code](https://img.shields.io/github/stars/donahowe/AutoStudio?style=social&label=Star)

[arxiv 2024.06] Inv-Adapter: ID Customization Generation via Image Inversion and Lightweight Adapter [[PDF](https://arxiv.org/abs/2406.02881)]

[arxiv 2024.06] AttnDreamBooth: Towards Text-Aligned Personalized Text-to-Image Generation [[PDF](https://arxiv.org/abs/2406.05000),[Page](https://attndreambooth.github.io/)] ![Code](https://img.shields.io/github/stars/lyuPang/AttnDreamBooth?style=social&label=Star)

[arxiv 2024.06] Tuning-Free Visual Customization via View Iterative Self-Attention Control [[PDF](https://arxiv.org/abs/2406.06258)]

[arxiv 2024.06] PaRa: Personalizing Text-to-Image Diffusion via Parameter Rank Reduction [[PDF](https://arxiv.org/pdf/2406.05641)]

[arxiv 2024.06] MS-Diffusion: Multi-subject Zero-shot Image Personalization with Layout Guidance [[PDF](https://arxiv.org/abs/2406.07209), [Page](https://ms-diffusion.github.io/)] ![Code](https://img.shields.io/github/stars/MS-Diffusion/MS-Diffusion?style=social&label=Star)

[arxiv 2024.06] Interpreting the Weight Space of Customized Diffusion Models [[PDF](https://arxiv.org/abs/2406.09413), [Page](https://snap-research.github.io/weights2weights)] ![Code](https://img.shields.io/github/stars/snap-research/weights2weights?style=social&label=Star)

[arxiv 2024.06] DreamBench++: A Human-Aligned Benchmark for Personalized Image Generation [[PDF](https://arxiv.org/abs/2406.16855), [Page](https://dreambenchplus.github.io/)] ![Code](https://img.shields.io/github/stars/yuangpeng/dreambench_plus?style=social&label=Star)

[arxiv 2024.06] Character-Adapter: Prompt-Guided Region Control for High-Fidelity Character Customization [[PDF](https://arxiv.org/abs/2406.16537)]

[arxiv 2024.06] LIPE: Learning Personalized Identity Prior for Non-rigid Image Editing [[PDF](https://arxiv.org/abs/2406.17236)]

[arxiv 2024.06] AlignIT: Enhancing Prompt Alignment in Customization of Text-to-Image Models  [[PDF](https://arxiv.org/abs/2406.18893)]

[arxiv 2024.07] JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized Text-to-Image Generation [[PDF](https://arxiv.org/abs/2407.06187), [Page](https://research.nvidia.com/labs/dir/jedi/)]

[arxiv 2024.07] LogoSticker: Inserting Logos into Diffusion Models for Customized Generation [[PDF](https://arxiv.org/abs/2407.13752), [Page](https://mingkangz.github.io/logosticker/)]

[arxiv 2024.07] MovieDreamer: Hierarchical Generation for Coherent Long Visual Sequence [[PDF](https://arxiv.org/abs/2407.16655),[Page](https://aim-uofa.github.io/MovieDreamer/)] ![Code](https://img.shields.io/github/stars/aim-uofa/MovieDreamer?style=social&label=Star)

[arxiv 2024.08] Concept Conductor: Orchestrating Multiple Personalized Concepts in Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2408.03632), [Page](https://github.com/Nihukat/Concept-Conductor)] ![Code](https://img.shields.io/github/stars/Nihukat/Concept-Conductor?style=social&label=Star)

[arxiv 2024.08] PreciseControl: Enhancing Text-To-Image Diffusion Models with Fine-Grained Attribute Control [[PDF](https://arxiv.org/abs/2408.05083), [Page](https://rishubhpar.github.io/PreciseControl.home/)] ![Code](https://img.shields.io/github/stars/rishubhpar/PreciseControl?style=social&label=Star)

[arxiv 2024.08] DiffLoRA: Generating Personalized Low-Rank Adaptation Weights with Diffusion [[PDF](https://arxiv.org/abs/2408.06740)]

[arxiv 2024.08] RealCustom++: Representing Images as Real-Word for Real-Time Customization [[PDF](https://arxiv.org/abs/2408.09744)]

[arxiv 2024.08] MagicID: Flexible ID Fidelity Generation System [[PDF](https://arxiv.org/abs/2408.09248)]

[arxiv 2024.08] CoRe: Context-Regularized Text Embedding Learning for Text-to-Image Personalization [[PDF](https://arxiv.org/abs/2408.15914)]

[arxiv 2024.09] CustomContrast: A Multilevel Contrastive Perspective For Subject-Driven Text-to-Image Customization [[PDF](https://arxiv.org/abs/2409.05606), [Page](https://cn-makers.github.io/CustomContrast/)]

[arxiv 2024.09] GroundingBooth: Grounding Text-to-Image Customization [[PDF](https://arxiv.org/abs/2409.08520), [Page](https://groundingbooth.github.io/)]

[arxiv 2024.09] TextBoost: Towards One-Shot Personalization of Text-to-Image Models via Fine-tuning Text Encoder [[PDF](https://arxiv.org/abs/2409.08248), [Page](https://textboost.github.io/)] ![Code](https://img.shields.io/github/stars/nahyeonkaty/textboost?style=social&label=Star)

[arxiv 2024.09] SaRA: High-Efficient Diffusion Model Fine-tuning with Progressive Sparse Low-Rank Adaptation [[PDF](https://export.arxiv.org/abs/2409.06633), [Page](https://sjtuplayer.github.io/projects/SaRA/)] ![Code](https://img.shields.io/github/stars/sjtuplayer/SaRA?style=social&label=Star)

[arxiv 2024.09] Resolving Multi-Condition Confusion for Finetuning-Free Personalized Image Generation [[PDF](https://arxiv.org/abs/2409.17920), [Page](https://github.com/hqhQAQ/MIP-Adapter)] ![Code](https://img.shields.io/github/stars/hqhQAQ/MIP-Adapter?style=social&label=Star)

[arxiv 2024.09] Imagine yourself: Tuning-Free Personalized Image Generation [[PDF](https://arxiv.org/abs/2409.13346)]

[arxiv 2024.10] Mining Your Own Secrets: Diffusion Classifier Scores for Continual Personalization of Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2410.00700)]

[arxiv 2024.10] Fusion is all you need: Face Fusion for Customized Identity-Preserving Image Synthesis [[PDF](https://arxiv.org/abs/2409.19111)]

[arxiv 2024.10] Event-Customized Image Generation [[PDF](https://arxiv.org/abs/2410.02483)]

[arxiv 2024.10] DisEnvisioner: Disentangled and Enriched Visual Prompt for Customized Image Generation [[PDF](https://arxiv.org/abs/2410.02067),[Page](https://disenvisioner.github.io/)] ![Code](https://img.shields.io/github/stars/EnVision-Research/DisEnvisioner?style=social&label=Star)

[arxiv 2024.10] HybridBooth: Hybrid Prompt Inversion for Efficient Subject-Driven Generation  [[PDF](https://arxiv.org/abs/2410.08192),[Page](https://sites.google.com/view/hybridbooth)]

[arxiv 2024.10]  Learning to Customize Text-to-Image Diffusion In Diverse Context [[PDF](https://arxiv.org/abs/2410.10058)]

[arxiv 2024.10]  FaceChain-FACT: Face Adapter with Decoupled Training for Identity-preserved Personalization [[PDF](https://arxiv.org/abs/2410.12312),[Page](https://github.com/modelscope/facechain)] ![Code](https://img.shields.io/github/stars/modelscope/facechain?style=social&label=Star)

[arxiv 2024.10] MagicTailor: Component-Controllable Personalization in Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2410.13370),[Page](https://correr-zhou.github.io/MagicTailor)] ![Code](https://img.shields.io/github/stars/correr-zhou/MagicTailor?style=social&label=Star)

[arxiv 2024.10] Unbounded: A Generative Infinite Game of Character Life Simulation  [[PDF](https://arxiv.org/abs/2410.18975),[Page](https://generative-infinite-game.github.io/)]

[arxiv 2024.10] How to Continually Adapt Text-to-Image Diffusion Models for Flexible Customization?  [[PDF](https://arxiv.org/abs/2410.17594),[Page](https://github.com/JiahuaDong/CIFC)] ![Code](https://img.shields.io/github/stars/JiahuaDong/CIFC?style=social&label=Star)

[arxiv 2024.10] RelationBooth: Towards Relation-Aware Customized Object Generation  [[PDF](https://arxiv.org/abs/2410.23280),[Page](https://shi-qingyu.github.io/RelationBooth/)]

[arxiv 2024.10]  In-Context LoRA for Diffusion Transformers [[PDF](https://arxiv.org/pdf/2410.23775),[Page](https://github.com/ali-vilab/In-Context-LoRA)] ![Code](https://img.shields.io/github/stars/ali-vilab/In-Context-LoRA?style=social&label=Star)

[arxiv 2024.10] Novel Object Synthesis via Adaptive Text-Image Harmony  [[PDF](https://arxiv.org/abs/2410.20823),[Page](https://xzr52.github.io/ATIH/)] 

[arxiv 2024.11] Hollowed Net for On-Device Personalization of Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2411.01179)]

[arxiv 2024.11] DomainGallery: Few-shot Domain-driven Image Generation by Attribute-centric Finetuning  [[PDF](https://arxiv.org/abs/2411.04571),[Page](https://github.com/Ldhlwh/DomainGallery)] ![Code](https://img.shields.io/github/stars/Ldhlwh/DomainGallery?style=social&label=Star)

[arxiv 2024.11] Group Diffusion Transformers are Unsupervised Multitask Learners  [[PDF](https://arxiv.org/abs/2410.15027)]

[arxiv 2024.11] DreamMix: Decoupling Object Attributes for Enhanced Editability in Customized Image Inpainting  [[PDF](https://arxiv.org/abs/2411.17223),[Page](https://github.com/mycfhs/DreamMix)] ![Code](https://img.shields.io/github/stars/mycfhs/DreamMix?style=social&label=Star)

[arxiv 2024.12] DreamBlend: Advancing Personalized Fine-tuning of Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2411.19390)]

[arxiv 2024.12]  Improving Multi-Subject Consistency in Open-Domain Image Generation with Isolation and Reposition Attention [[PDF](https://arxiv.org/abs/2411.19261)]

[arxiv 2024.12]  Diffusion Self-Distillation for Zero-Shot Customized Image Generation [[PDF](https://arxiv.org/abs/2411.18616),[Page](https://primecai.github.io/dsd/)] 

[arxiv 2024.12] UnZipLoRA: Separating Content and Style from a Single Image  [[PDF](https://arxiv.org/abs/2412.04465),[Page](https://unziplora.github.io/)] 

[arxiv 2024.12]  PatchDPO: Patch-level DPO for Finetuning-free Personalized Image Generation [[PDF](https://arxiv.org/abs/2412.03177),[Page](https://github.com/hqhQAQ/PatchDPO)] ![Code](https://img.shields.io/github/stars/hqhQAQ/PatchDPO?style=social&label=Star)

[arxiv 2024.12] LoRA.rar: Learning to Merge LoRAs via Hypernetworks for Subject-Style Conditioned Image Generation  [[PDF](https://arxiv.org/abs/2412.05148)]

[arxiv 2024.12]  Customized Generation Reimagined: Fidelity and Editability Harmonized [[PDF](https://arxiv.org/abs/2412.04831),[Page](https://github.com/jinjianRick/DCI_ICO)] ![Code](https://img.shields.io/github/stars/jinjianRick/DCI_ICO?style=social&label=Star)

[arxiv 2024.12] StoryWeaver: A Unified World Model for Knowledge-Enhanced Story Character Customization  [[PDF](https://arxiv.org/abs/2412.07375),[Page](https://github.com/Aria-Zhangjl/StoryWeaver)] ![Code](https://img.shields.io/github/stars/Aria-Zhangjl/StoryWeaver?style=social&label=Star)

[arxiv 2024.12] ObjectMate: A Recurrence Prior for Object Insertion and Subject-Driven Generation  [[PDF](https://arxiv.org/abs/2412.08645),[Page](https://object-mate.com/)]

[arxiv 2024.12] DECOR: Decomposition and Projection of Text Embeddings for Text-to-Image Customization  [[PDF](https://arxiv.org/abs/2412.09169)]

[arxiv 2024.12]  A LoRA is Worth a Thousand Pictures [[PDF](https://arxiv.org/pdf/2412.12048)]

[arxiv 2024.12] Personalized Representation from Personalized Generation  [[PDF](https://arxiv.org/abs/2412.16156),[Page](https://personalized-rep.github.io/)] ![Code](https://img.shields.io/github/stars/ssundaram21/personalized-rep?style=social&label=Star)

[arxiv 2025.01] ACE++: Instruction-Based Image Creation and Editing via Context-Aware Content Filling  [[PDF](https://arxiv.org/abs/2501.02487),[Page](https://ali-vilab.github.io/ACE_plus_page/)] ![Code](https://img.shields.io/github/stars/ali-vilab/ACE_plus?style=social&label=Star)

[arxiv 2025.01] AnyStory: Towards Unified Single and Multiple Subject Personalization in Text-to-Image Generation [[PDF](https://arxiv.org/abs/2501.09503),[Page](https://aigcdesigngroup.github.io/AnyStory/)]

[arxiv 2025.01]  IC-Portrait: In-Context Matching for View-Consistent Personalized Portrait [[PDF](https://arxiv.org/abs/2501.17159)]

[arxiv 2025.02] Generating Multi-Image Synthetic Data for Text-to-Image Customization  [[PDF](https://arxiv.org/abs/2502.01720),[Page](https://www.cs.cmu.edu/~syncd-project/)] ![Code](https://img.shields.io/github/stars/nupurkmr9/syncd-project?style=social&label=Star)

[arxiv 2025.02] Multitwine: Multi-Object Compositing with Text and Layout Control  [[PDF](https://arxiv.org/abs/2502.05165)]

[arxiv 2025.02] Beyond Fine-Tuning: A Systematic Study of Sampling Techniques in Personalized Image Generation  [[PDF](https://arxiv.org/abs/2502.05895),[Page](https://github.com/ControlGenAI/PersonGenSampler)] ![Code](https://img.shields.io/github/stars/ControlGenAI/PersonGenSampler?style=social&label=Star)

[arxiv 2025.02]  FreeBlend: Advancing Concept Blending with Staged Feedback-Driven Interpolation Diffusion [[PDF](https://arxiv.org/abs/2502.05606)]

[arxiv 2025.02]  E-MD3C: Taming Masked Diffusion Transformers for Efficient Zero-Shot Object Customization [[PDF](https://arxiv.org/pdf/2502.09164)]

[arxiv 2025.02]  Personalized Image Generation with Deep Generative Models: A Decade Survey [[PDF](https://arxiv.org/abs/2502.13081),[Page](https://github.com/csyxwei/Awesome-Personalized-Image-Generation)] ![Code](https://img.shields.io/github/stars/csyxwei/Awesome-Personalized-Image-Generation?style=social&label=Star)

[arxiv 2025.02] IP-Composer: Semantic Composition of Visual Concepts  [[PDF](https://arxiv.org/pdf/2502.13951),[Page](https://ip-composer.github.io/IP-Composer/)] ![Code](https://img.shields.io/github/stars/ip-composer/IP-Composer?style=social&label=Star)

[arxiv 2025.03]  LatexBlend: Scaling Multi-concept Customized Generation with Latent Textual Blending [[PDF](https://jinjianrick.github.io/latexblend/unicanvas_ijcv.pdf),[Page](https://jinjianrick.github.io/latexblend/)] ![Code](https://img.shields.io/github/stars/jinjianRick/latexblend?style=social&label=Star)

[arxiv 2025.03] DynamicID: Zero-Shot Multi-ID Image Personalization with Flexible Facial Editability  [[PDF](https://arxiv.org/pdf/2503.06505)]

[arxiv 2025.03]  Personalize Anything for Free with Diffusion Transformer [[PDF](https://arxiv.org/pdf/2503.12590),[Page](https://fenghora.github.io/Personalize-Anything-Page/)] ![Code](https://img.shields.io/github/stars/fenghora/personalize-anything?style=social&label=Star)

[arxiv 2025.03]  EditID: Training-Free Editable ID Customization for Text-to-Image Generation [[PDF](https://arxiv.org/pdf/2503.12526)]

[arxiv 2025.03]  Visual Persona: Foundation Model for Full-Body Human Customization [[PDF](https://arxiv.org/pdf/2503.15406),[Page](https://cvlab-kaist.github.io/Visual-Persona/)] ![Code](https://img.shields.io/github/stars/cvlab-kaist/Visual-Persona?style=social&label=Star)

[arxiv 2025.03]  Efficient Personalization of Quantized Diffusion Model without Backpropagation [[PDF](https://arxiv.org/pdf/2503.14868),[Page](https://ignoww.github.io/ZOODiP_project/)] ![Code](https://img.shields.io/github/stars/ignoww/ZOODiP?style=social&label=Star)

[arxiv 2025.03]  InfiniteYou: Flexible Photo Recrafting While Preserving Your Identity [[PDF](https://arxiv.org/pdf/2503.16418),[Page](https://bytedance.github.io/InfiniteYou)] ![Code](https://img.shields.io/github/stars/bytedance/InfiniteYou?style=social&label=Star)

[arxiv 2025.03] Zero-Shot Visual Concept Blending Without Text Guidance  [[PDF](https://arxiv.org/abs/2503.21277),[Page](https://github.com/ToyotaCRDL/Visual-Concept-Blending)] ![Code](https://img.shields.io/github/stars/ToyotaCRDL/Visual-Concept-Blending?style=social&label=Star)

[arxiv 2025.03] Meta-LoRA: Meta-Learning LoRA Components for Domain-Aware ID Personalization  [[PDF](https://arxiv.org/abs/2503.22352)]

[arxiv 2025.04] Consistent Subject Generation via Contrastive Instantiated Concepts  [[PDF](https://arxiv.org/abs/2503.24387),[Page](https://contrastive-concept-instantiation.github.io/)] ![Code](https://img.shields.io/github/stars/contrastive-concept-instantiation/cocoins?style=social&label=Star)

[arxiv 2025.04]  Enhancing Creative Generation on Stable Diffusion-based Models [[PDF](https://arxiv.org/abs/2503.23538)]

[arxiv 2025.04] Concept Lancet: Image Editing with Compositional Representation Transplant  [[PDF](https://arxiv.org/abs/2504.02828),[Page](https://peterljq.github.io/project/colan)] ![Code](https://img.shields.io/github/stars/peterljq/Concept-Lancet?style=social&label=Star)

[arxiv 2025.04] InstantCharacter: Personalize Any Characters with a Scalable Diffusion Transformer Framework  [[PDF](https://arxiv.org/abs/2504.12395),[Page](https://github.com/Tencent/InstantCharacter)] ![Code](https://img.shields.io/github/stars/Tencent/InstantCharacter?style=social&label=Star)

[arxiv 2025.04] FreeGraftor: Training-Free Cross-Image Feature Grafting for Subject-Driven Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2504.15958),[Page](https://github.com/Nihukat/FreeGraftor)] ![Code](https://img.shields.io/github/stars/Nihukat/FreeGraftor?style=social&label=Star)

[arxiv 2025.04]  Learning Joint ID-Textual Representation for ID-Preserving Image Synthesis [[PDF](https://arxiv.org/abs/2504.14202)]


[arxiv 2025.04] DreamO: A Unified Framework for Image Customization  [[PDF](https://arxiv.org/abs/2504.16915),[Page](https://mc-e.github.io/project/DreamO/)] ![Code](https://img.shields.io/github/stars/bytedance/DreamO?style=social&label=Star)

[arxiv 2025.05] Multi-party Collaborative Attention Control for Image Customization  [[PDF](https://arxiv.org/pdf/2505.01428)]

[arxiv 2025.05] PIDiff: Image Customization for Personalized Identities with Diffusion Models  [[PDF](https://arxiv.org/pdf/2505.05081)]

[arxiv 2025.06] Negative-Guided Subject Fidelity Optimization for Zero-Shot Subject-Driven Generation [[PDF](https://arxiv.org/abs/2506.03621)]

[arxiv 2025.06] ShowFlow: From Robust Single Concept to Condition-Free Multi-Concept Generation  [[PDF](https://arxiv.org/pdf/2506.18493)]

[arxiv 2025.06] XVerse: Consistent Multi-Subject Control of Identity and Semantic Attributes via DiT Modulation  [[PDF](https://arxiv.org/abs/2506.21416),[Page](https://bytedance.github.io/XVerse/)] ![Code](https://img.shields.io/github/stars/bytedance/XVerse?style=social&label=Star)

[arxiv 2025.07] IC-Custom: Diverse Image Customization via In-Context Learning  [[PDF](https://arxiv.org/abs/2507.01926),[Page](https://liyaowei-stu.github.io/project/IC_Custom)] ![Code](https://img.shields.io/github/stars/TencentARC/IC-Custom?style=social&label=Star)

[arxiv 2025.07] FreeLoRA: Enabling Training-Free LoRA Fusion for Autoregressive Multi-Subject Personalization [[PDF](https://arxiv.org/pdf/2507.01792)]

[arxiv 2025.07]  Memory-Efficient Personalization of Text-to-Image Diffusion Models via Selective Optimization Strategies [[PDF](https://arxiv.org/pdf/2507.10029)]

[arxiv 2025.07] CharaConsist: Fine-Grained Consistent Character Generation  [[PDF](https://arxiv.org/abs/2507.11533),[Page](https://murray-wang.github.io/CharaConsist/)] ![Code](https://img.shields.io/github/stars/Murray-Wang/CharaConsist?style=social&label=Star)

[arxiv 2025.07] Imbalance in Balance: Online Concept Balancing in Generation Models  [[PDF](https://arxiv.org/abs/2507.13345)]

[arxiv 2025.07] PositionIC: Unified Position and Identity Consistency for Image Customization  [[PDF](https://arxiv.org/pdf/2507.13861)]

[arxiv 2025.07] FreeCus: Free Lunch Subject-driven Customization in Diffusion Transformers  [[PDF](https://arxiv.org/abs/2507.15249),[Page](https://github.com/Monalissaa/FreeCus)] ![Code](https://img.shields.io/github/stars/Monalissaa/FreeCus?style=social&label=Star)

[arxiv 2025.08] TARA: Token-Aware LoRA for Composable Personalization in Diffusion Models  [[PDF](https://arxiv.org/pdf/2508.08812),[Page](https://github.com/YuqiPeng77/TARA)] ![Code](https://img.shields.io/github/stars/YuqiPeng77/TARA?style=social&label=Star)

[arxiv 2025.08]  MM-R1: Unleashing the Power of Unified Multimodal Large Language Models for Personalized Image Generation [[PDF](https://arxiv.org/abs/2508.11433)]

[arxiv 2025.08] USO: Unified Style and Subject-Driven Generation via Disentangled and Reward Learning  [[PDF](https://arxiv.org/abs/2508.18966),[Page](https://bytedance.github.io/USO/)] ![Code](https://img.shields.io/github/stars/bytedance/USO?style=social&label=Star)

[arxiv 2025.09]  MOSAIC: Multi-Subject Personalized Generation via Correspondence-Aware Alignment and Disentanglement [[PDF](https://arxiv.org/pdf/2509.01977),[Page](https://bytedance-fanqie-ai.github.io/MOSAIC/)] ![Code](https://img.shields.io/github/stars/bytedance-fanqie-ai/MOSAIC?style=social&label=Star)

[arxiv 2025.09] FocusDPO: Dynamic Preference Optimization for Multi-Subject Personalized Image Generation via Adaptive Focus  [[PDF](https://arxiv.org/abs/2509.01181),[Page](https://bytedance-fanqie-ai.github.io/FocusDPO/)] ![Code](https://img.shields.io/github/stars/bytedance-fanqie-ai/FocusDPO?style=social&label=Star)

[arxiv 2025.09]  UMO: Scaling Multi-Identity Consistency for Image Customization via Matching Reward [[PDF](https://arxiv.org/abs/2509.06818),[Page](https://bytedance.github.io/UMO/)] ![Code](https://img.shields.io/github/stars/bytedance/UMO?style=social&label=Star)

[arxiv 2025.09] EditIDv2: Editable ID Customization with Data-Lubricated ID Feature Integration for Text-to-Image Generation  [[PDF](https://arxiv.org/pdf/2509.05659)]

[arxiv 2025.09] ComposeMe: Attribute-Specific Image Prompts for Controllable Human Image Generation [[PDF](https://arxiv.org/abs/2509.18092)]

[arxiv 2025.09] Mind-the-Glitch: Visual Correspondence for Detecting Inconsistencies in Subject-Driven Generation  [[PDF](https://arxiv.org/abs/2509.21989),[Page](https://abdo-eldesokey.github.io/mind-the-glitch/)] ![Code](https://img.shields.io/github/stars/abdo-eldesokey/mind-the-glitch?style=social&label=Star)

[arxiv 2025.09] MultiCrafter: High-Fidelity Multi-Subject Generation via Spatially Disentangled Attention and Identity-Aware Reinforcement Learning  [[PDF](https://arxiv.org/abs/2509.21953),[Page](https://wutao-cs.github.io/MultiCrafter/)] ![Code](https://img.shields.io/github/stars/WuTao-CS/MultiCrafter?style=social&label=Star)

[arxiv 2025.10] MOLM: Mixture of LoRA Markers  [[PDF](https://arxiv.org/abs/2510.00293)]

[arxiv 2025.10]  ContextGen: Contextual Layout Anchoring for Identity-Consistent Multi-Instance Generation [[PDF](https://arxiv.org/abs/2510.11000),[Page](https://nenhang.github.io/ContextGen/)] ![Code](https://img.shields.io/github/stars/nenhang/ContextGen?style=social&label=Star)

[arxiv 2025.10] ReMix: Towards a Unified View of Consistent Character Generation and Editing  [[PDF](https://arxiv.org/abs/2510.10156)]

[arxiv 2025.10] WithAnyone: Towards Controllable and ID-Consistent Image Generation  [[PDF](https://arxiv.org/abs/2510.14975),[Page](https://doby-xu.github.io/WithAnyone/)] ![Code](https://img.shields.io/github/stars/doby-xu/WithAnyone?style=social&label=Star)

[arxiv 2025.10]  EchoDistill: Bidirectional Concept Distillation for One-Step Diffusion Personalization [[PDF](https://arxiv.org/abs/2510.20512),[Page](https://liulisixin.github.io/EchoDistill-page/)] 

[arxiv 2025.11] Multi-View Consistent Human Image Customization via In-Context Learning  [[PDF](https://arxiv.org/abs/2511.00293)]

[arxiv 2025.12]  DynaIP: Dynamic Image Prompt Adapter for Scalable Zero-shot Personalized Text-to-Image Generation [[PDF](https://arxiv.org/abs/2512.09814)]

[arxiv 2025.12] Omni-Attribute: Open-vocabulary Attribute Encoder for Visual Concept Personalization [[PDF](https://arxiv.org/abs/2512.10955),[Page](https://snap-research.github.io/omni-attribute/)]

[arxiv 2025.12]  3SGen: Unified Subject, Style, and Structure-Driven Image Generation with Adaptive Task-specific Memory [[PDF](https://arxiv.org/pdf/2512.19271)]

[arxiv 2026.01] Re-Align: Structured Reasoning-guided Alignment for In-Context Image Generation and Editing  [[PDF](https://arxiv.org/abs/2601.05124),[Page](https://hrz2000.github.io/realign/)] 

[arxiv 2026.01]  Efficient Autoregressive Video Diffusion with Dummy Head [[PDF](https://arxiv.org/abs/2601.20499)]

[arxiv 2026.01] Hierarchical Concept-to-Appearance Guidance for Multi-Subject Image Generation  [[PDF](https://arxiv.org/abs/2602.03448)]

[arxiv 2026.02]  FlowFixer: Towards Detail-Preserving Subject-Driven Generation [[PDF](https://arxiv.org/pdf/2602.21402)]

[arxiv 2026.03] IdGlow: Dynamic Identity Modulation for Multi-Subject Generation [[PDF](https://arxiv.org/pdf/2603.00607)]

[arxiv 2026.03] AnyPhoto: Multi-Person Identity Preserving Image Generation with ID Adaptive Modulation on Location Canvas  [[PDF](https://arxiv.org/abs/2603.14770)]

[arxiv 2026.03]  PureCC: Pure Learning for Text-to-Image Concept Customization [[PDF](https://arxiv.org/abs/2603.07561),[Page](https://github.com/lzc-sg/PureCC)] ![Code](https://img.shields.io/github/stars/lzc-sg/PureCC?style=social&label=Star)

[arxiv 2026.03] When Identities Collapse: A Stress-Test Benchmark for Multi-Subject Personalization  [[PDF](https://arxiv.org/abs/2603.26078)]



## end of concept


## MV Concept 
[arxiv 2025.10] MVCustom: Multi-View Customized Diffusion via Geometric Latent Rendering and Completion  [[PDF](https://arxiv.org/abs/2510.13702),[Page](https://minjung-s.github.io/mvcustom)] ![Code](https://img.shields.io/github/stars/minjung-s/MVCustom?style=social&label=Star)



## multi-object
[arxiv 2025.06]  MultiHuman-Testbench: Benchmarking Image Generation for Multiple Humans [[PDF](https://arxiv.org/abs/2506.20879)]

[arxiv 2025.07]  UNIMC: Taming Diffusion Transformer for Unified Keypoint-Guided Multi-Class Image Generation [[PDF](https://arxiv.org/pdf/2507.02713),[Page](https://unimc-dit.github.io/)] 

[arxiv 2025.10] SIGMA-GEN: Structure and Identity Guided Multi-subject Assembly for Image Generation  [[PDF](https://arxiv.org/pdf/%3CARXIV%20PAPER%20ID%3E.pdf),[Page](https://oindrilasaha.github.io/SIGMA-Gen/)] ![Code](https://img.shields.io/github/stars/oindrilasaha/SIGMA-Gen-Code?style=social&label=Star)

[arxiv 2025.10] FreeFuse: Multi-Subject LoRA Fusion via Auto Masking at Test Time  [[PDF](https://arxiv.org/pdf/2510.23515),[Page](https://future-item.github.io/FreeFuse/)] 

[arxiv 2025.12]  Ar2Can: An Architect and an Artist Leveraging a Canvas for Multi-Human Generation [[PDF](https://arxiv.org/pdf/2511.22690)]

[arxiv 2026.02]  UniRef-Image-Edit: Towards Scalable and Consistent Multi-Reference Image Editing [[PDF](https://arxiv.org/abs/2602.14186)]


## group generation 

[arxiv 2024.10]  In-Context LoRA for Diffusion Transformers [[PDF](https://arxiv.org/pdf/2410.23775),[Page](https://github.com/ali-vilab/In-Context-LoRA)] ![Code](https://img.shields.io/github/stars/ali-vilab/In-Context-LoRA?style=social&label=Star)

[arxiv 2024.11] Large-Scale Text-to-Image Model with Inpainting is a Zero-Shot Subject-Driven Image Generator  [[PDF](https://arxiv.org/abs/2411.15466),[Page](https://diptychprompting.github.io/)] 

[arxiv 2024.12]  X-Prompt: Towards Universal In-Context Image Generation in Auto-Regressive Vision Language Foundation Models [[PDF](https://arxiv.org/abs/2412.01824),[Page](https://github.com/SunzeY/X-Prompt)] ![Code](https://img.shields.io/github/stars/SunzeY/X-Prompt?style=social&label=Star)

[arxiv 2024.12] Generative Photography Scene-Consistent Camera Control for Realistic Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2412.02168),[Page](https://generative-photography.github.io/project/)] 

[arxiv 2025.03]  Piece it Together: Part-Based Concepting with IP-Priors [[PDF](https://arxiv.org/pdf/2503.10365),[Page](https://eladrich.github.io/PiT/)] ![Code](https://img.shields.io/github/stars/eladrich/PiT?style=social&label=Star)

[arxiv 2025.03] ConceptGuard: Continual Personalized Text-to-Image Generation with Forgetting and Confusion Mitigation [[PDF](https://arxiv.org/abs/2503.10358)]

[arxiv 2025.03] Latent Beam Diffusion Models for Decoding Image Sequences  [[PDF](https://arxiv.org/abs/2503.20429)]

[arxiv 2025.04] Consistent Subject Generation via Contrastive Instantiated Concepts  [[PDF](https://arxiv.org/abs/2503.24387)]

[arxiv 2025.04] Less-to-More Generalization: Unlocking More Controllability by In-Context Generation  [[PDF](https://arxiv.org/abs/2504.02160),[Page](https://bytedance.github.io/UNO)] ![Code](https://img.shields.io/github/stars/bytedance/UNO?style=social&label=Star)

[arxiv 2025.04] FlexIP: Dynamic Control of Preservation and Personality for Customized Image Generation  [[PDF](https://arxiv.org/abs/2504.07405),[Page](https://flexip-tech.github.io/flexip/#/)]

[arxiv 2025.08] Scaling Group Inference for Diverse and High-Quality Generation  [[PDF](https://arxiv.org/abs/2508.15773),[Page](https://www.cs.cmu.edu/~group-inference/)] ![Code](https://img.shields.io/github/stars/GaParmar/group-inference?style=social&label=Star)


## interleave generation 
[arxiv 2025.10] IUT-Plug: A Plug-in tool for Interleaved Image-Text Generation  [[PDF](https://arxiv.org/abs/2510.10969)]




## multi-view consistency


[arxiv 2024.12] MV-Adapter: Multi-view Consistent Image Generation Made Easy  [[PDF](https://arxiv.org/abs/2412.03632),[Page](https://huanngzh.github.io/MV-Adapter-Page/)] ![Code](https://img.shields.io/github/stars/huanngzh/MV-Adapter?style=social&label=Star)







## Story-telling

**[ECCV 2022]** ***Story Dall-E***: Adapting pretrained text-to-image transformers for story continuation [[PDF](https://arxiv.org/pdf/2209.06192.pdf), [code](https://github.com/adymaharana/storydalle)] ![Code](https://img.shields.io/github/stars/adymaharana/storydalle?style=social&label=Star)

**[arxiv 22.11; Alibaba]** Synthesizing Coherent Story with Auto-Regressive Latent Diffusion Models \[[PDF](https://arxiv.org/pdf/2211.10950.pdf), [code](https://github.com/xichenpan/ARLDM)\]   ![Code](https://img.shields.io/github/stars/xichenpan/ARLDM?style=social&label=Star)

**[CVPR 2023]** ***Make-A-Story***: Visual Memory Conditioned Consistent Story Generation  \[[PDF](https://arxiv.org/pdf/2211.13319.pdf) \]  

[arxiv 2023.01] An Impartial Transformer for Story Visualization [[PDF](https://arxiv.org/pdf/2301.03563.pdf)]

[arxiv 2023.02] Zero-shot Generation of Coherent Storybook from Plain Text Story using Diffusion Models [[PDF](https://arxiv.org/abs/2302.03900)]

[arxiv 2023.05] TaleCrafter: Interactive Story Visualization with Multiple Characters [[PDF](https://arxiv.org/abs/2305.18247), [Page](https://videocrafter.github.io/TaleCrafter/)] ![Code](https://img.shields.io/github/stars/AILab-CVC/TaleCrafter?style=social&label=Star)

[arxiv 2023.06] Intelligent Grimm -- Open-ended Visual Storytelling via Latent Diffusion Models [[PDF](https://arxiv.org/abs/2306.00973), [Page](https://haoningwu3639.github.io/StoryGen_Webpage/)]  ![Code](https://img.shields.io/github/stars/haoningwu3639/StoryGen?style=social&label=Star)

[arxiv 2023.08] Story Visualization by Online Text Augmentation with Context Memory [[PDF](https://arxiv.org/pdf/2308.07575.pdf)]

[arxiv 2023.08] Text-Only Training for Visual Storytelling [[PDF](https://arxiv.org/pdf/2308.08881.pdf)]

[arxiv 2023.08] StoryBench: A Multifaceted Benchmark for Continuous Story Visualization [[PDF](https://arxiv.org/pdf/2308.11606.pdf), [Page](https://github.com/google/storybench)]  ![Code](https://img.shields.io/github/stars/google/storybench?style=social&label=Star)

[arxiv 2023.11] AutoStory: Generating Diverse Storytelling Images with Minimal Human Effort [[PDF](https://arxiv.org/abs/2311.11243),[Page](https://aim-uofa.github.io/AutoStory/)] ![Code](https://img.shields.io/github/stars/aim-uofa/AutoStory?style=social&label=Star)

[arxiv 2023.12] Make-A-Storyboard: A General Framework for Storyboard with Disentangled and Merged Control [[PDF](https://arxiv.org/abs/2312.07549)]

[arxiv 2023.12] CogCartoon: Towards Practical Story Visualization [[PDF](https://arxiv.org/abs/2312.10718)]

[arxiv 2024.03] TARN-VIST: Topic Aware Reinforcement Network for Visual Storytelling [[PDF](https://arxiv.org/abs/2403.11550)]

[arxiv 2024.05] Evolving Storytelling: Benchmarks and Methods for New Character Customization with Diffusion Models  [[PDF](https://arxiv.org/abs/2405.11852)]

[arxiv 2024.07] Boosting Consistency in Story Visualization with Rich-Contextual Conditional Diffusion Models  [[PDF](https://arxiv.org/abs/2407.02482)]

[arxiv 2024.07] SEED-Story: Multimodal Long Story Generation with Large Language Model  [[PDF](https://arxiv.org/abs/2407.08683),[Page](https://github.com/TencentARC/SEED-Story)] ![Code](https://img.shields.io/github/stars/TencentARC/SEED-Story?style=social&label=Star)

[arxiv 2024.07] MovieDreamer: Hierarchical Generation for Coherent Long Visual Sequence  [[PDF](https://arxiv.org/abs/2407.16655),[Page](https://aim-uofa.github.io/MovieDreamer/)] ![Code](https://img.shields.io/github/stars/aim-uofa/MovieDreamer?style=social&label=Star)

[arxiv 2024.08] Story3D-Agent: Exploring 3D Storytelling Visualization with Large Language Models [[PDF](https://arxiv.org/abs/2408.11801),[Page](https://yuzhou914.github.io/Story3D-Agent/)]

[arxiv 2024.10] Storynizor: Consistent Story Generation via Inter-Frame Synchronized and Shuffled ID Injection  [[PDF](https://arxiv.org/abs/2409.19624)]

[arxiv 2024.11]  StoryAgent: Customized Storytelling Video Generation via Multi-Agent Collaboration [[PDF](https://arxiv.org/abs/2411.04925),[Page](https://github.com/storyagent123/Comparison-of-storytelling-video-results/blob/main/demo/readme.md)]  ![Code](https://img.shields.io/github/stars/storyagent123/Comparison-of-storytelling-video-results?style=social&label=Star)

[arxiv 2024.12] VideoGen-of-Thought: A Collaborative Framework for Multi-Shot Video Generation  [[PDF](https://arxiv.org/abs/2412.02259),[Page](https://cheliosoops.github.io/VGoT/)] 

[arxiv 2024.12] DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for Customized Manga Generation  [[PDF](https://arxiv.org/abs/2412.07589),[Page](https://jianzongwu.github.io/projects/diffsensei/)] ![Code](https://img.shields.io/github/stars/jianzongwu/DiffSensei?style=social&label=Star)

[arxiv 2024.12] StoryWeaver: A Unified World Model for Knowledge-Enhanced Story Character Customization  [[PDF](https://arxiv.org/abs/2412.07375),[Page](https://github.com/Aria-Zhangjl/StoryWeaver)] ![Code](https://img.shields.io/github/stars/Aria-Zhangjl/StoryWeaver?style=social&label=Star)

[arxiv 2025.01] One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation Using a Single Prompt  [[PDF](https://arxiv.org/abs/2501.13554),[Page](https://github.com/byliutao/1Prompt1Story)] ![Code](https://img.shields.io/github/stars/byliutao/1Prompt1Story?style=social&label=Star)

[arxiv 2025.01]  Bringing Characters to New Stories: Training-Free Theme-Specific Image Generation via Dynamic Visual Prompting [[PDF](https://arxiv.org/abs/2501.15641)]

[arxiv 2025.04] Object Isolated Attention for Consistent Story Visualization  [[PDF](https://arxiv.org/abs/2503.23353)]

[arxiv 2025.06]  ViStoryBench: Comprehensive Benchmark Suite for Story Visualization [[PDF](https://arxiv.org/abs/2505.24862),[Page](https://vistorybench.github.io/)] ![Code](https://img.shields.io/github/stars/vistorybench/vistorybench?style=social&label=Star)

[arxiv 2025.06] Enhance Multimodal Consistency and Coherence for Text-Image Plan Generation  [[PDF](https://arxiv.org/abs/2506.11380),[Page](https://github.com/psunlpgroup/MPlanner)] ![Code](https://img.shields.io/github/stars/psunlpgroup/MPlanner?style=social&label=Star)

[arxiv 2025.06] ViSTA: Visual Storytelling using Multi-modal Adapters for Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/pdf/2506.12198)]

[arxiv 2025.06]  Audit & Repair: An Agentic Framework for Consistent Story Visualization in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2506.18900),[Page](https://auditandrepair.github.io/)] 

[arxiv 2025.06] TaleForge: Interactive Multimodal System for Personalized Story Creation  [[PDF](https://arxiv.org/abs/2506.21832)]

[arxiv 2025.08] Story2Board: A Training-Free Approach for Expressive Storyboard Generation  [[PDF](https://arxiv.org/abs/2508.09983),[Page](https://daviddinkevich.github.io/Story2Board/)] ![Code](https://img.shields.io/github/stars/daviddinkevich/Story2Board?style=social&label=Star)

[arxiv 2025.08] From Image Captioning to Visual Storytelling  [[PDF](https://arxiv.org/abs/2508.14045)]

[arxiv 2025.09]  Plot’n Polish: Zero-shot Story Visualization and Disentangled Editing with Text-to-Image Diffusion Models [[PDF](https://plotnpolish.github.io/#),[Page](https://plotnpolish.github.io/)] 

[arxiv 2025.09] TaleDiffusion: Multi-Character Story Generation with Dialogue Rendering  [[PDF](),[Page](https://github.com/ayanban011/TaleDiffusion)] ![Code](https://img.shields.io/github/stars/ayanban011/TaleDiffusion?style=social&label=Star)

[arxiv 2025.10]  SceneDecorator: Towards Scene-Oriented Story Generation with Scene Planning and Scene Consistency [[PDF](https://arxiv.org/pdf/2510.22994),[Page](https://lulupig12138.github.io/SceneDecorator/)] ![Code](https://img.shields.io/github/stars/lulupig12138/SceneDecorator?style=social&label=Star)

[arxiv 2025.12] IdentityStory: Taming Your Identity-Preserving Generator for Human-Centric Story Generation  [[PDF](https://arxiv.org/abs/2512.23519)]

[arxiv 2026.03] Persistent Story World Simulation with Continuous Character Customization  [[PDF](https://arxiv.org/abs/2603.16285)]


[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)

